# csghub-dataflow
OpenCSG DataFlow is a one-stop data processing platform that leverages large-model technology and advanced algorithms to optimize the entire data processing lifecycle. It improves efficiency and precision while addressing common enterprise data-management challenges: inefficiency, adaptability gaps, and security and compliance issues.
**DataFlow** is an open-source platform engineered to streamline end-to-end data processing within the AI/ML lifecycle. By unifying data workflows and model optimization, it transforms fragmented pipelines into a cohesive, automated system, ideal for enterprises tackling data complexity at scale.
## Key Features
1. **Full Lifecycle Management**
- Unified handling of data ingestion, transformation, modeling, and evaluation.
2. **Seamless CSGHub Integration**
- Directly ingest datasets from CSGHub and push refined data back for model retraining, creating a continuous feedback loop.
3. **Modular & Extensible Design**
- Plug-and-play operators for custom pipelines (e.g., NLP, image, audio processing).
4. **Distributed Computing**
- Scale workloads across clusters via Kubernetes integration.
5. **Multi-Agent Task Orchestration**
- Dynamically allocate complex tasks (e.g., data validation, anomaly detection) to collaborative agents.
6. **MinerU Engine**
- Convert PDFs to structured Markdown/JSON for LLM-friendly datasets.
7. **Growing Operator Library**
- Expandable support for multimodal data (text, image, video) and domain-specific transformations.
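The plug-and-play operator model described above can be illustrated with a minimal sketch. This is hypothetical code, not DataFlow's (or Data Juicer's) actual operator API: an operator here is any callable that transforms a sample dict or returns `None` to drop it, and a pipeline is just a sequential composition of such operators.

```python
from typing import Callable, Iterable, Optional

# Hypothetical types, for illustration only: a sample is a plain dict, and an
# operator maps one sample to another (or to None to drop the sample),
# mirroring the mapper/filter split common in data-processing pipelines.
Sample = dict
Operator = Callable[[Sample], Optional[Sample]]

def strip_whitespace(sample: Sample) -> Sample:
    """A mapper-style operator: normalize the text field in place."""
    sample["text"] = sample["text"].strip()
    return sample

def drop_short(min_chars: int) -> Operator:
    """A filter-style operator factory: drop samples with too little text."""
    def op(sample: Sample) -> Optional[Sample]:
        return sample if len(sample["text"]) >= min_chars else None
    return op

def run_pipeline(samples: Iterable[Sample], ops: list) -> list:
    """Apply each operator in order; a None result drops the sample."""
    out = []
    for sample in samples:
        for op in ops:
            sample = op(sample)
            if sample is None:
                break
        if sample is not None:
            out.append(sample)
    return out

# Example: whitespace is stripped, then too-short samples are filtered out.
cleaned = run_pipeline(
    [{"text": "  hello  "}, {"text": "x"}],
    [strip_whitespace, drop_short(3)],
)
print(cleaned)  # → [{'text': 'hello'}]
```

New operators plug in by adding another callable to the list, which is the extensibility property the feature list refers to.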
## Acknowledgements
This project is built upon **[Data Juicer](https://github.com/modelscope/data-juicer)**. We sincerely thank the Data Juicer team for their impactful work in data engineering.
### License
This project inherits the [Apache License 2.0](LICENSE) from Data Juicer.
# Quick Start
## Building data-flow from Source
```bash
# Build for the local platform
docker build -t dataflow . -f Dockerfile
# Or cross-build for a specific platform with buildx
docker buildx build --provenance false --platform linux/amd64 -t dataflow . -f Dockerfile
docker buildx build --provenance false --platform linux/arm64 -t dataflow . -f Dockerfile
```
## Prerequisites
Launch the PostgreSQL container:
```bash
docker run -d --name dataflow-pg \
-p 5433:5432 \
-v /tmp/data_flow/pgdata:/var/lib/postgresql/data \
-e POSTGRES_DB=data_flow \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=postgres \
opencsg-registry.cn-beijing.cr.aliyuncs.com/opencsghq/csghub/postgres:15.10
```
Launch the MongoDB container:
```bash
docker run -d --name dataflow-mongo \
-p 27017:27017 \
-v /tmp/data_flow/mongodata:/data/db \
-e MONGO_INITDB_ROOT_USERNAME=root \
-e MONGO_INITDB_ROOT_PASSWORD=example \
opencsg-registry.cn-beijing.cr.aliyuncs.com/opencsghq/mongo:8.0.12
```
Launch the Redis container:
```bash
docker run -d --name dataflow-redis \
-p 16379:6379 \
-v /tmp/data_flow/redisdata:/data \
opencsg-registry.cn-beijing.cr.aliyuncs.com/opencsghq/redis:7.2.5
```
## Installing data-flow
```bash
```bash
docker run -d --name dataflow-api -p 8000:8000 \
-v /tmp/data_flow/apidata:/data/dataflow_data \
-e DATA_DIR=/data/dataflow_data \
-e CSGHUB_ENDPOINT=https://hub.opencsg.com \
-e MAX_WORKERS=99 \
-e RAY_ADDRESS=auto \
-e RAY_ENABLE=False \
-e RAY_LOG_DIR=/data/ray_output \
-e API_SERVER=0.0.0.0 \
-e API_PORT=8000 \
-e ENABLE_OPENTELEMETRY=False \
-e DATABASE_DB=data_flow \
-e DATABASE_USERNAME=postgres \
-e DATABASE_PASSWORD=postgres \
-e DATABASE_HOSTNAME=127.0.0.1 \
-e DATABASE_PORT=5433 \
-e STUDIO_JUMP_URL=https://data-label.opencsg.com \
-e REDIS_HOST_URL=redis://127.0.0.1:16379 \
-e MONG_HOST_URL=mongodb://root:example@127.0.0.1:27017 \
dataflow \
uvicorn data_server.main:app --host 0.0.0.0 --port 8000
```
Note that the container command is given after the image name (`docker run` has no `-c` flag for commands). Also, `127.0.0.1` inside a container refers to the container itself; replace it with the Docker host's address, or attach all containers to a shared Docker network, so the API can reach PostgreSQL, Redis, and MongoDB.
## Installing data-flow-celery
```bash
docker run -d --name celery-work -p 8001:8001 \
-v /tmp/data_flow/celery-data:/data/dataflow_celery \
-e DATA_DIR=/data/dataflow_celery \
-e CSGHUB_ENDPOINT=https://hub.opencsg.com \
-e MAX_WORKERS=99 \
-e RAY_ADDRESS=auto \
-e RAY_ENABLE=False \
-e RAY_LOG_DIR=/data/ray_output \
-e API_SERVER=0.0.0.0 \
-e API_PORT=8001 \
-e ENABLE_OPENTELEMETRY=False \
-e DATABASE_DB=data_flow \
-e DATABASE_USERNAME=postgres \
-e DATABASE_PASSWORD=postgres \
-e DATABASE_HOSTNAME=127.0.0.1 \
-e DATABASE_PORT=5433 \
-e REDIS_HOST_URL=redis://127.0.0.1:16379 \
-e MONG_HOST_URL=mongodb://root:example@127.0.0.1:27017 \
dataflow-celery \
celery -A data_celery.main:celery_app worker --loglevel=info --pool=gevent
```
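As an alternative to running the five `docker run` commands above by hand, the same topology can be expressed as a Docker Compose file. The sketch below is illustrative and not shipped with the repository; it reuses the image names, volumes, and environment variables from the commands above, with one deliberate change: inside the Compose network, services address each other by service name on their internal ports, so `127.0.0.1:5433` becomes `postgres:5432`, and similarly for Redis and MongoDB.

```yaml
# Illustrative docker-compose sketch (not part of the repo).
services:
  postgres:
    image: opencsg-registry.cn-beijing.cr.aliyuncs.com/opencsghq/csghub/postgres:15.10
    environment:
      POSTGRES_DB: data_flow
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - /tmp/data_flow/pgdata:/var/lib/postgresql/data
  mongo:
    image: opencsg-registry.cn-beijing.cr.aliyuncs.com/opencsghq/mongo:8.0.12
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - /tmp/data_flow/mongodata:/data/db
  redis:
    image: opencsg-registry.cn-beijing.cr.aliyuncs.com/opencsghq/redis:7.2.5
    volumes:
      - /tmp/data_flow/redisdata:/data
  dataflow-api:
    image: dataflow                   # built in "Building data-flow from Source"
    command: uvicorn data_server.main:app --host 0.0.0.0 --port 8000
    ports:
      - "8000:8000"
    environment:
      DATA_DIR: /data/dataflow_data
      CSGHUB_ENDPOINT: https://hub.opencsg.com
      DATABASE_DB: data_flow
      DATABASE_USERNAME: postgres
      DATABASE_PASSWORD: postgres
      DATABASE_HOSTNAME: postgres     # service name, not 127.0.0.1
      DATABASE_PORT: "5432"           # in-network port, not the host mapping
      STUDIO_JUMP_URL: https://data-label.opencsg.com
      REDIS_HOST_URL: redis://redis:6379
      MONG_HOST_URL: mongodb://root:example@mongo:27017
    depends_on: [postgres, mongo, redis]
  dataflow-celery:
    image: dataflow-celery
    command: celery -A data_celery.main:celery_app worker --loglevel=info --pool=gevent
    environment:
      DATA_DIR: /data/dataflow_celery
      DATABASE_DB: data_flow
      DATABASE_USERNAME: postgres
      DATABASE_PASSWORD: postgres
      DATABASE_HOSTNAME: postgres
      DATABASE_PORT: "5432"
      REDIS_HOST_URL: redis://redis:6379
      MONG_HOST_URL: mongodb://root:example@mongo:27017
    depends_on: [postgres, mongo, redis]
```

With this file saved as `docker-compose.yml`, `docker compose up -d` brings up all five services on one network.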
## Run data-flow server in development mode locally
### Create a Virtual Environment
```bash
uv venv --python 3.10
source .venv/bin/activate
# or
conda create -n dataflow python=3.10
conda activate dataflow
```
```bash
# Install dependencies (alternative install paths shown commented out)
# pip install '.[dist]' -i https://pypi.tuna.tsinghua.edu.cn/simple/
# pip install '.[tools]' -i https://pypi.tuna.tsinghua.edu.cn/simple/
# pip install '.[sci]' -i https://pypi.tuna.tsinghua.edu.cn/simple/
# pip install -r docker/requirements.txt
uv pip install -r docker/dataflow_requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
# Run the server locally
uvicorn data_server.main:app --reload
```
## Run data-flow-celery server in development mode locally
```bash
# Run the celery server locally
celery -A data_celery.main:celery_app worker --loglevel=info --pool=gevent
```
Notes:
- `kenlm`, `simhash-pybind`, `opencc==1.1.8`, and `imagededup` in `environments/science_requires.txt` only support the x86 platform. Remove them if you are on an ARM platform.
- The `REDIS_HOST_URL` and `MONG_HOST_URL` settings must be identical between the `data-flow` and `data-flow-celery` services.
- If you want to use the data annotation service, please install and enable the **[Label Studio](https://github.com/OpenCSGs/label-studio)** service. Additionally, you need to set the `STUDIO_JUMP_URL` variable of the `data-flow` service to the address of the `Label Studio` service.
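The consistency requirement on `REDIS_HOST_URL` and `MONG_HOST_URL` can be checked mechanically before launch. The helper below is an illustrative sketch, not part of DataFlow: it compares the two services' environment maps and reports any shared key whose values diverge.

```python
# Illustrative helper (not part of DataFlow): verify that environment
# variables that must match between data-flow and data-flow-celery do so.
SHARED_KEYS = ("REDIS_HOST_URL", "MONG_HOST_URL")

def check_shared_env(api_env: dict, celery_env: dict) -> list:
    """Return the keys whose values differ; an empty list means consistent."""
    return [k for k in SHARED_KEYS if api_env.get(k) != celery_env.get(k)]

api_env = {
    "REDIS_HOST_URL": "redis://127.0.0.1:16379",
    "MONG_HOST_URL": "mongodb://root:example@127.0.0.1:27017",
}
celery_env = dict(api_env)  # a consistent copy
print(check_shared_env(api_env, celery_env))  # → []
```

Running such a check in a startup script (or CI) catches the most common misconfiguration, where the two containers point at different Redis or MongoDB instances.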
## Roadmap
Upcoming:
- Enhanced real-time data streaming
- AutoML integration for automated model tuning
- Cross-cloud synchronization
- Support for more data sources
## Contributing
We welcome contributions!
## Contact
For support or queries:
- Email: [community@opencsg.com](mailto:community@opencsg.com)
- GitHub: [OpenCSGs/csghub-dataflow](https://github.com/opencsgs/csghub-dataflow)