Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/intel/ai-workflows
A repository of Dockerfiles, scripts, yaml files, Helm Charts, etc. used to build and scale the sample AI workflows with python, kubernetes, kubeflow, cnvrg.io, and other frameworks on Intel platforms in the cloud and on-premise.
- Host: GitHub
- URL: https://github.com/intel/ai-workflows
- Owner: intel
- License: apache-2.0
- Archived: true
- Created: 2022-10-05T04:51:34.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-02-22T21:41:08.000Z (9 months ago)
- Last Synced: 2024-08-09T00:28:13.611Z (3 months ago)
- Topics: ai, ai-workflows, cnvrg, containerization, containers, docker, docker-compose, kubernetes, one-api, pipelines
- Language: Makefile
- Homepage:
- Size: 4.64 MB
- Stars: 11
- Watchers: 5
- Forks: 6
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: .github/CODEOWNERS
- Security: SECURITY.md
README
PROJECT NOT UNDER ACTIVE MANAGEMENT
This project will no longer be maintained by Intel.
Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
Intel no longer accepts patches to this project.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.
Contact: [email protected]
# AI Workflows Infrastructure for Intel® Architecture

## Description
This page describes how to set up an environment that supports Intel's AI Pipelines container build and test infrastructure.

## Dependency Requirements
Only Linux systems are currently supported. Please make sure the following packages are installed via your package manager of choice:
- `make`
- `docker.io`
A full installation of [docker engine](https://docs.docker.com/engine/install/) with the docker CLI is required. The recommended docker engine version is `19.03.0+`.
- `docker-compose`
The Docker Compose CLI can be [installed](https://docs.docker.com/compose/install/compose-plugin/#installing-compose-on-linux-systems) both manually and via package manager.
```
$ DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
$ mkdir -p $DOCKER_CONFIG/cli-plugins
$ curl -SL https://github.com/docker/compose/releases/download/v2.7.0/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
$ chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
$ docker compose version
Docker Compose version v2.7.0
```

## Build and Run Workflows
Each pipeline has specific requirements and instructions covering its dependencies and available customization options. Generally, pipelines are run with the following steps:
```
git submodule update --init --recursive
```
This pulls the dependent repo containing the scripts to run the end-to-end pipeline's inference and/or training.
```
KEY=VALUE ... KEY=VALUE make
```
Where `KEY` and `VALUE` pairs are environment variables that can be used to customize both the pipeline's script options and the resulting container. For more information about the valid `KEY` and `VALUE` pairs, see the README.md file in the folder for each workflow container:
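The override mechanism can be sketched with a minimal stand-in Makefile (the variable name `FINAL_IMAGE_NAME` and the target `image` are illustrative only, not taken from any specific workflow):

```shell
# Create a throwaway Makefile declaring an overridable variable.
# .RECIPEPREFIX = > lets recipe lines start with ">" instead of a tab.
workdir=$(mktemp -d)
cat > "$workdir/Makefile" <<'EOF'
.RECIPEPREFIX = >
FINAL_IMAGE_NAME ?= default-image
image:
>@echo "building $(FINAL_IMAGE_NAME)"
EOF
# Without an override, the ?= default applies:
default_out=$( (cd "$workdir" && make image) )
echo "$default_out"     # building default-image
# A KEY=VALUE pair on the command line (or in the environment) wins:
override_out=$( (cd "$workdir" && FINAL_IMAGE_NAME=my-workflow make image) )
echo "$override_out"    # building my-workflow
rm -rf "$workdir"
```

Because the variables are declared with `?=`, values passed on the command line or exported in the environment take precedence over the Makefile defaults.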
|AI Workflow|Framework/Tool|Mode|
|-|-|-|
|Chronos Time Series Forecasting|Chronos and PyTorch*|[Training](./big-data/chronos/DEVCATALOG.md)|
|Document-Level Sentiment Analysis|PyTorch*|[Training](./language_modeling/pytorch/bert_large/training/)|
|Friesian Recommendation System|Spark with TensorFlow|[Training](./big-data/friesian/training/) \| [Inference](./big-data/friesian/DEVCATALOG.md)|
|Habana® Gaudi® Processor Training and Inference using OpenVINO™ Toolkit for U-Net 2D Model|OpenVINO™|[Training and Inference](https://github.com/intel/cv-training-and-inference-openvino/tree/v1.0.0/gaudi-segmentation-unet-ptq)|
|Privacy Preservation|Spark with TensorFlow and PyTorch*|[Training and Inference](./big-data/ppml/DEVCATALOG.md)|
|NLP workflow for AWS Sagemaker|TensorFlow and Jupyter|[Inference](./classification/tensorflow/bert_base/inference/)|
|NLP workflow for Azure ML|PyTorch* and Jupyter|[Training](./language_modeling/pytorch/bert_base/training/) \| [Inference](./language_modeling/pytorch/bert_base/inference/)|
|Protein Structure Prediction|PyTorch*|[Inference](./protein-folding/pytorch/alphafold2/inference/)|
|Quantization Aware Training and Inference|OpenVINO™|[Quantization Aware Training(QAT)](https://github.com/intel/nlp-training-and-inference-openvino/tree/v1.0/question-answering-bert-qat)|
|Ray Recommendation System|Ray with PyTorch*|[Training](./big-data/aiok-ray/training/) \| [Inference](./big-data/aiok-ray/inference)|
|RecSys Challenge Analytics With Python|Hadoop and Spark|[Training](./analytics/classical-ml/recsys/training/)|
|Video Streamer|TensorFlow|[Inference](./analytics/tensorflow/ssd_resnet34/inference/)|
|Vision Based Transfer Learning|TensorFlow|[Training](./transfer_learning/tensorflow/resnet50/training/) \| [Inference](./transfer_learning/tensorflow/resnet50/inference/)|
|Wafer Insights|SKLearn|[Inference](./analytics/classical-ml/synthetic/inference/)|

### Cleanup
All resources allocated by a pipeline can be removed by running `make clean`.
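As a rough sketch, a workflow's `clean` target typically tears down the containers, images, and volumes the pipeline created. The exact commands vary per workflow; this fragment is illustrative and not taken from any specific Makefile (note that recipe lines in a Makefile must begin with a tab):

```makefile
clean:
	docker compose down --rmi all --volumes
```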