# Getting Started with Workflow Orchestration



**Build, run, and monitor data pipelines at scale**

Prepared for [O'Reilly Media](https://www.oreilly.com/live-events/getting-started-with-workflow-orchestration/0636920069056/0636920069055/)

## Instructors:

* [Kalise Richmond](https://www.linkedin.com/in/kaliserichmond/) - Sales Engineer, Prefect
* [Nathan Nowack](https://www.linkedin.com/in/nathan-nowack-a6b59b143/) - Solutions Engineer, Prefect

## About this course:

Data engineers and scientists spend most of their time on negative or defensive engineering, writing code to handle unpredictable failures such as resources going down, APIs intermittently failing, or malformed data corrupting data pipelines. Workflow orchestration tools help eliminate negative engineering, allowing engineers and scientists to focus on the problems they are solving. Modern data applications have evolved, and orchestrators such as Prefect are providing more runtime flexibility and the ability to leverage distributed compute through Dask.

Discover how workflow orchestration can free you up to build solutions, not just avert failures. You’ll learn about basic orchestration features such as retries, scheduling, parameterization, caching, and secret management, and you’ll construct real data pipelines.
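To give a taste of what "retries" buys you, here is a minimal pure-Python sketch of retry logic — the kind of defensive code orchestrators like Prefect let you declare in one line instead of writing by hand. (This is an illustrative sketch, not Prefect's implementation; the function names are made up for the example.)

```python
import time

def with_retries(fn, max_retries=3, delay_seconds=0):
    """Call fn, retrying on failure -- defensive logic an
    orchestrator handles for you declaratively."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(delay_seconds)

# a task that fails twice before succeeding, like a flaky API
calls = []
def flaky_task():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky_task))  # succeeds on the third attempt: "ok"
```

With an orchestrator, this boilerplate disappears into a task option such as a retry count, and you get scheduling, caching, and observability alongside it.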

## Let's get our development environment set up! 🚀

For this course you will need:

### **Python**

Python 3.7 or newer is required (Python 3.6 is reaching end of life).

* Packages in the `requirements.txt` file:
  * `prefect==2.0b2` - workflow orchestration
  * `beautifulsoup4` - web scraping
  * `jupyter` - interactive notebooks

Ideally, you should create a virtual environment ([conda](https://docs.conda.io/en/latest/miniconda.html), [pipenv](https://pipenv.pypa.io/en/latest/install/), [poetry](https://python-poetry.org/docs/)) to install the dependencies.
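For example, using the standard library's `venv` module (any of the tools linked above works equally well):

```shell
# create a virtual environment in the .venv directory
python -m venv .venv

# activate it (on Windows: .venv\Scripts\activate)
source .venv/bin/activate
```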

To install the requirements with pip:

```console
pip install -r requirements.txt
```


### **Docker**



[Docker](https://www.docker.com/) is a great entrypoint (pun somewhat intended) into the world of engineering! We'll be using it to provide reproducible environments in which to execute our workflows. We also have a section devoted to Docker.

- [installing Docker](https://docs.docker.com/engine/install/)
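As a hypothetical sketch of the kind of reproducible environment Docker gives us, a Dockerfile for this repo might look like the following (the base image and the notebook command are illustrative assumptions, not files shipped with this repo):

```dockerfile
# start from a slim Python base image
FROM python:3.9-slim

WORKDIR /app

# install the course dependencies first so they cache as a layer
COPY requirements.txt .
RUN pip install -r requirements.txt

# copy the rest of the course material
COPY . .

# serve the notebooks (illustrative default command)
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--allow-root"]
```

Anyone who builds this image gets the same Python version and package set, which is exactly the reproducibility we want for workflows.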

### Optional Dependencies

These dependencies are optional but are included in `requirements.txt` for convenience.
For the advanced section of this course, we will use a couple of common data engineering tools:
- your own [Airbyte](https://docs.airbyte.com/#quick-start) instance
- [Snowflake trial account](https://signup.snowflake.com)
- install [dbt](https://docs.getdbt.com/dbt-cli/install/overview) to run transforms on your warehouse objects

## Cloning the repo

To clone the repo and run it locally:

```console
git clone https://github.com/zzstoatzz/oreilly-workflow-orchestration.git
```

Each notebook can then be viewed and executed. Some of the code extends beyond the notebooks, because data workflows glue together other tools (sometimes non-Python ones).

## Contact Us

For any questions, feel free to reach out to us!

* Kalise - [email protected]
* Nate - [email protected]

The [Prefect Slack](https://www.prefect.io/slack/) is also a good resource for Prefect and workflow orchestration questions.

## Further Resources

Listed below are the documentation pages for the tools used:

Data Movement

* [Prefect](https://orion-docs.prefect.io/)
* [Airbyte](https://docs.airbyte.com/)
* [dbt](https://docs.getdbt.com/)
* [Snowflake](https://resources.snowflake.com/)

Distributed Computing

* [Dask](https://docs.dask.org/en/latest/)
* [Ray](https://docs.ray.io/en/latest/)