{"id":13565420,"url":"https://github.com/runabol/piper","last_synced_at":"2025-04-03T22:31:18.548Z","repository":{"id":41837220,"uuid":"60993650","full_name":"runabol/piper","owner":"runabol","description":"piper - a distributed workflow engine","archived":true,"fork":false,"pushed_at":"2023-08-09T02:18:16.000Z","size":1261,"stargazers_count":487,"open_issues_count":9,"forks_count":86,"subscribers_count":31,"default_branch":"master","last_synced_at":"2024-09-09T22:59:44.901Z","etag":null,"topics":["apache2","ffmpeg","java","pipeline","springboot","video","workflow-engine"],"latest_commit_sha":null,"homepage":"","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/runabol.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2016-06-12T23:07:51.000Z","updated_at":"2024-07-30T09:22:10.000Z","dependencies_parsed_at":"2023-08-08T00:46:53.892Z","dependency_job_id":"e3f8401f-c644-4487-abcd-768d44aeb91f","html_url":"https://github.com/runabol/piper","commit_stats":null,"previous_names":["runabol/piper","okayrunner/piper","guitarcade/piper"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/runabol%2Fpiper","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/runabol%2Fpiper/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/runabol%2Fpiper/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/runabol%2Fpiper/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/runabol","download_url":"https://code
load.github.com/runabol/piper/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247090152,"owners_count":20881925,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["apache2","ffmpeg","java","pipeline","springboot","video","workflow-engine"],"created_at":"2024-08-01T13:01:46.650Z","updated_at":"2025-04-03T22:31:17.311Z","avatar_url":"https://github.com/runabol.png","language":"Java","readme":"# Project Status\n\nI no longer maintain this project. You might want to check out [tork](https://github.com/runabol/tork) as an alternative solution.\n\n# Introduction\n\nPiper is an open-source, distributed workflow engine built on Spring Boot, designed to be dead simple.\n\nPiper can run on one or a thousand machines depending on your scaling needs.\n\nIn Piper, work to be done is defined as a set of tasks called a Pipeline. Pipelines can be sourced from many locations but typically they live in a Git repository where they can be versioned and tracked.\n\nPiper was originally built to support the need to transcode massive amounts of video in parallel. Since transcoding video is a CPU- and time-intensive process, I had to scale horizontally. Moreover, I needed a way to monitor these long-running jobs, auto-retry them and otherwise control their execution.\n\n# Tasks\n\nTasks are the basic building blocks of a pipeline. 
Each task has a `type` property which maps to a `TaskHandler` implementation, responsible for carrying out the task.\n\nFor example, here's the `RandomInt` `TaskHandler` implementation:\n\n```\n  public class RandomInt implements TaskHandler\u003cObject\u003e {\n\n    @Override\n    public Object handle(Task aTask) throws Exception {\n      int startInclusive = aTask.getInteger(\"startInclusive\", 0);\n      int endInclusive = aTask.getInteger(\"endInclusive\", 100);\n      return RandomUtils.nextInt(startInclusive, endInclusive);\n    }\n\n  }\n```\n\nWhile it doesn't do much beyond generating a random integer, it does demonstrate how a `TaskHandler` works. A `Task` instance, containing all the key-value pairs of that task, is passed as an argument to\nthe `TaskHandler`.\n\nThe `TaskHandler` is then responsible for executing the task using this input and optionally returning an output which can be used by other pipeline tasks downstream.\n\n# Pipelines\n\nPiper pipelines are authored in YAML, a superset of JSON.\n\nHere is an example of a basic pipeline definition.\n\n```\nname: Hello Demo\n\ninputs:                --+\n  - name: yourName       |\n    label: Your Name     | - This defines the inputs\n    type: string         |   expected by the pipeline\n    required: true     --+\n\noutputs:                 --+\n  - name: myMagicNumber    | - You can output any of the job's\n    value: ${randomNumber} |   variables as the job's output.\n                         --+\ntasks:\n  - name: randomNumber               --+\n    label: Generate a random number    |\n    type: random/int                   | - This is a task\n    startInclusive: 0                  |\n    endInclusive: 10000              --+\n\n  - label: Print a greeting\n    type: io/print\n    text: Hello ${yourName}\n\n  - label: Sleep a little\n    type: time/sleep        --+\n    millis: ${randomNumber}   | - tasks may refer to the result of a previous task\n                            --+\n  - label: 
Print a farewell\n    type: io/print\n    text: Goodbye ${yourName}\n```\n\nSo tasks are nothing but a collection of key-value pairs. At a minimum, each task contains a `type` property which maps to an appropriate `TaskHandler` that needs to execute it.\n\nTasks may also specify a `name` property which can be used to name the output of the task so it can be used later in the pipeline.\n\nThe `label` property is used to give a human-readable description for the task.\n\nThe `node` property can be used to route tasks to work queues other than the default `tasks` queue. This allows one to design a cluster of worker nodes of different types, of different capacity, different 3rd party software dependencies and so on.\n\nThe `retry` property can be used to specify the number of times that a task is allowed to automatically retry in case of a failure.\n\nThe `timeout` property can be used to specify the number of seconds/minutes/hours that a task may execute before it is cancelled.\n\nThe `output` property can be used to modify the output of the task in some fashion, e.g. convert it to an integer.\n\nAll other key-value pairs are task-specific and may or may not be required depending on the specific task.\n\n# Architecture\n\nPiper is composed of the following components:\n\n**Coordinator**: The Coordinator is like the central nervous system of Piper. It keeps track of jobs, dishes out work to be done by Worker machines, keeps track of failures, retries and other job-level details. Unlike Worker nodes, it does not execute actual work but delegates all task activities to Worker instances.\n\n**Worker**: Workers are the workhorses of Piper. These are the Piper nodes that actually execute tasks requested to be done by the Coordinator machine. Unlike the Coordinator, the workers are stateless, meaning that they do not interact with a database or keep any state in memory about the job or anything else. 
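As a rough illustration (these are not Piper's actual classes; an in-memory `BlockingQueue` stands in here for the message broker), a stateless worker loop might be sketched as:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch only: a worker takes a task message off a queue,
// dispatches on its "type" key, and reports completion back -- it keeps
// no job state of its own between messages.
public class WorkerSketch {
  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<Map<String, String>> tasks = new ArrayBlockingQueue<>(10);
    BlockingQueue<String> completions = new ArrayBlockingQueue<>(10);

    // In Piper, the Coordinator would publish this over the broker.
    tasks.put(Map.of("type", "io/print", "text", "hello"));

    Map<String, String> task = tasks.take();    // blocks until work arrives
    if ("io/print".equals(task.get("type"))) {  // dispatch on the task type
      System.out.println(task.get("text"));
    }
    completions.put("COMPLETED");               // report status back
    System.out.println(completions.take());
  }
}
```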
This makes it very easy to scale up and down the number of workers in the system without fear of losing application state.\n\n**Message Broker**: All communication between the Coordinator and the Worker nodes is done through a messaging broker. This has many advantages:\n\n1. If all workers are busy, the message broker will simply queue the message until a worker can handle it.\n2. When workers boot up, they subscribe to the appropriate queues for the type of work they are intended to handle.\n3. If a worker crashes, the task will automatically get re-queued to be handled by another worker.\n4. Last but not least, workers and `TaskHandler` implementations can be written in any language since they are completely decoupled through message passing.\n\n**Database**: This component holds all the job state in the system: which tasks completed, which failed, etc. It is used by the Coordinator as its \"mind\".\n\n**Pipeline Repository**: The component where pipelines (workflows) are created, edited, etc. by pipeline engineers.\n\n# Control Flow\n\nPiper supports the following constructs to control the flow of execution:\n\n## Each\n\nApplies the function `iteratee` to each item in `list`, in parallel. Note that since this function applies `iteratee` to each item in parallel, there is no guarantee that the `iteratee` functions will complete in order.\n\n```\n- type: each\n  list: [1000,2000,3000]\n  iteratee:\n    type: time/sleep\n    millis: ${item}\n```\n\nThis will generate three parallel tasks, one for each item in the list, which will `sleep` for 1, 2 and 3 seconds respectively.\n\n## Parallel\n\nRuns the `tasks` collection of functions in parallel, without waiting until the previous function has completed.\n\n```\n- type: parallel\n  tasks:\n    - type: io/print\n      text: hello\n\n    - type: io/print\n      text: goodbye\n```\n\n## Fork/Join\n\nExecutes each branch in the `branches` as a separate and isolated sub-flow. 
Within each branch, tasks are executed in sequence.\n\n```\n- type: fork\n  branches:\n     - - name: randomNumber                 \u003c-- branch 1 starts here\n         label: Generate a random number\n         type: random/int\n         startInclusive: 0\n         endInclusive: 5000\n\n       - type: time/sleep\n         millis: ${randomNumber}\n\n     - - name: randomNumber                 \u003c-- branch 2 starts here\n         label: Generate a random number\n         type: random/int\n         startInclusive: 0\n         endInclusive: 5000\n\n       - type: time/sleep\n         millis: ${randomNumber}\n```\n\n## Switch\n\nExecutes one and only one branch of execution based on the `expression` value.\n\n```\n- type: switch\n  expression: ${selector} \u003c-- determines which case will be executed\n  cases:\n     - key: hello                 \u003c-- case 1 starts here\n       tasks:\n         - type: io/print\n           text: hello world\n     - key: bye                   \u003c-- case 2 starts here\n       tasks:\n         - type: io/print\n           text: goodbye world\n  default:\n    - tasks:\n        - type: io/print\n          text: something else\n```\n\n## Map\n\nProduces a new collection of values by mapping each value in `list` through the `iteratee` function. The `iteratee` is called with each item from `list`, in parallel. When the `iteratee` has finished executing on all items, the `map` task will return a list of execution results in an order which corresponds to the order of the source `list`.\n\n```\n- name: fileSizes\n  type: map\n  list: [\"/path/to/file1.txt\",\"/path/to/file2.txt\",\"/path/to/file3.txt\"]\n  iteratee:\n    type: io/filesize\n    file: ${item}\n```\n\n## Subflow\n\nStarts a new job as a sub-flow of the current job. 
The output of the sub-flow job is the output of the task.\n\n```\n- type: subflow\n  pipelineId: copy_files\n  inputs:\n    - source: /path/to/source/dir\n    - destination: /path/to/destination/dir\n```\n\n## Pre/Post/Finalize\n\nEach task can define a set of tasks that will be executed prior to its execution (`pre`),\nafter its successful execution (`post`) and at the end of the task's lifecycle regardless of the outcome of the task's\nexecution (`finalize`).\n\n`pre/post/finalize` tasks always execute on the same node that executes the task itself and are considered to be an atomic part of the task. That is, failure in any of the `pre/post/finalize` tasks is considered a failure of the entire task.\n\n```\n  - label: 240p\n    type: media/ffmpeg\n    options: [\n      \"-y\",\n      \"-i\",\n      \"/some/input/video.mov\",\n      \"-vf\",\"scale=w=-2:h=240\",\n      \"${workDir}/240p.mp4\"\n    ]\n    pre:\n      - name: workDir\n        type: core/var\n        value: \"${tempDir()}/${uuid()}\"\n      - type: io/mkdir\n        path: \"${workDir}\"\n    post:\n      - type: s3/putObject\n        uri: s3://my-bucket/240p.mp4\n    finalize:\n      - type: io/rm\n        path: ${workDir}\n```\n\n## Webhooks\n\nPiper provides the ability to register HTTP webhooks to receive notifications for certain events.\n\nRegistering webhooks is done when creating the job. 
E.g.:\n\n```\n{\n  \"pipelineId\": \"demo/hello\",\n  \"inputs\": {\n    ...\n  },\n  \"webhooks\": [{\n    \"type\": \"job.status\",\n    \"url\": \"http://example.com\",\n    \"retry\": {   # optional configuration for retry attempts in case of webhook failure\n      \"initialInterval\": \"3s\",  # default 2s\n      \"maxInterval\": \"10s\",  # default 30s\n      \"maxAttempts\": 4,  # default 5\n      \"multiplier\": 2.5  # default 2.0\n    }\n  }]\n}\n```\n\n`type` is the type of event you would like to be notified about, and `url` is the URL that Piper will call when the event occurs.\n\nSupported types are `job.status` and `task.started`.\n\n# Task Handlers\n\n[core/var](src/main/java/com/creactiviti/piper/taskhandler/core/Var.java)\n\n```\n  name: pi\n  type: core/var\n  value: 3.14159\n```\n\n[io/createTempDir](src/main/java/com/creactiviti/piper/taskhandler/io/CreateTempDir.java)\n\n```\n  name: tempDir\n  type: io/create-temp-dir\n```\n\n[io/filepath](src/main/java/com/creactiviti/piper/taskhandler/io/FilePath.java)\n\n```\n  name: myFilePath\n  type: io/filepath\n  filename: /path/to/my/file.txt\n```\n\n[io/ls](src/main/java/com/creactiviti/piper/taskhandler/io/Ls.java)\n\n```\n  name: listOfFiles\n  type: io/ls\n  recursive: true # default: false\n  path: /path/to/directory\n```\n\n[io/mkdir](src/main/java/com/creactiviti/piper/taskhandler/io/Mkdir.java)\n\n```\n  type: io/mkdir\n  path: /path/to/directory\n```\n\n[io/print](src/main/java/com/creactiviti/piper/taskhandler/io/Print.java)\n\n```\n  type: io/print\n  text: hello world\n```\n\n[io/rm](src/main/java/com/creactiviti/piper/taskhandler/io/Rm.java)\n\n```\n  type: io/rm\n  path: /some/directory\n```\n\n[media/dar](src/main/java/com/creactiviti/piper/taskhandler/media/Dar.java)\n\n```\n  name: myDar\n  type: media/dar\n  input: /path/to/my/video.mp4\n```\n\n[media/ffmpeg](src/main/java/com/creactiviti/piper/taskhandler/media/Ffmpeg.java)\n\n```\n  type: media/ffmpeg\n  options: [\n    -y,\n    -i, 
\"${input}\",\n    \"-pix_fmt\",\"yuv420p\",\n    \"-codec:v\",\"libx264\",\n    \"-preset\",\"fast\",\n    \"-b:v\",\"500k\",\n    \"-maxrate\",\"500k\",\n    \"-bufsize\",\"1000k\",\n    \"-vf\",\"scale=-2:${targetHeight}\",\n    \"-b:a\",\"128k\",\n    \"${output}\"\n  ]\n```\n\n[media/ffprobe](src/main/java/com/creactiviti/piper/taskhandler/media/Ffprobe.java)\n\n```\n  name: ffprobeResults\n  type: media/ffprobe\n  input: /path/to/my/media/file.mov\n```\n\n[media/framerate](src/main/java/com/creactiviti/piper/taskhandler/media/Framerate.java)\n\n```\n  name: framerate\n  type: media/framerate\n  input: /path/to/my/video/file.mov\n```\n\n[media/mediainfo](src/main/java/com/creactiviti/piper/taskhandler/media/Mediainfo.java)\n\n```\n  name: mediainfoResult\n  type: media/mediainfo\n  input: /path/to/my/media/file.mov\n```\n\n[media/vduration](src/main/java/com/creactiviti/piper/taskhandler/media/Vduration.java)\n\n```\n  name: duration\n  type: media/vduration\n  input: /path/to/my/video/file.mov\n```\n\n[media/vsplit](src/main/java/com/creactiviti/piper/taskhandler/media/Vsplit.java)\n\n```\n  name: chunks\n  type: media/vsplit\n  input: /path/to/my/video.mp4\n  chunkSize: 30s\n```\n\n[media/vstitch](src/main/java/com/creactiviti/piper/taskhandler/media/Vstitch.java)\n\n```\n  type: media/vstitch\n  chunks:\n    - /path/to/chunk_001.mp4\n    - /path/to/chunk_002.mp4\n    - /path/to/chunk_003.mp4\n    - /path/to/chunk_004.mp4\n  output: /path/to/stitched/file.mp4\n```\n\n[random/int](src/main/java/com/creactiviti/piper/taskhandler/random/RandomInt.java)\n\n```\n  name: someRandomNumber\n  type: random/int\n  startInclusive: 1000 # default 0\n  endInclusive: 9999 # default 100\n```\n\n[random/rogue](src/main/java/com/creactiviti/piper/taskhandler/random/Rogue.java)\n\n```\n  type: random/rogue\n  probabilty: 0.25 # default 0.5\n```\n\n[s3/getObject](src/main/java/com/creactiviti/piper/taskhandler/s3/S3GetObject.java)\n\n```\n  type: s3/getObject\n  uri: 
s3://my-bucket/path/to/file.mp4\n  filepath: /path/to/my/file.mp4\n```\n\n[s3/listObjects](src/main/java/com/creactiviti/piper/taskhandler/s3/S3ListObjects.java)\n\n```\n  type: s3/listObjects\n  bucket: my-bucket\n  prefix: some/path/\n```\n\n[s3/getUrl](src/main/java/com/creactiviti/piper/taskhandler/s3/S3GetUrl.java)\n\n```\n  type: s3/getUrl\n  uri: s3://my-bucket/path/to/file.mp4\n```\n\n[s3/presignGetObject](src/main/java/com/creactiviti/piper/taskhandler/s3/S3PresignedGetObject.java)\n\n```\n  name: url\n  type: s3/presignGetObject\n  uri: s3://my-bucket/path/to/file.mp4\n  signatureDuration: 60s\n```\n\n[s3/putObject](src/main/java/com/creactiviti/piper/taskhandler/s3/S3PutObject.java)\n\n```\n  type: s3/putObject\n  uri: s3://my-bucket/path/to/file.mp4\n  filepath: /path/to/my/file.mp4\n```\n\n[shell/bash](src/main/java/com/creactiviti/piper/taskhandler/shell/Bash.java)\n\n```\n  name: listOfFiles\n  type: shell/bash\n  script: |\n        for f in /tmp/*\n        do\n          echo \"$f\"\n        done\n```\n\n[time/sleep](src/main/java/com/creactiviti/piper/taskhandler/time/Sleep.java)\n\n```\n  type: time/sleep\n  millis: 60000\n```\n\n# Expression Functions\n\n[boolean](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: \"${boolean('false')}\"\n```\n\n[byte](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: \"${byte('42')}\"\n```\n\n[char](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: \"${char('1')}\"\n```\n\n[short](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: \"${short('42')}\"\n```\n\n[int](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: \"${int('42')}\"\n```\n\n[long](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: 
\"${long('42')}\"\n```\n\n[float](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: \"${float('4.2')}\"\n```\n\n[double](src/main/java/com/creactiviti/piper/core/task/Cast.java)\n\n```\n  type: core/var\n  value: \"${double('4.2')}\"\n```\n\n[systemProperty](src/main/java/com/creactiviti/piper/core/task/SystemProperty.java)\n\n```\n  type: core/var\n  value: \"${systemProperty('java.home')}\"\n```\n\n[range](src/main/java/com/creactiviti/piper/core/task/Range.java)\n\n```\n  type: core/var\n  value: \"${range(0,100)}\" # [0,1,...,100]\n```\n\n[join](src/main/java/com/creactiviti/piper/core/task/Join.java)\n\n```\n  type: core/var\n  value: \"${join('A','B','C')}\" # ABC\n```\n\n[concat](src/main/java/com/creactiviti/piper/core/task/Concat.java)\n\n```\n  type: core/var\n  value: ${concat(['A','B'],['C'])} # ['A','B','C']\n```\n\n[flatten](src/main/java/com/creactiviti/piper/core/task/Flatten.java)\n\n```\n  type: core/var\n  value: ${flatten([['A'],['B']])} # ['A','B']\n```\n\n[sort](src/main/java/com/creactiviti/piper/core/task/Sort.java)\n\n```\n  type: core/var\n  value: ${sort([3,1,2])} # [1,2,3]\n```\n\n[tempDir](src/main/java/com/creactiviti/piper/core/task/TempDir.java)\n\n```\n  type: core/var\n  value: \"${tempDir()}\"  # e.g. /tmp\n```\n\n[uuid](src/main/java/com/creactiviti/piper/core/task/Uuid.java)\n\n```\n  name: workDir\n  type: core/var\n  value: \"${tempDir()}/${uuid()}\"\n```\n\n[stringf](src/main/java/com/creactiviti/piper/core/task/StringFormat.java)\n\n```\n  type: core/var\n  value: \"${stringf('%03d',5)}\"  # 005\n```\n\n[now](src/main/java/com/creactiviti/piper/core/task/Now.java)\n\n```\n  type: core/var\n  value: \"${dateFormat(now(),'yyyy')}\"  # e.g. 
2020\n```\n\n[timestamp](src/main/java/com/creactiviti/piper/core/task/Timestamp.java)\n\n```\n  type: core/var\n  value: \"${timestamp()}\"  # e.g. 1583268621423\n```\n\n[dateFormat](src/main/java/com/creactiviti/piper/core/task/DateFormat.java)\n\n```\n  type: core/var\n  value: \"${dateFormat(now(),'yyyy')}\"  # e.g. 2020\n```\n\n[config](src/main/java/com/creactiviti/piper/core/task/Config.java)\n\n```\n  type: core/var\n  value: \"${config('some.config.property')}\"\n```\n\n# Tutorials\n\n## Hello World\n\nStart a local Postgres database:\n\n```\n./scripts/database.sh\n```\n\nStart a local RabbitMQ instance:\n\n```\n./scripts/rabbit.sh\n```\n\nBuild Piper:\n\n```\n./scripts/build.sh\n```\n\nStart Piper:\n\n```\n./scripts/development.sh\n```\n\nGo to the browser at \u003ca href=\"http://localhost:8080/jobs\" target=\"_blank\"\u003ehttp://localhost:8080/jobs\u003c/a\u003e\n\nWhich should give you something like:\n\n```\n{\n  number: 0,\n  totalItems: 0,\n  size: 0,\n  totalPages: 0,\n  items: [ ]\n}\n```\n\nThe `/jobs` endpoint lists all jobs that are either running or were previously run on Piper.\n\nStart a demo job:\n\n```\ncurl -s \\\n     -X POST \\\n     -H Content-Type:application/json \\\n     -d '{\"pipelineId\":\"demo/hello\",\"inputs\":{\"yourName\":\"Joe Jones\"}}' \\\n     http://localhost:8080/jobs\n```\n\nWhich should give you something like this as a response:\n\n```\n{\n  \"createTime\": \"2017-07-05T16:56:27.402+0000\",\n  \"webhooks\": [],\n  \"inputs\": {\n    \"yourName\": \"Joe Jones\"\n  },\n  \"id\": \"8221553af238431ab006cc178eb59129\",\n  \"label\": \"Hello Demo\",\n  \"priority\": 0,\n  \"pipelineId\": \"demo/hello\",\n  \"status\": \"CREATED\",\n  \"tags\": []\n}\n```\n\nIf you refresh your browser page now, you should see the executing job.\n\nIn case you are wondering, the `demo/hello` pipeline is located \u003ca href=\"https://github.com/creactiviti/piper/blob/master/piper-core/src/main/resources/pipelines/demo/hello.yaml\" 
target=\"_blank\"\u003ehere\u003c/a\u003e\n\n## Writing your first pipeline\n\nCreate the directory `~/piper/pipelines` and create a file in there called `mypipeline.yaml`.\n\nEdit the file and add the following text:\n\n```\nlabel: My Pipeline\n\ninputs:\n  - name: name\n    type: string\n    required: true\n\ntasks:\n  - label: Print a greeting\n    type: io/print\n    text: Hello ${name}\n\n  - label: Print a farewell\n    type: io/print\n    text: Goodbye ${name}\n\n```\n\nExecute your workflow:\n\n```\ncurl -s -X POST -H Content-Type:application/json -d '{\"pipelineId\":\"mypipeline\",\"inputs\":{\"name\":\"Arik\"}}' http://localhost:8080/jobs\n```\n\nYou can make changes to your pipeline and execute `./scripts/clear.sh` to clear the cache and reload the pipeline.\n\n## Scaling Piper\n\nDepending on your workload, you will probably exhaust the ability to run Piper on a single node fairly quickly. Good, because that's where the fun begins.\n\nStart RabbitMQ:\n\n```\n./scripts/rabbit.sh\n```\n\nStart the Coordinator:\n\n```\n./scripts/coordinator.sh\n```\n\nFrom another terminal window, start a Worker:\n\n```\n./scripts/worker.sh\n```\n\nExecute the demo pipeline:\n\n```\ncurl -s \\\n     -X POST \\\n     -H Content-Type:application/json \\\n     -d '{\"pipelineId\":\"demo/hello\",\"inputs\":{\"yourName\":\"Joe Jones\"}}' \\\n     http://localhost:8080/jobs\n```\n\n## Transcoding a Video\n\nNote: You must have [ffmpeg](https://ffmpeg.org) installed on your worker machine to get this demo to work.\n\nTranscode a source video to an SD (480p) output:\n\n```\ncurl -s \\\n     -X POST \\\n     -H Content-Type:application/json \\\n     -d '{\"pipelineId\":\"video/transcode\",\"inputs\":{\"input\":\"/path/to/video/input.mov\",\"output\":\"/path/to/video/output.mp4\",\"profile\":\"sd\"}}' \\\n     http://localhost:8080/jobs\n```\n\nTranscode a source video to an HD (1080p) output:\n\n```\ncurl -s \\\n     -X POST \\\n     -H Content-Type:application/json \\\n     -d 
'{\"pipelineId\":\"video/transcode\",\"inputs\":{\"input\":\"/path/to/video/input.mov\",\"output\":\"/path/to/video/output.mp4\",\"profile\":\"hd\"}}' \\\n     http://localhost:8080/jobs\n```\n\n## Transcoding a Video (Split \u0026 Stitch)\n\nSee [Transcoding video at scale with Piper](https://medium.com/@arik.c.mail/transcoding-video-at-scale-with-piper-dca23eb26fd2)\n\n## Adaptive Streaming\n\nSee [Adaptive Streaming with Piper](https://medium.com/@arik.c.mail/adaptive-streaming-with-piper-b37e55d95466)\n\n# Using Git as a Pipeline Repository backend\n\nRather than storing the pipelines in your local file system, you can use Git to store them for you. This has great advantages, not the least of which are pipeline versioning, pull requests and everything else Git has to offer.\n\nTo enable Git as a pipeline repository, set the `piper.pipeline-repository.git.enabled` flag to `true` in `./scripts/development.sh` and restart Piper. By default, Piper will use the demo repository [piper-pipelines](https://github.com/creactiviti/piper-pipelines).\n\nYou can change it by using the `piper.pipeline-repository.git.url` and `piper.pipeline-repository.git.search-paths` configuration parameters.\n\n# Configuration\n\n```ini\n# messaging provider between Coordinator and Workers (jms | amqp | kafka) default: jms\npiper.message-broker.provider=jms\n# turn on the Coordinator process\npiper.coordinator.enabled=true\n# turn on the Worker process and listen to tasks.\npiper.worker.enabled=true\n# when worker is enabled, subscribe to the default \"tasks\" queue with 5 concurrent consumers.\n# you may also route pipeline tasks to other arbitrarily named task queues by specifying the \"node\"\n# property on any given task.\n# E.g. node: captions will route to the captions queue, which a worker would subscribe to with piper.worker.subscriptions.captions\n# note: the queue must be created before tasks can be routed to it. 
Piper will create the queue if it isn't already there when the worker\n# bootstraps.\npiper.worker.subscriptions.tasks=5\n# enable a git-based pipeline repository\npiper.pipeline-repository.git.enabled=true\n# The URL to the Git Repo\npiper.pipeline-repository.git.url=https://github.com/myusername/my-pipelines.git\npiper.pipeline-repository.git.branch=master\npiper.pipeline-repository.git.username=me\npiper.pipeline-repository.git.password=secret\n# folders within the git repo that are scanned for pipelines.\npiper.pipeline-repository.git.search-paths=demo/,video/\n# enable file system based pipeline repository\npiper.pipeline-repository.filesystem.enabled=true\n# location of pipelines on the file system.\npiper.pipeline-repository.filesystem.location-pattern=$HOME/piper/**/*.yaml\n# data source\nspring.datasource.platform=postgres # only postgres is supported at the moment\nspring.datasource.url=jdbc:postgresql://localhost:5432/piper\nspring.datasource.username=piper\nspring.datasource.password=piper\nspring.datasource.initialization-mode=never # change to always when bootstrapping the database for the first time\n```\n\n# Docker\n\n[creactiviti/piper](https://hub.docker.com/r/creactiviti/piper)\n\nHello World in Docker:\n\nStart a local Postgres database:\n\n```\n./scripts/database.sh\n```\n\nCreate an empty directory:\n\n```\nmkdir pipelines\ncd pipelines\n```\n\nCreate a simple pipeline file -- `hello.yaml` -- and paste the following into it:\n\n```\nlabel: Hello World\ninputs:\n  - name: name\n    label: Your Name\n    type: core/var\n    required: true\ntasks:\n  - label: Print Hello Message\n    type: io/print\n    text: \"Hello ${name}!\"\n```\n\nStart Piper:\n\n```\ndocker run \\\n  --name=piper \\\n  --link postgres:postgres \\\n  --rm \\\n  -it \\\n  -e spring.datasource.url=jdbc:postgresql://postgres:5432/piper \\\n  -e spring.datasource.initialization-mode=always \\\n  -e piper.worker.enabled=true \\\n  -e piper.coordinator.enabled=true \\\n  -e 
piper.worker.subscriptions.tasks=1 \\\n  -e piper.pipeline-repository.filesystem.enabled=true \\\n  -e piper.pipeline-repository.filesystem.location-pattern=/pipelines/**/*.yaml \\\n  -v $PWD:/pipelines \\\n  -p 8080:8080 \\\n  creactiviti/piper\n```\n\nExecute the pipeline:\n\n```\ncurl -s \\\n     -X POST \\\n     -H Content-Type:application/json \\\n     -d '{\"pipelineId\":\"hello\",\"inputs\":{\"name\":\"Joe Jones\"}}' \\\n     http://localhost:8080/jobs\n```\n\n# License\n\nPiper is released under version 2.0 of the [Apache License](LICENSE).\n","funding_links":[],"categories":["HarmonyOS","Java"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frunabol%2Fpiper","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frunabol%2Fpiper","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frunabol%2Fpiper/lists"}