{"id":13582515,"url":"https://github.com/dgruber/wfl","last_synced_at":"2025-08-20T06:32:40.836Z","repository":{"id":27016245,"uuid":"111231238","full_name":"dgruber/wfl","owner":"dgruber","description":" A Simple Way of Creating Job Workflows in Go running in Processes, Containers, Tasks, Pods, or Jobs ","archived":false,"fork":false,"pushed_at":"2024-10-23T12:54:39.000Z","size":187873,"stargazers_count":46,"open_issues_count":3,"forks_count":8,"subscribers_count":7,"default_branch":"master","last_synced_at":"2024-12-08T01:13:05.324Z","etag":null,"topics":["containers","docker","go","golang","high-throughput","hpc","k8s","kubernetes","linux","macos","processes","singularity","workflow","workflow-engine"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-2-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dgruber.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-11-18T19:01:32.000Z","updated_at":"2024-11-13T22:27:50.000Z","dependencies_parsed_at":"2024-01-15T15:47:54.275Z","dependency_job_id":"4fc1539e-3703-4cdd-a819-646aedb2d405","html_url":"https://github.com/dgruber/wfl","commit_stats":null,"previous_names":[],"tags_count":33,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dgruber%2Fwfl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dgruber%2Fwfl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dgruber%2Fwfl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositor
ies/dgruber%2Fwfl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dgruber","download_url":"https://codeload.github.com/dgruber/wfl/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":230400615,"owners_count":18219831,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["containers","docker","go","golang","high-throughput","hpc","k8s","kubernetes","linux","macos","processes","singularity","workflow","workflow-engine"],"created_at":"2024-08-01T15:02:47.244Z","updated_at":"2025-08-20T06:32:40.805Z","avatar_url":"https://github.com/dgruber.png","language":"Go","readme":"# ☮ wfl - A Simple and Pluggable Workflow Language for Go ☮\n\n_Don't mix wfl with [WFL](https://en.wikipedia.org/wiki/Work_Flow_Language)._\n\n[![CircleCI](https://circleci.com/gh/dgruber/wfl/tree/master.svg?style=svg)](https://circleci.com/gh/dgruber/wfl/tree/master)\n[![codecov](https://codecov.io/gh/dgruber/wfl/branch/master/graph/badge.svg)](https://codecov.io/gh/dgruber/wfl)\n\n## What's new?\n\n- Remote context for executing job workflows remotely\n- GPT support for experimenting with LLMs in job workflows ([blog article](https://www.gridengine.eu/index.php/programming-apis/260-enhancing-wfl-with-large-language-models-researching-the-power-of-gpt-for-hpcaienterprisejob-workflows-2023-05-14), [examples](https://github.com/dgruber/wfl/blob/master/examples/llm_openai/main.go))\n- _Check out my [blog 
article](https://gridengine.eu/index.php/programming-apis/259-streamline-your-machine-learning-workflows-with-the-wfl-go-library-2023-04-10), where I discuss leveraging the wfl library for ML/AI applications using Python and TensorFlow, among other tools._\n\n## Introduction\n\nCreating process, container, pod, task, or job workflows based on raw interfaces of\noperating systems, Docker, Google Batch, Kubernetes, and HPC job schedulers like\n[Open Cluster Scheduler](https://github.com/hpc-gridware/clusterscheduler) can\nbe a tedious. Lots of repeating code is required. All workload management systems\nhave a different API.\n\n_wfl_ abstracts away from the underlying details of the processes, containers, and\nworkload management systems. _wfl_ provides a simple, unified interface which allows\nto quickly define and execute a job workflow and change between different execution\nbackends without changing the workflow itself.\n\n_wfl_ is simple to use and designed to define and\nrun jobs and self-contained job workflows with inter-job dependencies. There is no\nexternal controller runtime required. 
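The idea of switching execution backends without touching the workflow can be pictured as a small interface. The following is only an illustrative sketch with made-up types; it is not wfl's actual API:

```go
package main

import "fmt"

// Backend is an illustrative stand-in for what a wfl context provides;
// wfl's real abstraction is richer (DRMAA2 job templates, job state, etc.).
type Backend interface {
	Run(cmd string, args ...string) error
}

// fakeProcessBackend plays the role of a process context in this sketch.
type fakeProcessBackend struct{}

func (fakeProcessBackend) Run(cmd string, args ...string) error {
	fmt.Println("process backend runs:", cmd, args)
	return nil
}

// workflow only talks to the interface, so swapping the backend
// does not change the workflow itself.
func workflow(b Backend) error {
	return b.Run("convert", "image.jpg", "image.png")
}

func main() {
	if err := workflow(fakeProcessBackend{}); err != nil {
		panic(err)
	}
}
```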
The whole job workflow can be contained in a single binary.

In its simplest form a process can be started and waited for:

```go
    wfl.NewWorkflow(wfl.NewProcessContext()).Run("convert", "image.jpg", "image.png").Wait()
```

If the output of the command needs to be displayed on the terminal, you can set the output paths in the default _JobTemplate_ (see below) configuration:

```go
    template := drmaa2interface.JobTemplate{
        ErrorPath:  "/dev/stderr",
        OutputPath: "/dev/stdout",
    }

    flow := wfl.NewWorkflow(wfl.NewProcessContextByCfg(wfl.ProcessConfig{
        DefaultTemplate: template,
    }))

    flow.Run("echo", "hello").Wait()
```

Running a job as a Docker container requires a different context (and the image to be pulled beforehand):

```go
    import (
        "github.com/dgruber/drmaa2interface"
        "github.com/dgruber/wfl"
        "github.com/dgruber/wfl/pkg/context/docker"
    )

    ...

    ctx := docker.NewDockerContextByCfg(docker.Config{DefaultDockerImage: "busybox:latest"})
    wfl.NewWorkflow(ctx).Run("sleep", "60").Wait()
```

Starting a Docker container without a _run command_ which exposes ports requires more configuration, which can be provided by using a _JobTemplate_ together with the _RunT()_ method:

```go
    jt := drmaa2interface.JobTemplate{
        JobCategory: "swaggerapi/swagger-editor",
    }
    jt.ExtensionList = map[string]string{"exposedPorts": "80:8080/tcp"}

    wfl.NewJob(wfl.NewWorkflow(docker.NewDockerContext())).RunT(jt).Wait()
```

Starting a Kubernetes batch job and waiting for its end is not much different:

```go
    wfl.NewWorkflow(kubernetes.NewKubernetesContext()).Run("sleep", "60").Wait()
```

_wfl_ also supports submitting jobs into HPC schedulers like SLURM, Open Cluster Scheduler / Grid Engine, and so on:

```go
    wfl.NewWorkflow(libdrmaa.NewLibDRMAAContext()).Run("sleep", "60").Wait()
```

_wfl_ aims to work for any kind of workload. It works on a Mac and a Raspberry Pi the same way as on a high-performance compute cluster.

There is basic support for getting the job output back as a string with the _Output()_ method. It is a convenience wrapper which reads the job output from a file that must have been set beforehand with _OutputPath_. Note that when running multiple tasks they need different output paths (hence use _RunT()_, or different flows, try the "{{.ID}}" replacement in the _OutputPath_, or use _wfl.RandomFileNameInTempDir()_ as _OutputPath_). _Output()_ is currently implemented for the OS, Docker, and Kubernetes backends.

Some backend implementations (like Kubernetes) support basic file transfer in the _JobTemplate_ (when using _RunT()_) through the _StageInFiles_ and _StageOutFiles_ maps. At large scale you will miss checkpoint/restart functionality and HA of the workflow process itself. The idea here is not to require a complicated runtime environment for workflow applications, but rather to keep workflows small and repeatably executable from other workflows.

_wfl_ works with simple primitives: _context_, _workflow_, _job_, and _jobtemplate_.

First support for logging is also available. Log levels can be controlled by environment variables (_export WFL_LOGLEVEL=DEBUG_ or _INFO_/_WARNING_/_ERROR_/_NONE_). Applications can use the same logging facility by getting the logger from the workflow (_workflow.Logger()_) or by registering their own logger in a workflow (_workflow.SetLogger(Logger interface)_). The default is ERROR.

## Getting Started

The dependencies of _wfl_ (like drmaa2) are vendored in. The only external package which needs to be installed manually is _drmaa2interface_:

```bash
    go get github.com/dgruber/drmaa2interface
```

## Context

A context defines the execution backend for the workflow.
Contexts can easily be created with the _New_ functions which are defined in the _context.go_ file or in the separate packages found in _pkg/context_.

For creating a context which executes the jobs of a workflow as operating system processes, use:

```go
    wfl.NewProcessContext()
```

If the workflow needs to be executed in containers, the _DockerContext_ can be used:

```go
    docker.NewDockerContext()
```

If the Docker context needs to be configured with a default Docker image (used by Run(), or by RunT() without a configured _JobCategory_, which _is_ the Docker image), then _NewDockerContextByCfg()_ can be called:

```go
    docker.NewDockerContextByCfg(docker.Config{DefaultDockerImage: "busybox:latest"})
```

For running jobs either in VMs or in containers on Google Batch, a _GoogleBatchContext_ needs to be allocated:

```go
    googlebatch.NewGoogleBatchContextByCfg(
        googlebatch.Config{
            DefaultJobCategory: googlebatch.JobCategoryScript, // default container image Run() is using, or script if cmd runs as script
            GoogleProjectID:    "google-project",
            Region:             "europe-north1",
            DefaultTemplate: drmaa2interface.JobTemplate{
                MinSlots: 1, // for MPI set MinSlots = MaxSlots and > 1
                MaxSlots: 1, // for just a bunch of tasks: MinSlots = 1 (parallelism) and MaxSlots = <tasks>
            },
        },
    )
```

When you want to run the workflow as Cloud Foundry tasks, the _CloudFoundryContext_ can be used:

```go
    cloudfoundry.NewCloudFoundryContext()
```

Without a config it uses the following environment variables to access the Cloud Foundry cloud controller API:

* CF_API (like <https://api.run.pivotal.io>)
* CF_USER
* CF_PASSWORD

For submitting Kubernetes batch jobs a Kubernetes context exists:

```go
   ctx := kubernetes.NewKubernetesContext()
```

Note that each job requires a container image, which can be specified via the JobTemplate's _JobCategory_. When the same container image is used within the whole job workflow it makes sense to set it in the Kubernetes config; otherwise you can use _RunT()_ to specify a container image for a specific task.

```go
   ctx := kubernetes.NewKubernetesContextByCfg(kubernetes.Config{DefaultImage: "busybox:latest"})
```

For working with HPC schedulers the _libdrmaa_ context can be used. This context requires _libdrmaa.so_ to be available in the library path at runtime. Grid Engine ships _libdrmaa.so_, but _LD_LIBRARY_PATH_ typically needs to be set. For SLURM, _libdrmaa.so_ often needs to be [built](https://github.com/natefoo/slurm-drmaa).

Since cgo is used under the hood (drmaa2os, which uses [go drmaa](https://github.com/dgruber/drmaa)), some compiler flags need to be set at build time. Those flags depend on the workload manager used; check out the go drmaa project for the right flags.

Building for SLURM requires:

```bash
    export CGO_LDFLAGS="-L$SLURM_DRMAA_ROOT/lib"
    export CGO_CFLAGS="-DSLURM -I$SLURM_DRMAA_ROOT/include"
```

With everything set, a libdrmaa context can be created:

```go
   ctx := libdrmaa.NewLibDRMAAContext()
```

The JobCategory is whatever the workload manager associates with it; typically it is a set of submission parameters. A basic example is [here](https://github.com/dgruber/wfl/blob/master/examples/libdrmaa/libdrmaa.go).

The **Remote Context** is used for sending jobs to a _drmaa2os_ compatible remote job server backend. Such a remote server can easily be created with the drmaa2os remote jobtracker package or from the [OpenAPI specification](https://github.com/dgruber/drmaa2os/blob/master/pkg/jobtracker/remote/jobtracker_1_0_0_openapi_v3.yaml). It allows any existing drmaa2 jobtracker to be made accessible as a server. One example is executing Docker containers on a remote server.
Another is sending jobs from a container running in Kubernetes to a sidecar.

A simple server example is [here](https://github.com/dgruber/drmaa2os/blob/master/examples/remote/server/server.go). Another is [here](https://github.com/dgruber/wfl/blob/master/examples/remote/server/server.go).

```go
    import (
        genclient "github.com/dgruber/drmaa2os/pkg/jobtracker/remote/client/generated"
        ...
    )

    params := &client.ClientTrackerParams{
        Server: "https://localhost:8088",
        Path:   "/jobserver/jobmanagement",
        Opts: []genclient.ClientOption{
            genclient.WithHTTPClient(httpsClient),
            genclient.WithRequestEditorFn(basicAuthProvider.Intercept),
        },
    }

    ctx := wfl.NewRemoteContext(wfl.RemoteConfig{}, params)
```

## Workflow

A workflow encapsulates a set of jobs/tasks using the same backend (context). Depending on the execution backend it can be seen as a namespace.

It can be created with:

```go
    wf := wfl.NewWorkflow(ctx)
```

Errors during creation can be caught with

```go
    wf := wfl.NewWorkflow(ctx).OnError(func(e error) {panic(e)})
```

or with

```go
    if wf.HasError() {
        panic(wf.Error())
    }
```

## Job

Jobs are the main objects in _wfl_. A job defines helper methods for dealing with the workload. Many of these methods return the job object itself, so that calls can be chained easily. Errors are stored internally and can be fetched with dedicated methods. A job acts as a container and control unit for tasks. Tasks map in most cases to jobs of the underlying workload manager (as in Kubernetes, HPC schedulers, etc.) or to raw processes or containers.

The _Run()_ method submits a new task and returns immediately, i.e. without waiting for the job to be started or finished. If _Run()_ returns an error, the job submission has failed. The _Wait()_ method waits until the task has finished.
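This split between a non-blocking submit and a blocking wait mirrors Go's own os/exec API. For intuition, here is a stdlib-only sketch of the same idea, not using wfl and assuming a POSIX `sh` is available:

```go
package main

import (
	"fmt"
	"os/exec"
)

// submit starts the command and returns immediately,
// analogous to wfl's Run().
func submit(cmdline string) (*exec.Cmd, error) {
	cmd := exec.Command("sh", "-c", cmdline) // assumes a POSIX sh
	return cmd, cmd.Start()                  // Start() does not block
}

func main() {
	cmd, err := submit("sleep 1") // a submission error here is like Run() failing
	if err != nil {
		panic(err)
	}
	// Wait() blocks until the process has finished, analogous to wfl's Wait().
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
	fmt.Println("task finished")
}
```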
If multiple _Run()_ methods are called in a chain, multiple tasks might execute in parallel (depending on the backend). When the same task should be executed multiple times, the _RunArray()_ method may be convenient. With an HPC workload manager through the LibDRMAA implementation it gets translated into an array job, which is used for submitting and running tens of thousands of tasks in HPC clusters (as in bioinformatics or electronic design automation workloads). Each task gets a unique task number set as an environment variable, which can be used for accessing task-specific data sets.

The _RunMatrixT()_ method allows submitting and running multiple tasks based on a job template with placeholders. Those placeholders are replaced with defined values before the jobs are submitted. That makes it convenient to submit many tasks using different job templates (like executing a range of commands across a set of different container images for testing).

In some systems it is required to delete job-related resources after a job has finished and no more information needs to be queried about its execution. This functionality is implemented in the DRMAA2 _Reap()_ method, which can be executed via _ReapAll()_ for each task in the job object. Afterwards the job object should not be used anymore, as some information might no longer be available.
In a Kubernetes environment it removes the job objects and potentially related objects like configmaps.

Methods can be classified as blocking, non-blocking, job-template based, function based, and error handlers.

### Job Submission

| Function Name | Purpose | Blocking | Examples |
| -- | -- | -- | -- |
| Run() | Starts a process or container, or submits a task, and returns immediately | no | |
| RunT() | Like above but with a JobTemplate as parameter | no | |
| RunArray() | Submits a bulk job which runs many iterations of the same command | no | |
| Resubmit() | Submits a job _n_ times (Run().Run().Run()...) | no | |
| RunEvery() | Submits a task every d _time.Duration_ | yes | |
| RunEveryT() | Like _RunEvery()_ but with JobTemplate as param | yes | |
| RunMatrixT() | Replaces placeholders in the job template and submits the combinations | no | |

### Job Control

| Function Name | Purpose | Blocking | Examples |
| -- | -- | -- | -- |
| Suspend() | Stops a task from execution (e.g. sending SIGTSTP to the process group) | | |
| Resume() | Continues a task (e.g. sending SIGCONT) | | |
| Kill() | Stops a process (SIGKILL), container, task, or job immediately | | |

### Function Execution

| Function Name | Purpose | Blocking | Examples |
| -- | -- | -- | -- |
| Do() | Executes a Go function | yes | |
| Then() | Waits for the end of the process and executes a Go function | yes | |
| OnSuccess() | Executes a function if the task ran successfully (exit code 0) | yes | |
| OnFailure() | Executes a function if the task failed (exit code != 0) | yes | |
| OnError() | Executes a function if the task could not be created | yes | |
| ForEach(f, interface{}) | Executes a user-defined function by iterating over all tasks | does not wait for jobs | |
| ForAll(f, interface{}) | Executes a user-defined function concurrently in goroutines on all tasks | no | |

### Blocker

| Function Name | Purpose | Blocking | Examples |
| -- | -- | -- | -- |
| After() | Blocks for a specific amount of time and then continues | yes | |
| Wait() | Waits until the most recently submitted task has finished | yes | |
| Synchronize() | Waits until all submitted tasks have finished | yes | |
| Output() | Waits until the last submitted task is finished and returns the output as string | yes | Only for process, Docker, and K8s currently. |

### Job Flow Control

| Function Name | Purpose | Blocking | Examples |
| -- | -- | -- | -- |
| ThenRun() | Wait() (last task finished) followed by an async Run() | partially | |
| ThenRunT() | ThenRun() with template | partially | |
| OnSuccessRun() | Wait(), if Success() then Run() | partially | |
| OnSuccessRunT() | OnSuccessRun() but with template as param | partially | |
| OnFailureRun() | Wait(), if Failed() then Run() | partially | |
| OnFailureRunT() | OnFailureRun() but with template as param | partially | |
| Retry() | wait() + !success() + resubmit() + wait() + !success() | yes | |
| AnyFailed() | Checks if one of the tasks in the job failed | yes | |

### Job Status and General Checks

| Function Name | Purpose | Blocking | Examples |
| -- | -- | -- | -- |
| JobID() | Returns the ID of the submitted job | no | |
| JobInfo() | Returns the DRMAA2 JobInfo of the job | no | |
| Template() | | no | |
| State() | | no | |
| LastError() | | no | |
| Failed() | | no | |
| Success() | | no | |
| ExitStatus() | | no | |
| ReapAll() | Cleans up all job-related resources from the workload manager. Do not use the job object afterwards. Calls DRMAA2 Reap() on all tasks. | no | |
| ListAllFailed() | Waits for all tasks and returns the failed tasks as DRMAA2 jobs | yes | |
| ListAll() | Returns all tasks as a slice of DRMAA2 jobs | no | |

### LLM (GPT) Enhancements

For using the LLM methods the workflow needs to be initialized with an LLM config.
For this the _WithLLMOpenAI()_ method is used.

```go
    flow := wfl.NewWorkflow(wfl.NewProcessContext()).WithLLMOpenAI(
        wfl.OpenAIConfig{
            Token: os.Getenv("OPENAI_KEY"),
        }).OnErrorPanic()
```

The flow then offers the _TemplateP("what should the script do?")_ method, which can create job templates (_flow.TemplateP()_), and the following _Job_ methods can be used:

| Function Name | Purpose | Blocking | Examples |
| -- | -- | -- | -- |
| OutputP() | Returns the output of the job with the given prompt applied | yes | OutputP("Summarize in 2-3 sentences.") |
| ErrorP() | Takes the submission error message and applies a prompt | no | ErrorP("Explain the error and provide a solution") |

## JobTemplate

JobTemplates specify the details of a job. In the simplest case the job is specified by the application name and its arguments, as typically done in the OS shell. In that case the _Run()_ methods (_ThenRun()_, _OnSuccessRun()_, _OnFailureRun()_) can be used. Job-template based methods (like _RunT()_) can be avoided entirely by providing a default template when creating the context (_...ByCfg()_). Each _Run()_ then inherits the settings (like _JobCategory_ for the container image name and _OutputPath_ for redirecting output to _stdout_). If more details are required for specifying the jobs, the _RunT()_ methods need to be used. I'm currently using the [DRMAA2 Go JobTemplate](https://github.com/dgruber/drmaa2interface/blob/master/jobtemplate.go). In most cases only _RemoteCommand_, _Args_, _WorkingDirectory_, _JobCategory_, _JobEnvironment_, and _StageInFiles_ are evaluated.
Functionality and semantics are up to the underlying [drmaa2os job tracker](https://github.com/dgruber/drmaa2os/tree/master/pkg/jobtracker).

* [For the process mapping see here](https://github.com/dgruber/drmaa2os/tree/master/pkg/jobtracker/simpletracker)
* [For the Kubernetes batch job mapping here](https://github.com/dgruber/drmaa2os/blob/master/pkg/jobtracker/kubernetestracker)
* [For the Docker mapping here](https://github.com/dgruber/drmaa2os/tree/master/pkg/jobtracker/dockertracker)
* [For the mapping to a drmaa1 implementation (libdrmaa.so) for SLURM, Open Cluster Scheduler / Grid Engine, PBS, ...](https://github.com/dgruber/drmaa2os/blob/master/pkg/jobtracker/libdrmaa)
* [For the Cloud Foundry Task mapping here](https://github.com/dgruber/drmaa2os/blob/master/pkg/jobtracker/cftracker)

The [_Template_](https://github.com/dgruber/wfl/blob/master/template.go) object provides helper functions for job templates. For an example see [here](https://github.com/dgruber/wfl/tree/master/examples/template/template.go).

## Examples

For examples please have a look into the examples directory. [template](https://github.com/dgruber/wfl/tree/master/examples/template/template.go) is a canonical example of a pre-processing job, followed by parallel execution, followed by a post-processing job.

[test](https://github.com/dgruber/wfl/blob/master/test/test.go) is a use case for testing. It compiles all examples with the local Go compiler, then within a Docker container using the _golang:latest_ image, and reports errors.

[cloudfoundry](https://github.com/dgruber/wfl/blob/master/examples/cloudfoundry/cloudfoundry.go) demonstrates how a Cloud Foundry task can be created.

## Creating a Workflow which is Executed as OS Processes

The allocated context defines which workload management system / job execution backend is used:

```go
    ctx := wfl.NewProcessContext()
```

Different contexts can be used within a single program.
That way multi-clustering, potentially across different cloud solutions, is supported.

Using a context, a workflow can be established:

```go
    wfl.NewWorkflow(wfl.NewProcessContext())
```

Handling an error during workflow creation can be done by specifying a function which is only called in case of an error:

```go
    wfl.NewWorkflow(wfl.NewProcessContext()).OnError(func(e error) {
        panic(e)
    })
```

The workflow is used to instantiate the first job with the _Run()_ method:

```go
    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "123")
```

But you can also create an initial job like this:

```go
    job := wfl.NewJob(wfl.NewWorkflow(wfl.NewProcessContext()))
```

For more detailed settings (like resource limits) the DRMAA2 job template can be used as parameter for _RunT()_.

Jobs allow the execution of workload as well as expressing dependencies:

```go
    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "2").ThenRun("sleep", "1").Wait()
```

The line above executes two OS processes sequentially and waits until the last job in the chain has finished.

In the following example the two sleep processes are executed in parallel. _Wait()_ only waits for the sleep 1 job; hence sleep 2 may still be running after the wait call returns!

```go
    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "2").Run("sleep", "1").Wait()
```

Running two jobs in parallel and waiting until _all jobs_ have finished can be done with _Synchronize()_:

```go
    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "2").Run("sleep", "1").Synchronize()
```

Jobs can also be suspended (stopped) and resumed (continued), if supported by the execution backend (like OS, Docker):

```go
    wf.Run("sleep", "1").After(time.Millisecond * 100).Suspend().After(time.Millisecond * 100).Resume().Wait()
```

The exit status is available as well.
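For intuition, this is how an exit status is obtained from a raw OS process with the standard library alone; a stdlib sketch of the same idea, not wfl's implementation, assuming a POSIX `sh`:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitStatus runs a command line and returns its exit code,
// similar in spirit to what wfl's ExitStatus() reports for
// the OS process backend.
func exitStatus(cmdline string) int {
	err := exec.Command("sh", "-c", cmdline).Run() // assumes a POSIX sh
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0 // sketch only: treats any non-exit error as success
}

func main() {
	fmt.Println(exitStatus("exit 3")) // prints 3
	fmt.Println(exitStatus("true"))   // prints 0
}
```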
_ExitStatus()_ blocks until the previously submitted job has finished:

```go
    wfl.NewWorkflow(ctx).Run("echo", "hello").ExitStatus()
```

In order to run jobs depending on the exit status, the _OnFailure_ and _OnSuccess_ methods can be used:

```go
    wf.Run("false").OnFailureRun("true").OnSuccessRun("false")
```

For executing a function on a submission error, _OnError()_ can be used.

For running multiple jobs based on a similar job template (as in test workflows), _RunMatrixT()_ can be used. It expects a _JobTemplate_ with self-defined placeholders (which can be any string). These placeholders are replaced by the lists specified in the _Replacement_ structs. Every combination of the replacement lists is then evaluated, and new job templates are generated and submitted.

The following example submits and waits for 4 tasks:

* sleep 0.1
* echo 0.1
* sleep 0.2
* echo 0.2

If only one list of replacements is required, the second replacement can simply be left empty (_wfl.Replacement{}_).
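The combination semantics can be sketched in plain Go, independent of wfl. The helper names below are hypothetical; wfl performs this expansion internally on whole job templates:

```go
package main

import (
	"fmt"
	"strings"
)

// replacement mirrors the idea of wfl.Replacement: a placeholder
// pattern plus the list of values it should be replaced with.
type replacement struct {
	pattern string
	values  []string
}

// expand builds one command line per combination of replacement
// values, analogous to how RunMatrixT() generates job templates.
func expand(template string, reps ...replacement) []string {
	results := []string{template}
	for _, r := range reps {
		var next []string
		for _, res := range results {
			for _, v := range r.values {
				next = append(next, strings.ReplaceAll(res, r.pattern, v))
			}
		}
		results = next
	}
	return results
}

func main() {
	cmds := expand("{{cmd}} {{arg}}",
		replacement{pattern: "{{cmd}}", values: []string{"sleep", "echo"}},
		replacement{pattern: "{{arg}}", values: []string{"0.1", "0.2"}},
	)
	fmt.Println(cmds) // [sleep 0.1 sleep 0.2 echo 0.1 echo 0.2]
}
```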
For _JobTemplate_ fields holding numbers, the replacement strings are automatically converted to numbers.

```go
job := flow.NewJob().RunMatrixT(
    drmaa2interface.JobTemplate{
        RemoteCommand: "{{cmd}}",
        Args:          []string{"{{arg}}"},
    },
    wfl.Replacement{
        Fields:       []wfl.JobTemplateField{wfl.RemoteCommand},
        Pattern:      "{{cmd}}",
        Replacements: []string{"sleep", "echo"},
    },
    wfl.Replacement{
        Fields:       []wfl.JobTemplateField{wfl.Args},
        Pattern:      "{{arg}}",
        Replacements: []string{"0.1", "0.2"},
    },
)
job.Synchronize()
```

More methods can be found in the sources.

## Basic Workflow Patterns

### Sequence

The successor task runs after the completion of the predecessor task.

```go
    flow := wfl.NewWorkflow(ctx)
    flow.Run("echo", "first task").ThenRun("echo", "second task")
    ...
```

or

```go
    flow := wfl.NewWorkflow(ctx)
    job := flow.Run("echo", "first task")
    job.Wait()
    job.Run("echo", "second task")
    ...
```

### Parallel Split

After completion of a task, run multiple branches of tasks.

```go
    flow := wfl.NewWorkflow(ctx)
    flow.Run("echo", "first task").Wait()

    notifier := wfl.NewNotifier()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchA").
            Run("sleep", "1").
            ThenRun("sleep", "3").
            Synchronize().
            Notify(notifier)
    }()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchB").
            Run("sleep", "1").
            ThenRun("sleep", "3").
            Synchronize().
            Notify(notifier)
    }()

    notifier.ReceiveJob()
    notifier.ReceiveJob()

    ...
```

### Synchronization of Tasks

Wait until all tasks of a job which are running in parallel have finished.

```go
    flow := wfl.NewWorkflow(ctx)
    flow.Run("echo", "first task").
        Run("echo", "second task").
        Run("echo", "third task").
        Synchronize()
```

### Synchronization of Branches

Wait until all branches of a workflow have finished.

```go
    notifier := wfl.NewNotifier()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchA").
            Run("sleep", "1").
            Wait().
            Notify(notifier)
    }()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchB").
            Run("sleep", "1").
            Wait().
            Notify(notifier)
    }()

    notifier.ReceiveJob()
    notifier.ReceiveJob()

    ...
```

### Exclusive Choice

```go
    flow := wfl.NewWorkflow(ctx)
    job := flow.Run("echo", "first task")
    job.Wait()

    if job.Success() {
        // do something
    } else {
        // do something else
    }
    ...
```

### Fork Pattern

When a task has finished, _n_ tasks need to be started in parallel.

```go
    job := wfl.NewWorkflow(ctx).Run("echo", "first task").
        ThenRun("echo", "parallel task 1").
        Run("echo", "parallel task 2").
        Run("echo", "parallel task 3")
    ...
```

or

```go
    flow := wfl.NewWorkflow(ctx)

    job := flow.Run("echo", "first task")
    job.Wait()
    for i := 1; i <= 3; i++ {
        job.Run("echo", fmt.Sprintf("parallel task %d", i))
    }
    ...
```

For missing functionality or bugs please open an issue on GitHub. Contributions welcome!