# disBatch

Distributed processing of a batch of tasks.

[![Tests](https://github.com/flatironinstitute/disBatch/actions/workflows/tests.yaml/badge.svg)](https://github.com/flatironinstitute/disBatch/actions/workflows/tests.yaml)

## Quickstart

Install with pip:

    pip install disbatch

Create a file called `Tasks` with a list of commands you want to run.
These should be Bash commands as one would run on the command line:

    myprog arg0 &> myprog_0.log
    myprog arg1 &> myprog_1.log
    ...
    myprog argN &> myprog_N.log

This file can have as many tasks (lines) as you like. The `...` is just a stand-in and wouldn't literally be in the task file.

Then, to run 5 tasks at a time in parallel on your local machine, run:

    disBatch -s localhost:5 Tasks

`disBatch` will start the first five running concurrently. When one finishes, the next will be started until all are done.
Note that this effectively means that tasks (lines) may run in any order.

To distribute this work on a Slurm cluster instead of locally, run:

    sbatch -n 5 disBatch Tasks

You may need to provide additional arguments specific to your cluster to specify a partition, time limit, etc.

The same invocation as a Slurm batch script would look like:
```bash
#!/bin/bash
# file: job.sh
#SBATCH -n 5

disBatch Tasks
```

Submit as usual with `sbatch job.sh`. disBatch will inspect the environment and see that it is running under Slurm, and use
`srun` internally to launch persistent task slots.

## Overview

One common usage pattern for distributed computing involves processing a
long list of commands (aka *tasks*):

    myprog -a 0 -b 0 -c 0
    myprog -a 0 -b 0 -c 1
    ...
    myprog -a 9 -b 9 -c 9

One could run this by submitting 1,000 separate jobs to a cluster, but that may
present problems for the queuing system and can behave badly if the
system is configured to handle jobs in a simple first come, first served
fashion. For short tasks, the job launch overhead may dominate the runtime, too.

One could simplify this by using, e.g., Slurm job arrays, but each job in a job
array is an independent Slurm job, so this suffers from the same per-job overheads
as if you submitted 1000 independent jobs. Furthermore, if nodes are being allocated
exclusively (i.e.
the nodes that are allocated to your job are not shared by other jobs),
then the job array approach can hugely underutilize the compute resources unless each
task is using a full node's worth of resources.

And what if you don't have a cluster available, but do have a collection of networked computers? Or you just want to make use of multiple cores on your own computer?

In any event, when processing such a list of tasks, it is helpful to
acquire metadata about the execution of each task: where it ran, how
long it took, its exit code, etc.

disBatch has been designed to support this usage in a simple and
portable way, as well as to provide the sort of metadata that can be
helpful for debugging and reissuing failed tasks.

It can take as input a file, each of whose lines is a task in the form of a
Bash command. For example, the file could consist of the 1000 commands listed above. It launches the tasks one
after the other until all specified execution resources are in use. Then as one
executing task exits, the next task in the file is launched. This repeats until all
the lines in the file have been processed.

Each task is run in a new shell; i.e., all lines are independent of one another.

Here's a more complicated example, demonstrating controlling the execution environment and capturing the output of the tasks:

    ( cd /path/to/workdir ; source SetupEnv ; myprog -a 0 -b 0 -c 0 ) &> task_0_0_0.log
    ( cd /path/to/workdir ; source SetupEnv ; myprog -a 0 -b 0 -c 1 ) &> task_0_0_1.log
    ...
    ( cd /path/to/workdir ; source SetupEnv ; myprog -a 9 -b 9 -c 8 ) &> task_9_9_8.log
    ( cd /path/to/workdir ; source SetupEnv ; myprog -a 9 -b 9 -c 9 ) &> task_9_9_9.log

Each line uses standard Bash syntax. Let's break it down:

1. the `( ... ) &> task_0_0_0.log` captures all output (stdout and stderr) from any command in the parentheses and writes it to `task_0_0_0.log`;
2. `cd /path/to/workdir` changes the working directory;
3. `source SetupEnv` executes a script called `SetupEnv`, which could contain commands like `export PATH=...` or `module load ...` to set up the environment;
4. `myprog -a 0 -b 0 -c 0` is the command you want to run.

The semicolons between the last 3 statements are Bash syntax to run a series of commands on the same line.

You can simplify this kind of task file with the `#DISBATCH PREFIX` and `#DISBATCH SUFFIX` directives. See the [#DISBATCH directives](#disbatch-directives) section for full details, but here's how that could look:

    #DISBATCH PREFIX ( cd /path/to/workdir ; source SetupEnv ; myprog 
    #DISBATCH SUFFIX ) &> task_${DISBATCH_TASKID}.log
    -a 0 -b 0 -c 0
    -a 0 -b 0 -c 1
    ...
    -a 9 -b 9 -c 9

Note that for a simple environment setup, you don't need a `source SetupEnv`. You can just set an environment variable directly in the task line, as you can in Bash:

    export LD_LIBRARY_PATH=/d0/d1/d2:$LD_LIBRARY_PATH ; rest ; of ; command ; sequence

For more complex setups, command sequences, and input/output redirection requirements, you could place everything in a small shell script with appropriate arguments for the parts that vary from task to task, say `RunMyprog.sh`:

    #!/bin/bash

    id=$1
    shift
    cd /path/to/workdir
    module purge
    module load gcc openblas python3

    export LD_LIBRARY_PATH=/d0/d1/d2:$LD_LIBRARY_PATH
    myprog "$@" > results/${id}.out 2> logs/${id}.log

The task file would then contain:

    ./RunMyprog.sh 0_0_0 -a 0 -b 0 -c 0
    ./RunMyprog.sh 0_0_1 -a 0 -b 0 -c 1
    ...
    ./RunMyprog.sh 9_9_8 -a 9 -b 9 -c 8
    ./RunMyprog.sh 9_9_9 -a 9 -b 9 -c 9

See [#DISBATCH directives](#disbatch-directives) for more ways to simplify task lines.
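Task files of this shape are usually generated rather than typed by hand. A minimal Bash sketch (using the illustrative `RunMyprog.sh` wrapper from above) that writes out all 1000 lines:

```bash
# Write one task line per (a, b, c) combination to a file named Tasks.
# RunMyprog.sh is the illustrative wrapper script described above.
for a in {0..9}; do
  for b in {0..9}; do
    for c in {0..9}; do
      echo "./RunMyprog.sh ${a}_${b}_${c} -a $a -b $b -c $c"
    done
  done
done > Tasks
```

The resulting `Tasks` file can then be handed to disBatch as usual, e.g. `disBatch -s localhost:5 Tasks`.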
disBatch also sets some environment variables that can be used in your commands as arguments or to generate task-specific file names:

* `DISBATCH_JOBID`: A name disBatch creates that should be unique to the job
* `DISBATCH_NAMETASKS`: The basename of the task file
* `DISBATCH_REPEAT_INDEX`: See the repeat construct in [#DISBATCH directives](#disbatch-directives)
* `DISBATCH_STREAM_INDEX`: The 1-based line number of the line from the task file that generated the task
* `DISBATCH_TASKID`: 0-based sequential counter value that uniquely identifies each task

Appending `_ZP` to any of the last three will produce a 0-padded value (to six places). If these variables are used to create file names, 0-padding will result in file names that sort correctly.

Once you have created the task file, running disBatch is straightforward. For example, working with a cluster managed by Slurm,
all that needs to be done is to submit a job like the following:

    sbatch -n 20 -c 4 disBatch TaskFileName

This particular invocation will allocate sufficient resources to process
20 tasks at a time, each of which needs 4 cores.
disBatch will use environment variables initialized by Slurm to determine the execution resources to use for the run.
This invocation assumes an appropriately installed disBatch is in your `PATH`; see [installation](#installation) for details.

disBatch also allows the pool of execution resources to be increased or decreased during the course of a run:

    sbatch -n 10 -c 4 ./TaskFileName_dbUtil.sh

will add enough resources to run 10 more tasks concurrently. `TaskFileName_dbUtil.sh` is a utility script created by `disBatch` when the run starts (the actual name is a little more complex, see [startup](#user-content-startup)).

Various log files will be created as the run unfolds:

* `TaskFileName_*_status.txt`: status of every task (details below). `*` elides a unique identifier disBatch creates to distinguish one run from another.
* `TaskFileName_*_[context|driver|engine].log`:
  The disBatch driver log file contains details mostly of interest in case of a
  problem with disBatch itself. (The driver log file name can be changed with `--logfile`.) It can generally be ignored by end
  users (but keep it around in the event that something did go
  wrong&mdash;it will aid debugging). The `*_[context|engine].log` files contain similar information for the disBatch components that manage execution resources.
* `disBatch_*_kvsinfo.txt`: TCP address of the invoked KVS server, if any (for additional advanced status monitoring)

> [!TIP]
> The `*_status.txt` file is the most important disBatch output file and we recommend checking it after every run.

While disBatch is a Python 3 application, it can run tasks from any language environment&mdash;anything you can run from a shell can be run as a task.

### Status file

The status file's name is `TaskFileName_*_status.txt`. It contains tab-delimited lines of the form:

    314	315	-1	worker032	8016	0	10.0486528873	1458660919.78	1458660929.83	0	""	0	""	cd /path/to/workdir ; myprog -a 3 -b 1 -c 4 > task_3_1_4.log 2>&1

These fields are:

  1. Flags: The first field, blank in this case, may contain `E`, `O`, `R`, `B`, or `S` flags.
     Each program/task should be invoked in such a way that standard error
     and standard output end up in appropriate files. If that is not the case,
     `E` or `O` flags will be raised. `R` indicates that the task
     returned a non-zero exit code. `B` indicates a [barrier](#disbatch-directives). `S` indicates the task was skipped (this may happen during "resume" runs).
  1. Task ID: The `314` is the 0-based index of the task (starting from the beginning of the task file, incremented for each task, including repeats).
  1. Line number: The `315` is the 1-based line from the task file. Blank lines, comments, directives, and repeats may cause this to drift considerably from the value of Task ID.
  1. Repeat index: The `-1` is the repeat index (as in this example, `-1` indicates this task was not part of a repeat directive).
  1. Node: `worker032` identifies the node on which the task ran.
  1. PID: `8016` is the PID of the bash shell used to run the task.
  1. Exit code: `0` is the exit code returned.
  1. Elapsed time: `10.0486528873` (seconds).
  1. Start time: `1458660919.78` (Unix epoch based).
  1. Finish time: `1458660929.83` (Unix epoch based).
  1. Bytes of *leaked* output (not redirected to a file).
  1. Output snippet (up to 80 bytes consisting of the prefix and suffix of the output).
  1. Bytes of leaked error output.
  1. Error snippet.
  1. Command: `cd ...` is the text of the task (repeated from the task file, but subject to modification by [directives](#disbatch-directives)).

## Installation

**Users of Flatiron clusters: disBatch is available via the module system. You can run `module load disBatch` instead of installing it.**

There are several ways to get disBatch:

  1. installation with pip;
  1. direct invocation with pipx or uvx;
  1. cloning the repo.

Most users can install via pip. Direct invocation with uvx may be of particular interest for users on systems without a modern Python, as uvx will bootstrap Python for you.

### Installation with pip
You can use pip to install disbatch just like a normal Python package:

  1. from PyPI: `pip install disbatch`
  2. from GitHub: `pip install git+https://github.com/flatironinstitute/disBatch.git`

These should be run in a venv.
Installing with `pip install --user disbatch` may work instead, but as a general practice is discouraged.

After installation, disBatch will be available via the `disbatch` and `disBatch` executables on the `PATH` so long as the venv is activated. Likewise, disBatch can be run as a module with `python -m disbatch`.

<details>
<summary>Click here for a complete example using pip and venv</summary>

You'll need a modern Python to install disBatch this way. We recommend the uvx installation method below if you don't have one, as uv will bootstrap Python for you.

```
python -m venv venv
. venv/bin/activate
pip install disbatch
disbatch TaskFile
```
</details>

### Direct invocation with pipx or uvx

[pipx](https://pipx.pypa.io/stable/) and [uvx](https://docs.astral.sh/uv/guides/tools/) are two tools that will create an isolated venv, download and install disbatch into that venv, and run it, all in a single command:

  1. `pipx run disbatch TaskFile`
  1. `uvx disbatch TaskFile`

pipx already requires a somewhat modern Python, so for disbatch's purposes it just saves you the step of creating and activating a venv and installing disBatch.

uvx, on the other hand, will download a modern Python for you if you don't have one available locally. It requires [installing uv](https://docs.astral.sh/uv/getting-started/installation/), which is straightforward and portable.

Here's a complete example of running disbatch on a system without a modern Python:

```
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
uvx disbatch TaskFile
```

Afterwards, disbatch will always be available as `uvx disbatch`.

For Slurm users, note that the above will install disbatch into the user's default cache directory. If this directory is not visible to all nodes on the cluster, then disbatch jobs will fail.
One can specify a different cache directory with `uvx --cache-dir=...`, but the simplest fix is to do a `tool install`:

```
uv tool install disbatch
sbatch disbatch TaskFile
```

This places `disbatch` on the `PATH` in a persistent location; no need to use `uvx` anymore.

### Cloning the repo
Users or developers who want to work on the code should clone the repo, then do an editable install into a venv:

```
git clone https://github.com/flatironinstitute/disBatch.git
pip install -e ./disBatch
```

Setting `PYTHONPATH` may also work, but as a general practice is discouraged. If you don't have a modern Python available, [uv](https://docs.astral.sh/uv/getting-started/installation/) can bootstrap one for you.

## Execution Environments
disBatch is designed to support a variety of execution environments, from your own desktop, to a local collection of workstations, to large clusters managed by job schedulers.
It currently supports Slurm and can be executed from `sbatch`, but it is architected to make it simple to add support for other resource managers.

You can also run directly on one or more machines by setting an environment variable:

    DISBATCH_SSH_NODELIST=localhost:7,otherhost:3

or specifying an invocation argument:

    -s localhost:7,otherhost:3

This allows execution directly on your `localhost` and via ssh for remote hosts, without the need for a resource management system.
In this example, disBatch is told it can use seven CPUs on your local host and three on `otherhost`. Assuming the default mapping of one task to one CPU applies, seven tasks could be in progress at any given time on `localhost`, and three on `otherhost`. Note that `localhost` is an actual name you can use to refer to the machine on which you are currently working. `otherhost` is fictitious.
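To see how such a specification adds up, here's a small Bash sketch that tallies the slots in a `host:count` list (illustrative only; this is not disBatch's own parser):

```bash
# Tally the task slots described by a DISBATCH_SSH_NODELIST-style string.
nodelist="localhost:7,otherhost:3"
total=0
for entry in $(echo "$nodelist" | tr ',' ' '); do
  host=${entry%%:*}    # text before the colon
  slots=${entry##*:}   # text after the colon
  echo "$host: $slots slots"
  total=$((total + slots))
done
echo "total: $total slots"
```

With the list above, this reports ten slots in total: up to ten tasks in flight at once across the two hosts.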
Hosts used via ssh must be set up to allow ssh to work without a password and must share the working directory for the disBatch run.

disBatch refers to a collection of execution resources as a *context* and the resources proper as *engines*. So the Slurm example `sbatch -n 20 -c 4`, run on a cluster with 16-core nodes, might create one context with five engines (one each for five 16-core nodes, capable of running four concurrent 4-core tasks each), while the SSH example creates one context with two engines (capable of running seven and three concurrent tasks, respectively).

## Invocation
```
usage: disbatch [-h] [-e] [--force-resume] [--kvsserver [HOST:PORT]] [--logfile FILE]
                [--loglevel {CRITICAL,ERROR,WARNING,INFO,DEBUG}] [--mailFreq N] [--mailTo ADDR] [-p PATH]
                [-r STATUSFILE] [-R] [-S] [--status-header] [--use-address HOST:PORT] [-w] [-f]
                [--taskcommand COMMAND] [--taskserver [HOST:PORT]] [--version] [-C TASK_LIMIT] [-c N] [--fill]
                [--no-retire] [-l COMMAND] [--retire-cmd COMMAND] [-s HOST:CORECOUNT] [-t N]
                [taskfile]

Use batch resources to process a file of tasks, one task per line.

positional arguments:
  taskfile              File with tasks, one task per line ("-" for stdin)

options:
  -h, --help            show this help message and exit
  -e, --exit-code       When any task fails, exit with non-zero status (default: only if disBatch itself fails)
  --force-resume        With -r, proceed even if task commands/lines are different.
  --kvsserver [HOST:PORT]
                        Use a running KVS server.
  --logfile FILE        Log file.
  --loglevel {CRITICAL,ERROR,WARNING,INFO,DEBUG}
                        Logging level (default: INFO).
  --mailFreq N          Send email every N task completions (default: 1). "--mailTo" must be given.
  --mailTo ADDR         Mail address for task completion notification(s).
  -p PATH, --prefix PATH
                        Path for log, dbUtil, and status files (default: "."). If ends with non-directory component,
                        use as prefix for these files names (default: <Taskfile>_disBatch_<YYYYMMDDhhmmss>_<Random>).
  -r STATUSFILE, --resume-from STATUSFILE
                        Read the status file from a previous run and skip any completed tasks (may be specified
                        multiple times).
  -R, --retry           With -r, also retry any tasks which failed in previous runs (non-zero return).
  -S, --startup-only    Startup only the disBatch server (and KVS server if appropriate). Use "dbUtil..." script to
                        add execution contexts. Incompatible with "--ssh-node".
  --status-header       Add header line to status file.
  --use-address HOST:PORT
                        Specify hostname and port to use for this run.
  -w, --web             Enable web interface.
  -f, --fail-fast       Exit on first task failure. Running tasks will be interrupted and disBatch will exit with a
                        non-zero exit code.
  --taskcommand COMMAND
                        Tasks will come from the command specified via the KVS server (passed in the environment).
  --taskserver [HOST:PORT]
                        Tasks will come from the KVS server.
  --version             Print the version and exit
  -C TASK_LIMIT, --context-task-limit TASK_LIMIT
                        Shutdown after running COUNT tasks (0 => no limit).
  -c N, --cpusPerTask N
                        Number of cores used per task; may be fractional (default: 1).
  --fill                Try to use extra cores if allocated cores exceeds requested cores.
  --no-retire           Don't retire nodes from the batch system (e.g., if running as part of a larger job).
  -l COMMAND, --label COMMAND
                        Label for this context. Should be unique.
  --retire-cmd COMMAND  Shell command to run to retire a node (environment includes $NODE being retired, remaining
                        $ACTIVE node list, $RETIRED node list; default based on batch system). Incompatible with "--
                        ssh-node".
  -s HOST:CORECOUNT, --ssh-node HOST:CORECOUNT
                        Run tasks over SSH on the given nodes (can be specified multiple times for additional hosts;
                        equivalent to setting DISBATCH_SSH_NODELIST)
  -t N, --tasksPerNode N
                        Maximum concurrently executing tasks per node (up to cores/cpusPerTask).
```

The options for mail will only work if your computing environment permits processes to access mail via SMTP.

A value for `-c` < 1 effectively allows you to run more tasks concurrently than CPUs specified for the run.
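For instance, with 20 CPUs and `-c 0.5`, roughly floor(cores / cpusPerTask) = 40 tasks can run at once. A quick arithmetic sketch (an approximation, not disBatch's actual accounting):

```bash
# Approximate concurrent task slots as floor(cores / cpusPerTask).
# Sketch only; disBatch's real slot accounting may differ in detail.
cores=20
cpus_per_task=0.5
slots=$(awk -v c="$cores" -v t="$cpus_per_task" 'BEGIN { printf "%d\n", c / t }')
echo "$slots concurrent task slots"
```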
This is somewhat unusual, and generally not recommended, but could be appropriate in some cases.

The `--no-retire` and `--retire-cmd` flags allow you to control what disBatch does when a node is no longer needed to run jobs.
When running under Slurm, disBatch will by default run the command:

    scontrol update JobId="$SLURM_JOBID" NodeList="${DRIVER_NODE:+$DRIVER_NODE,}$ACTIVE"

which will tell Slurm to release any nodes no longer being used.
You can set this to run a different command, or nothing at all.
While running this command, the following environment variables will be set: `NODE` (the node that is no longer needed), `ACTIVE` (a comma-delimited list of nodes that are still active), `RETIRED` (a comma-delimited list of nodes that are no longer active, including `$NODE`), and possibly `DRIVER_NODE` (the node still running the main disBatch script, if it's not in `ACTIVE`).

`-S` enables startup-only mode. In this mode, `disBatch` starts up the task management system and then waits for execution resources to be added.
<span id='user-content-startup'>At startup</span>, `disBatch` always generates a script `<Prefix>_dbUtil.sh`, where `<Prefix>` refers to the `-p` option or default, see above. We'll call this simply `dbUtil.sh` here,
but remember to include `<Prefix>_` in actual use. You can add execution resources by doing one or more of the following, multiple times if desired:
1. Submit `dbUtil.sh` as a job, e.g.:

    `sbatch -n 40 dbUtil.sh`

2. Use ssh, e.g.:

    `./dbUtil.sh -s localhost:4,friendlyNeighbor:5`

Each of these creates an execution context, which contains one or more execution engines (if using, for example, 8-core nodes, then five engines for the first; two for the second).
An engine can run one or more tasks concurrently.
In the first example, each of the five engines will run up to eight tasks concurrently, while in the
second example, the engine on `localhost` will run up to four tasks concurrently and the engine on `friendlyNeighbor` will run up to five.
`./dbUtil.sh --mon` will start a simple ASCII-based monitor that tracks the overall state of the disBatch run, and the activity of the individual
contexts and engines. By cursoring over an engine, you can send a shutdown signal to the engine or its context. This signal is *soft*, triggering
a graceful shutdown that will occur only after currently assigned tasks are complete. Other execution resources are unaffected.

When a context is started, you can also supply the argument `--context-task-limit N`. This will shut down the context and all associated engines
after it has run `N` tasks.

Taken together, these mechanisms enable disBatch to run on a dynamic pool of execution resources, so you can "borrow" a colleague's workstation overnight, or
claim a large chunk of a currently idle partition but return some if demand picks up, or chain together a series of time-limited allocations to
accomplish a long run. When using this mode, keep in mind two caveats: (i) the time quantum is determined by your task duration. If any given task might
run for hours or days, then the utility of this is limited. You can still use standard means (kill, scancel) to terminate contexts and engines, but
you will likely have incomplete tasks to reckon with; (ii) the task management system must itself be run in a setting where a long-lived process is OK,
say in a `screen` or `tmux` session on the login node of a cluster, or on your personal workstation (assuming it has the appropriate connectivity to reach the other resources you plan to use).

`-r` uses the status file of a previous run to determine what tasks to run during this disBatch invocation.
Only those tasks that haven't yet run (or, with `-R`, those that haven't run or that returned a non-zero exit code) are run this time. By default, the numeric task identifier and the text of the command are used to determine if a current task is the same as one found in the status file. `--force-resume` restricts the comparison to just the numeric identifier.

`--use-address HOST:PORT` can be used if disBatch is not able to determine the correct hostname for the machine it is running on (or you need to override what was detected). This is often the case when running on a personal laptop without a "real" network configuration. In this case `--use-address=localhost:0` will generally be sufficient.

`--kvsserver`, `--taskcommand`, and `--taskserver` implement advanced functionality (placing disBatch in an existing shared key store context and allowing for a programmatic rather than textual task interface). Contact the authors for more details.

### Considerations for large runs

If you submit jobs with on the order of 10,000 or more tasks, you should
carefully consider how you want to organize the output (and error) files
produced by each of the tasks. It is generally a bad idea to have more
than a few thousand files in any one directory, so you will probably
want to introduce at least one extra level of directory hierarchy so
that the files can be divided into smaller groups. Intermediate
directory `13`, say, might hold all the files for tasks 13000 to
13999.

## #DISBATCH directives

### PREFIX and SUFFIX

In order to simplify task files, disBatch supports a couple of
directives to specify common task prefix strings and suffix strings. As noted above, it
also sets environment variables to identify various aspects of the
submission.
Here's an example:

    # Note there is a space at the end of the next line.
    #DISBATCH PREFIX ( cd /path/to/workdir ; source SetupEnv ; 
    #DISBATCH SUFFIX  ) &> ${DISBATCH_NAMETASKS}_${DISBATCH_JOBID}_${DISBATCH_TASKID_ZP}.log

These are textually prepended and appended, respectively, to the text of
each subsequent task line. If the suffix includes redirection and a task is a proper command sequence (a series of
commands joined by `;`), then the task should be wrapped in `( ... )`, as in this example, so that the standard error and standard output of the whole sequence
will be redirected to the log file. If this is not done, only the standard
error and standard output of the last component of the command sequence
will be captured. This is probably not what you want unless you have
redirected these outputs for the previous individual parts of the
command sequence.

Using these, the above commands could be replaced with:

    myprog -a 0 -b 0 -c 0
    myprog -a 0 -b 0 -c 1
    ...
    myprog -a 9 -b 9 -c 8
    myprog -a 9 -b 9 -c 9

Note: the log files will have a different naming scheme, but there will still be one per task.

Later occurrences of `#DISBATCH PREFIX` or `#DISBATCH SUFFIX` in a task
file simply replace previous ones. When these are used, the tasks
reported in the status file include the prefix and suffix in
force at the time the task was launched.

### BARRIER

If your tasks fall into groups where a later group should only begin
after all tasks of the previous group have completely finished, you can
use this directive:

    #DISBATCH BARRIER

When disBatch encounters this directive, it will not launch another task
until all tasks in progress have completed. The following form:

    #DISBATCH BARRIER CHECK

checks the exit status of the tasks done since the last barrier (or
start of the run).
If any task had a non-zero exit status, the run
will exit once this barrier is met.

### REPEAT

For those problems that are easily handled via a job-array-like approach:

    #DISBATCH REPEAT 5 myprog file${DISBATCH_REPEAT_INDEX}

will expand into five tasks, each with the environment variable
`DISBATCH_REPEAT_INDEX` set to one of 0, 1, 2, 3, or 4.

The starting index and step size can also be changed:

    #DISBATCH REPEAT 5 start 100 step 50 myprog file${DISBATCH_REPEAT_INDEX}

This will result in indices 100, 150, 200, 250, and 300. `start` defaults
to 0, and `step` to 1.

The command is actually optional; one might want to omit the command
if a prefix and/or suffix are in place. Returning to our earlier example, the task file
could be:

    #DISBATCH PREFIX a=$((DISBATCH_REPEAT_INDEX/100)) b=$(((DISBATCH_REPEAT_INDEX%100)/10)) c=$((DISBATCH_REPEAT_INDEX%10)) ; ( cd /path/to/workdir ; source SetupEnv ; myprog -a $a -b $b -c $c ) &> task_${a}_${b}_${c}.log
    #DISBATCH REPEAT 1000

This is not a model of clarity, but it does illustrate that the repeat construct can be relatively powerful.
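The index arithmetic of `start`/`step` is simply `start + k*step` for `k` in `0..N-1`. A one-line Bash check of the `REPEAT 5 start 100 step 50` example:

```bash
# Reproduce the indices generated by "#DISBATCH REPEAT 5 start 100 step 50".
start=100; step=50; n=5
indices=$(seq "$start" "$step" $((start + step * (n - 1))))
echo $indices
```

This prints `100 150 200 250 300`, matching the expansion described above.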
Many users may find it more convenient to use the tool of their choice to generate a text file with 1000 invocations explicitly written out.

### PERENGINE

    #DISBATCH PERENGINE START { command ; sequence ; } &> engine_start_${DISBATCH_ENGINE_RANK}.log
    #DISBATCH PERENGINE STOP { command ; sequence ; } &> engine_stop_${DISBATCH_ENGINE_RANK}.log

Use these to specify commands that should run at the time an engine joins a disBatch run or at the time the engine leaves the disBatch run, respectively.
You could, for example, use these to bulk copy some heavily referenced read-only data to the engine's local storage area before any tasks are run, and then delete that data when the engine shuts down.
You can use the environment variable `DISBATCH_ENGINE_RANK` to distinguish one engine from another; for example, it is used here to keep log files separate.

These directives must come before any other tasks.

## Embedded disBatch

You can start disBatch from within a Python script by instantiating a `DisBatcher` object.

See `exampleTaskFiles/dberTest.py` for an example.

The `DisBatcher` class (defined in `disbatch/disBatch.py`) illustrates how to interact with disBatch via KVS.
This approach could be used to enable similar functionality in other language settings.

## License

Copyright 2024 Simons Foundation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.