{"id":19606295,"url":"https://github.com/zisismaras/pipeproc","last_synced_at":"2025-04-27T19:32:58.115Z","repository":{"id":46940117,"uuid":"156970990","full_name":"zisismaras/pipeproc","owner":"zisismaras","description":"Multi-process log processing for nodejs","archived":false,"fork":false,"pushed_at":"2023-01-06T01:37:58.000Z","size":1296,"stargazers_count":8,"open_issues_count":11,"forks_count":0,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-04-05T03:01:53.244Z","etag":null,"topics":["electron","kafka","log","multi-process","redis-stream","structured-commit-log","topic"],"latest_commit_sha":null,"homepage":null,"language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zisismaras.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-11-10T10:34:06.000Z","updated_at":"2020-05-05T13:17:33.000Z","dependencies_parsed_at":"2023-02-05T01:46:06.815Z","dependency_job_id":null,"html_url":"https://github.com/zisismaras/pipeproc","commit_stats":null,"previous_names":[],"tags_count":16,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zisismaras%2Fpipeproc","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zisismaras%2Fpipeproc/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zisismaras%2Fpipeproc/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zisismaras%2Fpipeproc/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zisismaras","download_url":"https://codeload.github.com/zisismaras/pipeproc/tar.gz/ref
s/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251196348,"owners_count":21550941,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["electron","kafka","log","multi-process","redis-stream","structured-commit-log","topic"],"created_at":"2024-11-11T10:04:34.928Z","updated_at":"2025-04-27T19:32:57.702Z","avatar_url":"https://github.com/zisismaras.png","language":"TypeScript","readme":"# PipeProc\n\n\u003e Multi-process log processing for nodejs\n\n\u003c!-- START doctoc generated TOC please keep comment here to allow auto update --\u003e\n\u003c!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --\u003e\n**Table of Contents**\n\n- [Intro](#intro)\n- [Example](#example)\n- [Installing](#installing)\n- [Status](#status)\n- [Process management](#process-management)\n  - [spawn](#spawn)\n  - [connect](#connect)\n  - [shutdown](#shutdown)\n- [Committing logs](#committing-logs)\n  - [commit examples](#commit-examples)\n- [Read API](#read-api)\n  - [range](#range)\n    - [range signature](#range-signature)\n    - [range examples](#range-examples)\n  - [revrange](#revrange)\n  - [length](#length)\n- [Procs](#procs)\n  - [offsets](#offsets)\n  - [ack](#ack)\n  - [ackCommit](#ackcommit)\n  - [reclaim](#reclaim)\n    - [reclaim settings](#reclaim-settings)\n  - [destroying procs](#destroying-procs)\n  - [inspecting procs](#inspecting-procs)\n  - [resuming/disabling procs](#resumingdisabling-procs)\n- [SystemProcs](#systemprocs)\n  - [systemProc example](#systemproc-example)\n  - [Inline processors](#inline-processors)\n- 
[LiveProcs](#liveprocs)\n  - [liveProc signature](#liveproc-signature)\n  - [liveProc example](#liveproc-example)\n- [Waiting for procs to complete](#waiting-for-procs-to-complete)\n- [GC](#gc)\n  - [Caveats/problems](#caveatsproblems)\n- [Typings](#typings)\n- [Tests](#tests)\n- [Meta](#meta)\n- [Contributing](#contributing)\n\n\u003c!-- END doctoc generated TOC please keep comment here to allow auto update --\u003e\n\n## Intro\n\nPipeProc is a data processing system that can be embedded in nodejs applications (eg. electron).  \nIt will be run in a separate process and can be used to off-load processing logic from the main “thread” in a structured manner.\n\nUnderneath it uses a [structured commit log](https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying) and a “topic” abstraction to categorize logs.\n\nInspired by [Apache Kafka](https://kafka.apache.org/) and [Redis streams](https://redis.io/topics/streams-intro).\n\nIn practice it is a totally different kind of system since it is meant to be run embedded in the main application as a single instance node.  \nAnother key difference is that it also handles the execution of the processing logic by itself and not only the stream pipelining.  
\nIt does this by using processors which are custom-written modules/functions that can be plugged into the system, consume topic streams, execute custom logic and push the results to another topic, thus creating a processing pipeline.\n\n## Example\n\n```javascript\nconst {PipeProc} = require(\"pipeproc\");\nconst pipeProcClient = PipeProc();\n\npipeProcClient.spawn().then(function() {\n    //commit a log to topic \"my_topic\"\n    //the topic is created if it does not exist\n    pipeProcClient.commit({\n        topic: \"my_topic\",\n        body: {greeting: \"hello\"}\n    }).then(function(id) {\n        console.log(id);\n        //1518951480106-0\n        //{timestamp}-{sequenceNumber}\n    });\n});\n```\n\n## Installing\n\n```bash\nnpm install --save pipeproc\n```\n\n## Status\n\n![Test Suite](https://github.com/zisismaras/pipeproc/workflows/Test%20Suite/badge.svg) \n[![npm version](https://badge.fury.io/js/pipeproc.svg)](https://badge.fury.io/js/pipeproc)\n\n## Process management\n\n### spawn\n\nSpawn the node and connect to it.  \nIf there is a need to spawn multiple nodes on the same host you can use the `namespace` option with a custom name.  \nIf a custom `namespace` is used, all clients that will `connect()` to it will need to provide it.  \nThe node can also use TCP connections instead by setting the host and the port in the `tcp` settings.  \nClients (local and remote) can then `connect()` to it by providing the same host and port.  \nA socket address can also be used instead of a namespace or tcp options, for example:\n\n- `ipc:///tmp/mysocket`\n- `tcp://127.0.0.1:9999`\n\nTLS is also available when using TCP. A cert, key, and ca should be provided for both server and client.  \nAny client that also needs to `connect()` should provide its client keys.  
\nIf the node is spawned with TLS, only secure connections will be allowed.\n\n```typescript\nspawn(\n    options?: {\n        //use a different ipc namespace\n        namespace?: string,\n        //use a tcp socket\n        tcp?: {\n            host: string,\n            port: number\n        },\n        //tls settings\n        tls?: {\n            server: {\n                key: string;\n                cert: string;\n                ca: string;\n            },\n            client: {\n                key: string;\n                cert: string;\n                ca: string;\n            }\n        },\n        //use a socket address directly\n        socket?: string,\n        //use an in-memory store instead of the disk adapter\n        memory?: boolean,\n        //set the location of the underlying store (if memory is false)\n        location?: string,\n        //the number of workers (processes) to use (check the systemProc section below), set to 0 for no workers, defaults to 1\n        workers?: number,\n        //the number of processors that can be run concurrently by each worker, defaults to 1\n        workerConcurrency?: number,\n        //restart any worker that reaches X systemProc executions, useful to mitigate memory leaks, defaults to 0 (no restarts)\n        workerRestartAfter?: number,\n        //tune the garbage collector settings (check the gc section below)\n        gc?: {minPruneTime?: number, interval?: number} | boolean\n    }\n): Promise\u003cstring\u003e;\n```\n\n### connect\n\nConnect to an already spawned node.  \n\nUse case: connect to the same PipeProc instance from a different process (eg. 
electron renderer) or to a remote instance (only when using TCP)\n\n```typescript\nconnect(\n    options?: {\n        //use a different ipc namespace\n        namespace?: string,\n        //connect to a tcp socket\n        tcp?: {\n            host: string,\n            port: number\n        },\n        //tls settings\n        tls?: {\n            key: string;\n            cert: string;\n            ca: string;\n        } | false,\n        //use a socket address directly\n        socket?: string,\n        //specify a connection timeout, defaults to 1000ms\n        timeout?: number\n    }\n): Promise\u003cstring\u003e;\n```\n\n### shutdown\n\nGracefully close the PipeProc instance.\n\n```typescript\nshutdown(): Promise\u003cstring\u003e;\n```\n\n## Committing logs\n\nThis is how you add logs to a topic.  \nThe topic will be created implicitly when its first log is committed.  \nMultiple logs can be committed in a batch, either in the same topic or to different topics; in that case\nthe write will be an atomic operation and either all logs will be successfully written or all will fail.\n\n### commit examples\n\nAdd a single log to a topic:\n\n```javascript\npipeProcClient.commit({\n    topic: \"my_topic\",\n    body: {greeting: \"hello\"}\n}).then(function(id) {\n    console.log(id);\n    //=\u003e 1518951480106-0\n});\n```\n\n`commit()` will return the id(s) of the log(s) committed.  \nIds follow a format of `{timestamp}-{sequenceNumber}` where timestamp is the time the log was committed in milliseconds\nand the sequence number is an auto-incrementing integer (starting from 0) indicating the log's position in its topic.  
\nThe log's body can be an arbitrarily nested JavaScript object.\n\nAdding multiple logs to the same topic:\n\n```javascript\npipeProcClient.commit([{\n    topic: \"my_topic\",\n    body: {\n        myData: \"some data\"\n    }\n}, {\n    topic: \"my_topic\",\n    body: {\n        myData: \"more data\"\n    }\n}]).then(function(ids) {\n    console.log(ids);\n    //=\u003e [\"1518951480106-0\", \"1518951480106-1\"]\n});\n```\n\nNotice the timestamps are the same since the two logs were inserted at the same time but the sequence number is different and auto-increments.  \n\nAdding multiple logs to different topics:\n\n```javascript\npipeProcClient.commit([{\n    topic: \"my_topic\",\n    body: {\n        myData: \"some data\"\n    }\n}, {\n    topic: \"another_topic\",\n    body: {\n        myData: \"some data for another topic\"\n    }\n}]).then(function(ids) {\n    console.log(ids);\n    //=\u003e [\"1518951480106-0\", \"1518951480106-0\"]\n});\n```\n\nAs before, the timestamps are the same (since they were committed at the same time) but the sequence numbers are both `0` since these two logs are the first logs committed in their respective topics.\n\n## Read API\n\n### range\n\nGet a slice of a topic.\n\n#### range signature\n\n```typescript\nrange(\n    topic: string,\n    options?: {\n        start?: string,\n        end?: string,\n        limit?: number,\n        exclusive?: boolean\n    }\n): Promise\u003c{id: string, body: object}[]\u003e;\n```\n\n#### range examples\n\n```javascript\npipeProcClient.range(\"my_topic\", {\n  start: \"1518951480106-0\",\n  end: \"1518951480107-10\"\n})\n\n//timestamps only\npipeProcClient.range(\"my_topic\", {\n  start: \"1518951480106\",\n  end: \"1518951480107\"\n})\n\n//from beginning to end\npipeProcClient.range(\"my_topic\")\n\n//from specific timestamp to the end\npipeProcClient.range(\"my_topic\", {\n  start: \"1518951480106\"\n})\n\n//with a limit\npipeProcClient.range(\"my_topic\", {\n  start: \"1518951480106\",\n  
limit: 5\n})\n\n//by sequence id\npipeProcClient.range(\"my_topic\", {\n  start: \":5\",\n  end: \":15\"\n}) //=\u003e [5..15]\n\n//by sequence id exclusive\npipeProcClient.range(\"my_topic\", {\n  start: \":5\",\n  end: \":15\",\n  exclusive: true\n}) //=\u003e [6..14]\n\n//returns a Promise that resolves to an array of logs\n[{\n  id: \"1518951480106-0\",\n  body: {\n    myData: \"hello\"\n  }\n}]\n```\n\n### revrange\n\nRanges through the topic in reverse order.  \n`start` and `end` should also be inverted (start \u003e= end).  \nThe API is the same as `range()`.  \neg. to get the latest log:\n\n```javascript\npipeProcClient.revrange(\"my_topic\", {\n  limit: 1\n})\n```\n\n### length\n\nGet the total number of logs in a topic:\n\n```javascript\npipeProcClient.length(\"my_topic\").then(function(length) {\n  console.log(length);\n});\n```\n\n## Procs\n\nProcs are the way to consistently process the logs of a topic.  \nLet's start with an example and explain as we go along.\n\n```javascript\n//let's add some logs\nawait pipeProcClient.commit([{\n    topic: \"numbers\",\n    body: {myNumber: 1}\n}, {\n    topic: \"numbers\",\n    body: {myNumber: 2}\n}]);\n\n//run a proc on the \"numbers\" topic\nconst log = await pipeProcClient.proc(\"numbers\", {\n  name: \"my_proc\",\n  offset: \"\u003e\"\n});\n//=\u003e log = {id: \"1518951480106-0\", body: {myNumber: 1}}\n\ntry {\n    //process the log\n    const incrementedNumber = log.body.myNumber + 1;\n    //ack the operation and commit the result to a different topic\n    await pipeProcClient.ackCommit(\"my_proc\", {\n        topic: \"incremented_numbers\",\n        body: {myIncrementedNumber: incrementedNumber}\n    });\n} catch (err) {\n    //something went wrong in our processing, the proc should be reclaimed\n    console.error(err);\n    pipeProcClient.reclaim(\"my_proc\");\n}\n```\n\nProcs are the way to consistently fetch logs from a topic, process them and commit the results in a safe and serial manner.  
\nSo, what's going on in the above example?  \n\n- first we add two logs to our \"numbers\" topic\n- then we create a proc named \"my_proc\" for the \"numbers\" topic with an offset of \"\u003e\" (meaning it starts fetching from the very beginning of the topic, see more below)\n- the proc returns a log (the first log we committed)\n- we do some processing (incrementing the number)\n- we then acknowledge the operation and commit our result to a different topic\n- we also catch errors in our processing and the ack; in that case the proc must be reclaimed\n\nIf everything goes well, the next time we call the proc it will fetch us our second log `1518951480106-1`.  \nIf something goes wrong and `reclaim()` is called the proc will be \"reset\" and will fetch the first log again.  \nUntil we call `ack()` (or `ackCommit()` in this case) to move on or `reclaim()` to reset, the proc will not fetch us any new logs.  \n\nHere is the whole proc signature:\n\n```typescript\nproc(\n    //the topic this proc is for\n    topic: string,\n    options: {\n        //the proc name\n        name: string,\n        //the proc offset (see below)\n        offset: string,\n        //how many logs to fetch\n        count?: number,\n        //reclaim settings, see below\n        maxReclaims?: number,\n        reclaimTimeout?: number,\n        onMaxReclaimsReached?: \"disable\" | \"continue\"\n    }\n): Promise\u003cnull | {id: string, body: object} | {id: string, body: object}[]\u003e;\n```\n\n### offsets\n\nOffsets are how you position the proc at a specific point in the topic.  \n\n- `\u003e` fetch the next log after the latest acked log for this proc. If no logs have been acked yet, it will start from the beginning of the topic.\n- `$\u003e` like `\u003e` but it will start from new logs and not from the beginning (logs created after the proc’s creation)\n- `{{specific_log/timestamp}}` - follows the `range()` syntax. 
It can be a full log name, a partial timestamp or a sequence id (`:{{id}}`). The next non-acked log AFTER the match will be returned.\n\n### ack\n\nAcking the proc is an explicit operation; after the log has been successfully processed, run `ack()` or `ackCommit()`.\n\n```typescript\nack(\n    procName: string\n): Promise\u003cstring\u003e;\n```\n\nReturns the logId of the log we just acked. If our proc fetched multiple logs (using count \u003e 1) all of the logs will be acknowledged as processed and the call will return a range (`1518951480106-0..1518951480106-1`) instead of an id.  \nThe next time the proc is executed it will fetch the next log after the above logId (or range).\n\n### ackCommit\n\n`ackCommit()` combines an `ack()` and a `commit()` in an atomic operation. If either of these fails, both will fail.\n\n### reclaim\n\nIf something goes wrong while we are processing our log(s) or a PipeProc error is raised when we ack/commit our result, we should call `reclaim`.  \nThis will reset the proc, allowing the operation to be retried.\n\n#### reclaim settings\n\nThe proc's signature includes some settings for reclaims, allowing us to control how reclaims work so we don't retry failed operations forever or get stuck.\n\n- maxReclaims - how many times we can call reclaim on a proc before the `onMaxReclaimsReached` strategy is triggered (defaults to 10, set to -1 for no limit)\n- reclaimTimeout - in order not to get stuck by bad processing (failing to call `ack()` or `reclaim()`), the proc will automatically be reclaimed by the system after a certain amount of time; this value sets that time.\n- onMaxReclaimsReached - what to do when maxReclaims is reached. By default it will \"disable\" the proc, which will raise an error if we try to use the proc. 
Can be set to \"continue\" so we can keep reclaiming forever.\n\n### destroying procs\n\nSince procs are persisted and are not meant to be used as an once-off operation (use a simple range() for that) they need to be explicitly destroyed.\n\n```javascript\npipeProcClient.destroyProc(\"my_proc\") // throws if it doesn't exist\n.then(function(status) {\n  console.log(status);\n});\n.catch(function(err) {\n  console.error(err);\n});\n```\n\nIf a destroyed proc is re-run it will be re-created anew without maintaining the previous state.\n\n### inspecting procs\n\nInspect the internal proc's state (last claimed/acked ranges etc). Useful for debugging.\n\n```javascript\npipeProcClient.inspectProc(\"my_proc\") // throws if it doesn't exist\n.then(function(proc) {\n  console.log(proc);\n});\n.catch(function(err) {\n  console.error(err);\n});\n```\n\n### resuming/disabling procs\n\nManually disable the proc or resume it (eg. after reaching `maxReclaims`)\n\n```javascript\npipeProcClient.disableProc(\"my_proc\") // throws if it doesn't exist\n//.resumeProc(\"my_proc\") - throws if already active or doesn't exist\n.then(function(proc) {\n  //proc = same as inspectProc output\n})\n.catch(function(err) {\n  console.error(err);\n});\n```\n\n## SystemProcs\n\nManually executing and managing a proc can be tiresome.  
\nSystemProcs take care of all creation/execution/management of procs while also distributing the load to multiple workers. Let's revisit the above proc example with incrementing numbers, but now using a `systemProc` and a processor module.\n\n### systemProc example\n\n```javascript\npipeProcClient.systemProc({\n  name: \"my_system_proc\",\n  offset: \"\u003e\",\n  from: \"numbers\",\n  processor: \"/path/to/myProcessor.js\",\n  to: \"incremented_numbers\"\n  //all the other standard Proc options\n});\n```\n\n```javascript\n//myProcessor.js\nmodule.exports = function(log, done) {\n  //log = {id: \"1518951480106-0\", body: {myNumber: 1}}\n  done(null, {myIncrementedNumber: log.body.myNumber + 1});\n};\n```\n\nProcessors can publish to multiple topics by setting the `to` field to an array of topics.  \nIf the `to` field is omitted, the processor will not publish any logs (eg. get a log, process it, write the result to a database).  \nInstead of using a `done` callback, you can also return a promise.  \nIf an error is returned in the done callback (or a rejected promise is returned) the proc will be reclaimed.\n\n### Inline processors\n\nProcessors can also be inlined:\n\n```javascript\npipeProcClient.systemProc({\n    name: \"number_writer\",\n    offset: \"\u003e\",\n    count: 1,\n    maxReclaims: -1,\n    reclaimTimeout: 5000,\n    from: \"numbers\",\n    to: \"incremented_numbers\",\n    processor: (log, done) =\u003e {\n        done(null, {n: log.body.myNumber + 1});\n    }\n});\n```\n\n## LiveProcs\n\nWith liveProcs you can react to topic changes without having to keep executing the underlying proc.  \nliveProcs run in the process in which they are called and are not distributed to the workers like systemProcs.  
\n\n### liveProc signature\n\n```typescript\nliveProc(\n    options: {\n        topic: string,\n        //\"all\" will point the proc to the beginning of the topic (\"\u003e\" offset)\n        //\"live\" will start fetching logs created after the liveProc's creation (\"$\u003e\" offset)\n        mode: \"live\" | \"all\",\n        //how many logs to fetch each time\n        count?: number\n    }\n): ILiveProc;\n```\n\n### liveProc example\n\n```javascript\npipeProcClient.liveProc({\n    topic: \"my_topic\",\n    mode: \"all\"\n}).changes(function(err, logs, next) {\n    if (err) {\n        //reclaim has to be called manually\n        return this.reclaim();\n    } else if (logs) {\n        //do something with the logs\n        //ack() is also manual\n        this.ack().then(function() {\n          next();\n        });\n    }\n});\n```\n\nInside the `changes` function you can either return a promise or use the `next` callback to keep listening for changes.  \n\nliveProc instances also have simpler versions of all of the proc's methods (that implicitly point to the underlying proc)\n\n```typescript\ninterface ILiveProc {\n    changes: (cb: ChangesCb) =\u003e ILiveProc;\n    inspect: () =\u003e Promise\u003cIProc\u003e;\n    destroy: () =\u003e Promise\u003cIProc\u003e;\n    disable: () =\u003e Promise\u003cIProc\u003e;\n    resume: () =\u003e Promise\u003cIProc\u003e;\n    reclaim: () =\u003e Promise\u003cstring\u003e;\n    ack: () =\u003e Promise\u003cstring\u003e;\n    ackCommit: (commitLog: ICommitLog) =\u003e void;\n    cancel: () =\u003e Promise\u003cvoid\u003e;\n}\n```\n\n## Waiting for procs to complete\n\nWhen you have multiple systemProcs and/or liveProcs running you sometimes need to know when all logs in their topics have been acked.  
\nFor example, when we need to shut down and exit the application:  \n\n```javascript\npipeProcClient.waitForProcs().then(function() {\n  pipeProcClient.shutdown().then(function() {\n    process.exit(0);\n  });\n});\n```\n\n`waitForProcs()` can take a proc name or an array of proc names and it will wait only for those to complete.  \nIf nothing is passed then it will wait for every active proc.\n\n## GC\n\nLogs are immutable and cannot be edited or deleted after creation, so a garbage collector is needed to make sure our topics don't grow too large.\n\nEvery time it runs it performs the following:\n\n- for topics that have no procs attached it will collect all logs that have passed the `minPruneTime`\n- for topics that have procs attached, it will collect all logs 2 positions behind the last claimed log range, but only if they have also passed the `minPruneTime`\n\nYou can configure the `minPruneTime` and gc `interval` when you `spawn` the PipeProc node.  \nBy default they are both set to 30000ms.  \n\n**By default the gc is disabled.** It can be enabled by passing `true` in the `spawn`'s gc options or an `object` with prune time and interval settings.\n\n### Caveats/problems\n\n- topic, proc and systemProc metadata are left behind even if the topic is empty and/or no longer used\n- the `length()` function will return an incorrect number if a part of the topic is collected\n- there seems to be a problem with the gc timers on OSX, causing the tests to sometimes fail\n\n## Typings\n\nSince PipeProc is written in TypeScript all public interfaces are properly typed and should be loaded automatically in your editor.\n\n## Tests\n\nYou can run the test suite with:\n\n```bash\nnpm install\nnpm run test\n```\n\n## Meta\n\nDistributed under the 3-Clause BSD License. See ``LICENSE`` for more information.\n\n## Contributing\n\n1. Fork it (\u003chttps://github.com/zisismaras/pipeproc/fork\u003e)\n2. Create your feature branch (`git checkout -b feature/fooBar`)\n3. 
Commit your changes (`git commit -am 'Add some fooBar'`)\n4. Push to the branch (`git push origin feature/fooBar`)\n5. Create a new Pull Request","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzisismaras%2Fpipeproc","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fzisismaras%2Fpipeproc","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzisismaras%2Fpipeproc/lists"}