{"id":18318741,"url":"https://github.com/xdevplatform/serverless-flow-framework","last_synced_at":"2025-04-09T13:55:08.239Z","repository":{"id":80527682,"uuid":"391155632","full_name":"xdevplatform/serverless-flow-framework","owner":"xdevplatform","description":"Run and scale realtime data analysis flows on serverless infrastructure","archived":false,"fork":false,"pushed_at":"2021-09-09T00:18:54.000Z","size":138,"stargazers_count":3,"open_issues_count":0,"forks_count":1,"subscribers_count":14,"default_branch":"main","last_synced_at":"2025-02-15T07:51:42.938Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/xdevplatform.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2021-07-30T18:14:48.000Z","updated_at":"2023-10-24T19:42:24.000Z","dependencies_parsed_at":null,"dependency_job_id":"e5a99a57-a749-40de-9067-f65475408bb5","html_url":"https://github.com/xdevplatform/serverless-flow-framework","commit_stats":null,"previous_names":["xdevplatform/serverless-flow-framework"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xdevplatform%2Fserverless-flow-framework","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xdevplatform%2Fserverless-flow-framework/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xdevplatform%2Fserverless-flow-framework/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/r
epositories/xdevplatform%2Fserverless-flow-framework/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/xdevplatform","download_url":"https://codeload.github.com/xdevplatform/serverless-flow-framework/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248054217,"owners_count":21039951,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-05T18:11:25.650Z","updated_at":"2025-04-09T13:55:08.221Z","avatar_url":"https://github.com/xdevplatform.png","language":"TypeScript","readme":"# Serverless Flow Framework\n\n**The serverless data flow framework**\n\n# Introduction\n\nThe Serverless Flow Framework (SeFF) offers a simple way to process real time data\nby stringing together functions to generate, transform and store objects.\nAs the name suggests, the framework leverages modern serverless runtimes,\nqueues and databases to eliminate the complexity traditionally involved with\nprocessing data at scale.\n\n# Basic concepts\n\n*TBD*\n\n# Getting started\n\n## Prerequisites\n\n1. You will need to have [Node.js](https://nodejs.org/) installed.\n\n## Installation\n\nSeFF is freely available under the Apache 2.0 license.\n\n1. Clone the repository from GitHub:\n\n\u003e [github.com/twitterdev/serverless-flow-framework](https://github.com/twitterdev/serverless-flow-framework)\n\n1. 
Change into the project directory and build like so:\n\n```\n$ cd serverless-flow-framework\n$ npm install\n$ npm run build\n```\n\nIn order to deploy code to your AWS account, you will need to set up your\ncredentials and region in the [AWS credentials file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html)\nor in your environment variables. To set up your environment, add the\nfollowing code to your `~/.zshrc` or `~/.bashrc` file (choose the right\nversion for your shell):\n\n```\nexport AWS_ACCESS_KEY_ID='xxxxxxxxxxxxxxxxxxxx'\nexport AWS_SECRET_ACCESS_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\nexport AWS_REGION='us-east-1'\n```\n\nSubstitute the `x`s for your access key ID and secret access key downloaded\nfrom the AWS console (keep the `'` characters). Substitute the region for\nthe one you want to use.\n\n## Hello, world!\n\nThe [examples](/examples) folder includes a number of sample projects that\ndemonstrate the basic capabilities of the framework.\n\nAs per tradition, we will start with \"Hello, world!\". The\n[hello.js](/examples/hello.js) project deploys a single serverless function\nthat prints the famous greeting to the log.\n\nRun this command to deploy the project:\n\n```\n$ ./seff deploy examples/hello.js\n```\n\nThe project defines a single flow named `hello`. It is configured for\nmanual invocation and can be run using the following command:\n\n```\n$ ./seff run examples/hello.js hello\n```\n\nBy now you should have a serverless function by the name\n`seff-hello-hello-helloWorld`. Check out the logs for this function to\nview the greeting.\n\nFinally, remove the cloud resources you deployed with the command:\n\n```\n$ ./seff destroy examples/hello.js\n```\n\n## Your first data flow\n\nOur first real data flow will be the [addition.js](/examples/addition.js)\nproject. 
This project generates two random numbers, adds them together and\nprints the sum to the log.\n\nUse the following command to deploy the project:\n\n```\n$ ./seff deploy examples/addition.js\n```\n\nThe project defines a single flow named `add`. It is configured for manual\ninvocation and can be run using the following command:\n\n```\n$ ./seff run examples/addition.js add\n```\n\nThis will trigger an execution chain starting with the first function that\ngenerates two random numbers, followed by the second function that adds them\ntogether and ends with the third function which prints the sum to the log. Of\ncourse, in a real world example we wouldn't use separate functions for such\ngranular computations.\n\nThe printed numbers can be viewed in the log for the serverless function\n`seff-addition-add-printNumber`. If you are using AWS, check your\n[CloudWatch logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html)\nunder the stream for the Lambda function with this name.\n\nAt this point you may want to look at the file `addition.state.json` which\nholds the state of your project. **DO NOT DELETE THIS FILE**, as the\nframework will lose track of the resources it deployed to the cloud and\nwill not be able to update or destroy your project.\n\nThe state file and (almost) all resources deployed to the cloud can be\nremoved by running\n\n```\n$ ./seff destroy examples/addition.js\n```\n\nThe only resources not removed by this command are cloud logs, the\nexecution role created for serverless functions and object store buckets\n(objects in these buckets are, however, removed).\n\n## Using library functions\n\nThe [random.js](/examples/random.js) project shows a data flow that uses a\nlibrary function to generate random numbers. Library functions are very useful\nto shorten your code and speed up the building of new projects.\n\n\u003e **WHILE IN ALPHA** library functions are not publicly available yet. 
You\n\u003e will need to set up an S3 bucket for library functions and configure the\n\u003e environment variable `SEFF_FUNCTION_LIBRARY_BASEURL` to point to the\n\u003e bucket using a URL in the form `s3://\u003cbucket\u003e`. Run the following commands\n\u003e to deploy the library functions to your bucket:\n\u003e\n\u003e ```\n\u003e $ ./seff upload -j library/aws/dynamodbWrite\n\u003e $ ./seff upload -j library/aws/rdsInsert\n\u003e $ ./seff upload -j library/aws/s3Put\n\u003e $ ./seff upload -c -j library/ibm/watsonNluAnalyze\n\u003e $ ./seff upload -c -j library/std/countInvocations\n\u003e $ ./seff upload -c -j library/std/generateRandomNumber\n\u003e $ ./seff upload -c -j library/std/printEvent\n\u003e $ ./seff upload -c -j library/twitter/pollTweetsWithQuery\n\u003e ```\n\nDeploy the project using\n\n```\n$ ./seff deploy examples/random.js\n```\n\nand run using\n\n```\n$ ./seff run examples/random.js random\n```\n\nThis should print out a single number between 0 and 100 to the log. Do not\ndestroy the project just yet.\n\n## Updating your code\n\nMake a simple change to [random.js](/examples/random.js). For example, change\nthe multiplier in the second function `multiplyNumber` to a different number.\n\nDeploy again using the same command used for the initial deployment:\n\n```\n$ ./seff deploy examples/random.js\n```\n\nYou will notice that only the changes were redeployed. SeFF tracks the\nstate of your project in the cloud (see `random.state.json`). This\nallows the framework to detect changes you make to your code and only deploy\nthe differences to the cloud.\n\n*This makes deployments lightning fast and enables using the cloud with all\nits services and scale as your development environment.*\n\nWe wrap up by deleting our state and cloud resources:\n\n```\n$ ./seff destroy examples/random.js\n```\n\n## Persisting data\n\nSeFF makes it easy to persist data. 
The\n[dynamodb.js](/examples/dynamodb.js) project generates random message\nobjects, runs a simple transformation to add a timestamp to each message\nand stores the objects in a database table.\n\nSince databases vary in their features and APIs, this example, as well as\nthe library functions used to define tables and persist data, is specific\nto one database. This example is designed for\n[AWS DynamoDB](https://aws.amazon.com/dynamodb/).\n\nThis project showcases three important aspects of the framework:\n\n* **Resources** like a DynamoDB table, which are created automatically when\nthe project is deployed and deleted (along with the data!) when the project\nis destroyed.\n\n* **Arrays** in function input and output streams are automatically unpacked\nso that function code can focus on the transformation logic instead of dealing\nwith data management. In order to pass an actual array to a function, pass it\nas an attribute value in an object.\n\n* **Resource functions** are library functions referenced by resources\nto provide easier access to related functions. In this project we use\n`messages.write` to reference the `dynamodbWrite` library function and\nwrite data to our specific table.\n\nYou can run this project using the same `deploy`, `run` and `destroy`\ncommands we used above. 
Make sure to deploy the `dynamodbWrite` library\nfunction to S3 before attempting to deploy this project, as shown above.\n\n## More examples\n\n* **[api.js](/examples/api.js)** creates API endpoints.\n\n* **[numbers.js](/examples/numbers.js)** features a library function that\npersists state between invocations.\n\n* **[rds.js](/examples/rds.js)** loads data records into a SQL table.\n\n* **[s3.js](/examples/s3.js)** loads data into objects in an AWS S3 bucket.\n\n* **[state.js](/examples/state.js)** demonstrates how functions can persist\nstate between invocations.\n\n* **[twitter2dynamodb.js](/examples/twitter2dynamodb.js)** loads Tweets into\na DynamoDB table.\n\n# Contact us\n\n## Issues?\n\n*TBD*\n\n## Security Issues?\n\nPlease report sensitive security issues via Twitter's bug-bounty program (https://hackerone.com/twitter) rather than GitHub.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxdevplatform%2Fserverless-flow-framework","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fxdevplatform%2Fserverless-flow-framework","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxdevplatform%2Fserverless-flow-framework/lists"}