{"id":13694582,"url":"https://github.com/americanexpress/unify-flowret","last_synced_at":"2025-04-12T12:13:09.659Z","repository":{"id":41175316,"uuid":"296371675","full_name":"americanexpress/unify-flowret","owner":"americanexpress","description":"A lightweight Java based orchestration engine","archived":false,"fork":false,"pushed_at":"2024-07-05T17:51:50.000Z","size":216,"stargazers_count":112,"open_issues_count":0,"forks_count":26,"subscribers_count":9,"default_branch":"main","last_synced_at":"2025-04-12T12:13:00.358Z","etag":null,"topics":["bpm","java","workflow","workflow-automation","workflow-engine","workflow-management"],"latest_commit_sha":null,"homepage":"","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/americanexpress.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":"audit_log.png","citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-09-17T15:45:56.000Z","updated_at":"2025-03-30T22:30:00.000Z","dependencies_parsed_at":"2022-08-10T01:43:00.042Z","dependency_job_id":"d5108115-b45b-4f0c-b774-39fccc91ae17","html_url":"https://github.com/americanexpress/unify-flowret","commit_stats":null,"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/americanexpress%2Funify-flowret","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/americanexpress%2Funify-flowret/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/americanexpress%2Funify-flowret/releases","manifests_url":"https://repos.e
cosyste.ms/api/v1/hosts/GitHub/repositories/americanexpress%2Funify-flowret/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/americanexpress","download_url":"https://codeload.github.com/americanexpress/unify-flowret/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248565082,"owners_count":21125418,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bpm","java","workflow","workflow-automation","workflow-engine","workflow-management"],"created_at":"2024-08-02T17:01:35.159Z","updated_at":"2025-04-12T12:13:09.620Z","avatar_url":"https://github.com/americanexpress.png","language":"Java","readme":"![Simple Workflow](flowret.png)\n\n\n### Flowret – A lightweight Java based orchestration engine\n\n---\n\n#### What's in a name?\nFlowret is derived from the English word \"floret\", which is in turn derived from the French word \"florete\".\nThe word floret means a small / budding flower. We brought the component of \"flow\" into this word\nand named our orchestrator \"Flowret\". We felt that this resonated well with the nature of the\norchestrator we have built. 
It is lightweight, small, a pleasure to use and budding into something beautiful!\n\n---\n\n#### Getting the Flowret package\n\nFlowret is available as a jar file in Maven central with the following Maven coordinates:\n\n````pom\n\u003cgroupId\u003ecom.americanexpress.unify.flowret\u003c/groupId\u003e\n\u003cartifactId\u003eunify-flowret\u003c/artifactId\u003e\n\u003cversion\u003e1.7.3\u003c/version\u003e\n````\n\n---\n\n#### Prerequisites\nFlowret works with Java 8 and later.\n\nMake sure that the log4j configuration file is found in the class path.\n\n---\n\n#### Glossary\nTerm | Description\n---- | -----------\nProcess Definition | A defined sequence of steps (sequential / parallel) and routing information\nCase | An instance of a process definition. Note that the words case and process are used synonymously\nStep | An invokable action in a process definition that does a defined piece of work, usually executed on the application\nRoute | A conditional step that determines the path which a case takes\nSingular Route | A route from which only one of the outgoing paths can be chosen for case execution\nParallel Route | A route from which one or more of the outgoing paths can be chosen for case execution and the paths are known in advance\nParallel Route Dynamic | A route which has only one outgoing branch, the steps on which are executed in parallel as determined by the parallel dynamic route\nProcess variables | Case specific variables required for routing a case through the process flow\nSLA Milestone | An action to be taken at a specified time in the future\n\n---\n\n#### Design Goals\n* Extreme flexibility in stitching together journeys\n* Enable true technical parallel processing for faster performance and better SLAs\n* Simplify process definition. Keep things as simple as possible\n* Scale horizontally\n* Cloud deployable\n* Extreme reliability. Be able to withstand JVM crashes\n\n---\n\n#### Architecture\n* Core Java 8.0. 
Only depends on the custom JDocs library, which in turn depends on the\nJackson JSON parsing open source library.\nNo dependency on non-core heavyweight or commercial products\n* Packaged as a JAR under ~300 KB\n* Runs as an embedded orchestrator in the host application, making\nit horizontally scalable along with the application\n\n---\n\n#### Capability and Feature Summary\n\n##### Process Definition\n\n1. Very simple process definition comprising steps and routes in a JSON file\n1. Very simple SLA milestone definition (SLA definition in short) comprising a list of milestones in a JSON file\n1. Ability to define singular, static parallel or dynamic parallel routes\n1. Ability to hook up a route to an application rule to determine the path to take\n1. Case instance specific copy of process definition and SLA Milestone definition at case start.\nLocks both the process definition and SLA definition for a case for its lifetime,\nthereby eliminating errors due to definitions changing while a case is being processed\n\n##### Parallel Processing\n1. An extremely critical feature required to reduce end-to-end processing time for a\ncase\n1. True parallel processing available out of the box. As a side note, most of the mainstream\nBPM products do not offer true technical parallel processing.\nThey offer business parallel processing in which, while one can have multiple branches\nemanating from a route, they will still be executed one at a time.\n1. Except for synchronization of application-specific data structures, no additional work around enabling parallel\n   processing is required to be done on the application / consumer side. Parallel processing is completely managed by\n   Flowret as specified in the process definition file\n1. Configurable number of threads from a pool used for parallel processing or unbounded child threads - the choice is\n   yours\n\n##### State Management\n1. Implements \"resume from where it left off\" functionality\n1. 
Can support multiple data stores to persist state via a published API, including\nRDBMS / NoSQL / File System. File system is a quick and efficient way to test\non a laptop in stand-alone isolated mode\n1. Allows applications to specify “execute next on resume” or “execute self on resume”\n(only in the case of a non-error pend) conditions on case resume\n1. Configurable state persistence by specifying a special “Persist” step anywhere\nin the process definition or when the process pends / completes\n1. \"Write state information immediately\" mode wherein the orchestrator updates state information\nimmediately after it executes a step or a route. This makes it crash proof to the last but one executed\nstep / route\n\n##### Audit Logging\n1. Audit log of all execution paths and process variables written out after execution of each step / route\n2. Minimizes the possibility of having orphaned cases\n(cases where we do not know which step was executed last due to data loss in a crash)\n\n##### Ticket Management\n1. “Go to step” feature implemented via tickets\n1. A ticket can be raised by any step\n1. On a ticket being raised, control gets transferred to the step\nspecified in the ticket definition\n\n###### Ticket Example 1\nAssume an application processing workflow that contains 40 steps and each step can return an approved or declined\ndecision. In case of a decline by any step, control needs to get routed to the last step.\nIn the absence of this feature, the return from each step would need to be\nconditionally evaluated in a route. This would make the process definition\nmessy, tedious to build, cumbersome to use and error-prone to maintain.\nThis ticket feature eliminates the need to specify an evaluation condition\nafter each step, thereby keeping the process definition clean and focused on business flow.\n\n##### Process Variables\n1. Ability to define process variables with initial values as part of the process definition\n1. 
Ability to provide all process variables to steps and routes invoked as part of the process flow\n1. Ability for each step / route to modify existing / add new process variables to the case\n1. All process variables persisted in audit logs and state persistence\n\n###### Process variable example 1\nIn a parts ordering application,\nthere may be more than one part to order. Each part needs to be taken\nthrough a set of steps. A process variable “curr_part”\nmay be used to track which part instance is currently being processed\n\n###### Process variable example 2\nIn an ordering application, the customer might not specify the address to ship the order to. A\nprocess variable “is_address_available” could be used with an initial value of false.\nAfter the customer provides the address information, this variable could be set to true.\nThis variable would be used for routing e.g.\nif it is false, take the application to the step where we request address information\nfrom the customer, else skip that step\n\n##### Quality\n1. Can be run as a stand-alone Java application on a laptop\n1. Steps and routes can be easily mocked for testing various permutations and combinations of routing\n1. A journey can be tested independently using mocks of steps / routes. This helps decouple\nthe journey definition and testing from the actual implementation of the steps\nand routes\n\n##### Application Callback Events\n1. Ability to provide callback events to applications / consumers e.g.\non process start, on process pend etc.\n1. The application may use these callback events for housekeeping tasks\nand data synchronization during parallel processing. Example -\u003e on the process resume event,\nthe application may load data into memory and on process pend,\nmay write the data back to the database and release memory\n\n##### Crash proof\n1. Ability to write out the state of the process after execution of each step and route\n1. 
Provides the following variables which can be used to identify \"orphaned\" cases\n    1. \"pend_exec_path\" - the name of the execution path if the case is pended, else empty\n    1. \"is_complete\" - a flag indicating the status of the case\n    1. \"ts\" - the timestamp when the status was saved\n1. On resuming such an orphaned case, the correct state is automatically\nreconstructed and the process resumes from the state it recorded last\n1. An application batch process could scan the process info files to identify\ncases which have pend_exec_path as empty (meaning that the process is running),\nis_complete as false and a timestamp which is more than x units old\n(as per application needs) and could resume such cases in batch mode\n\n#### SLA and work management\n\n1. Maintains information of the work baskets into which cases are parked\n1. Provides a work manager which can be used to switch work baskets for a case and carry out actions on change of a work\n   basket\n1. Provides an SLA manager which can be used to define, enqueue and dequeue SLA milestones automatically based on the\n   SLA configuration defined\n\n---\n\n#### Getting started in 5 minutes\n\nPlease refer to the packages `com.americanexpress.unify.flowret.sample` and `/resources/flowret/sample` in the test\nfolder. All Java class files and resources described below can be found there. You can also run the sample by running\nthe file `FlowretSample.java`.\n\nIn case you would like to create your own project and copy the sample files there, please note the following:\n\n1. Make sure that the unify-flowret dependency is downloaded and on the class path\n2. Make sure that log4j2.json is found in the root of the resources path of your project. You can take a sample from the\n   location `/resources/log4j2.json` in the test folder\n\n##### Step 1: Create a process definition file\n\nPlease refer to the file `/resources/flowret/sample/order_part.json` under the test folder. 
The contents of this file\nare the same as the contents of the process flow as defined in the next section \"Defining a process\".\n\n##### Step 2: Create an object to persist Flowret data\n\nPlease refer to the file `com.americanexpress.unify.flowret.sample.SampleDao.java` in the test folder.\n\nThis object is used by Flowret to read from / write to the data store. The data store may be an RDBMS, a NoSQL store or,\nfor simple testing, even a file system. In our sample, this object reads and writes from / to a file system.\n\nThis is Flowret's way of saying - \"Give me an object which I can use to persist my state and audit data\".\n\nYou will note that this object expects to be provided a path to a valid directory on the file system. More on this later\nin step 7.\n\n##### Step 3: Create a sample step\n\nPlease refer to the file `com.americanexpress.unify.flowret.sample.SampleStep.java` in the test folder.\n\nThis is the object returned by a component factory and represents a step to be executed. This object implements\nthe `InvokableStep` interface.\n\nThis is Flowret's way of saying - \"Give me an object that represents a step I want to execute on which I can call\nthe `executeStep` method\".\n\nYou will note here that we have a single class file to represent all steps in the process definition. In reality,\napplications may have one class per step.\n\n##### Step 4: Create a sample route\n\nPlease refer to the file `com.americanexpress.unify.flowret.sample.SampleRoute.java` in the test folder.\n\nThis is the object returned by a component factory and represents a route to be executed. This object implements\nthe `InvokableRoute` interface.\n\nThis is Flowret's way of saying - \"Give me an object that represents a route I want to execute on which I can call\nthe `executeRoute` method\".\n\nAgain, you will note here that we have a single class file to represent all routes in the process definition. 
In\nreality, applications may have one class per route.\n\n##### Step 5: Create a sample component factory\n\nPlease refer to the file `com.americanexpress.unify.flowret.sample.SampleComponentFactory.java` in the test folder.\n\nThis is the object which is invoked by Flowret whenever it wants to execute a step or a route. Flowret tells this object\nwhat type of entity is being run e.g. a step or a route (via the process context variable). In return, it expects to be\nreturned an object of the right type i.e. an object which implements `InvokableStep` in case of a step (`SampleStep` in\nour example) and an object which implements `InvokableRoute` in case of a route\n(`SampleRoute` in our example).\n\nThis is Flowret's way of saying - \"Give me a factory which I can call to get a Step or a Route to execute\".\n\n##### Step 6: Create a sample event handler\n\nPlease refer to the file `com.americanexpress.unify.flowret.sample.SampleEventHandler.java` in the test folder.\n\nThis is the object to which Flowret delivers the case lifecycle events like ON_PROCESS_START etc. This object implements\nthe `EventHandler` interface.\n\nThis is Flowret's way of saying - \"Give me an object to which I can deliver case lifecycle events\".\n\n##### Step 7: Create a main program and run the sample process\n\n***When you run this program, the first thing it does is delete all files in the directory specified. Hence please take\ncare to not point it to a directory where you may have content.***\n\nPlease refer to the file `com.americanexpress.unify.flowret.sample.FlowretSample.java` in the test folder.\n\nPlease note the following:\n\n1. The directory path used for reading and writing Flowret files is `C:/Temp/flowret/`. You can change it to whatever\n   works for you. Note that this is a Windows path. If you are on Mac, please change this path accordingly\n2. The directory path should be a valid path and should already exist. 
Ideally, you want to make sure it is empty to\n   start with\n\n###### Initialize Flowret\n\n```java\nERRORS_FLOWRET.load();\nFlowret.init(10, 30000, \"-\");\nFlowret.instance().setWriteAuditLog(true);\nFlowret.instance().setWriteProcessInfoAfterEachStep(true);\n```\n\n###### Wire up objects and get runtime service\n\n```java\nSampleFlowretDao dao = new SampleFlowretDao(DIR_PATH);\nSampleComponentFactory factory = new SampleComponentFactory();\nSampleEventHandler handler = new SampleEventHandler();\nRts rts = Flowret.instance().getRunTimeService(dao, factory, handler, null);\n```\n\n###### Get process definition and start the case\n\nNote that we start a case with a case id of 1.\n\n```java\nString json = BaseUtils.getResourceAsString(FlowretSample.class, \"/flowret/sample/order_part.json\");\nrts.startCase(\"1\", json, null, null);\ntry {\n   while (true) {\n     logger.info(\"\\n\");\n     rts.resumeCase(\"1\");\n   }\n}\ncatch (UnifyException e) {\n  logger.error(\"Exception -\u003e \" + e.getMessage());\n}\n```\n\nNote that we have a while loop that tries to resume the case till it hits an exception. This is to resume the case\nmultiple times in case of more than one pend (one resume per pend) and take the case to completion.\n\nYou can see the logs in the console which will tell you the progress. You may safely ignore the last exception\nmessage shown below.\n\n```java\n[americanexpress.unify.flowret.sample.FlowretSample] ERROR:Exception -\u003e Cannot resume a case that has already completed.Case id -\u003e 1\n```\n\n##### Step 8: Experiment a bit\n\nTry suppressing the writing of `flowret_audit_log` files by specifying the below in `FlowretSample.java`:\n\n```java\nFlowret.instance().setWriteAuditLog(false);\n```\n\nReturn a pend from a step and see what happens. 
You can refer to below code (already available as commented\nin `SampleStep.java` file):\n\n```java\nif (compName.equals(\"get_part_info\")) {\n  return new StepResponse(UnitResponseType.OK_PEND, \"\", \"SOME_WORK_BASKET\");\n}\n```\n\nReturn an error pend randomly 50% of the time. The program will resume the case till there is no pend and take it to\ncompletion.\n\n```java\nint value = new Random().nextInt(2);\nif (value == 0) {\n  return new StepResponse(UnitResponseType.OK_PROCEED, \"\", \"\");\n}\nelse {\n  return new StepResponse(UnitResponseType.ERROR_PEND, \"\", \"ERROR_WORK_BASKET\");\n}\n```\n\n##### Step 9: Run unit test cases\n\nTry running the unit tests which are defined under the following folders in the test package:\n\n1. test_singular - single path of execution test cases\n2. test_parallel - parallel processing test cases\n3. test_parallel_dyn - dynamic parallel processing test cases\n4. test_parallel_in_dyn_parallel - parallel processing within dynamic parallel processing test cases\n\nPlease note that there are some test cases for which assertions are yet to be done. These test cases can however\nbe run and their output examined manually for understanding and correctness. You can identify such\ntest cases from the console output when you run all test cases.\n\nEach test case class has the following two variables which are switched off by default:\n\n```java\nprivate static boolean writeFiles = false;\nprivate static boolean writeToConsole = false;\n``` \n\nTurning on the ```writeFiles``` variable will write the process info and audit log files to the directory specified.\nNote that there is a subdirectory created for each test case class and inside that for each test case.\nIt would be your responsibility to clean up any contents in these directories should you want to do so.\n\nTurning on the ```writeToConsole``` will output the log messages to console. 
This is useful when you want to see the\nprogress of the test case execution.\n\n---\n\n#### Defining a process\n\nA process is defined in a JSON file. Consider a simple workflow to order a specific part:\n\n![Simple Workflow](simple_workflow.png)\n\nThe following would be the JSON for representing this flow:\n\n```json\n{\n  \"journey\": {\n    \"name\": \"order_part\",\n    \"tickets\": [\n      {\n        \"name\": \"cancel_order\",\n        \"step\": \"step_4\"\n      }\n    ],\n    \"process_variables\": [\n      {\n        \"name\": \"user\",\n        \"type\": \"string\",\n        \"value\": \"Jack\",\n        \"comment\": \"The name of the person who has raised this order\"\n      }\n    ],\n    \"flow\": [\n      {\n        \"name\": \"start\",\n        \"component\": \"start\",\n        \"next\": \"step_1\"\n      },\n      {\n        \"name\": \"step_1\",\n        \"type\": \"step\",\n        \"component\": \"get_part_info\",\n        \"user_data\": \"Any data can go here\",\n        \"next\": \"step_2\",\n        \"comment\": \"Get detailed information for the requested part\"\n      },\n      {\n        \"name\": \"step_2\",\n        \"component\": \"get_part_inventory\",\n        \"next\": \"route_1\"\n      },\n      {\n        \"name\": \"route_1\",\n        \"type\": \"s_route\",\n        \"component\": \"is_part_available\",\n        \"branches\": [\n          {\n            \"name\": \"Yes\",\n            \"next\": \"step_3\"\n          },\n          {\n            \"name\": \"No\",\n            \"next\": \"step_4\"\n          }\n        ]\n      },\n      {\n        \"name\": \"step_3\",\n        \"component\": \"ship_part\",\n        \"next\": \"end\"\n      },\n      {\n        \"name\": \"step_4\",\n        \"component\": \"cancel_order\",\n        \"next\": \"end\"\n      }\n    ]\n  }\n}\n```\n\n##### Notes:\n\n**The tickets block**\n\nThis block consists of an array, each element of which denotes a ticket.\nA ticket is defined using the 
following two fields:\n1. `name` - name of the ticket\n2. `step` - name of the step where the control needs to be routed to in case this ticket is raised\n\nIn the above example, it is possible for step_1 to raise a ticket (e.g. in case it sees that the order is a duplicate),\nin which case control will get transferred to `step_4` (cancel the order)\n\n**The process variables block**\n\nThis block consists of an array of process variables defined by the following fields:\n1. `name` - name of the process variable\n2. `type` - type of the variable. Can be one of the following values:\n    1. `string`\n    2. `boolean`\n    3. `long`\n    4. `integer`\n3. `value` - starting value. Always stored as a string but converted to the appropriate data type\nwhile reading or writing\n4. `comment` - any comment associated with the variable. This is an optional field\n\nNote that the process variables and their values defined in the process definition file constitute only the initial set.\nProcess variables can always be updated / added by steps / routes as we go along in the process.\n\nAll process variables are made available to steps and routes via the process context. The step or route implementations\nin the application domain can update the process variables i.e. add new variables, delete existing variables\nand update the values of existing variables.\n \n**The flow block**\n\nThis block consists of an array of units. The different kinds of units are:\n1. Step\n1. Route\n1. Parallel Join\n1. Persist\n1. Pause\n\nThe structure of the unit for each type is described below:\n\n**Step**\n1. `name` - name of the step - has to be unique across all units in the process definition\n2. `type` - type of the unit. Value is `step`. Optional. If not present, the unit is assumed to be of type `step`\n3. `component` - name of the component. 
This points to a specific component in the application domain.\nFlowret, while executing a step, will provide this component name (as part of the process context)\nto the given application process component factory. The application factory is free to use\nthe component name in whatever way it deems appropriate\n4. `user_data` - user data - this will be passed in the process context object when a step or route is invoked. This\ncould be used to specify any additional information to be passed to the application via the process context. Optional\n5. `comment` - any comment associated with this unit. This is an optional field\n6. `next` - this specifies the next unit to be executed\n\n**Route**\n1. `name` - name of the route - has to be unique across all units in the process definition\n2. `type` - type of the unit. Following are the possible values:\n    1. `s_route` - a singular route\n    1. `p_route` - a static parallel route\n    1. `p_route_dyn` - a dynamic parallel route\n3. `component` - name of the component. Same as for a step\n4. `user_data` - user data. Same as for a step. Optional\n5. `comment` - same as for a step. Optional\n6. `branches` - an array of elements, each denoting a branch, which is defined using the following fields:\n    1. `name` - name of the branch\n    2. `next` - next component to be executed for this branch\n\n**Parallel Join**\n1. `name` - name of the join - has to be unique across all units in the process definition\n2. `type` - type of the unit. Value is `p_join`\n\n**Persist**\n1. `name` - name of the persist step - has to be unique across all units in the process definition\n2. `type` - type of the unit. Value is `persist`\n\n**Pause**\n1. `name` - name of the pause step - has to be unique across all units in the process definition\n2. `type` - type of the unit. 
Value is `pause`\n\n---\n\n#### Creating a parallel processing flow\n\nParallel processing in a flow can be easily incorporated by using `p_route` / `p_route_dyn` and `p_join` constructs.\nBelow is a sample parallel processing flow:\n\n![Parallel process flow](parallel_flow.png)\n\nAnd the process definition json for this flow:\n\n```json\n{\n  \"journey\": {\n    \"name\": \"parallel_test\",\n    \"flow\": [\n      {\n        \"name\": \"start\",\n        \"component\": \"start\",\n        \"next\": \"route_1\"\n      },\n      {\n        \"name\": \"route_1\",\n        \"type\": \"p_route\",\n        \"component\": \"route_1\",\n        \"branches\": [\n          {\n            \"name\": \"1\",\n            \"next\": \"step_2\"\n          },\n          {\n            \"name\": \"2\",\n            \"next\": \"step_4\"\n          },\n          {\n            \"name\": \"3\",\n            \"next\": \"step_6\"\n          }\n        ]\n      },\n      {\n        \"name\": \"step_2\",\n        \"component\": \"step_2\",\n        \"next\": \"step_3\"\n      },\n      {\n        \"name\": \"step_3\",\n        \"component\": \"step_3\",\n        \"next\": \"step_3a\"\n      },\n      {\n        \"name\": \"step_3a\",\n        \"component\": \"step_3a\",\n        \"next\": \"join_1\"\n      },\n      {\n        \"name\": \"step_4\",\n        \"component\": \"step_4\",\n        \"next\": \"step_5\"\n      },\n      {\n        \"name\": \"step_5\",\n        \"component\": \"step_5\",\n        \"next\": \"join_1\"\n      },\n      {\n        \"name\": \"step_6\",\n        \"component\": \"step_6\",\n        \"next\": \"step_7\"\n      },\n      {\n        \"name\": \"step_7\",\n        \"component\": \"step_7\",\n        \"next\": \"join_1\"\n      },\n      {\n        \"name\": \"join_1\",\n        \"type\": \"p_join\",\n        \"next\": \"step_8\"\n      },\n      {\n        \"name\": \"step_8\",\n        \"component\": \"step_8\",\n        \"next\": \"end\"\n      }\n 
   ]\n  }\n}\n```\n\n**Constraints on parallel processing**\n\nWhile enabling parallel processing in flows, the following need to be observed:\n1. Flowret will execute each branch on a separate thread. Any synchronization issues on the application\nside (due to shared variables etc.) need to be handled by the application. This does\nnot apply to the application changing process variables as access to process variables is internally synchronized\nby Flowret\n2. All outgoing branching from a `p_route` must converge to a corresponding `p_join`\n3. In case a ticket is raised on a branch executing on a parallel processing thread, it must refer to a step\nwhich would have executed on the single processing thread of the process. This means that the\nstep should be outside of the outermost `p_route` / `p_join` construct \n\n---\n\n#### Initialize Flowret - needs to be done only once at startup\n\n```java\nFlowret.init(idleTimeout, typeIdSep);\nFlowret.init(idleTimeout, typeIdSep, errorWorkbasket);\nFlowret.init(maxThreads, idleTimeout, typeIdSep);\nFlowret.init(maxThreads, idleTimeout, typeIdSep, errorWorkbasket);\n```\n\n`int maxThreads`\n\nSpecifies the maximum number of threads in the executor service pool used for parallel processing.\n\nThis variable only comes into the picture when Flowret has to do parallel processing. For single-threaded process execution,\nthe caller thread is used to run the process.\n\nParallel processing can be set up in two ways. If the value of this variable is specified and is more than 0, then\nthis specifies the maximum number of threads which can be used in parallel processing across cases. This is important to\nunderstand - Flowret will internally create a thread pool with that many threads and each time it is required for a\nparallel path to be executed, a thread from this pool will be used. 
This is a fixed thread pool which allows clients to\nspecify an upper bound on the number of threads to be used for parallel processing.\n\nIn case the value passed for this variable is less than or equal to 0, Flowret will create threads on the fly with no\nupper bound. Note that, in this option, there may be a very small impact on performance as each time a parallel path is\nto be executed, a new thread will be created (\nas compared to the fixed thread pool where the threads are already created and ready to run). It is very important to note\nthat this option is not bounded. In other words, clients could run multiple parallel processing cases such that the pod\ngets overwhelmed with the high number of threads / processing. It is left up to the clients to take care of such\nscenarios and put some kind of safeguards in place.\n\n`int idleTimeout`\n\nSpecifies the idle time of a thread in the parallel processing thread pool after which it will be terminated to conserve\nsystem resources.\n\n`String typeIdSep`\n\nSpecifies the character to be used as the separator between the type and id fields in the name of the document to be\nwritten to the data store (via the dao object). Flowret uses the following document naming\nconvention: `\u003ctype\u003e\u003cseparator\u003e\u003cid\u003e`\n\n`String errorWorkBasket`\n\nSpecifies the name of the work basket to be used in case Flowret, after the step / route has been\nexecuted, encounters an error while processing the application event or encounters an internal error. This\nvalue will be written out to the process info file.\n\nAt this point, we can describe the various documents that Flowret writes to the data store as it executes a\ncase.\n\n1. Audit Log - a document that stores the state of the execution paths and process variables after execution of each\n   step / route\n   1. Type - `flowret_audit_log`\n   1. Separator - `-`\n   1. 
id - `\u003ccase_id\u003e_\u003csequence_number\u003e_\u003cstep_name\u003e`\n    1. Example - `flowret_audit_log-1_00001_step_13`\n1. Journey - the document that stores the process definition which a case needs to execute\n    1. Type - `flowret_journey`\n    1. Separator - `-`\n    1. id - `\u003ccase_id\u003e`\n    1. Example - `flowret_journey-1`\n1. Process Info - the document that stores the latest state of the process in terms of\nexecution paths and process variables\n    1. Type - `flowret_process_info`\n    1. Separator - `-`\n    1. id - `\u003ccase_id\u003e`\n    1. Example - `flowret_process_info-1`\n\n---\n\n#### Get an instance of Flowret\nFlowret is a singleton object.\n\n```java\nFlowret flowret = Flowret.instance();\n```\n\n---\n\n#### Get runtime service of Flowret\n```java\nRts rts = Flowret.getRunTimeService(dao, factory, handler, SlaQueueManager);\n```\n\nThe application is expected to provide the following objects\nto the run time service for Flowret to use:\n\n`FlowretDao dao` specifies an object that implements the `FlowretDao` interface as below.\nThis object will be used to persist the state of the process.\n\n```java\npublic interface FlowretDao {\n  public void write(String key, Document d);\n  public Document read(String key);\n  public long incrCounter(String key);\n}\n```\n\n`ProcessComponentFactory factory` specifies an object that implements the `ProcessComponentFactory` interface\nas below. This object is called upon to create an instance of the class\non which the execute method of a step or a route will be invoked. Flowret will pass the\nprocess context to this object.\n\n```java\npublic interface ProcessComponentFactory {\n  public Object getObject(ProcessContext pc);\n}\n```\nThe object returned by the above implementation must implement the interface `InvokableStep`\nin case a step is to be executed or `InvokableRoute` in case a route is to be executed. 
The passed process context
contains details of the step or route to be executed, including the name of the component
in the application domain which represents the step or route.

```java
public interface InvokableStep {
  public StepResponse executeStep();
}

public interface InvokableRoute {
  public RouteResponse executeRoute();
}
```

`EventHandler handler` specifies an object that implements the `EventHandler` interface as below. Methods on this
object will be invoked to inform the application of process life cycle events.

```java
public interface EventHandler {
  public void invoke(EventType event, ProcessContext pc);
}
```

`ISlaQueueManager slaQm` specifies an object that implements the `ISlaQueueManager` interface. This is
described later in the section on SLA milestone management.

---

#### Flowret Application Events
Flowret will inform the application of the following event types via the invoke method of the EventHandler object:

```java
public enum EventType {
  ON_PROCESS_START,
  ON_PROCESS_RESUME,
  ON_PROCESS_PEND,
  ON_PROCESS_COMPLETE,
  ON_TICKET_RAISED,
  ON_PERSIST
}
```

---

#### Flowret Process Context

Along with the event type, the process context will also be passed. The process context is provided as an instance
of class `ProcessContext`, which has the following fields, accessible via getter methods:

```java
  private String journeyName; // name of the journey / work flow being executed
  private String caseId; // the case id of the instance of the journey
  private String stepName; // the step name
  private String compName; // the component name
  private String userData; // the user data associated with the step / route as specified in the process definition json
  private UnitType compType; // the type of the unit i.e. step or route
  private String execPathName; // the name of the execution path
  private ProcessVariables processVariables; // the object containing the process variables
  private String pendWorkBasket; // the work basket into which the application has pended
  private ErrorTuple pendErrorTuple; // the details of error in case the application had pended with an error
  private String isPendAtSameStep; // tells us if the pend has taken place on the same step or the pend has
                                   // taken place after the application has moved forward
```

The process context variables will be set as below while invoking events. Other variables will be set to null:

```text
* ON_PROCESS_START
    * journeyName
    * caseId
    * processVariables
    * execPathName - this will be set to "." denoting the starting root execution path

* ON_PROCESS_PEND
    * journeyName
    * caseId
    * stepName - where process has pended
    * compName - the name of the component corresponding to the step
    * userData
    * compType
    * processVariables
    * execPathName - the name of the execution path on which the pend has occurred
    * pendWorkBasket
    * pendErrorTuple
    * isPendAtSameStep

* ON_PROCESS_RESUME
    * journeyName
    * caseId
    * processVariables
    * execPathName - this will be set to "." denoting the starting root execution path
    * stepName - where process had pended
    * compName - the name of the component corresponding to the step

* ON_PROCESS_COMPLETE
    * journeyName
    * caseId
    * stepName - which last executed
    * compName - the name of the component corresponding to the step
    * userData
    * compType
    * processVariables
    * execPathName - the name of the execution path which executed the last step

* ON_TICKET_RAISED
    * journeyName
    * caseId
    * stepName - where process has pended
    * compName - the name of the component
corresponding to the step
    * userData
    * compType
    * processVariables
    * execPathName - the name of the execution path on which the pend has occurred

* ON_PERSIST
    * journeyName
    * caseId
    * execPathName - the name of the execution path on which the pend has occurred
```

---

#### Note on Execution Paths

An execution path is the thread on which a step or a route in the process is executed. When a process is started,
it is always on the root thread denoted by `.` as the execution path. Whenever a parallel route is encountered,
a number of threads equal to the number of outgoing branches are started. Each thread represents an execution path and
is named by appending the route name and the branch name to the parent execution path.
Taking an example, if the name of the parallel route is `route_1` and the names of the three
outgoing branches are `1`, `2` and `3`, then the execution paths will be named as:

1. `.` - representing the root thread
1. `.route_1.1.` - execution path for the thread for the first branch
1. `.route_1.2.` - execution path for the thread for the second branch
1. `.route_1.3.` - execution path for the thread for the third branch

The character `.` is used to represent the starting root execution path. It is also used as the delimiter
for appending. For example, `.route_1.1.` comprises the following:
1. `.` representing the root thread appended by
1. `route_1` representing the name of the route from where the new execution path emanated appended by
1. `.` the delimiter appended by
1. `1` representing the name of the branch appended by
1. `.` the delimiter for more execution paths to come

If further parallel processing is encountered on the execution path `.route_1.1.`, say from a parallel
route named `route_2` with its first branch named `1`, the execution path for this branch
would be named `.route_1.1.route_2.1.`

Any level of nesting of parallel processing is supported using the above naming construct.

**CAUTION** -> *As a result of the above naming convention, it follows that the delimiter character `.` is not allowed
to be part of the route name or the branch name.*

---

#### Start a case

Invoke the below method on the run time service:

```java
rts.startCase(caseId, journeyName, journeyJson, processVariables, journeySlaJson);
```

`String caseId` specifies the id for the case to be created. This needs to be unique.
Flowret will throw an exception in case the case id already exists.

`String journeyName` specifies the name of the process for which the case is being created.

`String journeyJson` specifies the sequence of steps and routes that comprise the process.

`ProcessVariables processVariables` is a collection of variables that are
made available to steps and routes. The process variables can be updated by steps and routes.

`String journeySlaJson` specifies the SLA Milestones for the process. This is detailed in the section on SLA
Milestones later in this document. A null value can be passed in case no SLA milestones are to be used.

This method throws an exception if the case could not be started.

This method will internally start the process execution and return when either the process pends or when it finishes.

At the time of starting the case, Flowret also writes out a copy of the process definition associated
with the case. This locks the process definition to the case
for its lifetime.
This is used to ensure that in-flight applications are not impacted in case the
main process definition changes.

---

#### Execution of process flow

When a case is started or resumed, Flowret starts executing the workflow defined in the journey file.
For each step / route encountered, it invokes the `getObject` method on the `ProcessComponentFactory` object provided
by the application. Then it invokes either the `executeStep` or the `executeRoute` method on this object,
depending upon whether the unit being executed is a step or a route. The application responds back to Flowret
via the return objects of these two methods, i.e. `StepResponse` or `RouteResponse`. These objects allow the application
to:

1. Raise a ticket
1. Set the unit response type

The above are set when the application creates the object using the constructor.

**StepResponse**

The `StepResponse` class looks like this. Note that only a step is allowed to raise a ticket.

```java
public class StepResponse {
  private UnitResponseType unitResponseType = null;
  private String ticket = "";
  private String workBasket = "";
  private ErrorTuple errorTuple = new ErrorTuple();

  public StepResponse(UnitResponseType unitResponseType, String ticket, String workBasket) {
    // ...
  }

  public StepResponse(UnitResponseType unitResponseType, String ticket, String workBasket, ErrorTuple errorTuple) {
    // ...
  }

  public UnitResponseType getUnitResponseType() {
    // ...
  }

  public String getTicket() {
    // ...
  }

  public String getWorkBasket() {
    // ...
  }

  public ErrorTuple getErrorTuple() {
    // expected to be set by the application if it is pending with an error
  }

}
```

**RouteResponse**

The `RouteResponse` class looks like this.

```java
public class RouteResponse {
  private UnitResponseType unitResponseType = null;
  private List<String> branches = null;
  private String workBasket = "";
  private ErrorTuple errorTuple = new ErrorTuple();

  public RouteResponse(UnitResponseType unitResponseType, List<String> branches, String workBasket) {
    // ...
  }

  public RouteResponse(UnitResponseType unitResponseType, List<String> branches, String workBasket, ErrorTuple errorTuple) {
    // ...
  }

  public UnitResponseType getUnitResponseType() {
    // ...
  }

  public List<String> getBranches() {
    // ...
  }

  public String getWorkBasket() {
    // ...
  }

  public ErrorTuple getErrorTuple() {
    // expected to be set by the application if it is pending with an error
  }

}
```

Note that a route response can return a list of branches. The following
should be adhered to:

* In case of a route response from a singular route, i.e. `s_route`, only one branch should be returned in the list,
as a singular route can take one and only one of the many outgoing branches. In case the implementation
returns more than one branch in the list, the first one will be used.
* In case of a route response from a parallel route, the implementation can return one or more branches in the list.
For example, if there are three branches emanating from the parallel route in the process, the implementation
can return up to three branches. The remaining ones will be ignored by Flowret.
* In case of a route response from a dynamic parallel route, the implementation can return more than one
branch. There is no upper limit, although implementations are advised to keep this number reasonable so as
not to overwhelm the JVM.

Needless to say, the name of each branch returned should match the name specified in the process definition.

The application can set the unit response type to one of the following:

```java
public enum UnitResponseType {
  OK_PROCEED,
  OK_PEND,
  OK_PEND_EOR,
  ERROR_PEND
}
```

`OK_PROCEED` tells Flowret to move ahead in the process.
This means that all is OK and the process should move ahead.

`OK_PEND` tells Flowret to pend but execute the next step when it is resumed. This means that all is OK but we need to
pend the process, and when the process is resumed, the next unit should be called.

`OK_PEND_EOR` tells Flowret to pend but execute this step (the one that is causing the pend) again when the
process is resumed. EOR stands for execute on resume.

`ERROR_PEND` tells Flowret to pend as an error has occurred. In such a case, this step will always be executed
when the process is resumed.

**What happens if multiple branches in a parallel process pend?**

In the case of the parallel flow described previously, it is possible for more than one branch to pend.
Flowret, in such a case, will record all pends but will return the pend instance that occurred first
back to the application as part of the `ON_PROCESS_PEND` event.

When the condition for this pend has cleared and the process is resumed, Flowret will first check if there are
any other pends that have not yet been returned to the application. If there are, the next pend will be returned
immediately, without further execution on any branch, till the last pend is resolved.

In future versions, a feature may be provided for Flowret to return a list of pends to the application.

---

#### Resume a case

In case a process had been started earlier and had pended, the application can resume it with the following call:

```java
rts.resumeCase(caseId);
```

`String caseId` specifies the case id of the process.

---

#### Reopen a case

A case which has already been finalized can be reopened. In order to do this, there must be a ticket defined in the
process definition which tells Flowret where to begin the execution from when reopen is called.
This ticket needs to be passed
into the reopen method as below:

```java
rts.reopenCase(String caseId, String ticket, boolean pendBeforeResume, String pendWorkbasket);
rts.reopenCase(String caseId, String ticket, boolean pendBeforeResume, String pendWorkbasket, ProcessVariables pvs);
```

The user can also specify if the case is to be pended in a work basket after reopen. If this is set to true, the name of
the pend work basket needs to be specified. The user can also provide a list of process variables which will be added to /
updated in the already existing process variables.

#### Audit logging

Flowret logs information to the data store (as specified by the dao) after it executes each step / route. Each entry
is a JSON document with the name as described before.

Each audit log JSON file contains the following information:

* The name of the step and component which was executed
* The name of the exec path which pended
* The timestamp of the pend in milliseconds since epoch
* The completion status of the process
* The state of the process variables as they exist at that point of time
* In case of parallel processing, the state of each execution path
    * name of the execution path
    * status of the execution path - value can be `started` or `completed`
    * the name of the step and component executed by this execution path
    * the response received from the unit
* The value of the ticket, if raised

A sample file is shown below:

```json
{
  "process_info" : {
    "last_executed_step" : "step_2",
    "last_executed_comp_name" : "get_part_info",
    "pend_exec_path" : "",
    "ts" : 1584475244618,
    "is_complete" : false,
    "process_variables" : [ {
      "name" : "user",
      "value" : "Jack",
      "type" : "string"
    } ],
    "exec_paths" : [ {
      "name" : ".",
      "status" : "started",
      "step" : "step_2",
      "comp_name" :
\"get_part_info\",\n      \"unit_response_type\" : \"ok_proceed\",\n      \"pend_workbasket\" : \"some_wb\",\n      \"ticket\": \"\",\n      \"pend_error\" : {\n        \"code\" : \"\",\n        \"message\" : \"\",\n        \"details\" : \"\",\n        \"is_retyable\" : false\n      },\n      \"prev_pend_workbasket\" : \"some_other_wb\",\n      \"tbc_sla_workbasket\" : \"\"\n    } ],\n    \"ticket\" : \"\"\n  }\n}\n```\n\nIn addition to the audit logs being generated, Flowret also does logging to console.\n\nBelow are the audit log documents created (assuming that we are persisting to the file system):\n\n![Audit Log Documents](audit_log.png)\n\nBelow is an example of console logging for the above flow:\n\n```text\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, successfully created case\nReceived event -\u003e ON_PROCESS_START\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, executing step -\u003e start, component -\u003e start, execution path -\u003e .\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, executing parallel routing rule -\u003e route_1, execution path -\u003e .\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, executing step -\u003e step_2, component -\u003e step_2, execution path -\u003e .route_1.1.\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, executing step -\u003e step_4, component -\u003e step_4, execution path -\u003e .route_1.2.\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, executing step -\u003e step_6, component -\u003e step_6, execution path -\u003e .route_1.3.\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, executing step -\u003e step_3, component -\u003e step_3, execution path -\u003e .route_1.1.\n[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -\u003e 2, executing step -\u003e step_7, component -\u003e step_7, execution path -\u003e 
.route_1.3.
[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -> 2, executing step -> step_5, component -> step_5, execution path -> .route_1.2.
[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -> 2, executing step -> step_3a, component -> step_3a, execution path -> .route_1.1.
[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -> 2, handling join for execution path -> .route_1.3.
[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -> 2, handling join for execution path -> .route_1.2.
[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -> 2, handling join for execution path -> .route_1.1.
[com.americanexpress.unify.flowret.ExecThreadTask] - Case id -> 2, executing step -> step_8, component -> step_8, execution path -> .
Received event -> ON_PROCESS_COMPLETE
```

Audit logs maintain a history of the steps and routes which have been called as part of running the process. However,
it is up to the application to specify whether audit logs are to be generated. If no audit logs are required, their
generation can be turned off by calling the following method on the Flowret singleton. By default, audit logging
is turned on:

```java
Flowret.instance().setWriteAuditLog(false);
```

Flowret also writes out the process info file. The format and contents of this file are the same as those
of the audit log file. By default, this file is written after each step / route is executed. However, it is possible to
write this file only when the process completes or pends. This can be achieved by calling the following method:

```java
Flowret.instance().setWriteProcessInfoAfterEachStep(false);
```

Be warned, though, that if this option is used and a JVM crash occurs, the process will be resumed
from the last recorded position, which may be many steps back.
In such a situation, the steps will be executed again. Hence
the application may want to explore the possibility of using idempotent services when using this option.

---

#### Dynamic Parallel Flow

A dynamic parallel flow is one where
1. We do not know the number of outgoing branches in advance
2. The same set of units is to be executed in parallel

The earlier example of a parallel flow is an example of a static parallel flow. There, we know beforehand, i.e. while
creating the process definition, that at most three paths could be executed
in parallel, and as such they can be represented in the flow diagram.

To better understand dynamic parallel flows, let's consider a hypothetical case of processing
an order for multiple parts. In our example, consider that an order may be placed for multiple parts in one go.
But we cannot know in advance how many parts will be included in an order. Therefore we cannot, at the time of defining the
process, statically define the number of outgoing branches from a parallel route.
This is where we use dynamic parallel flows.

In order to define a dynamic parallel flow, we use a dynamic parallel route which has only
one outgoing branch. All steps on this outgoing branch till the corresponding join
will be executed in parallel. The number of parallel threads created will be determined by the dynamic
parallel route. In other words,
the dynamic parallel route will, at run time, provide the number of branches and the name of each branch to be executed.

Continuing our example of processing multiple parts, if the dynamic parallel route
determines that there are 3 parts, it will specify three branches
to be executed. Flowret will then execute the steps up to the join on a separate thread for each branch.

An example of how this process would be modeled is described below. The components in
the dashed box are part of dynamic parallel processing. The singular rule determines
if parts are present.
The dynamic parallel route \"Process Parts\" returns\ninformation of the number and names of branches i.e. the number of parts.\nFlowret then executes the steps 1, 2 and 3 in parallel. Of course, the routes will need\nto set certain process variables to allow the correct routing to happen. An example of this is included\nin the test folder of Flowret\n\n![Dynamic Parallel Flow](dyn_parallel_flow.png)\n\nBelow is the process definition json. Note the use of the \"p_route_dynamic\" route.\n\n```json\n{\n  \"journey\": {\n    \"name\": \"parallel_parts\",\n    \"flow\": [\n      {\n        \"name\": \"start\",\n        \"component\": \"start\",\n        \"next\": \"route_0\"\n      },\n      {\n        \"name\": \"route_0\",\n        \"type\": \"s_route\",\n        \"component\": \"route_0\",\n        \"branches\": [\n          {\n            \"name\": \"yes\",\n            \"next\": \"route_1\"\n          },\n          {\n            \"name\": \"no\",\n            \"next\": \"end\"\n          }\n        ]\n      },\n      {\n        \"name\": \"route_1\",\n        \"type\": \"p_route_dynamic\",\n        \"component\": \"route_1\",\n        \"next\": \"step_1\"\n      },\n      {\n        \"name\": \"step_1\",\n        \"component\": \"step_1\",\n        \"next\": \"step_2\"\n      },\n      {\n        \"name\": \"step_2\",\n        \"component\": \"step_2\",\n        \"next\": \"step_3\"\n      },\n      {\n        \"name\": \"step_3\",\n        \"component\": \"step_3\",\n        \"next\": \"join_1\"\n      },\n      {\n        \"name\": \"join_1\",\n        \"type\": \"p_join\",\n        \"next\": \"route_0\"\n      }\n    ]\n  }\n}\n```\n\n#### SLA Management Framework\n\nAn SLA management framework is one that is used to manage all aspects of SLA milestones.\nAn SLA milestone is an event which is to be executed sometime in the future. 
Given below are a couple \nof examples:\n\nExample 1\n\nAssume that we need to setup an SLA milestone to be executed 5 days from case start,\nwhich when reached would trigger application cancellation. This SLA milestone would be setup at case\nstart. In case the case is still active when the execution time for this milestone is reached, \ni.e. 5 days from start, the application would be cancelled. In case the case has completed before\n5 days, this milestone would have been cleared at case complete and hence will become infructous.\n\nExample 2\n\nAssume we have a requirement to send a letter to a customer when the application\npends into a document hold work basket. When the application enters this work basket, \nan SLA milestone needs to be setup and the application would send out the letter.\nOf course it is possible to specify deferring the sending of the letter e.g. 3 hours\nafter the applications pends into the document hold work basket. In this case,\nif the application is still pended into this work basket after 3 hours, a letter\nwill be sent out else this milestone would get cleared when the application moved\nout of the work basket.\n\nFlowret includes an SLA Framework to provide for a seamless overall BPM experience.\nThe framework provides the following features. 
Each of these features is described subsequently\n\n* SLA milestone definition\n* SLA queue management\n\n**SLA milestone definition**\n\nAn SLA milestone is a definition of an action to be systematically performed sometime in the future.\nSLA milestones can be defined at the following levels:\n\n* On case start\n* On case reopened\n* On work basket entry\n* On work basket exit\n\nSLA milestones are defined in a JSON configuration file which is specific to a journey.\nSimilar to the journey being locked to an instance of a case when started,\nthe SLA configuration is also locked at the time the case is started.\nThis allows for handling of in flight applications in the light of changes in SLA milestone definitions.\n\nAn SLA milestone is defined using the following fields:\n\n1. Name\n1. Setup on - case start or work basket entry or work basket exit\n1. Type - milestone exists at case level or at work basket level\n1. Workbasket name - specifies the work basket name and is only applicable if setup on is work basket entry or exit \n1. Applied at - duration of time after which the first milestone is to fire\n1. Clock starts at - immediately or deferred to the start of the next day - used to compute the time when the milestone should fire\n1. User action - application defined string. Flowret does not interpret this\nbut only passes it to the application\n1. User data - application defined string. 
Flowret does not interpret this but
only passes it to the application

Below is an example of an SLA milestone definition in a JSON file:

```json
{
  "milestones": [
    {
      "name": "case_start_1",
      "setup_on": "case_start",
      "type": "case_level",
      "applied_at_age": "20d",
      "clock_starts": "immediately",
      "action": "some_user_action",
      "user_data": "some user data"
    },
    {
      "name": "comp3_wb_entry",
      "setup_on": "work_basket_entry",
      "type": "work_basket",
      "work_basket_name": "comp3_wb",
      "applied_at_age": "30m",
      "clock_starts": "immediately",
      "action": "some_user_action",
      "user_data": "some user data"
    },
    {
      "name": "comp11_wb_exit_case",
      "setup_on": "work_basket_exit",
      "type": "case_level",
      "work_basket_name": "comp11_wb",
      "applied_at_age": "60m",
      "clock_starts": "immediately",
      "action": "some_user_action",
      "user_data": "some user data"
    },
    {
      // this is the new way a block can be specified. Note that it only has
      // an additional block "further_milestones"
      // instead of creating multiple entries for a single milestone,
      // they can be defined inside of a single milestone entry
      // this is backward compatible, meaning that existing clients and existing SLA definitions will
      // continue to work. However, if an existing SLA definition is updated to the new structure,
      // then the corresponding clients will also need to change to take into account the
      // handling of the "further_milestones" block
      "name": "some_milestone",
      "setup_on": "work_basket_entry",
      "type": "case_level",
      "work_basket_name": "some_wb",
      "applied_at_age": "30m",
      "clock_starts": "immediately",
      "action": "CORR",
      "user_data": "",
      // optional block
      "further_milestones": [
        {
          "applied_at_age": "60m",
          // first occurrence -> t0 + 60m
          // second occurrence -> t0 + 120m
          // third occurrence -> t0 + 180m
          // repeat is also optional, in which case the default value is 1
          "repeat": 3
        },
        {
          // t0 + 240m
          "applied_at_age": "240m"
        },
        {
          // etc.
          "applied_at_age": "540m",
          "repeat": 3
        }
      ]
    }
  ]
}
```

**Notes:**

Each element of the array is further described below:

1. `name` - name of the SLA milestone. Needs to be unique across the definitions
1. `setup_on` - possible values are `case_start`, `work_basket_entry` or `work_basket_exit`
1. `type` - possible values are `case_level` or `work_basket`
1. `work_basket_name` - if the value of `setup_on` is `work_basket_entry` or `work_basket_exit`, then this value
specifies the work basket name on which the milestone is defined
1. `applied_at_age` - specifies the duration of time after which the first milestone should fire. Can be specified as a number followed by a d (days) or m (minutes).
30d means 30 days and 20m means 20 minutes
1. `clock_starts` - specifies from when the `applied_at_age` value is computed. Values can be `immediately`
or `next_day`.
`immediately` means that the time specified in `applied_at_age` should be computed immediately,
whereas `next_day` means that the time should be computed from the start of the next day. Please note that the
responsibility to determine the start of the next day is left to the application. This is because
the application may be global in nature and may want to compute the start of the day based
on different time zones
1. `action` - this is a user defined field which Flowret will pass to the application when the milestone
is to be set up. Flowret does not interpret this field in any way
1. `user_data` - this is another user defined field which Flowret will pass to the application when the milestone
is to be set up. Flowret does not interpret this field in any way

**The "further_milestones" block**

This block is optional. In case the milestone definition requires only one trigger, then it can be specified in the main block.
However, sometimes there is a requirement for multiple triggers for a milestone. For example, on an application getting
pended in a certain work basket, send three reminders to the customer. The first reminder can be specified as part of the main block, whereas
the remaining two can be specified as part of the "further_milestones" block as described below. Note that all
trigger times are relative to "t0", where "t0" is the time when the first milestone was set up as governed by
the `clock_starts` field.

1. `applied_at_age` - specifies the duration of time after which this milestone should fire. Can be specified as a number followed by a d (days) or m (minutes).
30d means 30 days and 20m means 20 minutes
1. `repeat` - an integer value that specifies the number of occurrences of the trigger. Each successive occurrence fires that much further
out in time, i.e. at multiples of `applied_at_age` from t0. The example JSON above describes this more clearly. This field is also optional. If not specified,
a default value of 1 is assumed.
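The age format and the repeat semantics above can be sketched in a few lines of Java. This is an illustration of one plausible reading of the rules, not Flowret code, and the class and method names are hypothetical:

```java
import java.time.Duration;
import java.util.*;

public class MilestoneSchedule {
  // Parses the documented age format: a number followed by 'd' (days) or
  // 'm' (minutes), e.g. "30d" or "20m".
  static Duration parseAge(String age) {
    long n = Long.parseLong(age.substring(0, age.length() - 1));
    char unit = age.charAt(age.length() - 1);
    switch (unit) {
      case 'd': return Duration.ofDays(n);
      case 'm': return Duration.ofMinutes(n);
      default: throw new IllegalArgumentException("unknown unit: " + unit);
    }
  }

  // Expands one further_milestones entry into its trigger offsets relative to t0.
  // For example, "60m" with repeat = 3 yields t0+60m, t0+120m and t0+180m,
  // matching the example JSON above.
  static List<Duration> expand(String appliedAtAge, int repeat) {
    Duration age = parseAge(appliedAtAge);
    List<Duration> offsets = new ArrayList<>();
    for (int i = 1; i <= repeat; i++) {
      offsets.add(age.multipliedBy(i));
    }
    return offsets;
  }
}
```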

#### Starting a case with SLA milestones

The prerequisite to handling SLA milestones is for the application to provide an object that will handle the SLA milestone events triggered by Flowret. This object needs to be supplied to Flowret at the time of getting the run time service as below:

```java
Rts rts = Flowret.getRunTimeService(dao, factory, handler, slaQm);
```

`ISlaQueueManager slaQm` specifies an object that implements the `ISlaQueueManager` interface as below. Methods on this object will be invoked to inform the application of SLA milestone life cycle events.

```java
public interface ISlaQueueManager {
  void enqueue(ProcessContext pc, Document milestones);
  void dequeue(ProcessContext pc, String wb);
  void dequeueAll(ProcessContext pc);
}
```

The milestone definition can be provided to Flowret when starting a case. As described before, here is the method to start a case:

```java
rts.startCase(caseId, journeyName, journeyJson, processVariables, journeySlaJson);
```

`String journeySlaJson` specifies the SLA milestone definition as a JSON string for the process. A null value can be passed in case no SLA milestones are to be used. Flowret will store this definition with the instance of the case and will reference it to invoke SLA events on the application.

#### SLA Queue Management

At specific points in the process, Flowret will call methods on the SLA Queue Manager object, asking the application to enqueue or dequeue events. The actual enqueuing and dequeuing of milestones is the responsibility of the application, which is free to store the milestones in whatever format it prefers. These points are described below:

**On case start**
When the case starts, Flowret will call the enqueue method on the SLA Queue Manager.
The process context at case start will be provided (as already described), along with a list of case level milestones to be set up.

**On case pend**
If the case pends in a work basket, Flowret will call the enqueue method passing it the process context and the list of milestones to be set up. Note that this list may contain both case level and work basket level milestones.

If there were any prior work basket level milestones which had already been set up, Flowret will call the dequeue method on the object and pass it the work basket name for which the milestones need to be cleared. These milestones would be of the type `work_basket` (as case level milestones will only be cleared when the case ends). Note that it is possible that some of the milestones from the list may already have been executed. It is the responsibility of the application to track such milestones.

**On case complete**
Flowret will invoke the dequeueAll method to inform the application that, since the case is ending, all milestones need to be cleared.

#### Executing SLA milestones (also known as SLA Action Management)

Flowret only informs the application of milestones to be enqueued or dequeued. It does not play a part in actually executing the enqueued milestones when it is time for them to be executed. Flowret expects the application to take on that responsibility. The reason for this is that different applications have different architectures, and the framework for storing the enqueued milestones and executing them when the time arrives is best left to the discretion of the application, so that it can be done in a way that fits neatly into the overall architecture of the application. Further, Flowret is an embedded orchestrator which runs in the same JVM as the application. Since there is no server component of Flowret, there is no way for Flowret to run timers or jobs that can carry out SLA processing.

Applications can store the milestones into a data store.
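
As an illustration of what such a store could look like, here is a minimal sketch of an application-side milestone store that an `ISlaQueueManager` implementation might delegate to. Everything here is hypothetical: an in-memory map stands in for a real database table, and the `ProcessContext`/`Document` arguments Flowret passes are reduced to plain strings.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical application-side store behind an ISlaQueueManager
// implementation. An in-memory map stands in for a real data store;
// milestone payloads are reduced to plain strings for illustration.
public class InMemorySlaQueue {

  // caseId -> (work basket name -> pending milestone names)
  private final Map<String, Map<String, List<String>>> store = new HashMap<>();

  // ISlaQueueManager.enqueue would extract the case id from the
  // ProcessContext and the milestone list from the Document, then call this.
  public void enqueue(String caseId, String workBasket, List<String> milestones) {
    store.computeIfAbsent(caseId, k -> new LinkedHashMap<>())
         .computeIfAbsent(workBasket, k -> new ArrayList<>())
         .addAll(milestones);
  }

  // ISlaQueueManager.dequeue: clear the milestones set up for one work basket.
  public void dequeue(String caseId, String workBasket) {
    Map<String, List<String>> byWorkBasket = store.get(caseId);
    if (byWorkBasket != null) byWorkBasket.remove(workBasket);
  }

  // ISlaQueueManager.dequeueAll: the case is ending, clear everything.
  public void dequeueAll(String caseId) {
    store.remove(caseId);
  }

  // All milestones still pending for a case, in insertion order.
  public List<String> pending(String caseId) {
    List<String> all = new ArrayList<>();
    store.getOrDefault(caseId, Map.of()).values().forEach(all::addAll);
    return all;
  }
}
```

A production version would persist these entries as rows keyed by case id and work basket, so that a scheduled job can pick up due milestones and execute their actions.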
As a sample implementation, applications could implement a batch job that runs at scheduled intervals, picks up the oldest records from the data store and executes the actions specific to those milestones. A milestone, once executed, could either be marked as processed or moved over to a different table.

---

#### Work Management

We start with a brief description of work baskets. In BPM workflow engines, work baskets are places where an application is temporarily parked, either for some systematic action at a later point in time or for some human intervention. A work basket may be shared or individual. When an application pends, it pends in a work basket, typically corresponding to the step. A user could move an application from the shared work basket to a personal work basket (sometimes also referred to as locking the application for use). When the application is resumed, it is taken out of the work basket and the flow is resumed.

In many BPM applications, when an application is sitting in a work basket, there is a requirement to route it to another work basket, either based on a time duration elapsed with or without some condition being met (also referred to as an SLA milestone) or based on a human interaction (for example a servicing professional action). These are what we refer to as work basket routing rules, which may range from simple to extremely complex.

Even though the application may go from one work basket to another, eventually, when the process flow for that application is resumed, it starts from (a) the step which had pended or (b) the next step. Which of the two is chosen is determined by the response of the step to Flowret when the pend happened.
If the step had informed Flowret to ok_pend_eor (eor meaning execute on resume), then the same step will be invoked again on resumption. If the step had informed ok_pend, then the next step will be invoked. Note that in case of the former, it is the responsibility of the application to manage the idempotent behavior of the step if it is invoked more than once.

Flowret provides a work manager interface which can be implemented by an application. The default work manager of Flowret only records the movement of an application from one work basket to another, but relies on an application implementation to run additional actions as required on the application side. These additional actions may be evaluating multiple conditions to determine the work basket to move to based on human input, or carrying out certain actions on a work basket change, etc.

Flowret provides the ability to move a case from one work basket to another when the case is pended. In order to do this, the application needs to do the following:

1. Provide its own work manager implementation. This is only required in case the application wants to track the changes in work basket on its side and/or take some additional action based on changing work baskets
1. Get the Flowret work management service object by calling the `getWorkManagementService` method on `Flowret`. The application needs to provide its work manager implementation and the SLA Queue Manager implementation as parameters to this method
1. Use the work management service to change the work basket of the pended case

**Getting the work management service**

Applications can call the following method to get an instance of the work management service:

```java
Wms wms = Flowret.getWorkManagementService(dao, wm, slaQm);
```

`dao` specifies an instance of an object that implements the `FlowretDao` interface (already explained)
`wm` specifies an instance of an object that implements the `WorkManager` interface.
Passing null is allowed
`slaQm` specifies an instance of an object that implements the `ISlaQueueManager` interface (already explained). Passing null is allowed

The work basket can now be changed as below:

```java
wms.changeWorkBasket(caseId, newWb);
```

`String caseId` specifies the case id
`String newWb` specifies the name of the new work basket

The application could also query the work management service to know the current work basket in which the case is pended, as below:

```java
String wb = wms.getPendWorkbasket(caseId);
```

---

#### FAQ

##### How do I do retry handling?

Flowret is first and foremost a pure orchestrator. In order to keep it simple and efficient, it does not deal with retry handling and expects the application to take care of it. In other orchestrators, it may be possible to define the retry mechanism in the process definition itself, for example: if a step returns an error, retry it at specific intervals a certain number of times. Flowret steers clear of this complexity, as this layer can readily be built in the application domain outside of Flowret. The application could create a retry framework that takes care of retries before returning to Flowret.

##### How can I create a process flow graphically?

At present, the only way to create a process flow is manually, first by creating the process flow diagram and then by creating the JSON file manually. However, a UI application to create the flow graphically is in the pipeline.

##### How do I test my process definition?

Testing of a process flow can be done independently by writing your own event handler, process component factory and a Flowret Dao object that reads and writes from the local file system.
A sample implementation of such objects is provided in the test folder.

The test process component factory can be written to return specific values from specific steps so as to emulate the run of the process in the real world. This way, multiple permutations and combinations can be tested independently.

---

##### What next?

Go through the unit test cases in the source code. Unit test cases are available in the location `src/test`.

Provide us feedback. We would love to hear from you.

---

##### Author:
Deepak Arora, GitHub: @deepakarora3, Twitter: @DeepakAroraHi

---

## Contributing

We welcome Your interest in the American Express Open Source Community on Github. Any Contributor to any Open Source Project managed by the American Express Open Source Community must accept and sign an Agreement indicating agreement to the terms below. Except for the rights granted in this Agreement to American Express and to recipients of software distributed by American Express, You reserve all right, title, and interest, if any, in and to Your Contributions. Please [fill out the Agreement](https://cla-assistant.io/americanexpress/unify-flowret).

## License

Any contributions made under this project will be governed by the [Apache License 2.0](./LICENSE.txt).

## Code of Conduct

This project adheres to the [American Express Community Guidelines](./CODE_OF_CONDUCT.md). By participating, you are expected to honor these guidelines.