{"id":31785546,"url":"https://github.com/stacktical/willitscale-polkadot","last_synced_at":"2025-10-10T11:58:09.625Z","repository":{"id":44271312,"uuid":"238416513","full_name":"Stacktical/willitscale-polkadot","owner":"Stacktical","description":"A Predictive Analysis Platform for Blockchain Network Researchers. By Stacktical (DSLA Protocol)","archived":false,"fork":false,"pushed_at":"2022-02-10T21:10:44.000Z","size":6123,"stargazers_count":10,"open_issues_count":12,"forks_count":1,"subscribers_count":6,"default_branch":"master","last_synced_at":"2023-03-04T04:57:13.638Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://stacktical.com","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Stacktical.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-02-05T09:49:40.000Z","updated_at":"2021-04-22T05:05:05.000Z","dependencies_parsed_at":"2022-09-22T12:33:50.382Z","dependency_job_id":null,"html_url":"https://github.com/Stacktical/willitscale-polkadot","commit_stats":null,"previous_names":[],"tags_count":null,"template":null,"template_full_name":null,"purl":"pkg:github/Stacktical/willitscale-polkadot","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Stacktical%2Fwillitscale-polkadot","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Stacktical%2Fwillitscale-polkadot/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Stacktical%2Fwillitscale-polkadot/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Stacktical%2Fwillitscale-polkadot/manifests","owner_url":"https://repos
.ecosyste.ms/api/v1/hosts/GitHub/owners/Stacktical","download_url":"https://codeload.github.com/Stacktical/willitscale-polkadot/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Stacktical%2Fwillitscale-polkadot/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279003723,"owners_count":26083610,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-10T02:00:06.843Z","response_time":62,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-10-10T11:58:07.783Z","updated_at":"2025-10-10T11:58:09.618Z","avatar_url":"https://github.com/Stacktical.png","language":"TypeScript","readme":"![willitscale-polkadot](https://storage.googleapis.com/stacktical-public/willitscale-polkadot.jpg)\n\n# But will it scale ?\n\n`\"Will it scale ?\"` is an open source Predictive Analysis Platform that enables you to engineer **scalable** distributed applications, networks and other systems, **by applying mathematical models to bytesized performance measurements**. This initiative is the product of 3 years of R\u0026D at [Stacktical (DSLA Protocol)](https://stacktical.com), and our hands-on experience with capacity planning in cloud computing environments. 
We are proud to make it public for everyone to enjoy, as we direct our efforts towards the blockchain industry.\n\n## Objective\n\nThe initial objective of this project is to surface mathematical relations between the performance metrics of blockchain networks, with a focus on Polkadot, the driving force of interoperability in the blockchain industry (ergo the foundation of increasingly complex, end-to-end, cross-chain testing scenarios).\n\nWhen the values of two system metrics seem to vary in relation to each other, it becomes possible to use them as mathematical coordinates, and fit these coordinates into predictive mathematical models. \n\nIn other words, using mathematical models that can predict the next values in a series **gives us the ability to predict the next values in a series of blockchain performance metrics**, provided they are bound by maths.  \n\nThis scientific approach to capacity planning reduces testing requirements (fewer tests, less time) and a significant part of the guesswork involved in scalability-driven architectural, coding and configuration choices, while improving the overall quality of builds before they're deployed to production.\n\nOur initial research at [Stacktical](https://stacktical.com/) shows that such a relation seems to exist between the number of validating nodes in a blockchain network and the throughput of the system, expressed in transactions per second (TPS).\n\nInstead of provisioning complex, costly testnets with hundreds of validating nodes, and running hundreds of throughput measurements, `\"Will it scale ?\"` enables you to chart, mathematically quantify and govern the scalability of your system **with only 10 performance measurements or so**.\n\nThis also means that the `\"Will it scale ?\"` platform can serve as a tool to scientifically debunk false blockchain TPS claims. 
If this is your goal, we'd be happy to hear from you at [contact@stacktical.com](mailto:contact@stacktical.com).\n\n![grant_w3f](resources/grant_w3f.png)\n\n## General Requirements\n\n`\"Will it scale ?\"` is meant to work on Linux and Mac OS machines with:   \n\n* a [Node.js 10.0+](https://nodejs.org/) runtime environment to execute applications and manage dependencies\n* a [Docker 2.0+](https://docs.docker.com/) installation to build and manage application containers\n\n## Platform Architecture\n\nThe project comprises three main components.\n\n### `willitscale-r-server`\n\nAn HTTP R server that performs online predictive analysis using mathematical models.\n\n### `willitscale-api`\n\nA Node.js GraphQL API server that queries the `willitscale-r-server` and implements the business logic of predictions.\n\n### `willitscale-client`\n\nA Vue.js GraphQL API client to submit performance test results to the `willitscale-api` from your browser.\n\nYou will need to build and run these components to run your end-to-end prediction scenarios.\n\n# Local Deployment\n\n## Automated Deployment (TL;DR)\n\nInstall the following tools:\n\n- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)\n- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/)\n- [skaffold](https://skaffold.dev/docs/install/)\n\nBuild the images and run them against a local **kind** cluster using the following commands:\n\n```bash\nkind create cluster --name willitscale-polkadot\nskaffold run --port-forward\n```\n\n## Manual Deployment\n\n### Reachability\n\nEnsure that Docker containers can communicate with each other by setting up a Docker bridge network: \n\n`docker network create --driver bridge willitscale-polkadot`\n\n### willitscale-r-server\n\n#### Building your R Server\n\nThe `willitscale-r-server` is designed to run inside a Docker container.  
\nTo build your container, use the following command in the root folder of your component.\n\n`docker build -t willitscale-r-server .`\n\n#### Running your R Server\n\nBy default, the R server runs on port 6311 inside your Docker container.\nTo run your container, mapping port 6311 in Docker to port 10001 on your machine, use the following command:\n\n`docker run -d -p 10001:6311 -t -P --network=willitscale-polkadot --name willitscale-r-server willitscale-r-server`\n\nYour server should now be running on port **10001**.\n\n### willitscale-api\n\n#### Install dependencies\n\nWe are using npm to manage dependencies for the `willitscale-api` API server.\n\n**If you plan on using the server locally,** simply use `npm install` from the `willitscale-api` folder.  \n**If you plan to host it instead**, the Dockerfile will take care of installing dependencies for you while building the container.\n\n#### Building your GraphQL API Server\n\nYou can either build the server locally using npm, or build your server as a Docker container for further deployment (e.g. in Kubernetes).\n\n##### Locally\n\nSet the environment variable and build the server using the following command:  \n`NODE_ENV=\"development\" npm run build`\n\n##### In Docker\n\nBuild the server using the following Docker command:  \n`docker build -t willitscale-api . 
--build-arg NODE_ENV=development`\n\n#### Running your GraphQL API Server\n\n##### Locally\n\nSet the environment variable and run the server using the following command:  \n`NODE_ENV=\"development\" npm run start`\n\n##### In Docker\n\nRun the server using the following Docker command:  \n`docker run -p 10000:10000 -v $(pwd)/dist/:/var/www/willitscale-api/public/dist/ --network=willitscale-polkadot -e SERVICE_R_HOST=willitscale-r-server -e SERVICE_R_PORT=6311 --name willitscale-api willitscale-api`\n\nYour server should now be running on port **10000**.\n\n### willitscale-client (optional)\n\n`willitscale-client` provides a simple way to visualize and plot the predictions returned by the `willitscale-api` in your browser.\n\n#### Install dependencies\n\nWe are using npm to manage dependencies for the `willitscale-client` API client.  \nSimply use `npm install` from the `willitscale-client` folder to install them.\n\n#### Serve the client locally\n\nMake sure `willitscale-r-server` and `willitscale-api` are running, then set the environment variable and run `willitscale-client` using the following command:  \n\n`NODE_ENV=\"development\" npm run serve`\n\nYour client should now be running at **[http://localhost:8080](http://localhost:8080)**.\n\n![willitscale-client.png](resources/willitscale-client.png)\n\n## Verify installation\n\nNow that both our HTTP R server and GraphQL API are running, it is time to try running a prediction.\n\nGraphQL Playground is a graphical, interactive, in-browser GraphQL IDE, created by Prisma and based on GraphiQL (the default GraphQL IDE). 
You can find more information about GraphQL Playground in the [Apollo documentation](https://www.apollographql.com/docs/apollo-server/testing/graphql-playground/).\n\nTo get started, go to `http://localhost:10000`, or the address matching your deployment.\n\nThen use the following example to give the playground a try:\n\n```\nmutation {\n  predictCapacity(points:\n    [\n      {\n        p: 1,\n        Rt: 17.1,\n        Xp: 37.9\n      },\n      {\n        p: 5,\n        Rt: 7.2,\n        Xp: 82.1\n      },\n      {\n        p: 10,\n        Rt: 8.8,\n        Xp: 76.3\n      },\n      {\n        p: 15,\n        Rt: 10.3,\n        Xp: 65.9\n      },\n      {\n        p: 20,\n        Rt: 11.3,\n        Xp: 53\n      }\n    ]\n  )\n}\n```\n  \nWhere:  \n\n* `p` represents **concurrency** (e.g. validating nodes in the network)\n* `Rt` represents **latency** (e.g. transaction latency in seconds)\n* `Xp` represents **throughput** (e.g. transactions per second)\n\nThe result pane should now be displaying a stringified JSON object comprised of `nodes vs throughput` point coordinates and other information, predicted from the specified payload. 
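If you prefer scripted checks over the Playground, the same mutation can be submitted over plain HTTP. The sketch below is illustrative and not part of the repository: the `buildPredictCapacityMutation` and `predictCapacity` helpers are hypothetical names, and it assumes the `willitscale-api` endpoint from the section above (`http://localhost:10000`) and a runtime with a global `fetch` (e.g. Node.js 18+).

```typescript
// A single performance measurement, matching the Playground example:
// p = concurrency, Rt = latency (s), Xp = throughput (TPS).
type Point = { p: number; Rt: number; Xp: number };

// Build the predictCapacity mutation. GraphQL input literals are not JSON,
// so object keys are left unquoted, as in the Playground example above.
function buildPredictCapacityMutation(points: Point[]): string {
  const body = points
    .map((pt) => `{ p: ${pt.p}, Rt: ${pt.Rt}, Xp: ${pt.Xp} }`)
    .join(", ");
  return `mutation { predictCapacity(points: [${body}]) }`;
}

// Hypothetical helper: POST the mutation to the GraphQL endpoint and
// return the stringified JSON prediction from the result.
async function predictCapacity(
  points: Point[],
  url = "http://localhost:10000"
): Promise<unknown> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: buildPredictCapacityMutation(points) }),
  });
  const { data } = await res.json();
  return data.predictCapacity;
}
```

A shape like this could slot into a CI job to fail a build whose predicted peak capacity regresses, in the spirit of the scalability checks suggested later in this README.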
\n\nHere is what the console looks like when you run a prediction: \n\n![willitscale-api.png](resources/willitscale-api.png)\n\n# Remote Deployment\n\n## Requirements\n\nA functional Kubernetes cluster (GKE, EKS, minikube, etc.) accessible through kubectl, to orchestrate the platform containers.\n\n## Create a new deployment\n\nTo create a new deployment, run the following command from the root folder of this repository:  \n\n`kubectl create -f k8s/kubernetes.deployment.yaml`\n\nThis will automatically pull the `willitscale-r-server` and `willitscale-api` images from the Docker registry.\n\n## Access the remote cluster locally\n\nIf you still want the GraphQL Playground to be reachable locally at `http://localhost:10000`, use:\n\n`kubectl port-forward svc/willitscale-api 10000:10000`\n\nThe Playground is now reachable at [http://localhost:10000/](http://localhost:10000/), forwarding your requests to the remote `willitscale-api`.\n\n# API documentation\n\nYour server documentation is available in the GraphQL Playground. Two predictive queries are available at this stage:\n\n- A **`predictCapacity`** GraphQL mutation returning:\n\n  - The network’s `nodes vs throughput` (scalability) chart points\n  - The network’s `peak capacity` point\n  - Quantified scalability bottlenecks (contention / coherency)\n\n- A **`predictLatency`** GraphQL mutation returning:\n  - The network’s `nodes vs latency` chart points\n\nWe originally planned on returning the network’s `latency at peak capacity` point from the `predictLatency` mutation.\n\nInstead, we built `willitscale-client` so that this point is directly visible on a chart, and we started implementing a metric-agnostic `makePrediction` mutation in `willitscale-api` that will ultimately serve the same purpose in a wider variety of scenarios (e.g. 
predicting the network's throughput (Xp) for a given concurrency (p)).\n\n# About Predictions\n\n## Measurements used in predictions\n\nTo answer the `\"Will it scale ?\"` question, the Platform needs to be fed with performance measurements formatted as JSON.   \n\nAll available predictions currently use the demo dataset from the whitepaper \"Performance and Scalability of Private Ethereum Blockchains\" by M. Schäffer, a student at the Technical University of Vienna. Markus and his team ran thousands of measurements to surface the insights below.\n\n![research_m-schaffer_tuw.jpg](resources/research_m-schaffer_tuw.jpg)\n\nAs this platform evolves with the feedback of the community, more performance measurements from different applications, networks and systems will be added to the list of available demonstration datasets (e.g. Substrate / Polkadot performance measurements).\n\n## Dealing with prediction failures\n\nNobel Prize recipient and quantum physicist Niels Bohr used to say that **\"Prediction is very difficult, especially when it's about the future.\"**\n\nIn the realm of Data Science, it's important to embrace that predictions can fail. In our experience, there are two main reasons for that:\n\n**1. Bad performance measurements**\n\nPredictions can fail if they detect uncommon patterns in the performance measurements they process. It is important to always chart your measurements first, and remove noisy coordinates, before submitting them to the predictive engine.\n\nSometimes, what appears to be a clean set of measurements is in fact a perfectly wrong series of measurements.\nRethinking your entire performance testing protocol might help in such a case.\n\n**2. Wrong mathematical models**\n\nPredictions can fail if we try to fit performance measurements to the wrong mathematical models, or if we use the wrong mathematical functions to fit this model in the code (e.g. 
`nls` versus `nlxb` nonlinear regression functions in R).\n\nIn the future, it would make sense to increase the number of mathematical models available in this repository.\n\n## Contention \u0026 Coherency\n\nAll systems experience contention and coherency penalties, undermining their ability to scale. The mathematical models we are using let you quantify these penalties, surfacing general areas of improvement in the system's capacity, latency and overall scalability.\n\nContention is a state of conflict over access to a shared resource (e.g. conflicting DApp transactions accessing the same data region). It forces transactions to be dealt with in a serialized way.   \n\nCoherency is a state where the data in a cache is up to date with the system's memory. Ensuring it requires extra, costly synchronisation efforts from your system.\n\nWe would suggest adding scalability bottleneck checks to a CI/CD pipeline, to validate the scalability of builds before they're deployed to production.\n\n## TODO\n\nBelow are some of the things we thought of adding to the scope of the platform, as we were developing this first version. 
Provided they match what the community would like to do moving forward, they will be properly turned into issues in due time.\n\n### Benchmarking\n\n- Properly thank Markus and his team for their great work on the thesis\n- Add more sample Substrate / Polkadot and other blockchain benchmark datasets\n\n### Predictive analysis\n\n- Describe the mathematical models currently used in the platform\n- Finish implementing the `makePrediction` mutation\n\n### Visualization\n\n- Enable user benchmark payload input on `willitscale-client`\n- Compare the scalability of different blockchains on `willitscale-client`\n\n### Typings\n\n- Use `npm run codegen` to generate TypeScript code from the `willitscale-api` GraphQL schema\n- Implement type checks in the `willitscale-client` and `willitscale-api`\n\n### Containerization\n\n- Add Dockerfile to `willitscale-client`\n\n### Deployment\n\n- Add deployment information to the README\n\n\n## About Stacktical (DSLA Protocol)\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://storage.googleapis.com/stacktical-public/stacktical_logo_v2-dark.png\" width=\"512\" title=\"Stacktical (DSLA Protocol)\"\u003e\n\u003c/p\u003e\n\nStacktical is a French fintech company specializing in IT Service Management (ITSM) and IT Service Governance. 
\n\nTheir flagship product, [DSLA Protocol](https://stacktical.com), is an autonomous blockchain protocol to document, bargain and enforce service commitments between third party service providers and their customers, using peer-to-peer, electronic Service Level Agreement (SLA) contracts.\n\nAs outsourcing application and network services increasingly exposes individuals and corporations to service disruptions, DSLA Protocol enables outsourced third party service providers to offer verifiable, more transparent service level guarantees to their customers, to continuously adapt to changing service level needs, and to gracefully mitigate the economic impact of bad service levels using the DSLA cryptocurrency token. \n\nFor more information about Stacktical and DSLA, please go to [stacktical.com](https://stacktical.com).\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstacktical%2Fwillitscale-polkadot","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fstacktical%2Fwillitscale-polkadot","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstacktical%2Fwillitscale-polkadot/lists"}