{"id":30703202,"url":"https://github.com/dfinity/decentralization","last_synced_at":"2026-03-11T12:16:12.102Z","repository":{"id":203915686,"uuid":"686054553","full_name":"dfinity/decentralization","owner":"dfinity","description":"Platform Decentralization analysis","archived":false,"fork":false,"pushed_at":"2026-01-08T03:12:29.000Z","size":7197,"stargazers_count":2,"open_issues_count":6,"forks_count":4,"subscribers_count":2,"default_branch":"main","last_synced_at":"2026-01-16T08:30:22.985Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dfinity.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-09-01T16:19:36.000Z","updated_at":"2026-01-05T12:06:26.000Z","dependencies_parsed_at":null,"dependency_job_id":"b6cf9b74-4ad8-43a4-b4ba-070b9351e8c8","html_url":"https://github.com/dfinity/decentralization","commit_stats":null,"previous_names":["dfinity/decentralization"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/dfinity/decentralization","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfinity%2Fdecentralization","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfinity%2Fdecentralization/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfinity%2Fdecentralization/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/host
s/GitHub/repositories/dfinity%2Fdecentralization/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dfinity","download_url":"https://codeload.github.com/dfinity/decentralization/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfinity%2Fdecentralization/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30380935,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-11T06:09:32.197Z","status":"ssl_error","status_checked_at":"2026-03-11T06:09:17.086Z","response_time":84,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-09-02T16:57:15.965Z","updated_at":"2026-03-11T12:16:11.609Z","avatar_url":"https://github.com/dfinity.png","language":"Python","readme":"# Network Allocation Optimization - Configuration Guide\n\nThis README explains how to use the `main.py` script and configure the `config.json` file for running node allocation optimizations.\n\n---\n\n## 🚀 How to Run Target topology (concentration tool) `main.py`\n\nThis project is set up using [poetry](https://python-poetry.org/docs/#installation).\n\nYou can run the tool in one of two ways:\n\n1. Using `poetry`:\n```bash\npoetry install\nmake run\n```\n2. 
Using system `python3`:\n```bash\npip install -r requirements.txt\npython ./topology_optimizer/main.py --config-file ./topology_optimizer/config.json\n```\n\nThis will run the optimizer and generate output files (e.g., plots and JSON reports) based on the configuration provided.\n\n## 🚀 How to Run IC topology `main.py`\n\n1. Using `poetry`:\n```bash\npoetry run python3 ic_topology/main.py\n```\n\n---\n\n## ⚙️ `config.json` Parameters Explained\n\n```json\n{\n  \"nodes_file\": \"./data/network_data/current_nodes_20250507_113743.csv\",\n  \"topology_file\": \"./data/topology/current_topology.csv\",\n  \"node_pipeline_file\": \"./data/node_pipelines/node_pipeline.csv\",\n  \"scenario\": \"./data/cluster_scenarios\",\n  \"mode\": \"minimize_node_swaps\",\n  \"no_synthetic_countries\": 6,\n  \"enforce_sev_constraint\": false,\n  \"enforce_health_constraint\": false,\n  \"enforce_blacklist_constraint\": true,\n  \"enforce_per_node_provider_assignation\": false\n}\n```\n\n| Parameter                      | Type      | Description |\n|-------------------------------|-----------|-------------|\n| `nodes_file`                  | `str`     | Path to the CSV file containing the list of currently known nodes (with metadata like `node_id`, `node_provider`, `dc_id`, `region`, etc.). |\n| `topology_file`               | `str`     | Path to the CSV file defining the current network topology, including `subnet_id`, `subnet_size`, and subnet types. |\n| `node_pipeline_file`          | `str`     | CSV file with upcoming (pipeline) nodes to include in the allocation analysis. These nodes are not yet voted in but will be, so they need to be taken into consideration. |\n| `blacklist_file`              | `str`     | YAML file listing blacklisted node IDs, data centers, or providers to be excluded from assignment. 
The latest file can be sourced from [the dre repo](https://github.com/dfinity/dre/blob/main/cordoned_features.yaml). |\n| `scenario`                    | `str`     | JSON file or directory containing JSON files that represent clustering scenarios. |\n| `mode`                        | `str`     | Optimization mode. Valid values are: `minimize_node_swaps` and `minimize_new_nodes`. |\n| `no_synthetic_countries`      | `int`     | Number of synthetic countries to inject into the solver (used for the generation of synthetic nodes). |\n| `enforce_sev_constraint`      | `bool`    | If `true`, the allocation will ensure SEV (Secure Encrypted Virtualization) constraints for subnets are enforced. |\n| `enforce_health_constraint`   | `bool`    | If `true`, only healthy nodes (not `DOWN` or `DEGRADED`) will be considered for allocation. |\n| `enforce_blacklist_constraint` | `bool`    | If `true`, blacklisted nodes will be excluded from all subnet assignments. |\n| `enforce_per_node_provider_assignation` | `bool`    | If `true`, each node provider with more than 4 nodes will have at least one of their nodes assigned to a subnet. |\n\n---\n\n## 📂 Output\n\nThe script stores all generated files in the `./output/` directory. 
The exact outputs depend on the chosen `mode` (`minimize_node_swaps` or `minimize_new_nodes`) and the input scenario.\n\n### Mode: `minimize_node_swaps`\n\nWhen `mode` is set to `\"minimize_node_swaps\"`, the following are produced:\n\n- `current_node_allocation_\u003cscenario\u003e.png`  \n  → Bar chart showing the current node allocation per subnet, by node provider.\n\n- `subnet_node_changes_\u003cscenario\u003e.json`  \n  → JSON with details on which nodes were dropped, moved, or newly assigned per subnet.\n\n- `subnet_change_summary_\u003cscenario\u003e.png`  \n  → Summary plot of reassignments (moved in/out, newly assigned, dropped) per subnet.\n\n### Mode: `minimize_new_nodes`\n\nWhen `mode` is set to `\"minimize_new_nodes\"`, the following are produced:\n\n- `node_provider_topology_matrix.png`, `data_center_topology_matrix.png`, etc.  \n  → Matrix plots showing the distribution of nodes by attribute (node provider, country, etc.), including current subnet capacity overlays.\n\n- `country_double_row_topology_matrix.png`  \n  → Matrix plot with country-level distribution using double-row spacing.\n\n- `node_provider_node_allocation.png`, `data_center_node_allocation.png`, etc.  \n  → Stacked bar charts showing the optimized assignment of nodes per attribute, annotated with Nakamoto coefficients.\n\n### Notes\n\n- Each clustering scenario run will generate its own dedicated set of output files.\n- In both modes, the file `blacklisted_nodes_\u003cscenario\u003e.csv` is generated. It contains details of all nodes marked as blacklisted, including the reason.\n\n## Updating dependencies\nTo add new dependencies with poetry, do:\n```bash\npoetry add \u003cpackage-name\u003e\n```\nAfter that, update the `requirements.txt` by running:\n```bash\npoetry export \u003e requirements.txt\n```\n\n## Running tests\nOur test suite consists of use cases presented in the `tests` folder. 
To run them, do:\n```bash\nmake test\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdfinity%2Fdecentralization","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdfinity%2Fdecentralization","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdfinity%2Fdecentralization/lists"}