{"id":13646937,"url":"https://github.com/EMSL-MSC/NWPerf","last_synced_at":"2025-04-21T21:32:00.667Z","repository":{"id":6036586,"uuid":"7260908","full_name":"EMSL-MSC/NWPerf","owner":"EMSL-MSC","description":"Cluster performance visualization","archived":false,"fork":false,"pushed_at":"2024-03-06T01:06:43.000Z","size":1053,"stargazers_count":9,"open_issues_count":4,"forks_count":4,"subscribers_count":12,"default_branch":"master","last_synced_at":"2024-08-02T01:26:31.382Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/EMSL-MSC.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"COPYING","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2012-12-20T16:44:31.000Z","updated_at":"2021-12-13T20:19:48.000Z","dependencies_parsed_at":"2024-02-02T23:45:47.046Z","dependency_job_id":null,"html_url":"https://github.com/EMSL-MSC/NWPerf","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMSL-MSC%2FNWPerf","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMSL-MSC%2FNWPerf/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMSL-MSC%2FNWPerf/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMSL-MSC%2FNWPerf/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/EMSL-MSC","download_url":"https://codeload.github.com/EMSL-MSC/NWPerf/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github",
"repositories_count":223880347,"owners_count":17219104,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-02T01:03:15.980Z","updated_at":"2024-11-09T20:30:48.095Z","avatar_url":"https://github.com/EMSL-MSC.png","language":"JavaScript","readme":"NWPerf\n======\n\nCluster performance data collection tools\n\n\nDependencies\n============\n\nThe NWPerf collection scripts depend heavily on the [ZeroMQ](http://www.zeromq.org/) Python modules. You can this library on various Linux Platforms:\n\n* Ubuntu/debian:\n\n    `apt-get install python-zmq`\n\n* Redhat:\n\n    `yum install python-zmq`\n\n\n\nCeph Point Storage\n==================\n\nIn order to use the ceph point storage tools (gen_cview,nwperf-ceph-store.py) you will need a configured [Ceph](www.ceph.com) rados system.  You then create a rados pool for the cluster of points you want to store.  You then need to populate the pool with these files:\n\n  * hostorder - a ordered list of hostnames that will be stored in points tables\n  * hostorder.sizelog - A history of size changes, initially contains only the host count\n  * pointdesc - descriptions of the metrics stored in the database. a basic one covering many collectl and ganglia data point is included in the examples directory\n  * pointindex - a list of all metrics stored in the pool\n\nPool Creation Steps\n-------------------\nWe will create a pool for a cluster called io, with 38 nodes names io1 to io38. It is assumed that you already have some sort of collection system setup. 
In this case we are using the nwperf-ganglia.py script and will gather points from it; a point list can be gathered from a running ganglia daemon on port 8649.\n\n  1. Create rados pool\n\n    ```\n    rados mkpool io.points\n    ceph osd pool set io.points size 3\n    ```\n    Setting the redundancy level to 3 is not required; the default is 2.\n  1. Populate basic files\n    ```\n    seq -f io%g 1 38 \u003e /tmp/hostorder\n    echo 0 38 \u003e /tmp/hostorder.sizelog\n    nc io1 8649 | grep NAME | cut -d\\\" -f 2 | sort -u \u003e /tmp/pointindex\n    rados -p io.points put hostorder /tmp/hostorder\n    rados -p io.points put hostorder.sizelog /tmp/hostorder.sizelog\n    rados -p io.points put pointdesc ~/NWPerf/examples/pointdesc\n    rados -p io.points put pointindex /tmp/pointindex\n    rados -p io.points ls\n    ```\n    The final command lets you verify that all of the files are present in the rados pool.\n\n  1. Collect Data\n    ```\n    nwperf-ceph-store.py -n -c io -p /tmp/io.pid\n    ```\n  1. Output\n    ```\n    gen_cview -c io -o /dev/shm/io/ -r 60\n    ```\n","funding_links":[],"categories":["JavaScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FEMSL-MSC%2FNWPerf","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FEMSL-MSC%2FNWPerf","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FEMSL-MSC%2FNWPerf/lists"}