# Benchmarking tensor network contractions

Device information:
1. NVIDIA A100-PCIE 80G with NVIDIA Driver Version 470.82.01 and CUDA Version 11.4
2. NVIDIA V100-SXM2 16G with NVIDIA Driver Version 470.63.01 and CUDA Version 11.4
3. Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz

The contraction order in `scripts/tensornetwork.json` was generated by the following code (you do not need to run this; the contraction order is already included in the `scripts` folder):
```julia
julia> using OMEinsum, OMEinsumContractionOrders, Graphs

julia> function random_regular_eincode(n, k; optimize=nothing)
           g = Graphs.random_regular_graph(n, k)
           ixs = [minmax(e.src, e.dst) for e in Graphs.edges(g)]
           return EinCode((ixs..., [(i,) for i in Graphs.vertices(g)]...), ())
       end
random_regular_eincode (generic function with 1 method)

julia> code = random_regular_eincode(220, 3);

julia> optcode_tree = optimize_code(code, uniformsize(code, 2), TreeSA(sc_target=29, βs=0.1:0.1:20,
                                                            ntrials=5, niters=30, sc_weight=2.0));

julia> timespace_complexity(optcode_tree, uniformsize(code, 2))
(33.17598124621909, 28.0)

julia> writejson("tensornetwork_permutation_optimized.json", optcode_tree)
```

## Setup

* Install [pytorch](https://pytorch.org/get-started/locally/).
* Install [Julia](https://julialang.org/downloads/) and related packages by typing

```bash
$ cd scripts
$ julia --project -e 'using Pkg; Pkg.instantiate()'
```

(NOTE: to update your local environment, run `julia --project -e 'using Pkg; Pkg.update()'`)

## pytorch

```bash
$ cd scripts
$ python benchmark_pytorch.py gpu
$ python benchmark_pytorch.py cpu
```

#### Timing

* on A100, the minimum time is ~0.12s, and 10 executions take ~1.35s
* on V100, the minimum time is ~0.12s, and 10 executions take ~1.76s
* on CPU (MKL backend, single thread), it is 38.87s (perhaps MKL is not configured properly?)

## OMEinsum.jl

```bash
$ cd scripts
$ JULIA_NUM_THREADS=1 julia --project benchmark_OMEinsum.jl gpu
$ JULIA_NUM_THREADS=1 julia --project benchmark_OMEinsum.jl cpu
```

#### Timing

* on A100, the minimum time is ~0.16s, and 10 executions take ~2.25s
* on V100, the minimum time is ~0.13s, and 10 executions take ~1.39s
* on CPU (MKL backend, single thread), it is 23.05s

Note: Julia garbage collection time is excluded from these measurements.

## Notes
The Python scripts were contributed by @Fanerst. Other people in the discussion also provided helpful advice; please check the [original post](https://github.com/under-Peter/OMEinsum.jl/issues/133#issuecomment-1003662057).
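The timing figures reported above (a minimum over repeated runs, plus the total over 10 executions, with one warm-up run excluded) follow a standard micro-benchmark pattern. A minimal sketch of that pattern in Python, using a small `numpy.einsum` contraction as a stand-in for the actual pytorch tensor network contraction (the names `benchmark` and `contract` are illustrative, not from the repository's scripts):

```python
import time
import numpy as np

def benchmark(contract, n_repeat=10):
    """Time `contract` n_repeat times after one warm-up run.

    Returns (min_time, total_time), mirroring the "minimum time" and
    "10 executions take" figures quoted in the Timing sections.
    """
    contract()  # warm-up: excludes one-time setup (JIT, allocation, etc.)
    times = []
    for _ in range(n_repeat):
        t0 = time.perf_counter()
        contract()
        times.append(time.perf_counter() - t0)
    return min(times), sum(times)

# Toy stand-in contraction; the real benchmark contracts the
# tensor network defined by scripts/tensornetwork.json instead.
a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
tmin, ttotal = benchmark(lambda: np.einsum("ij,jk->ik", a, b))
```

Reporting the minimum rather than the mean reduces the influence of OS scheduling noise, which is presumably why both backends quote it alongside the 10-run total.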