{"id":15639296,"url":"https://github.com/lantiga/pytorch2c","last_synced_at":"2025-04-30T07:05:59.370Z","repository":{"id":137507297,"uuid":"82105778","full_name":"lantiga/pytorch2c","owner":"lantiga","description":"A Python module for compiling PyTorch graphs to C","archived":false,"fork":false,"pushed_at":"2018-02-02T11:02:30.000Z","size":48,"stargazers_count":91,"open_issues_count":1,"forks_count":9,"subscribers_count":7,"default_branch":"master","last_synced_at":"2025-04-30T07:05:53.348Z","etag":null,"topics":["compiled-graphs","deep-learning","graph","python","pytorch","tensor"],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/lantiga.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-02-15T20:54:44.000Z","updated_at":"2025-02-02T13:24:31.000Z","dependencies_parsed_at":"2023-04-12T22:40:02.457Z","dependency_job_id":null,"html_url":"https://github.com/lantiga/pytorch2c","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lantiga%2Fpytorch2c","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lantiga%2Fpytorch2c/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lantiga%2Fpytorch2c/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/lantiga%2Fpytorch2c/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/lantiga","download_url":"https://code
load.github.com/lantiga/pytorch2c/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251658201,"owners_count":21622819,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["compiled-graphs","deep-learning","graph","python","pytorch","tensor"],"created_at":"2024-10-03T11:25:18.885Z","updated_at":"2025-04-30T07:05:59.335Z","avatar_url":"https://github.com/lantiga.png","language":"Python","readme":"# pytorch2c\n\n**NOTE: PyTorch is evolving rapidly. With the advent of tracing during execution and the upcoming GraphExecutor in ATen, that will be the way to run computation graphs in C++.**\n\n~~**NOTE: this project is currently being reworked; instead of graph traversal, it will be based on the new tracing functionality being implemented in PyTorch after 0.2.0. This will allow cleaner code, more compact emitted code and proper handling of recurrent models.**~~\n\nA Python module for compiling (static) [PyTorch](http://pytorch.org) graphs to C (relying on TH and THNN).\n\nPyTorch2c inspects the computation graph and emits C code that performs the same computation. As long as a network is static (i.e. the graph doesn't change dynamically) it should produce a C source file that links to TH and THNN and can be compiled stand-alone. Interestingly, compiled graphs can be tested automatically by comparing what PyTorch produces to what the compiled code produces, given the same input.\n\nCaveats:\n* things are guaranteed to change in the PyTorch graph department. 
Hopefully we'll be able to catch up with the changes as they happen.\n* in these initial phases there are lots of layers and operations missing (help is very welcome)\n* I'm developing on macOS and Python 3.5 at the moment\n* PyTorch2c currently supports PyTorch version 0.1.10\n\n## TODO\n\n* [x] Solve storage serialization issues\n* [ ] Complete testing infrastructure (generate a number of input-output pairs)\n* [x] Generate CMakeLists.txt as part of output for tests\n* [ ] Implement wrappers for the complete API (in progress)\n\n## Trying things out\n\nInstall [PyTorch](http://pytorch.org), clone this repository and `cd pytorch2c`. Then run the following scripts to download PyTorch and build TH and THNN:\n```\nsh scripts/get_deps.sh\nsh scripts/build_deps.sh\n```\nNow you can execute tests with `sh scripts/run_test.sh [test-name]`, where `test-name` is the name of the corresponding Python script in the `test` directory, e.g.\n```\nsh scripts/run_test.sh base\nsh scripts/run_test.sh feedforward\nsh scripts/run_test.sh mnist # currently broken due to PyTorch being in flux (issue with ConvNdBackward not being inspectable)\n```\nTests return `1` if the value of the output tensor from the compiled code matches the value of the output tensor computed by PyTorch at compile time.\n\nTo see the compiled files, look into the `out` directory.\n\n## Example\n\nExample on a simple feedforward network:\n```python\nimport torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport torch2c\n\n# define the network\nfc1 = nn.Linear(10,20)\nfc1.weight.data.normal_(0.0,1.0)\nfc1.bias.data.normal_(0.0,1.0)\n\nfc2 = nn.Linear(20,2)\nfc2.weight.data.normal_(0.0,1.0)\nfc2.bias.data.normal_(0.0,1.0)\n\nmodel = lambda x: F.log_softmax(fc2(F.relu(fc1(x))))\n\n# create an input variable\ndata = Variable(torch.rand(10,10))\n\n# output directory for the generated sources\nout_path = 'out'\n\n# compile the graph and the 
test\ntorch2c.compile(model(data),'feedforward',out_path,compile_test=True)\n```\n\nGenerated output (don't look at the ugly storage reading stuff for now):\n```C\n#ifndef __FEEDFORWARD__\n#define __FEEDFORWARD__\n\n#include \"TH.h\"\n#include \"THNN.h\"\n\nvoid feedforward(THFloatTensor *x_4510941984, THFloatTensor *x_4510944688)\n{\n  THFloatStorage *storage_x_4510941880 = THFloatStorage_newWithSize(2);\n  {\n  FILE *f = fopen(\"data/x_4510941880.th\",\"rb\");\n  if (!f) {\n  THError(\"cannot open file data/x_4510941880.th for reading\");\n  }\n  long size;\n  size_t result = fread(\u0026size,sizeof(long),1,f);\n  char *bytes = (char *) storage_x_4510941880-\u003edata;\n  uint64_t remaining = sizeof(float) * storage_x_4510941880-\u003esize;\n  result = fread(bytes,sizeof(float),storage_x_4510941880-\u003esize,f);\n  fclose(f);\n  }\n  THLongStorage *size_x_4510941880 = THLongStorage_newWithSize1(2);\n  THLongStorage *stride_x_4510941880 = THLongStorage_newWithSize1(1);\n  THFloatTensor *x_4510941880 = THFloatTensor_newWithStorage(storage_x_4510941880,0,size_x_4510941880,stride_x_4510941880);\n  THLongStorage_free(size_x_4510941880);\n  THLongStorage_free(stride_x_4510941880);\n  THFloatStorage *storage_x_4510941776 = THFloatStorage_newWithSize(40);\n  {\n  FILE *f = fopen(\"data/x_4510941776.th\",\"rb\");\n  if (!f) {\n  THError(\"cannot open file data/x_4510941776.th for reading\");\n  }\n  long size;\n  size_t result = fread(\u0026size,sizeof(long),1,f);\n  char *bytes = (char *) storage_x_4510941776-\u003edata;\n  uint64_t remaining = sizeof(float) * storage_x_4510941776-\u003esize;\n  result = fread(bytes,sizeof(float),storage_x_4510941776-\u003esize,f);\n  fclose(f);\n  }\n  THLongStorage *size_x_4510941776 = THLongStorage_newWithSize2(2,20);\n  THLongStorage *stride_x_4510941776 = THLongStorage_newWithSize2(20,1);\n  THFloatTensor *x_4510941776 = THFloatTensor_newWithStorage(storage_x_4510941776,0,size_x_4510941776,stride_x_4510941776);\n  
THLongStorage_free(size_x_4510941776);\n  THLongStorage_free(stride_x_4510941776);\n  THFloatStorage *storage_x_4510941672 = THFloatStorage_newWithSize(20);\n  {\n  FILE *f = fopen(\"data/x_4510941672.th\",\"rb\");\n  if (!f) {\n  THError(\"cannot open file data/x_4510941672.th for reading\");\n  }\n  long size;\n  size_t result = fread(\u0026size,sizeof(long),1,f);\n  char *bytes = (char *) storage_x_4510941672-\u003edata;\n  uint64_t remaining = sizeof(float) * storage_x_4510941672-\u003esize;\n  result = fread(bytes,sizeof(float),storage_x_4510941672-\u003esize,f);\n  fclose(f);\n  }\n  THLongStorage *size_x_4510941672 = THLongStorage_newWithSize1(20);\n  THLongStorage *stride_x_4510941672 = THLongStorage_newWithSize1(1);\n  THFloatTensor *x_4510941672 = THFloatTensor_newWithStorage(storage_x_4510941672,0,size_x_4510941672,stride_x_4510941672);\n  THLongStorage_free(size_x_4510941672);\n  THLongStorage_free(stride_x_4510941672);\n  THFloatStorage *storage_x_4510941568 = THFloatStorage_newWithSize(200);\n  {\n  FILE *f = fopen(\"data/x_4510941568.th\",\"rb\");\n  if (!f) {\n  THError(\"cannot open file data/x_4510941568.th for reading\");\n  }\n  long size;\n  size_t result = fread(\u0026size,sizeof(long),1,f);\n  char *bytes = (char *) storage_x_4510941568-\u003edata;\n  uint64_t remaining = sizeof(float) * storage_x_4510941568-\u003esize;\n  result = fread(bytes,sizeof(float),storage_x_4510941568-\u003esize,f);\n  fclose(f);\n  }\n  THLongStorage *size_x_4510941568 = THLongStorage_newWithSize2(20,10);\n  THLongStorage *stride_x_4510941568 = THLongStorage_newWithSize2(10,1);\n  THFloatTensor *x_4510941568 = THFloatTensor_newWithStorage(storage_x_4510941568,0,size_x_4510941568,stride_x_4510941568);\n  THLongStorage_free(size_x_4510941568);\n  THLongStorage_free(stride_x_4510941568);\n  THFloatTensor *x_4510617224 = THFloatTensor_new();\n  THFloatTensor *addBuffer_x_4510617224 = THFloatTensor_new();\n  
THNN_FloatLinear_updateOutput(NULL,x_4510941984,x_4510617224,x_4510941568,x_4510941672,addBuffer_x_4510617224);\n  THFloatTensor *x_4510961736 = THFloatTensor_new();\n  THNN_FloatThreshold_updateOutput(NULL,x_4510617224,x_4510961736,0,0,0);\n  THFloatTensor *x_4510961888 = THFloatTensor_new();\n  THFloatTensor *addBuffer_x_4510961888 = THFloatTensor_new();\n  THNN_FloatLinear_updateOutput(NULL,x_4510961736,x_4510961888,x_4510941776,x_4510941880,addBuffer_x_4510961888);\n  THFloatTensor *x_4510962040 = THFloatTensor_new();\n  THNN_FloatLogSoftMax_updateOutput(NULL,x_4510961888,x_4510962040);\n  THFloatTensor_copy(x_4510944688,x_4510962040);\n  THFloatTensor_free(x_4510962040);\n  THFloatTensor_free(x_4510961888);\n  THFloatTensor_free(addBuffer_x_4510961888);\n  THFloatTensor_free(x_4510961736);\n  THFloatTensor_free(x_4510617224);\n  THFloatTensor_free(addBuffer_x_4510617224);\n  THFloatTensor_free(x_4510941568);\n  THFloatStorage_free(storage_x_4510941568);\n  THFloatTensor_free(x_4510941672);\n  THFloatStorage_free(storage_x_4510941672);\n  THFloatTensor_free(x_4510941776);\n  THFloatStorage_free(storage_x_4510941776);\n  THFloatTensor_free(x_4510941880);\n  THFloatStorage_free(storage_x_4510941880);\n}\n#endif\n```\n\n## License\n\nMIT license http://www.opensource.org/licenses/mit-license.php/\n\nCopyright (C) 2017 Luca Antiga, Orobix Srl\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flantiga%2Fpytorch2c","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flantiga%2Fpytorch2c","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flantiga%2Fpytorch2c/lists"}