{"id":17754635,"url":"https://github.com/dragonflyrobotics/neuroxide","last_synced_at":"2025-09-10T07:33:10.373Z","repository":{"id":256103740,"uuid":"851417663","full_name":"DragonflyRobotics/Neuroxide","owner":"DragonflyRobotics","description":"Ultrafast PyTorch-like AI Framework Written from Ground-Up in Rust","archived":false,"fork":false,"pushed_at":"2024-10-22T03:23:19.000Z","size":1145,"stargazers_count":1,"open_issues_count":1,"forks_count":0,"subscribers_count":1,"default_branch":"dev","last_synced_at":"2024-10-22T21:44:45.985Z","etag":null,"topics":["artificial-intelligence","framework","pytorch","rust","rust-lang"],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DragonflyRobotics.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-09-03T04:07:47.000Z","updated_at":"2024-10-22T01:11:46.000Z","dependencies_parsed_at":"2024-10-22T23:23:05.087Z","dependency_job_id":null,"html_url":"https://github.com/DragonflyRobotics/Neuroxide","commit_stats":null,"previous_names":["dragonflyrobotics/neuroxide"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DragonflyRobotics%2FNeuroxide","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DragonflyRobotics%2FNeuroxide/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DragonflyRobotics%2FNeuroxide/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/DragonflyRobotics%2FNeuroxide/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DragonflyRobotics","download_url":"https://codeload.github.com/DragonflyRobotics/Neuroxide/tar.gz/refs/heads/dev","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246691493,"owners_count":20818521,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["artificial-intelligence","framework","pytorch","rust","rust-lang"],"created_at":"2024-10-26T14:07:40.064Z","updated_at":"2025-09-10T07:33:10.335Z","avatar_url":"https://github.com/DragonflyRobotics.png","language":"Rust","readme":"# Neuroxide\n[![Rust](https://github.com/DragonflyRobotics/Neuroxide/actions/workflows/rust.yml/badge.svg)](https://github.com/DragonflyRobotics/Neuroxide/actions/workflows/rust.yml)\n\nWelcome! This project rewrites the PyTorch framework in Rust, keeping a consistent API, in hopes of a faster, statically typed AI framework. The project is currently in its alpha phase, so feel free to contribute or contact me by [email](mailto:kshahusa@gmail.com)! Because the project is in its early phases, documentation is sparse, but a quick overview of the development scope is provided below.\n\n## News\nA major part of the timeline (MatMul) is complete, tested, and benchmarked. With it comes some primitive broadcasting for multiplication, though it is not yet recommended: users should still manually broadcast any constant terms to match the shape of the other argument. 
Here is a sample of matmul and gradient computation in action!\n\n```rust\nuse std::sync::{Arc, RwLock};\n\nuse neuroxide::{ops::{add::AddOp, matmul::MatMulOp, mul::MulOp, pow::PowOp}, types::{device::Device, tensor::Tensor, tensordb::{DTypes, TensorDB}}};\nuse neuroxide::ops::op_generic::Operation;\n\nfn main() {\n    let a: Vec\u003cf32\u003e = vec![1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0];\n    let b: Vec\u003cf32\u003e = vec![5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0];\n    let a_shape = vec![2, 2, 2];\n    let b_shape = vec![2, 2, 3];\n\n    let tensor_db = Arc::new(RwLock::new(TensorDB::new(DTypes::F32)));\n    let c1 = Tensor::new(\u0026tensor_db, vec![6.0; 12], vec![2, 2, 3], Device::CUDA, true);\n    let c2 = Tensor::new(\u0026tensor_db, vec![5.0; 12], vec![2, 2, 3], Device::CUDA, true);\n    let c3 = Tensor::new(\u0026tensor_db, vec![2.0; 12], vec![2, 2, 3], Device::CUDA, true);\n\n    let a = Tensor::new(\u0026tensor_db, a, a_shape, Device::CUDA, true);\n    let b = Tensor::new(\u0026tensor_db, b, b_shape, Device::CUDA, false);\n    let c = MatMulOp::forward(\u0026vec![\u0026a, \u0026b]);\n    let d = MulOp::forward(\u0026vec![\u0026c, \u0026c1]);\n    let e = AddOp::forward(\u0026vec![\u0026d, \u0026c2]);\n    let f = PowOp::forward(\u0026vec![\u0026e, \u0026c3]);\n    println!(\"{}\", f);\n    let grad = f.backward(None);\n    println!(\"{}\", grad.get(\u0026a.id).unwrap());\n}\n```\n\n## Table of Contents\n- [Usage](#usage)\n- [Sample Code](#sample-code)\n- [Goal](#goal)\n- [Contributing](#contributing)\n\n## Usage\nHere is how a contributor or developer might use the project:\n1. `git clone git@github.com:DragonflyRobotics/Neuroxide.git`\n2. Modify `src/bin.rs` to contain your own programs\n3. `cargo run`\n\n### External Use (expert)\nCompile the library with `cargo build` and copy the compiled artifact from the `target` folder; you can then link it into your Rust projects. 
You can also try installing like this:\n`cargo install --git git@github.com:DragonflyRobotics/Neuroxide.git`\n\n## Sample Code\nHere are some basic operations (we hope you see the similarity to PyTorch):\n\n_Forward Pass_\n```rust\nlet db = Arc::new(RwLock::new(TensorDB::new(DTypes::F64)));\nlet c1c = Tensor::new(\u0026db, vec![15.0], vec![1], Device::CPU, false);\nlet c2c = Tensor::new(\u0026db, vec![6.0], vec![1], Device::CPU, false);\nlet result = AddOp::forward(\u0026vec![\u0026c1c, \u0026c2c]);\n```\n\n_Backward Pass_\n```rust\nlet db = Arc::new(RwLock::new(TensorDB::new(DTypes::F64)));\nlet x = Tensor::new(\u0026db, vec![5.0], vec![1], Device::CPU, true);\nlet c1c = Tensor::new(\u0026db, vec![15.0], vec![1], Device::CPU, false);\nlet c2c = Tensor::new(\u0026db, vec![6.0], vec![1], Device::CPU, false);\nlet r1 = MulOp::forward(\u0026vec![\u0026x, \u0026c1c]);\nlet r2 = MulOp::forward(\u0026vec![\u0026x, \u0026c2c]);\nlet mut result = AddOp::forward(\u0026vec![\u0026r1, \u0026r2]);\nresult = MulOp::forward(\u0026vec![\u0026result, \u0026x]);\nprintln!(\"{}\", result.data[0]); // (5*15 + 5*6) * 5 = 525\n\nlet grad = result.backward(None);\nprintln!(\"{}\", grad.get(\u0026x.id).unwrap().data[0]);\n```\n\n_Forward Pass CUDA_\n```rust\nlet db = Arc::new(RwLock::new(TensorDB::new(DTypes::F32)));\nlet c1c = Tensor::new(\u0026db, vec![15.0], vec![1], Device::CUDA, false);\nlet c2c = Tensor::new(\u0026db, vec![6.0], vec![1], Device::CUDA, false);\nlet result = AddOp::forward(\u0026vec![\u0026c1c, \u0026c2c]);\n```\n\n_Partial Backward to Selective Leaves_\n```rust\nlet db = Arc::new(RwLock::new(TensorDB::new(DTypes::F64)));\nlet x1 = Tensor::new(\u0026db, vec![5.0], vec![1], Device::CPU, true);\nlet x2 = Tensor::new(\u0026db, vec![6.0], vec![1], Device::CPU, true);\nlet x3 = Tensor::new(\u0026db, vec![7.0], vec![1], Device::CPU, true);\nlet x4 = Tensor::new(\u0026db, vec![8.0], vec![1], Device::CPU, true);\n\nlet result = x1.clone() * (x2.clone() + x3) + 
x4;\nprintln!(\"{}\", result.data[0]);\nlet grad = result.backward(Some(vec![x2.id.clone()]));\nprintln!(\"{}\", grad.get(\u0026x2.id).unwrap().data[0]);\n```\n**Note:** You can avoid the clunky notation and simply operate on tensors with `+`, `-`, `*`, and `/`!\n\n_Simple Neural Network_\n```rust\nextern crate blas_src;\nuse std::{sync::{Arc, RwLock}, time::{SystemTime, UNIX_EPOCH}};\n\nuse neuroxide::{layers::linear::Linear, types::{device::Device, tensor::Tensor, tensordb::{DTypes, TensorDB}}};\nuse neuroxide::ops::op_generic::Operation;\nuse neuroxide::optimizers::simple_descent::SimpleDescent;\nuse rand::Rng;\n\n#[macro_use]\nextern crate neuroxide;\n\nfn main() {\n    let start = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();\n    let db = Arc::new(RwLock::new(TensorDB::new(DTypes::F32)));\n    let pow_const = Tensor::\u003cf32\u003e::new(\u0026db, vec![2.0; 1], vec![1], Device::CPU, false);\n    let mut linear1 = Linear::new(\u0026db, 16, 16, true);\n    let mut linear2 = Linear::new(\u0026db, 16, 16, true);\n    let mut optim = SimpleDescent::new(\u0026db, 0.0000001);\n    optim.add_parameters(\u0026linear1.parameters());\n    optim.add_parameters(\u0026linear2.parameters());\n    for iteration in 0..1500 {\n        let num: f32 = rand::thread_rng().gen_range(0..100) as f32;\n        let input = Tensor::\u003cf32\u003e::new(\u0026db, vec![num; 16], vec![1, 16], Device::CPU, false);\n        let output = Tensor::\u003cf32\u003e::new(\u0026db, vec![num * 2.0; 16], vec![1, 16], Device::CPU, false);\n\n        let mut c = linear1.forward(\u0026input);\n        c = linear2.forward(\u0026c);\n\n        let loss = pow!(c - output, pow_const);\n        let grad = loss.backward(None);\n\n        optim.step(\u0026grad);\n\n        println!(\"Epoch: {} Loss: {}\", iteration, loss);\n    }\n\n    let input = Tensor::\u003cf32\u003e::new(\u0026db, vec![4.0; 16], vec![16], Device::CPU, false);\n\n    let c = linear1.forward(\u0026input);\n    let c = 
linear2.forward(\u0026c);\n    println!(\"{}\", c);\n\n    let end = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();\n    println!(\"Time taken: {:?}\", end - start);\n}\n```\n**Note:** The layers are wired together manually here, but built-in `neuroxide.nn.Linear` functionality is coming soon!\n\n## Goal\nPython has many benefits, chiefly its flexibility, which has made it a popular language for AI/ML. The tradeoff is the slow interpreter, the constant hopping between Python and C++ bindings, and limited multiprocessing, all of which make it inefficient for many high-performance applications. This project aims to keep the comforts of the PyTorch syntax while leveraging a statically typed, efficient language to create a powerful AI engine for cutting-edge projects.\n\n## Contributing\nWe appreciate any contributions that help this project grow and reach the full functionality of an AI engine. Please refer to our [contributing guidelines](https://github.com/DragonflyRobotics/Neuroxide/blob/dev/CONTRIBUTING.md) for details.\n\n## License\nThis project is licensed under the GNU GPL-3.0, which can be found [here](https://github.com/DragonflyRobotics/Neuroxide/blob/dev/LICENSE).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdragonflyrobotics%2Fneuroxide","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdragonflyrobotics%2Fneuroxide","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdragonflyrobotics%2Fneuroxide/lists"}