{"id":17153139,"url":"https://github.com/axect/radient","last_synced_at":"2025-04-13T12:44:08.781Z","repository":{"id":205575819,"uuid":"714541268","full_name":"Axect/Radient","owner":"Axect","description":"Reverse mode Automatic Differentiation in Rust","archived":false,"fork":false,"pushed_at":"2024-10-06T01:27:44.000Z","size":107,"stargazers_count":4,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-04-12T16:10:15.762Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Axect.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE-APACHE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2023-11-05T06:39:53.000Z","updated_at":"2025-02-10T04:09:29.000Z","dependencies_parsed_at":"2023-11-30T23:31:05.478Z","dependency_job_id":null,"html_url":"https://github.com/Axect/Radient","commit_stats":null,"previous_names":["axect/revad_new"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Axect%2FRadient","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Axect%2FRadient/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Axect%2FRadient/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Axect%2FRadient/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Axect","download_url":"https://codeload.github.com/Axect/Radient/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248717240,"owners_count":21150387,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-10-14T21:45:23.754Z","updated_at":"2025-04-13T12:44:08.758Z","avatar_url":"https://github.com/Axect.png","language":"Rust","readme":"# Radient\n\nRadient is a Rust library designed for automatic differentiation. 
## Examples

### Example 1: Basic Operations with Symbols

```rust
use radient::prelude::*;

// Example with symbols: ln(x + y) * tanh(x - y)^2
fn main() {
    let mut graph = Graph::default();

    let x = graph.var(2.0);
    let y = graph.var(1.0);
    let x_sym = Expr::Symbol(x);
    let y_sym = Expr::Symbol(y);
    let expr_sym = (&x_sym + &y_sym).ln() * (&x_sym - &y_sym).tanh().powi(2);

    graph.compile(expr_sym);

    let result = graph.forward();
    println!("Result: {}", result);

    graph.backward();
    let gradient_x = graph.get_gradient(x);
    println!("Gradient x: {}", gradient_x);
}
```

### Example 2: Obtain the gradient of a function

To compute a gradient, you have two options:

1. `gradient`: concise, but relatively slow (though not by much)
2. `gradient_cached`: fast, but a little more verbose

#### 2.1: `gradient`

```rust
use radient::prelude::*;

fn main() {
    let value = vec![2f64, 1f64];
    // No cached graph - concise but relatively slow
    let (result, gradient) = gradient(f, &value);
    println!("result: {}, gradient: {:?}", result, gradient);
}

fn f(x_vec: &[Expr]) -> Expr {
    let x = &x_vec[0];
    let y = &x_vec[1];

    (x.powi(2) + y.powi(2)).sqrt()
}
```

#### 2.2: `gradient_cached`

```rust
use radient::prelude::*;

fn main() {
    // Compile the graph once, then reuse it
    let mut graph = Graph::default();
    graph.touch_vars(2);
    let symbols = graph.get_symbols();
    let expr = f(&symbols);
    graph.compile(expr);

    // Compute
    let value = vec![2f64, 1f64];
    let (result, grads) = gradient_cached(&mut graph, &value);

    println!("result: {}, gradient: {:?}", result, grads);
}

fn f(x_vec: &[Expr]) -> Expr {
    let x = &x_vec[0];
    let y = &x_vec[1];

    (x.powi(2) + y.powi(2)).sqrt()
}
```
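Both snippets evaluate f(x, y) = sqrt(x^2 + y^2) at (2, 1), so the expected result is sqrt(5) ≈ 2.2361 with gradient (2/sqrt(5), 1/sqrt(5)) ≈ (0.8944, 0.4472). A quick way to cross-check the autodiff output is a central finite difference; the sketch below reuses the `gradient` helper from Example 2.1 and assumes its return values are an `f64` and a `Vec<f64>` (the step size `h` and the check itself are illustrative, not part of the library):

```rust
use radient::prelude::*;

fn f(x_vec: &[Expr]) -> Expr {
    let x = &x_vec[0];
    let y = &x_vec[1];
    (x.powi(2) + y.powi(2)).sqrt()
}

fn main() {
    let value = vec![2f64, 1f64];
    let (_, grad) = gradient(f, &value);

    // Central finite difference: (f(x + h) - f(x - h)) / (2h)
    let h = 1e-6;
    for i in 0..value.len() {
        let mut plus = value.clone();
        let mut minus = value.clone();
        plus[i] += h;
        minus[i] -= h;
        let (f_plus, _) = gradient(f, &plus);
        let (f_minus, _) = gradient(f, &minus);
        let fd = (f_plus - f_minus) / (2.0 * h);
        println!("grad[{}]: autodiff = {}, finite diff = {}", i, grad[i], fd);
    }
}
```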
### Example 3: Single layer perceptron (low-level)

```rust
use peroxide::fuga::*;
use radient::prelude::*;

// Single layer perceptron to solve a binary classification problem
//
// y = sigmoid(sum(w * x))
//
// - x : input with a leading bias term (3-dim vector: [1, x1, x2])
// - y : label (0 or 1)
// - w : weight (3-dim vector)
fn main() {
    // Data Generation
    let n = 100;

    // Group 1 (Normal(2, 0.5), Normal(1, 0.5))
    let n1_x1 = Normal(2.0, 0.5);
    let n1_x2 = Normal(1.0, 0.5);

    // Group 2 (Normal(-3, 0.5), Normal(-2, 0.5))
    let n2_x1 = Normal(-3.0, 0.5);
    let n2_x2 = Normal(-2.0, 0.5);

    let group1: Vec<_> = n1_x1
        .sample(n)
        .into_iter()
        .zip(n1_x2.sample(n))
        .map(|(x1, x2)| vec![1.0, x1, x2])
        .collect();
    let group2: Vec<_> = n2_x1
        .sample(n)
        .into_iter()
        .zip(n2_x2.sample(n))
        .map(|(x1, x2)| vec![1.0, x1, x2])
        .collect();
    let label1 = vec![0f64; n];
    let label2 = vec![1f64; n];

    let data: Vec<_> = group1.into_iter().chain(group2).collect();
    let labels: Vec<_> = label1.into_iter().chain(label2).collect();

    // Declare Graph
    let mut graph = Graph::default();
    graph.touch_vars(7); // w & x & y = 3 + 3 + 1
    let w = graph.get_symbols()[0..3].to_vec();
    let x = graph.get_symbols()[3..6].to_vec();
    let y = graph.get_symbols()[6].clone();
    let expr: Expr = w.into_iter().zip(x).map(|(w, x)| w * x).sum();
    let y_hat = expr.sigmoid();
    let loss = (y - y_hat.clone()).powi(2);
    println!("loss: {:?}", loss);
    graph.compile(loss);

    // Train
    let lr = 0.1;
    let epochs = 100;
    let mut loss_history = vec![0f64; epochs];
    let mut wx = vec![0f64; 7];
    for li in loss_history.iter_mut() {
        let mut loss_sum = 0f64;
        for (x, y) in data.iter().zip(labels.iter()) {
            wx[3..6].copy_from_slice(x);
            wx[6] = *y;
            let (loss, grad) = gradient_cached(&mut graph, &wx);
            loss_sum += loss;

            // Update weights
            for i in 0..3 {
                wx[i] -= lr * grad[i];
            }
        }
        *li = loss_sum / data.len() as f64; // mean loss over all 2n samples
    }
    loss_history.print();

    // Test
    let mut correct = 0;
    graph.compile(y_hat);
    for (x, y) in data.iter().zip(labels) {
        wx[3..6].copy_from_slice(x);
        wx[6] = y;
        graph.reset();
        graph.subs_vars(&wx);
        let y_hat = graph.forward();
        let c_hat = if y_hat > 0.5 { 1.0 } else { 0.0 };
        if c_hat == y {
            correct += 1;
        }
    }
    let total = 2 * n;
    println!("Accuracy: {}%", correct as f64 / total as f64 * 100f64);
    println!("Weights: {:?}", wx);
}
```
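For reference, the gradient that the backward pass computes here can be checked by hand. Writing z = w · x, y_hat = sigmoid(z), and L = (y - y_hat)^2, the chain rule gives

    dL/dw_i = -2 (y - y_hat) * y_hat (1 - y_hat) * x_i

since sigmoid'(z) = sigmoid(z)(1 - sigmoid(z)). This is exactly the per-sample quantity `grad[i]` used in the weight update loop above.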
### Example 4: Single layer perceptron (matrix version)

```rust
use peroxide::fuga::*;
use radient::prelude::*;

// Single layer perceptron to solve a binary classification problem
//
// y = sigmoid(sum(w * x))
//
// - x : input with a leading bias term (3-dim vector: [1, x1, x2])
// - y : label (0 or 1)
// - w : weight (3-dim vector)
fn main() {
    // Data Generation
    let n = 100;

    let n1_x1 = Normal(2.0, 0.5);
    let n1_x2 = Normal(1.0, 0.5);

    let n2_x1 = Normal(-3.0, 0.5);
    let n2_x2 = Normal(-2.0, 0.5);

    let group1: Vec<_> = n1_x1
        .sample(n)
        .into_iter()
        .zip(n1_x2.sample(n))
        .map(|(x1, x2)| vec![1.0, x1, x2])
        .collect();
    let group2: Vec<_> = n2_x1
        .sample(n)
        .into_iter()
        .zip(n2_x2.sample(n))
        .map(|(x1, x2)| vec![1.0, x1, x2])
        .collect();
    let label1 = vec![0f64; n];
    let label2 = vec![1f64; n];

    let data: Vec<_> = group1.into_iter().chain(group2).collect();
    let labels: Vec<_> = label1.into_iter().chain(label2).collect();

    // Declare Graph
    let mut graph = Graph::default();
    let w = graph.symbol();
    let x = graph.symbol();
    let y = graph.symbol();
    let y_hat = (w * x).sigmoid();
    let loss = (&y - &y_hat) * (&y - &y_hat);
    println!("loss: {:?}", loss);
    graph.compile(loss);

    // Train
    let lr = 0.1;
    let epochs = 100;
    let mut loss_history = vec![0f64; epochs];

    let mut w = zeros_shape(1, 3, Col);
    for li in loss_history.iter_mut() {
        let mut loss_sum = 0f64;
        for (x, y) in data.iter().zip(labels.iter()) {
            let vals = vec![
                w.clone(),
                matrix(x.clone(), 3, 1, Col),
                matrix(vec![*y], 1, 1, Col),
            ];
            let (loss, grad) = gradient_cached(&mut graph, &vals);
            loss_sum += loss[(0, 0)];

            // Update weights
            w = w - lr * &grad[0];
        }
        *li = loss_sum / data.len() as f64; // mean loss over all 2n samples
    }

    loss_history.print();

    // Test
    let mut correct = 0;
    let mut wrong = 0;
    graph.compile(y_hat);

    for (x, y) in data.iter().zip(labels) {
        let vals = vec![
            w.clone(),
            matrix(x.clone(), 3, 1, Col),
            matrix(vec![y], 1, 1, Col),
        ];
        graph.reset();
        graph.subs_vars(&vals);
        let y_hat = graph.forward();
        let c_hat = if y_hat[(0, 0)] > 0.5 { 1.0 } else { 0.0 };
        if c_hat == y {
            correct += 1;
        } else {
            wrong += 1;
        }
    }

    let total = correct + wrong;
    println!("correct: {}, wrong: {}, total: {}", correct, wrong, total);
    println!("Accuracy: {}%", correct as f64 / total as f64 * 100f64);
    println!("Weights:\n{}", w);
}
```

## Getting Started

To use Radient in your project, add the following to your `Cargo.toml`:

```toml
[dependencies]
radient = "0.3"
```

Then, add the following to your Rust file:

```rust
use radient::*;
```

## License

Radient is dual-licensed under the Apache-2.0 or MIT license - see the [LICENSE-APACHE](./LICENSE-APACHE) and [LICENSE-MIT](./LICENSE-MIT) files for details.