{"id":26127351,"url":"https://github.com/milesonerd/neurenix","last_synced_at":"2025-04-13T16:53:12.476Z","repository":{"id":281355553,"uuid":"944686201","full_name":"MilesONerd/neurenix","owner":"MilesONerd","description":"Empowering Intelligent Futures, One Edge at a Time.","archived":false,"fork":false,"pushed_at":"2025-04-13T13:58:12.000Z","size":1110,"stargazers_count":1,"open_issues_count":5,"forks_count":1,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-04-13T14:41:08.906Z","etag":null,"topics":["ai","ai-agents-framework","artificial-intelligence","deep-learning","deep-neural-networks","framework","machine-learning","ml","neural-network","neurenix","phynexus","python"],"latest_commit_sha":null,"homepage":"https://neurenix.github.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MilesONerd.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-03-07T19:37:16.000Z","updated_at":"2025-04-11T02:05:05.000Z","dependencies_parsed_at":"2025-04-05T15:22:28.042Z","dependency_job_id":"12cb2225-7baa-48c6-b496-321794af7a8f","html_url":"https://github.com/MilesONerd/neurenix","commit_stats":null,"previous_names":["milesonerd/neurenix"],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MilesONerd%2Fneurenix","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MilesONerd%2Fneurenix/tags","releases_url":"https://repos.ecosyste.ms/api/v1/ho
sts/GitHub/repositories/MilesONerd%2Fneurenix/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MilesONerd%2Fneurenix/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MilesONerd","download_url":"https://codeload.github.com/MilesONerd/neurenix/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248732401,"owners_count":21152849,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","ai-agents-framework","artificial-intelligence","deep-learning","deep-neural-networks","framework","machine-learning","ml","neural-network","neurenix","phynexus","python"],"created_at":"2025-03-10T18:08:02.981Z","updated_at":"2025-04-13T16:53:12.470Z","avatar_url":"https://github.com/MilesONerd.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":"# Neurenix\n\nNeurenix is an AI framework optimized for embedded devices (Edge AI), with support for multiple GPUs and distributed clusters. 
The framework specializes in AI agents, with native support for multi-agent, reinforcement learning, and autonomous AI.\n\n## Social\n\n[![Bluesky](https://img.shields.io/badge/Bluesky-0285FF?logo=bluesky\u0026logoColor=fff\u0026style=for-the-badge)](https://bsky.app/profile/neurenix.bsky.social)\n[![Discord](https://img.shields.io/badge/Discord-5865F2?style=for-the-badge\u0026logo=discord\u0026logoColor=white)](https://discord.gg/Eqnhr8tK2G)\n[![GitHub](https://img.shields.io/badge/GitHub-000000?style=for-the-badge\u0026logo=github\u0026logoColor=white)](https://github.com/neurenix)\n[![Mastodon](https://img.shields.io/badge/Mastodon-6364FF?style=for-the-badge\u0026logo=Mastodon\u0026logoColor=white)](https://fosstodon.org/@neurenix)\n[![X/Twitter](https://img.shields.io/badge/X-000000?style=for-the-badge\u0026logo=x\u0026logoColor=white)](https://x.com/neurenix)\n\n## Main Features\n\n- **Hot-swappable backend functionality**:  \n  - Added DeviceManager class for runtime device switching  \n  - Created Genesis system for automatic hardware detection and selection  \n  - Modified Tensor class to support hot-swapping between devices  \n\n- **ONNX support**:  \n  - Implemented ONNXConverter for model import/export  \n  - Added convenience functions for easy ONNX integration  \n  - Support for converting between Neurenix and other ML frameworks  \n\n- **API support**:  \n  - Added RESTful, WebSocket, and gRPC server implementations  \n  - Created APIManager for centralized server management  \n  - Provided convenience functions for serving models\n\n- **Dynamic imports from neurenix.binding with NumPy fallbacks for activation functions**:  \n  - relu, sigmoid, tanh, softmax, log_softmax, leaky_relu, elu, selu, gelu  \n\n- **CPU implementations for BLAS operations**:  \n  - GEMM, dot product, GEMV  \n\n- **CPU implementations for convolution operations**:  \n  - conv2d, conv_transpose2d  \n\n- **Conditional compilation for hardware-specific operations**:  \n  - CUDA, 
ROCm, and WebGPU support for BLAS and convolution operations  \n  - Proper error handling for unsupported hardware configurations  \n\n- **Binding functions for tensor operations**:  \n  - backward, no_grad, zero_grad, weight_decay\n \n- **WebAssembly SIMD and WASI-NN support for browser-based tensor operations**  \n- **Hardware acceleration backends**:  \n  - Vulkan for cross-platform GPU acceleration  \n  - OpenCL for heterogeneous computing  \n  - oneAPI for Intel hardware acceleration  \n  - DirectML for Windows DirectX 12 acceleration  \n  - oneDNN for optimized deep learning primitives  \n  - MKL-DNN for Intel CPU optimization  \n  - TensorRT for NVIDIA GPU optimization\n\n- **Automatic quantization support**:  \n  - INT8, FP16, and FP8 precision  \n  - Model pruning capabilities  \n  - Quantization-aware training  \n  - Post-training quantization with calibration  \n \n- **Graph Neural Networks (GNNs)**:  \n  - Implemented various GNN layers (GCN, GAT, GraphSAGE, etc.)  \n  - Added pooling operations for graph data  \n  - Provided graph data structures and utilities  \n  - Implemented common GNN models  \n\n- **Fuzzy Logic**:  \n  - Added fuzzy sets with various membership functions  \n  - Implemented fuzzy variables and linguistic variables  \n  - Created fuzzy rule systems with different operators  \n  - Implemented Mamdani, Sugeno, and Tsukamoto inference systems  \n  - Added multiple defuzzification methods  \n\n- **Federated Learning**:  \n  - Implemented client-server architecture for federated learning  \n  - Added various aggregation strategies (FedAvg, FedProx, FedNova, etc.)  
\n  - Implemented security mechanisms (secure aggregation, differential privacy)  \n  - Added utilities for client selection and model compression  \n\n- **AutoML \u0026 Meta-learning**:  \n  - Implemented hyperparameter search strategies (Grid, Random, Bayesian, Evolutionary)  \n  - Added neural architecture search capabilities  \n  - Implemented model selection and evaluation utilities  \n  - Created pipeline optimization tools  \n  - Added meta-learning algorithms for few-shot learning  \n \n- **Distributed training technologies**:  \n  - MPI for high-performance computing clusters  \n  - Horovod for distributed deep learning  \n  - DeepSpeed for large-scale model training  \n\n- **Memory management technologies**:  \n  - Unified Memory (UM) for seamless CPU-GPU memory sharing  \n  - Heterogeneous Memory Management (HMM) for advanced memory optimization  \n\n- **Specialized hardware acceleration**:  \n  - GraphCore IPU support for intelligence processing  \n  - FPGA support via multiple frameworks:  \n    - OpenCL for cross-vendor FPGA programming  \n    - Xilinx Vitis for Xilinx FPGAs  \n    - Intel OpenVINO for Intel FPGAs\n\n- **DatasetHub**: mechanism that allows users to easily load datasets by providing a URL or file path\n- **CLI**\n- **Continual Learning Module**: Allows models to be retrained and updated with new data without forgetting previously learned information. Implements several techniques:\n  - Elastic Weight Consolidation (EWC)\n  - Experience Replay\n  - L2 Regularization\n  - Knowledge Distillation\n  - Synaptic Intelligence\n\n- **Asynchronous and Interruptible Training Module**: Provides functionality for asynchronous training with continuous checkpointing and automatic resume, even in unstable environments. 
Features include:\n  - Continuous checkpointing with atomic writes\n  - Automatic resume after interruptions\n  - Resource monitoring and proactive checkpointing\n  - Signal handling for graceful interruptions\n  - Distributed checkpointing for multi-node training\n \n- **Docker Support**:\n  - Container management\n  - Image building and management\n  - Volume management\n  - Network configuration\n  - Registry integration\n- **Kubernetes Support**:\n  - Deployment management\n  - Service configuration\n  - Pod management\n  - ConfigMap handling\n  - Secret management\n  - Job scheduling\n  - Cluster management\n\n- **Apache Arrow Support**:\n  - Efficient in-memory columnar data format\n  - Seamless conversion between Arrow tables and Neurenix tensors\n  - Support for various data types and operations\n\n- **Parquet Support**:\n  - High-performance columnar storage format\n  - Reading and writing Parquet files\n  - Dataset management with partitioning support\n  - Integration with Arrow for efficient data processing\n\n- **SHAP (SHapley Additive exPlanations)**:\n  - KernelSHAP for model-agnostic explanations\n  - TreeSHAP for tree-based models\n  - DeepSHAP for deep learning models\n\n- **LIME (Local Interpretable Model-agnostic Explanations)**:\n  - Support for tabular, text, and image data\n  - Customizable sampling and kernel functions\n\n- **Additional Explanation Techniques**\n  - Feature importance analysis\n  - Partial dependence plots\n  - Counterfactual explanations\n  - Activation visualization\n\n- **Multi-Scale Model Architectures**:\n  - MultiScaleModel: Base class for multi-scale architectures\n  - PyramidNetwork: Feature pyramid network implementation\n  - UNet: U-Net architecture with skip connections\n\n- **Multi-Scale Pooling Operations**:\n  - MultiScalePooling: Base class for multi-scale pooling\n  - PyramidPooling: Pyramid pooling module (PPM)\n  - SpatialPyramidPooling: Spatial pyramid pooling (SPP)\n\n- **Feature Fusion Mechanisms**:\n  - 
FeatureFusion: Base class for feature fusion\n  - ScaleFusion: Fusion of features from different scales\n  - AttentionFusion: Attention-based fusion of multi-scale features\n\n- **Multi-Scale Transformations**:\n  - MultiScaleTransform: Base class for multi-scale transforms\n  - Rescale: Rescaling to multiple scales\n  - PyramidDownsample: Pyramid downsampling for multi-scale representations\n  - GaussianPyramid: Gaussian pyramid implementation\n  - LaplacianPyramid: Laplacian pyramid implementation\n \n- **Zero-shot Learning**\n- **NVIDIA Tensor Cores support**\n- **WebAssembly multithreaded support**\n- **gRPC-Streaming support**\n- **Neuroevolution + Evolutionary Algorithms Support**:\n  - Genetic Algorithms: Implementation of population-based optimization with selection, crossover, and mutation operators\n  - NEAT (NeuroEvolution of Augmenting Topologies): Algorithm for evolving both neural network topologies and weights\n  - HyperNEAT: Extension of NEAT that uses CPPNs to generate large-scale neural networks with geometric regularities\n  - CMA-ES (Covariance Matrix Adaptation Evolution Strategy): State-of-the-art evolutionary algorithm for continuous optimization\n  - Evolution Strategies: Implementation of various ES variants with adaptive learning rates and population-based optimization\n \n- **Hybrid Neuro-Symbolic Models Support**:\n  - Symbolic reasoning components with rule-based inference\n  - Neural-symbolic integration with multiple interaction modes\n  - Differentiable logic with support for fuzzy and probabilistic logic\n  - Knowledge distillation between symbolic and neural systems\n  - Advanced reasoning paradigms (constraint satisfaction, logical inference, abductive/deductive/inductive reasoning)\n \n- **Multi-Agent Systems (MAS) Support**:\n  - Agent framework with reactive, deliberative, and hybrid agent types\n  - Communication protocols and message passing infrastructure\n  - Coordination mechanisms (task allocation, auctions, contract 
nets, voting)\n  - Multi-agent learning algorithms (independent learners, joint action learners, team learning)\n  - Environment abstractions for agent interaction\n\n## Documentation\n\n[Neurenix Documentation](https://neurenix.readthedocs.io/en/latest/)\n\n## License\n\nThis project is licensed under the [Apache License 2.0](LICENSE).\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmilesonerd%2Fneurenix","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmilesonerd%2Fneurenix","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmilesonerd%2Fneurenix/lists"}