# Hadamard-derived linear Binding (HLB)
Vector Symbolic Architectures (VSAs) are one approach to developing neuro-symbolic AI, in which two vectors in $\mathbb{R}^d$ are 'bound' together to produce a new vector in the same space. VSAs support commutativity and associativity of this binding operation, along with an inverse operation, allowing one to construct symbolic-style manipulations over real-valued vectors. Most VSAs were developed before deep learning and automatic differentiation became popular, and instead focused on efficacy in hand-designed systems. In this work, we introduce the Hadamard-derived linear Binding (HLB), which is designed to have favorable computational efficiency, efficacy in classic VSA tasks, and good performance in differentiable systems.

### Requirements
CSPS
```properties
conda create --name csps python=3.9 -y && conda activate csps
```
- [PyTorch](https://pytorch.org/get-started/locally/) v1.13.1+cu116
XML
```properties
conda create --name xml python=3.9 -y && conda activate xml
```
- [PyTorch](https://pytorch.org/get-started/locally/) v2.0.1+cu118
- [Pyxclib](https://github.com/kunaldahiya/pyxclib)
- ```pip install cython==3.0.10```
- ```pip install tabulate```

Classical VSA Tasks
- [Torchhd](https://torchhd.readthedocs.io/en/stable/) ```pip install torch-hd```
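The bind/unbind interface described in the introduction can be illustrated with MAP, the simplest of the baseline VSAs compared in this work, which binds by elementwise multiplication. This is only a sketch of the generic VSA interface, not HLB's binding operation; see the paper for HLB's definition.

```python
import numpy as np

# Sketch of the generic VSA bind/unbind interface using MAP
# (elementwise multiplication); HLB's binding differs -- see the paper.
rng = np.random.default_rng(0)
d = 1024

# Random bipolar hypervectors with entries in {-1, +1}
x = rng.choice([-1.0, 1.0], size=d)
y = rng.choice([-1.0, 1.0], size=d)

# Binding: elementwise product, hence commutative and associative
z = x * y

# Unbinding: MAP is self-inverse for bipolar vectors (x * x = 1),
# so binding with x again recovers y exactly
y_rec = z * x
assert np.array_equal(y_rec, y)
```

For bipolar vectors the inverse is exact; with general real-valued vectors, as used in differentiable systems, the quality of the (pseudo-)inverse is one of the properties that distinguishes VSAs such as HRR, VTB, MAP, and HLB.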
### Datasets
* CSPS
  - MNIST, SVHN, CIFAR10, and CIFAR100 are standard datasets available through the PyTorch library. MiniImageNet can be downloaded from [Kaggle](https://www.kaggle.com/datasets/arjunashok33/miniimagenet).
* XML
  - For the XML experiments, pre-processed features are used. All of the datasets can be downloaded from the benchmark [website](http://manikvarma.org/downloads/XC/XMLRepository.html).

### Code
The code is organized in two folders: CSPS code is in the ```CSPS Exp/``` folder and XML code is in the ```XML Exp/``` folder. Within each, individual subfolders are named after the datasets. For example, the network and training files for the CIFAR10 dataset are in the ```cifar10/``` subfolder, and so on. We compare the proposed HLB method with the HRR, VTB, and MAP vector symbolic architectures. Results for HRR are taken from previous papers. The training files for the VTB, MAP, and HLB methods are named ```train_{method name}.py```.

Similarly, the ```XML Exp/``` folder contains subfolders for each dataset with their training files, again named ```train_{method name}.py```. Two types of data loader are used: ```dataset_fast.py``` is a fast dataloader for the smaller datasets (Bibtex, Mediamill, Delicious), and ```dataset_sparse.py``` is a dataloader for loading the larger data files. Code for the classical VSA tasks is in the ```Classical VSA Tasks/``` folder.

### Citations
[![Paper](https://img.shields.io/badge/NeurIPS-2024-7a09d6.svg?longCache=true&style=flat)](https://neurips.cc/virtual/2024/poster/93583)
[![Paper](https://img.shields.io/badge/paper-ArXiv-ff0a0a.svg?longCache=true&style=flat)](https://arxiv.org/abs/2410.22669)

For more information about the proposed method and experiments, please see the [paper](https://arxiv.org/abs/2410.22669). If you use this work or find it useful, cite the paper as:
```bibtex
@article{alam2024walsh,
  title={A Walsh Hadamard Derived Linear Vector Symbolic Architecture},
  author={Alam, Mohammad Mahmudul and Oberle, Alexander and Raff, Edward and Biderman, Stella and Oates, Tim and Holt, James},
  journal={arXiv preprint arXiv:2410.22669},
  year={2024}
}
```