Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/LogiTorch/logitorch
LogiTorch is a PyTorch-based library for logical reasoning on natural language
Last synced: 5 days ago
- Host: GitHub
- URL: https://github.com/LogiTorch/logitorch
- Owner: LogiTorch
- License: apache-2.0
- Created: 2021-08-11T14:56:38.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2024-09-10T18:52:00.000Z (about 2 months ago)
- Last Synced: 2024-09-20T06:51:53.068Z (about 2 months ago)
- Language: Python
- Homepage: https://logitorch.ai
- Size: 530 KB
- Stars: 66
- Watchers: 4
- Forks: 5
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- Awesome-LLM-Reasoning - LogiTorch - A PyTorch-based library for logical reasoning on natural language. (Other Useful Resources / 2023)
README
# LogiTorch
![image](docs/source/_static/logo_with_text.jpg)
[![Downloads](https://static.pepy.tech/badge/logitorch)](https://pepy.tech/project/logitorch)
LogiTorch is a PyTorch-based library for logical reasoning on natural language. It consists of:
- Textual logical reasoning datasets
- Implementations of different logical reasoning neural architectures
- A simple and clean API that can be used with PyTorch Lightning

## 📦 Installation
```console
foo@bar:~$ pip install logitorch==0.0.1a2
```

Or
```console
foo@bar:~$ pip install git+https://github.com/LogiTorch/logitorch.git
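foo@bar:~$ # Optional sanity check (not from the original README): verify the package imports.
foo@bar:~$ python -c "import logitorch"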
```

## 📖 Documentation
You can find the documentation for LogiTorch on [ReadTheDocs](https://logitorch.readthedocs.io).
## 🖥️ Features
### 📊 Datasets
Datasets implemented in LogiTorch (a minimal loading sketch follows the list):
- [x] [AR-LSAT](https://arxiv.org/abs/2104.06598) (MIT LICENSE)
- [x] [ConTRoL](https://arxiv.org/abs/2011.04864) (GitHub LICENSE)
- [x] [LogiQA](https://arxiv.org/abs/2007.08124) (GitHub LICENSE)
- [x] [ReClor](https://arxiv.org/abs/2002.04326) (Non-Commercial Research Use)
- [x] [RuleTaker](https://arxiv.org/abs/2002.05867) (APACHE-2.0 LICENSE)
- [x] [ProofWriter](https://arxiv.org/abs/2012.13048) (APACHE-2.0 LICENSE)
- [x] [SNLI](https://arxiv.org/abs/1508.05326) (CC-BY-SA-4.0 LICENSE)
- [x] [MultiNLI](https://arxiv.org/abs/1704.05426) (CC-BY-SA-4.0 LICENSE)
- [x] [RTE](https://tac.nist.gov/publications/2010/additional.papers/RTE6_overview.proceedings.pdf) ([TAC User Agreements](https://tac.nist.gov//data/forms/index.html))
- [x] [Negated SNLI](https://aclanthology.org/2020.emnlp-main.732/) (MIT LICENSE)
- [x] [Negated MultiNLI](https://aclanthology.org/2020.emnlp-main.732/) (MIT LICENSE)
- [x] [Negated RTE](https://aclanthology.org/2020.emnlp-main.732/) (MIT LICENSE)
- [x] [PARARULES Plus](https://github.com/Strong-AI-Lab/PARARULE-Plus) (MIT LICENSE)
- [x] [AbductionRules](https://arxiv.org/abs/2203.12186) (MIT LICENSE)
- [x] [FOLIO](https://arxiv.org/abs/2209.00840) (CC-BY-SA-4.0 LICENSE)
- [x] [FLD](https://proceedings.mlr.press/v202/morishita23a.html) (CC-BY-SA-4.0 LICENSE)
- [x] [LogiQA2.0](https://arxiv.org/abs/2007.08124) (CC-BY-SA-4.0 LICENSE)
- [ ] [LogiQA2.0 NLI](https://arxiv.org/abs/2007.08124)
- [ ] [HELP](https://aclanthology.org/S19-1027.pdf)
- [ ] [SimpleLogic](https://arxiv.org/abs/2205.11502)
- [ ] [RobustLR](https://arxiv.org/abs/2205.12598)
- [ ] [LogicNLI](https://aclanthology.org/2021.emnlp-main.303/)
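
The implemented datasets are loaded through their dataset classes, as in the training example further below. Here is a minimal inspection sketch, assuming the class behaves like a standard map-style PyTorch dataset; the exact item format returned by indexing is an assumption, not taken from the documentation:

```python
from logitorch.datasets.qa.ruletaker_dataset import RuleTakerDataset

# Load the depth-5 validation split (same constructor arguments as in the
# training example below).
val_dataset = RuleTakerDataset("depth-5", "val")

# Assumption: standard map-style dataset, so len() and indexing work.
print(len(val_dataset))
print(val_dataset[0])  # e.g. a (context, question, label) item; format not documented here
```

### 🤖 Models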
Models implemented in LogiTorch:
- [x] [RuleTaker](https://arxiv.org/abs/2002.05867)
- [x] [ProofWriter](https://arxiv.org/abs/2012.13048)
- [x] [BERTNOT](https://arxiv.org/abs/2105.03519)
- [x] [PRover](https://arxiv.org/abs/2010.02830)
- [x] [FLDProver](https://proceedings.mlr.press/v202/morishita23a.html)
- [ ] [TINA](https://suchanek.name/work/publications/emnlp-2022.pdf)
- [ ] [FaiRR](https://arxiv.org/abs/2203.10261)
- [ ] [LReasoner](https://arxiv.org/abs/2105.03659)
- [ ] [DAGN](https://arxiv.org/abs/2103.14349)
- [ ] [Focal Reasoner](https://arxiv.org/abs/2105.10334)
- [ ] [AdaLoGN](https://arxiv.org/abs/2203.08992)
- [ ] [Logiformer](https://arxiv.org/abs/2205.00731)
- [ ] [LogiGAN](https://arxiv.org/abs/2205.08794)
- [ ] [MERit](https://arxiv.org/abs/2203.00357)
- [ ] [APOLLO](https://arxiv.org/abs/2212.09282)
- [ ] [LAMBADA](https://arxiv.org/abs/2212.13894)
- [ ] [Chainformer](https://aclanthology.org/2023.findings-acl.588/)
- [ ] [IDOL](https://aclanthology.org/2023.findings-acl.513/)
## 🧪 Example Usage

### Training Example
```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from torch.utils.data.dataloader import DataLoader

from logitorch.data_collators.ruletaker_collator import RuleTakerCollator
from logitorch.datasets.qa.ruletaker_dataset import RuleTakerDataset
from logitorch.pl_models.ruletaker import PLRuleTaker

train_dataset = RuleTakerDataset("depth-5", "train")
val_dataset = RuleTakerDataset("depth-5", "val")

ruletaker_collate_fn = RuleTakerCollator()
train_dataloader = DataLoader(
train_dataset, batch_size=32, collate_fn=ruletaker_collate_fn
)
val_dataloader = DataLoader(
val_dataset, batch_size=32, collate_fn=ruletaker_collate_fn
)

model = PLRuleTaker(learning_rate=1e-5, weight_decay=0.1)
checkpoint_callback = ModelCheckpoint(
save_top_k=1,
monitor="val_loss",
mode="min",
dirpath="models/",
filename="best_ruletaker",
)

trainer = pl.Trainer(callbacks=[checkpoint_callback], accelerator="gpu", gpus=1)
trainer.fit(model, train_dataloader, val_dataloader)
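
# The best checkpoint is written to models/best_ruletaker.ckpt (dirpath +
# filename configured above); the Testing Example below loads it from there.
# Note: `gpus=1` targets PyTorch Lightning < 2.0; on 2.x use `devices=1`.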
```

### Pipeline Example
We provide pre-configured pipelines for training models on some of the datasets.
```python
from logitorch.pipelines.qa_pipelines import ruletaker_pipeline
from logitorch.pl_models.ruletaker import PLRuleTaker

model = PLRuleTaker(learning_rate=1e-5, weight_decay=0.1)
ruletaker_pipeline(
model=model,
dataset_name="depth-5",
saved_model_name="best_ruletaker",
saved_model_path="models/",
batch_size=32,
epochs=10,
accelerator="gpu",
gpus=1,
)
```
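
Judging from its arguments, the pipeline wraps the same steps as the manual training example above (dataset loading, collation, checkpointing, and the PyTorch Lightning trainer) in a single call.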
### Testing Example
```python
from logitorch.pl_models.ruletaker import PLRuleTaker
from logitorch.datasets.qa.ruletaker_dataset import RULETAKER_ID_TO_LABEL

model = PLRuleTaker.load_from_checkpoint("models/best_ruletaker.ckpt")
context = "Bob is smart. If someone is smart then he is kind."
question = "Bob is kind."pred = model.predict(context, question)
print(RULETAKER_ID_TO_LABEL[pred])
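
# For this entailed example, a trained model should print the "true"/entailed
# label; the exact string depends on the RULETAKER_ID_TO_LABEL mapping.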
```

## Citing
Users of LogiTorch should distinguish our library's re-implementations of datasets and models from the originals, and should always credit and cite both our library and the original source, e.g., “We used LogiTorch's \cite{helwe2022logitorch} re-implementation of BERTNOT \cite{hosseini2021understanding}”.
If you want to cite LogiTorch, please refer to our publication at [EMNLP 2022](https://2022.emnlp.org/):
```bibtex
@inproceedings{helwe2022logitorch,
title={LogiTorch: A PyTorch-based library for logical reasoning on natural language},
author={Helwe, Chadi and Clavel, Chlo\'e and Suchanek, Fabian},
booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year={2022}
}
```

## Acknowledgments
This work was partially funded by ANR-20-CHIA-0012-01 (“NoRDF”).