# encode-attend-navigate-pytorch

Pytorch implementation of encode-attend-navigate, a Deep Reinforcement Learning-based TSP solver.
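For context, the quantity such a solver minimizes is the total length of a closed tour over a set of 2D points. A minimal sketch of that objective (`tour_length` is an illustrative helper, not a function from this repository):

```python
import numpy as np

def tour_length(points: np.ndarray, order: np.ndarray) -> float:
    """Total Euclidean length of the closed tour visiting `points` in `order`."""
    ordered = points[order]
    # Edge vectors between consecutive cities, including the return edge.
    diffs = ordered - np.roll(ordered, shift=-1, axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

# Visiting the unit square corner by corner gives its perimeter.
square = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
length = tour_length(square, np.array([0, 1, 2, 3]))  # → 4.0
```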

## Get started

### Run on Colab

You can leverage the free GPU on Colab to train this model. Just run this notebook:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Tggr-QIQSyt7jnjZRuBp5wBt6eDoC1-c?usp=sharing)

### Run locally

Clone the repository:

```console
git clone https://github.com/astariul/encode-attend-navigate-pytorch.git
cd encode-attend-navigate-pytorch
```

---

Install dependencies:

```console
pip install -r requirements.txt
```

---

Run the code:

```console
python main.py
```

---

You can specify your own configuration file:

```console
python main.py config=my_conf.yaml
```

---

Or override parameters directly from the command line:

```console
python main.py lr=0.002 max_len=100 batch_size=64
```

### Expected results

I ran the code with the following command line:

```console
python main.py enc_stacks=1 lr=0.0002 p_dropout=0.1
```

On Colab, with a `Tesla T4` GPU, the training took 1h 46m to complete.

Here are the training curves:

---

After training, here are a few examples of generated paths:

## Implementation

This code is a direct translation of the [official TF 1.x implementation](https://framagit.org/MichelDeudon/encode-attend-navigate), by @MichelDeudon.

Please refer to their README for additional details.

---

To ensure the Pytorch implementation produces the same results as the original, I compared the outputs of each layer given the same inputs and checked that they matched.
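Layer-by-layer parity checks of this kind usually amount to comparing activations within a floating-point tolerance. A minimal NumPy sketch (`outputs_match` is an illustrative name, not a function from the test notebook):

```python
import numpy as np

def outputs_match(out_a: np.ndarray, out_b: np.ndarray, atol: float = 1e-6) -> bool:
    """True when two layers' outputs agree elementwise within tolerance."""
    return out_a.shape == out_b.shape and np.allclose(out_a, out_b, atol=atol)

# Identical activations up to tiny float noise count as equal.
a = np.array([[0.25, -1.5], [3.0, 0.0]])
b = a + 1e-8
same = outputs_match(a, b)  # → True
```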

You can find (and run) these tests in this Colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1HChapUUC_3cZoZsG1A3WJLwclQRsyuR2?usp=sharing)