Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/pskrunner14/trading-bot
Stock Trading Bot using Deep Q-Learning
ai-agents deep-q-learning q-learning reinforcement-learning stock-price-prediction stock-trading stock-trading-bot trading-algorithms trading-bot
Last synced: about 1 month ago
Stock Trading Bot using Deep Q-Learning
- Host: GitHub
- URL: https://github.com/pskrunner14/trading-bot
- Owner: pskrunner14
- License: mit
- Created: 2018-08-13T10:44:08.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2023-12-03T13:06:25.000Z (about 1 year ago)
- Last Synced: 2024-08-01T06:22:11.307Z (4 months ago)
- Topics: ai-agents, deep-q-learning, q-learning, reinforcement-learning, stock-price-prediction, stock-trading, stock-trading-bot, trading-algorithms, trading-bot
- Language: Jupyter Notebook
- Homepage:
- Size: 9.42 MB
- Stars: 923
- Watchers: 50
- Forks: 332
- Open Issues: 16
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-systematic-trading - Stock Trading Bot using Deep Q-Learning (Machine Learning / Cryptocurrencies)
- awesome-systematic-trading - Stock Trading Bot using Deep Q-Learning (Backtest + live trading / Machine Learning / Reinforcement Learning Focused)
README
# Overview
This project implements a Stock Trading Bot, trained using Deep Reinforcement Learning, specifically Deep Q-learning. The implementation is kept simple and as close as possible to the algorithm discussed in the paper, for learning purposes.
## Introduction
Reinforcement Learning is a family of machine learning techniques in which an intelligent agent learns from its environment by interacting with it, discovering an optimal policy through trial and error. This is especially useful in many real-world tasks where supervised learning is not the best approach, for example because of the nature of the task itself or a lack of appropriately labelled data.
The key idea is that this technique can be applied to any real-world task that can be loosely described as a Markovian process.
## Approach
This work uses a Model-free Reinforcement Learning technique called Deep Q-Learning (neural variant of Q-Learning).
At any given time step (within an episode), the agent observes its current state (an n-day window of stock price features), selects and performs an action (buy/sell/hold), observes the subsequent state, receives a reward signal (the change in portfolio position), and finally adjusts its parameters based on the gradient of the computed loss.
There have been several improvements to the Q-learning algorithm over the years, and a few of them have been implemented in this project (a sketch of their target calculations follows this list):
- [x] Vanilla DQN
- [x] DQN with fixed target distribution
- [x] Double DQN
- [ ] Prioritized Experience Replay
- [ ] Dueling Network Architectures
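The practical difference between the strategies above comes down to how the Q-learning target is computed for each transition. Below is a minimal sketch of that calculation; the `model`/`target_model` objects (assumed to expose a `predict(state)` method returning one Q-value per action), the `GAMMA` value, and the strategy strings other than `t-dqn` (which appears in the training command further down) are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

GAMMA = 0.95  # discount factor (illustrative value)

def q_target(reward, next_state, done, model, target_model, strategy="t-dqn"):
    """Compute the Q-learning target for a single transition."""
    if done:
        return reward  # no future reward beyond a terminal state

    if strategy == "dqn":
        # Vanilla DQN: bootstrap from the online network itself
        return reward + GAMMA * np.max(model.predict(next_state))

    if strategy == "t-dqn":
        # Fixed target distribution: bootstrap from a periodically
        # synced copy of the network to stabilise training
        return reward + GAMMA * np.max(target_model.predict(next_state))

    if strategy == "double-dqn":
        # Double DQN: the online network selects the action,
        # the target network evaluates it (reduces overestimation bias)
        best_action = np.argmax(model.predict(next_state))
        return reward + GAMMA * target_model.predict(next_state)[best_action]

    raise ValueError(f"unknown strategy: {strategy}")
```

Keeping the target network frozen between periodic syncs stabilises the bootstrapped targets, and letting the online network pick the action while the target network scores it is what reduces Double DQN's overestimation bias.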
## Results
Trained on `GOOG` 2010-17 stock data, tested on 2019 with a profit of $1141.45 (validated on 2018 with a profit of $863.41):
![Google Stock Trading episode](./extra/visualization.png)
You can obtain similar visualizations of your model evaluations using the [notebook](./visualize.ipynb) provided.
## Some Caveats
- At any given state, the agent can only decide to buy/sell one stock at a time. This keeps things as simple as possible, since deciding how much stock to buy or sell is a problem of portfolio redistribution in its own right.
- The n-day window feature representation is a vector of consecutive differences in the Adjusted Closing price of the stock we're trading, passed through a sigmoid operation to normalize the values to the range [0, 1] (a minimal sketch of this follows the list).
- Training is preferably done on CPU due to its sequential nature: after each episode of trading, we replay the experience (1 epoch over a small minibatch) and update the model parameters.
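As an illustration of the n-day window caveat above, here is a minimal sketch of that state construction; the function names and the padding behaviour at the start of the series are assumptions, not the repository's exact implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def get_state(prices, t, window_size):
    """Build the n-day window state at time step t.

    `prices` is a 1-D sequence of Adjusted Close prices; the state is the
    sigmoid of the consecutive price differences inside the window, which
    squashes every feature into the range (0, 1).
    """
    start = t - window_size + 1
    if start < 0:
        # pad with the first price when the window extends before day 0
        block = np.concatenate([np.repeat(prices[0], -start), prices[: t + 1]])
    else:
        block = np.asarray(prices[start : t + 1])
    return sigmoid(np.diff(block))
```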
## Data
You can download historical financial data from [Yahoo! Finance](https://ca.finance.yahoo.com/) for training, or use one of the sample datasets already present under `data/`.
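Assuming the CSVs follow the standard Yahoo! Finance export format (a `Date` column plus `Open`, `High`, `Low`, `Close`, `Adj Close`, `Volume`), extracting the price series could look like the sketch below; the column name and helper function are assumptions, not the project's actual data loader.

```python
import pandas as pd

def load_prices(csv_path):
    """Load the Adjusted Close price series from a Yahoo! Finance CSV export."""
    df = pd.read_csv(csv_path)
    return df["Adj Close"].astype(float).tolist()

prices = load_prices("data/GOOG.csv")
print(f"{len(prices)} trading days, first adjusted close: {prices[0]:.2f}")
```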
## Getting Started
To use this project, you'll need to install the required Python packages:
```bash
pip3 install -r requirements.txt
```
Now you can open up a terminal and start training the agent:
```bash
python3 train.py data/GOOG.csv data/GOOG_2018.csv --strategy t-dqn
```
Once you're done training, run the evaluation script and let the agent make trading decisions:
```bash
python3 eval.py data/GOOG_2019.csv --model-name model_GOOG_50 --debug
```
Now you are all set up!
## Acknowledgements
- [@keon](https://github.com/keon) for [deep-q-learning](https://github.com/keon/deep-q-learning)
- [@edwardhdlu](https://github.com/edwardhdlu) for [q-trader](https://github.com/edwardhdlu/q-trader)
## References
- [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/abs/1312.5602)
- [Human Level Control Through Deep Reinforcement Learning](https://deepmind.com/research/publications/human-level-control-through-deep-reinforcement-learning/)
- [Deep Reinforcement Learning with Double Q-Learning](https://arxiv.org/abs/1509.06461)
- [Prioritized Experience Replay](https://arxiv.org/abs/1511.05952)
- [Dueling Network Architectures for Deep Reinforcement Learning](https://arxiv.org/abs/1511.06581)