https://github.com/entrpn/fingym

A tool for developing reinforcement learning algorithms focused on stock prediction
- Host: GitHub
- URL: https://github.com/entrpn/fingym
- Owner: entrpn
- License: apache-2.0
- Created: 2020-01-21T04:17:26.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2022-05-04T16:14:56.000Z (about 3 years ago)
- Last Synced: 2025-04-09T21:09:28.918Z (about 1 month ago)
- Topics: artificial-intelligence, artificial-intelligence-algorithms, evolution-strategies, evolutionary-algorithms, evolutionary-strategy, python, q-learning, reinforcement-agents, reinforcement-learning, reinforcement-learning-environments, reinforcement-learning-playground, stock-market, stock-price-prediction
- Language: Python
- Homepage: http://entrpn.com/fingym/
- Size: 20.7 MB
- Stars: 17
- Watchers: 3
- Forks: 5
- Open Issues: 0
Metadata Files:
- Readme: README.rst
- License: LICENSE.txt
README
FinGym
**********

.. image:: https://badge.fury.io/py/fingym.svg
   :target: https://badge.fury.io/py/fingym

**Fingym is a toolkit for developing reinforcement learning algorithms tailored specifically for stock market trading.** This is the ``fingym`` open-source library, which gives you access to a standardized set of environments.
`See What's New section below <#what-s-new>`_
``fingym`` makes no assumptions about the structure of your agent, and is compatible with any numerical computation library, such as TensorFlow or Theano. You can use it from Python code.
If you're not sure where to start, we recommend beginning with the
`docs <http://entrpn.com/fingym/>`_ on our site.

.. contents:: **Contents of this document**
   :depth: 2

Basics
======

There are two basic concepts in reinforcement learning: the
environment (namely, the outside world) and the agent (namely, the
algorithm you are writing). The agent sends `actions` to the
environment, and the environment replies with `observations` and
`rewards` (that is, a score).

The core ``fingym`` interface is ``Env``, which is
the unified environment interface. There is no interface for agents;
that part is left to you. The following are the ``Env`` methods you
should know:

- `reset(self)`: Reset the environment's state. Returns `observation`.
- `step(self, action)`: Step the environment by one timestep. Returns `observation`, `reward`, `done`, `info`.

Supported systems
-----------------

We currently support Python 3.5 -- 3.7.
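The ``reset``/``step`` contract above can be sketched with a toy stand-in environment. ``StubEnv``, its fixed episode length, and its placeholder reward are all hypothetical (a real fingym environment emits market data), but the agent loop itself is the same for any ``Env``:

.. code:: python

   import random

   class StubEnv:
       """Hypothetical stand-in implementing the two-method Env contract."""

       def __init__(self, n_steps=5):
           self.n_steps = n_steps
           self.t = 0

       def reset(self):
           """Reset state and return the first observation."""
           self.t = 0
           return [0.0]

       def step(self, action):
           """Advance one timestep; return observation, reward, done, info."""
           self.t += 1
           observation = [float(self.t)]
           reward = random.random()       # placeholder reward
           done = self.t >= self.n_steps  # episode ends after n_steps
           return observation, reward, done, {}

   def run_episode(env, policy=lambda obs: 0):
       """The canonical agent loop: reset once, then step until done."""
       obs = env.reset()
       total_reward, done = 0.0, False
       while not done:
           obs, reward, done, info = env.step(policy(obs))
           total_reward += reward
       return total_reward

   total = run_episode(StubEnv())

Any agent you write against fingym reduces to this loop; only the policy and the environment differ.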
Installation
============

You can perform a minimal install of ``fingym`` with:

.. code:: shell

   git clone https://github.com/entrpn/fingym
   cd fingym
   pipenv shell
   pipenv install -e .

If you prefer, you can do a minimal install of the packaged version directly from PyPI:
.. code:: shell

   pip install fingym
Environments
============

See the `fingym site <http://entrpn.com/fingym/>`_.

Observations
============

See the `fingym site <http://entrpn.com/fingym/>`_.

Actions
=======

See the `fingym site <http://entrpn.com/fingym/>`_.
Examples
========

See the ``examples`` directory.
- Run ``examples/agents/buy_and_hold_agent.py`` to run a simple buy-and-hold agent.
- Run ``examples/agents/random_agent.py`` to run a simple random agent.
- Run ``examples/agents/dqn_agent.py`` to run a DQN agent.
- Run ``examples/agents/evolutionary_agent.py`` to run a genetic algorithm.
- Run ``examples/agents/evolutionary_agent_w_crossover.py`` to run a genetic algorithm using crossover.

Testing
=======

If you cloned this repo, add fingym to the Python path:
.. code:: shell

   export PYTHONPATH=$PYTHONPATH:/path/to/fingym/fingym
We are using `pytest <https://docs.pytest.org>`_ for tests. You can run them via:
.. code:: shell

   pytest
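If you add tests of your own, a pytest test of the ``Env`` contract can be as small as the sketch below. ``FakeDailyEnv`` is a made-up fixture, not one of fingym's environments; it exists only so the test is self-contained:

.. code:: python

   class FakeDailyEnv:
       """Made-up environment used only to exercise the Env contract."""

       def reset(self):
           self.t = 0
           return [100.0]  # e.g. an opening price

       def step(self, action):
           self.t += 1
           done = self.t >= 2
           return [100.0 + self.t], 0.0, done, {}

   def test_step_returns_four_values():
       env = FakeDailyEnv()
       env.reset()
       obs, reward, done, info = env.step(0)
       assert isinstance(obs, list)
       assert isinstance(reward, float)
       assert done in (True, False)
       assert isinstance(info, dict)

pytest discovers any function named ``test_*``; plain ``assert`` statements are all it needs.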
.. _See What's New section below:
What's new
==========
- 2020-02-05: First release. 3 years of SPY intraday minute steps. 10 years of daily steps.
- 2020-02-26: More environments from different symbols.
- 2020-04-14: Renamed package from `gym` to `fingym`.
- 2020-05-01: Added random walk environments and alphavantage environment.
- 2020-05-07: Added IEX environment with caching to reduce consumption of the message quota.