Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/soheil-mp/auto-pilot-gaming
A deep reinforcement learning framework for training AI agents in classic control and Atari environments.
- Host: GitHub
- URL: https://github.com/soheil-mp/auto-pilot-gaming
- Owner: soheil-mp
- Created: 2024-12-05T22:07:49.000Z (about 1 month ago)
- Default Branch: master
- Last Pushed: 2024-12-05T22:14:20.000Z (about 1 month ago)
- Last Synced: 2024-12-05T23:20:33.724Z (about 1 month ago)
- Language: Python
- Homepage:
- Size: 9.77 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# 🎮 Auto Pilot Gaming
[![Python](https://img.shields.io/badge/Python-3.8%2B-blue?style=for-the-badge&logo=python)](https://www.python.org/)
[![PyTorch](https://img.shields.io/badge/PyTorch-Latest-red?style=for-the-badge&logo=pytorch)](https://pytorch.org/)
[![License](https://img.shields.io/badge/License-MIT-green?style=for-the-badge)](LICENSE)
[![Gymnasium](https://img.shields.io/badge/Gymnasium-Latest-orange?style=for-the-badge)](https://gymnasium.farama.org/)
A deep reinforcement learning framework for training AI agents in classic control and Atari environments.

[Features](#features) • [Installation](#-installation) • [Usage](#-usage) • [Documentation](#-documentation) • [Contributing](#-contributing)
---
## 📁 Project Structure
```
adaptive-game-ai/
├── src/
│   ├── agents/
│   │   └── dqn_agent.py          # DQN implementation
│   ├── utils/
│   │   ├── preprocessing.py      # Frame processing utilities
│   │   ├── replay_buffer.py      # Experience replay implementation
│   │   └── visualization.py      # Training visualization
│   ├── config/
│   │   └── hyperparameters.py    # Training configurations
│   └── play.py                   # Script to run trained agents
├── models/                       # Saved model checkpoints
├── logs/                         # Training logs
├── videos/                       # Recorded gameplay videos
├── tests/                        # Unit tests
└── requirements.txt              # Project dependencies
```

## ✨ Features

### 🧠 Core Features
- Deep Q-Learning (DQN) implementation
- Experience replay mechanism (see the sketch below)
- Frame stacking & preprocessing
- ε-greedy exploration strategy
- Real-time training visualization
- Video recording of trained agents
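
The experience replay mechanism listed above is not spelled out in this README, so here is a minimal sketch of a deque-based replay buffer of the kind `src/utils/replay_buffer.py` presumably implements. The class name, method names, and transition layout are illustrative assumptions, not the repository's actual API.

```python
# Minimal experience-replay sketch (illustrative; not the repo's replay_buffer.py).
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Fixed-size buffer that stores transitions and samples random minibatches."""

    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        """Store a single (s, a, r, s', done) transition."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        """Sample a uniform random minibatch as stacked NumPy arrays."""
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return (np.stack(states), np.array(actions), np.array(rewards, dtype=np.float32),
                np.stack(next_states), np.array(dones, dtype=np.float32))

    def __len__(self) -> int:
        return len(self.buffer)
```

Uniform sampling like this is the standard DQN setup; a prioritized variant would weight transitions by TD error instead.
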
### 🎯 Capabilities

- Multi-game support (both classic control and Atari games)
- Model checkpointing with best model saving
- Real-time visualization of training progress
- Performance metrics tracking
- Gameplay video recording
- Support for both image-based and vector-based environments

## 🛠️ Tech Stack

### Core Dependencies
- Python 3.8+
- PyTorch 2.0+
- Gymnasium[atari] 0.29+
- NumPy 1.24+
- OpenCV 4.8+
- Matplotlib 3.7+

### Optional Tools
- CUDA-capable GPU
- MoviePy (for video recording)
- Pygame (for environment rendering)

## 📋 Prerequisites
Before you begin, ensure you have the following:
- ✅ Python 3.8 or higher installed
- ✅ pip (Python package manager)
- ✅ Virtual environment (recommended)
- ✅ CUDA-capable GPU (optional, for faster training)

## 🚀 Installation
1. **Clone the repository:**
```bash
git clone [repository-url]
cd adaptive-game-ai
```

2. **Set up virtual environment:**
```bash
python -m venv venv

# Windows
.\venv\Scripts\activate

# Unix/MacOS
source venv/bin/activate
```

3. **Install dependencies:**
```bash
pip install -r requirements.txt

# For video recording support
pip install "gymnasium[other]" moviepy
```

## 🎮 Usage
### Training Your AI
```bash
python -m src.agents.dqn_agent --env CartPole-v1 \
--episodes 1000 \
--save-path models/cartpole_dqn.pt \
--target-reward 195
```

Available Training Options:
| Option | Description | Default |
|--------|-------------|---------|
| `--env` | Environment name | Required |
| `--episodes` | Number of training episodes | 1000 |
| `--save-path` | Model save location | Required |
| `--device` | Training device (cuda/cpu) | auto |
| `--target-reward` | Target reward to consider solved | 195.0 |
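
For orientation, the sketch below shows how options like these typically drive a DQN training loop: looping over `--episodes`, resolving `--device` automatically, checkpointing the best model, and stopping once the rolling average reward reaches `--target-reward`. It uses a random placeholder policy and illustrative variable names; the repository's actual `dqn_agent.py` may be organized differently.

```python
# Hedged outline of a training loop driven by the CLI options above.
import gymnasium as gym
import numpy as np
import torch

env = gym.make("CartPole-v1")                                            # --env
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")    # --device auto
episodes, target_reward = 1000, 195.0                                    # --episodes / --target-reward
best_avg, recent_rewards = -float("inf"), []

for episode in range(episodes):
    state, _ = env.reset()
    episode_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # placeholder for the agent's epsilon-greedy action
        state, reward, terminated, truncated, _ = env.step(action)
        episode_reward += float(reward)
        done = terminated or truncated

    recent_rewards = (recent_rewards + [episode_reward])[-100:]
    avg = float(np.mean(recent_rewards))
    if avg > best_avg:
        best_avg = avg
        # torch.save(agent.q_network.state_dict(), "models/cartpole_dqn.pt")  # --save-path
    if avg >= target_reward:
        print(f"Solved after {episode + 1} episodes (100-episode average {avg:.1f})")
        break

env.close()
```
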
### Playing Games

```bash
python -m src.play --env CartPole-v1 \
--model models/cartpole_dqn.pt \
--episodes 5 \
--record \
--video-dir videos/cartpole
```

Available Play Options:
| Option | Description | Default |
|--------|-------------|---------|
| `--env` | Environment name | Required |
| `--model` | Path to trained model | Required |
| `--episodes` | Number of episodes | 5 |
| `--render` | Enable visualization | True |
| `--record` | Record gameplay videos | False |
| `--video-dir` | Directory to save videos | videos |
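
As a rough, non-authoritative sketch, these options map onto Gymnasium roughly as follows; the checkpoint loading and greedy action selection are placeholders for whatever `src/play.py` actually does.

```python
# Hypothetical playback outline; play.py's real interface may differ.
import gymnasium as gym
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
env = gym.make("CartPole-v1", render_mode="human")                        # --env / --render
checkpoint = torch.load("models/cartpole_dqn.pt", map_location=device)    # --model
# ...restore the checkpoint into the agent's Q-network here...

for episode in range(5):                                                  # --episodes
    obs, _ = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()  # placeholder for argmax over predicted Q-values
        obs, reward, terminated, truncated, _ = env.step(action)
        total_reward += float(reward)
        done = terminated or truncated
    print(f"Episode {episode}: reward {total_reward:.1f}")

env.close()
```
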
### Video Recording

The agent's gameplay can be recorded using the `--record` flag. Videos are saved in MP4 format with the following naming convention:
```
{env_name}_{timestamp}-episode-{episode_number}.mp4
```

Example video directory structure:
```
videos/
└── cartpole_test/
    ├── CartPole-v1_20240101_120000-episode-0.mp4
    ├── CartPole-v1_20240101_120000-episode-1.mp4
    └── CartPole-v1_20240101_120000-episode-2.mp4
```
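
The repository's recording code is not shown here. As a hedged sketch, recording like the above is commonly implemented with Gymnasium's `RecordVideo` wrapper (which itself needs MoviePy, matching the optional install step earlier); note that the wrapper's default file naming differs from the convention shown, so the exact filenames are the repo's own choice.

```python
# Recording sketch using Gymnasium's RecordVideo wrapper; the folder and prefix
# mirror the example above and are assumptions about how --record / --video-dir are wired.
import gymnasium as gym
from gymnasium.wrappers import RecordVideo

env = gym.make("CartPole-v1", render_mode="rgb_array")   # rgb_array frames are required for recording
env = RecordVideo(
    env,
    video_folder="videos/cartpole_test",                 # --video-dir
    name_prefix="CartPole-v1",
    episode_trigger=lambda episode_id: True,             # record every episode
)

obs, _ = env.reset()
done = False
while not done:
    obs, reward, terminated, truncated, _ = env.step(env.action_space.sample())
    done = terminated or truncated
env.close()  # finalizes and writes the MP4 files
```
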
## 🧪 Implementation Details

Our implementation leverages state-of-the-art techniques in deep reinforcement learning:
- 🔁 DQN with experience replay buffer
- 🖼️ CNN architecture for image processing
- 📈 Advanced reward shaping
- 🎯 Frame stacking (4 frames)
- 🎲 Epsilon-greedy exploration
- 📊 Real-time training visualization
- 🎥 Gameplay video recording
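
To make those bullet points concrete, here is a compact sketch of frame preprocessing with OpenCV, a Nature-DQN-style CNN over a stack of 4 frames, and epsilon-greedy action selection. `preprocess`, `QNetwork`, and `select_action` are illustrative names; the repository's `preprocessing.py` and `dqn_agent.py` may be structured differently.

```python
# Illustrative preprocessing + Q-network + epsilon-greedy sketch (not the repo's exact code).
import random

import cv2
import numpy as np
import torch
import torch.nn as nn


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an RGB Atari frame to a normalized 84x84 grayscale image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    resized = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0


class QNetwork(nn.Module):
    """CNN mapping a stack of 4 preprocessed frames to one Q-value per action."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


def select_action(net: QNetwork, state: torch.Tensor, epsilon: float, n_actions: int) -> int:
    """Epsilon-greedy: explore with probability epsilon, otherwise take argmax Q."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(net(state.unsqueeze(0)).argmax(dim=1).item())
```

The 84x84 grayscale frames, 4-frame stack, and 32/64/64 convolution stack follow the original Atari DQN setup.
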
## 🤝 Contributing

Contributions are welcome! Here's how you can help:
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## 📝 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.