# Target Strike

**Target Strike** is a Unity-based shooting game that challenges a trained ML agent and a human player to compete in precision shooting by targeting randomly spawned objects in separate environments. The ML agent learns to maximize its score autonomously through reinforcement learning, while the human player tests their skills using manual controls. The game tracks and compares both players' scores to evaluate the performance difference between AI and human gameplay.

## Game Overview

In *Target Strike*, both the ML agent and the human player aim to score points by shooting at targets and avoiding obstacles. Rewards are awarded for successful shots, while penalties are applied for missed shots and collisions.

You can **watch demo gameplay** by **clicking the game poster below**.

[![Watch the video](https://github.com/Zeekersky/Target_Strike_Game/blob/main/TargetStrikeGamePoster.jpg)](https://youtu.be/h2Xp72Lh_KU)

### Game Components

1. **ML Agent**: An AI-driven player that learns to navigate, aim, and shoot targets autonomously using reinforcement learning.
2. **Human Player**: A manually controlled player using standard movement and shooting controls in Unity.

## Features

- **Reinforcement Learning**: The ML agent learns effective strategies to target and shoot with minimal mistakes.
- **Reward System**: Dynamic in-game rewards and penalties are given based on shooting accuracy, target hits, and collision events.
- **Score Comparison**: Track total rewards for both the ML agent and the human player to evaluate the performance and efficiency of each.
- **Menu**: A main menu is included, and the game can be paused at any time by pressing `Esc`.
- **Camera Switch**: The game offers three camera modes: press `1` for `FPP` (first-person), `2` for `TPP` (third-person), or `3` for `Top View` (see the sketch below).
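
For anyone curious how the camera switch could be wired up, here is a minimal sketch; the actual script and camera names in the project may differ.

```csharp
using UnityEngine;

// Hypothetical sketch of the camera-switch behaviour described above.
// Script and field names are assumptions, not taken from the actual project.
public class CameraSwitcher : MonoBehaviour
{
    [SerializeField] private Camera fppCamera;   // first-person view (key 1)
    [SerializeField] private Camera tppCamera;   // third-person view (key 2)
    [SerializeField] private Camera topCamera;   // top-down view (key 3)

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha1)) Activate(fppCamera);
        else if (Input.GetKeyDown(KeyCode.Alpha2)) Activate(tppCamera);
        else if (Input.GetKeyDown(KeyCode.Alpha3)) Activate(topCamera);
    }

    private void Activate(Camera target)
    {
        // Enable only the requested camera.
        fppCamera.enabled = target == fppCamera;
        tppCamera.enabled = target == tppCamera;
        topCamera.enabled = target == topCamera;
    }
}
```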

## Reward Structure

1. **Target Hit**: +1 reward for successfully hitting a target with the laser.
2. **Missed Shot**: -0.2 penalty for each missed shot.
3. **Wall Collision**: -0.1 penalty if the player collides with a wall.
4. **Target Collision**: -0.05 penalty if the player collides directly with the target instead of shooting it.
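
For reference, here is a minimal sketch of where these reward values could be applied inside an ML-Agents `Agent` subclass. The method names, tags, and structure are assumptions for illustration, not the project's actual `AgentController.cs`.

```csharp
using Unity.MLAgents;
using UnityEngine;

// Illustrative only: shows where the reward values above could be applied.
// Method and tag names are assumptions, not taken from the actual project.
public class RewardSketch : Agent
{
    public void OnTargetHit()  => AddReward(+1.0f);   // laser hit a target
    public void OnMissedShot() => AddReward(-0.2f);   // laser hit nothing

    private void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.CompareTag("Wall"))
            AddReward(-0.1f);    // bumped into a wall
        else if (collision.gameObject.CompareTag("Target"))
            AddReward(-0.05f);   // touched a target instead of shooting it
    }
}
```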

## Project Structure

- **Scripts**
  - **AgentController.cs**: Handles movement, shooting mechanics, and reward tracking for the ML agent.
  - **PlayerController.cs**: Manages movement and shooting controls for the human player.
  - **GunController.cs**: Shared shooting functionality for both players, including laser visualization and hit detection.
  - **EnvBehavior.cs**: Manages the ML agent's training ground, handling target spawning and resetting (see the spawning sketch at the end of this section).
  - **PlayerEnvBehavior.cs**: Manages the human player's training ground with target spawning and resetting.
  - The remaining scripts handle the main menu, pause menu, and scoreboard.

- **Training Grounds**
  - **AgentTrainingGround**: Dedicated environment where the ML agent operates.
  - **PlayerTrainingGround**: Separate environment designed for human player gameplay.
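
The project's spawning logic is not reproduced here, but a minimal sketch of how random target spawning inside a bounded training ground might look is shown below; the prefab reference, bounds, and field names are assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch of target spawning inside a training ground.
// The real EnvBehavior.cs / PlayerEnvBehavior.cs may differ.
public class TargetSpawnerSketch : MonoBehaviour
{
    [SerializeField] private GameObject targetPrefab;                  // target to spawn
    [SerializeField] private Vector2 groundSize = new Vector2(10f, 10f);
    [SerializeField] private float spawnHeight = 0.5f;

    public GameObject SpawnTarget()
    {
        // Pick a random point within the ground bounds, local to this environment.
        Vector3 localPos = new Vector3(
            Random.Range(-groundSize.x / 2f, groundSize.x / 2f),
            spawnHeight,
            Random.Range(-groundSize.y / 2f, groundSize.y / 2f));

        return Instantiate(targetPrefab, transform.position + localPos,
                           Quaternion.identity, transform);
    }
}
```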

## Getting Started

### Prerequisites

- **Unity 2020.3 or higher**
- **ML-Agents**: Follow the setup instructions in the [ML-Agents GitHub repository](https://github.com/Unity-Technologies/ml-agents).

### Installation

1. **Clone the Repository**:
```bash
git clone https://github.com/Zeekersky/Target-Strike.git
cd Target-Strike
```

2. **Copy Library Folder**:
- Download the `Library` folder from this [link](https://drive.google.com/drive/folders/1kbSbZiRYR7hZ0JhDMHK7TYtsvLy1ongQ?usp=sharing) and place it in the project root.

3. **Open in Unity**:
- Open the project in Unity and ensure all necessary packages are imported, especially `ML-Agents`.

### Scene Setup

1. **Two scenes, `Menu` and `TargetStrike`**, are in the `Scene` folder. Make sure the `Menu` scene is selected before starting the game.
2. Ensure that each object has the appropriate components and scripts assigned in the Inspector.
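
As a sketch of the intended scene flow, the `Menu` scene typically hands off to `TargetStrike` through Unity's `SceneManager`. This assumes a menu button calls a handler like the one below; the project's actual menu scripts may differ.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of a main-menu handler; the actual menu scripts in the project may differ.
public class MainMenuSketch : MonoBehaviour
{
    // Hooked up to a UI Button's OnClick event to start the game.
    public void PlayGame()
    {
        SceneManager.LoadScene("TargetStrike");
    }

    public void QuitGame()
    {
        Application.Quit();
    }
}
```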

## Gameplay Instructions

### ML Agent

1. Run the Unity scene, and the ML agent will play autonomously, attempting to maximize its total reward.
2. **Rewards**: The top of the game screen displays both the agent's and the human player's rewards, showing learning progression and success.

### Human Player

1. Use the **PlayerController** to control the human player in `PlayerTrainingGround`.
2. **Controls**:
- **Move**: Use `WASD` or arrow keys for navigation.
- **Shoot**: Press `Space` to shoot a laser toward the target.
3. **Score Tracking**: The console logs the human player's total rewards to allow performance comparison with the ML agent.
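
A minimal sketch of how the movement and shooting input described above could be read is shown below; the real `PlayerController.cs` likely differs in its details.

```csharp
using UnityEngine;

// Illustrative input handling for the human player; the real PlayerController.cs may differ.
public class PlayerControllerSketch : MonoBehaviour
{
    [SerializeField] private float moveSpeed = 5f;

    private void Update()
    {
        // WASD / arrow keys map onto Unity's default "Horizontal" and "Vertical" axes.
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");
        transform.Translate(new Vector3(h, 0f, v) * moveSpeed * Time.deltaTime);

        // Space fires the laser; in the project this is delegated to GunController.
        if (Input.GetKeyDown(KeyCode.Space))
            Debug.Log("Shoot");  // placeholder for the actual shooting call
    }
}
```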

## Key Scripts

- **AgentController.cs**: Implements the ML agent’s autonomous movement, shooting, and reward mechanics.
- **PlayerController.cs**: Provides manual controls to the human player with behavior similar to the ML agent.
- **GunController.cs**: Manages shooting mechanics, including raycasting for hit detection.
- **EnvBehavior.cs**: Controls ML agent environment setup, target spawning, and reward logging.
- **PlayerEnvBehavior.cs**: Controls the human player environment setup and target spawning.
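
As a rough illustration of the raycast-based hit detection and laser visualization attributed to `GunController.cs`, here is a minimal sketch; the exact API, laser rendering, and tag names are assumptions.

```csharp
using UnityEngine;

// Illustrative raycast shooting with a simple laser line; the real GunController.cs may differ.
public class GunSketch : MonoBehaviour
{
    [SerializeField] private float range = 50f;
    [SerializeField] private LineRenderer laser;   // used to visualize the shot

    // Returns true when the shot hit a target.
    public bool Fire()
    {
        Vector3 origin = transform.position;
        Vector3 direction = transform.forward;
        Vector3 end = origin + direction * range;

        bool hitTarget = false;
        if (Physics.Raycast(origin, direction, out RaycastHit hit, range))
        {
            end = hit.point;
            hitTarget = hit.collider.CompareTag("Target");   // assumed tag
        }

        // Draw the laser from the gun to whatever was hit (or to max range).
        laser.SetPosition(0, origin);
        laser.SetPosition(1, end);
        return hitTarget;
    }
}
```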

## Reward Tracking

- **Score Logging**: Both players' scores are logged in the console after each episode for easy comparison, and both totals are also displayed in the corner of the game screen.

## Debugging Tips

- **Episode Tracking**: The console logs the total rewards per episode, useful for observing performance trends.
- **Reward Tuning**: Adjust reward values in `AgentController.cs` and `PlayerController.cs` to influence agent behavior and encourage better accuracy.
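
One way to make reward tuning easier, offered as a suggestion rather than a description of the existing scripts, is to expose the reward values as serialized fields so they can be adjusted in the Inspector without editing code:

```csharp
using UnityEngine;

// Sketch: exposing reward values in the Inspector for quick tuning.
// The actual AgentController.cs / PlayerController.cs may hard-code these instead.
public class RewardSettingsSketch : MonoBehaviour
{
    [Header("Reward tuning")]
    public float targetHitReward = 1.0f;
    public float missedShotPenalty = -0.2f;
    public float wallCollisionPenalty = -0.1f;
    public float targetCollisionPenalty = -0.05f;
}
```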

## Contribution

Contributions are welcome! To contribute, fork the repository, make your changes, and submit a pull request.