# ActionAI 🤸

[![Python 3.x](https://img.shields.io/badge/python-3.x-blue.svg)](https://www.python.org/downloads/release/python-370/) [![Join the chat at https://gitter.im/action-ai/community](https://badges.gitter.im/action-ai/community.svg)](https://gitter.im/action-ai/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
![stars](https://img.shields.io/github/stars/smellslikeml/ActionAI)
![forks](https://img.shields.io/github/forks/smellslikeml/ActionAI)
![license](https://img.shields.io/github/license/smellslikeml/ActionAI)
![twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Fgithub.com%2Fsmellslikeml%2FActionAI)

ActionAI is a Python library for training machine learning models to classify human actions. It is a generalization of our [yoga smart personal trainer](https://www.hackster.io/yogai/yogai-smart-personal-trainer-f53744), which is included in this repo as an example.
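
Conceptually, the pipeline reduces each frame to a vector of body keypoints with a pose estimator and then feeds those vectors (or short sequences of them, e.g. to an LSTM) to a classifier. Below is a minimal sketch of the per-frame idea using scikit-learn, which this project lists among its tooling; the array shapes, class count, and `SVC` choice are illustrative assumptions rather than ActionAI's actual API.

```
# Minimal sketch: classify pose-keypoint feature vectors with scikit-learn.
# Shapes, class count, and model choice are illustrative assumptions,
# not ActionAI's internal API.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for pose-estimator output: 100 frames, 14 keypoints x (x, y).
X = rng.random((100, 28))
y = rng.integers(0, 3, size=100)  # three hypothetical action classes

clf = SVC().fit(X, y)
print(clf.predict(X[:5]))  # predicted action labels for the first frames
```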



## Getting Started
These instructions show how to prepare your image data, train a model, and deploy the model to classify human actions from image samples. See deployment for notes on running the project on a live stream.

### Installation

Add the smellslikeml PPA and install with the following:
```
sudo add-apt-repository ppa:smellslikeml/ppa
sudo apt update
sudo apt-get install actionai
```

Make sure to configure the working directory with:
```
actionai configure
```

### Using the CLI

Organize your training data in subdirectories like the example below. The `actionai` CLI will automatically build a dataset from subdirectories of videos, where each subdirectory name is used as the category label.

```
.
└── dataset/
    ├── category_1/
    │   └── *.mp4
    ├── category_2/
    │   └── *.mp4
    ├── category_3/
    │   └── *.mp4
    └── ...
```
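
For reference, here is how that directory-to-label convention can be expressed in plain Python; this standard-library sketch only illustrates the expected layout and is not the CLI's internal loader.

```
# Sketch: derive (video_path, label) pairs from the layout above.
# Illustrates the convention only; not the CLI's internal loader.
from pathlib import Path

dataset = Path("dataset")
samples = [
    (video, category.name)  # label = subdirectory name
    for category in sorted(dataset.iterdir()) if category.is_dir()
    for video in sorted(category.glob("*.mp4"))
]
for path, label in samples[:5]:
    print(path, "->", label)
```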

Then you can train a model with:
```
actionai train --data=/path/to/your/data/dir --model=/path/to/your/model/dir
```

And then run inference on a video with:
```
actionai predict --model=/path/to/your/model/dir --video=/path/to/your/video.mp4
```

View the default `config.ini` file included in this branch for additional configuration options. You can pass your own config file using the `--cfg` flag.
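
Since `config.ini` is a standard INI file, you can also inspect or override values programmatically with Python's built-in `configparser` before passing the result via `--cfg`. The section and key names below are hypothetical placeholders; the real ones are in the repo's default `config.ini`.

```
# Sketch: tweak an INI config with the standard library before using --cfg.
# The section/key names here are hypothetical placeholders; see the
# default config.ini in this branch for the actual options.
import configparser

cfg = configparser.ConfigParser()
cfg.read("config.ini")

if not cfg.has_section("train"):   # "train" is a hypothetical section
    cfg.add_section("train")
cfg.set("train", "epochs", "50")   # "epochs" is a hypothetical key

with open("my_config.ini", "w") as f:
    cfg.write(f)
# then: actionai train --data=... --model=... --cfg=my_config.ini
```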

## Contributing

Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct and the process for submitting pull requests.

## License

This project is licensed under the GNU General Public License v3.0 - see the [LICENSE.md](LICENSE.md) file for details.

## References

* [Hackster post](https://www.hackster.io/yogai/yogai-smart-personal-trainer-f53744)
* [YogAI article](https://www.raspberrypi.org/blog/yoga-training-with-yogai-and-a-raspberry-pi-smart-mirror-the-magpi-issue-80/)
* [Convolutional Pose Machine](https://arxiv.org/pdf/1602.00134.pdf)
* [Pose estimation for mobile](https://github.com/edvardHua/PoseEstimationForMobile)
* [Pose estimation tensorflow implementation](https://github.com/ildoonet/tf-pose-estimation)