Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
This is the code for "In My Feelings Challenge AI" by Siraj Raval on YouTube.
- Host: GitHub
- URL: https://github.com/llSourcell/InMyFeelings_Challenge
- Owner: llSourcell
- Created: 2018-07-22T17:21:29.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2019-06-11T11:03:40.000Z (over 5 years ago)
- Last Synced: 2024-04-27T19:34:11.828Z (7 months ago)
- Language: JavaScript
- Size: 14.6 KB
- Stars: 67
- Watchers: 5
- Forks: 35
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
## Coding Challenge - Due Date: August 1, 2018 at 12 PM PST
This is the code for [this](https://youtu.be/prswDGGmYaE) video on YouTube by Siraj Raval, part of the #InMyFeelingsChallenge dance competition. The challenge is to create your own AI to dance to this song and submit it via Twitter, Facebook, YouTube, Instagram, or LinkedIn (or all of them) using the #InMyFeelingsChallenge hashtag. There are three methods to do this:
### Method 1 (Hacky way)
1. Run the real-time pose detection model in your browser.
2. Hold up your phone or another screen to the webcam, while a video of a human dancing plays.
3. Record your screen while the real-time pose estimate follows the human dance.
4. In Final Cut Pro, or a video-editing program of your choice, apply a color mask so that every color except the color of the pose-estimate overlay is hidden.
5. Export the video and upload!

### Method 2 (Cleaner programmatic way)
1. Modify the code in this repository so that, instead of applying pose estimation to webcam video, the demo applies it to a video on your desktop, records the result, and saves it.
2. Use a JavaScript library like [chroma.js](https://github.com/gka/chroma.js/) to apply the color mask programmatically, hiding every color except the pose-estimate color.
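The masking step above can be sketched in plain JavaScript before reaching for a library. This is a minimal, illustrative sketch, not code from this repository: it assumes the pose overlay is drawn in a single known color and uses a plain Euclidean distance in RGB space, where chroma.js's `chroma.distance`/`deltaE` would give a perceptually better comparison. The `maskToColor` helper name and the tolerance value are assumptions for the example.

```javascript
// Hide every pixel that is not close to the overlay color by zeroing its alpha.
// `pixels` is a flat RGBA array, like the `data` field of a canvas ImageData.
function maskToColor(pixels, [tr, tg, tb], tolerance = 60) {
  const out = new Uint8ClampedArray(pixels);
  for (let i = 0; i < out.length; i += 4) {
    const dr = out[i] - tr;
    const dg = out[i + 1] - tg;
    const db = out[i + 2] - tb;
    // Plain Euclidean distance in RGB space; chroma.js deltaE would be smoother.
    if (Math.sqrt(dr * dr + dg * dg + db * db) > tolerance) {
      out[i + 3] = 0; // make the non-matching pixel transparent
    }
  }
  return out;
}

// Two pixels: one aqua (kept), one red (hidden).
const frame = Uint8ClampedArray.from([0, 255, 255, 255, 255, 0, 0, 255]);
const masked = maskToColor(frame, [0, 255, 255]);
```

In a real pipeline you would draw each video frame to a canvas, run this over `ctx.getImageData(...).data`, and write the result back with `putImageData`.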
3. Upload the final result!

### Method 3 (For the realest Wizards)
1. Train an LSTM neural network on a dataset of Shiggy dance videos, similar to what [carykh](https://www.youtube.com/watch?v=Sc7RiNgHHaE&t=273s) did for trancey dance videos.
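Before any LSTM can be trained on dance videos, the per-frame pose keypoints have to be cut into fixed-length training sequences. Here is a minimal, illustrative sketch of that windowing step only; the `toSequences` helper is an assumption for the example, and the actual model training would use something like TensorFlow.js's `tf.layers.lstm` on the resulting pairs.

```javascript
// Slice a long track of per-frame keypoint vectors into overlapping
// (input sequence, next-frame target) pairs for sequence prediction.
function toSequences(frames, windowSize) {
  const pairs = [];
  for (let i = 0; i + windowSize < frames.length; i++) {
    pairs.push({
      input: frames.slice(i, i + windowSize), // windowSize consecutive frames
      target: frames[i + windowSize],         // the frame the model should predict
    });
  }
  return pairs;
}

// Toy track: 5 "frames", each a 2-number keypoint vector.
const track = [[0, 0], [1, 1], [2, 2], [3, 3], [4, 4]];
const pairs = toSequences(track, 3);
```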
2. Upload the result!

### Rewards
I'll definitely give a social media shoutout to some of the best submissions! Good luck, Wizards: let's light up this challenge and show the world what AI can do.

Run real-time pose estimation in the browser using TensorFlow.js.
[Try it here!](https://montrealai.github.io/posenet-v3/)
PoseNet can be used to estimate either a single pose or multiple poses: one version of the algorithm detects a single person in an image or video, and another detects multiple people.
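The relationship between the two modes can be shown with plain data: the multi-pose path returns an array of scored poses, and a single-pose consumer can simply keep the highest-scoring one. The `bestPose` helper below is illustrative, not part of the PoseNet API; the real entry points are `net.estimateSinglePose(...)` and `net.estimateMultiplePoses(...)` from the `@tensorflow-models/posenet` package.

```javascript
// Given the array returned by multi-pose estimation, keep the most
// confident detection, which is roughly what a single-pose caller wants.
function bestPose(poses) {
  if (poses.length === 0) return null;
  return poses.reduce((best, pose) => (pose.score > best.score ? pose : best));
}

// Toy output shaped like PoseNet's: each pose has an overall score
// and a list of named keypoints.
const poses = [
  { score: 0.42, keypoints: [{ part: 'nose', position: { x: 10, y: 20 } }] },
  { score: 0.91, keypoints: [{ part: 'nose', position: { x: 55, y: 30 } }] },
];
const top = bestPose(poses);
```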
This is a pure JavaScript implementation of [PoseNet](https://github.com/tensorflow/tfjs-models/tree/master/posenet). Thank you [TensorFlow.js](https://js.tensorflow.org) for your flexible and intuitive APIs.
[Refer to this blog post](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) for a high-level description of PoseNet running on TensorFlow.js.
## Credits
Credits for this code go to the TensorFlow team at Google.