Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/claudiolemos/humansynth
Interactive p5.js multimedia application for video and sound generation on the browser
- Host: GitHub
- URL: https://github.com/claudiolemos/humansynth
- Owner: claudiolemos
- Created: 2020-04-06T00:37:12.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2020-06-13T22:50:32.000Z (over 4 years ago)
- Last Synced: 2024-12-03T13:49:54.002Z (about 1 month ago)
- Topics: art, creative, interactive, javascript, p5js, sound, visual
- Language: JavaScript
- Homepage: https://docs.google.com/document/d/1TlDyeLyXpG9GfML8pw8H3Gsa-l51i7CyFNkxxdAyQhI
- Size: 58 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# HumanSynth
HumanSynth is an interactive multimedia application whose goal is to generate video and sound from the human body. With the help of a camera and a projector, the user can interact with the application and see the result of their actions in real time.
## Members
* Cláudio Fischer Lemos
* Henrique Reis Sendim Rodrigues
* Sandro Miguel Tavares Campos

## Overview
### Sound
The sound in our application will be generated by frequency modulation (FM) synthesis. The controllable parameters will be the base (carrier) frequency, the modulation frequency, the amount of modulation, and the amplitude. With the help of body tracking, we can use the movement of the user's body parts (hands, legs, head, ...) to drive these parameters and create unusual and unexpected sounds.
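As a minimal sketch of what FM synthesis computes (the function and parameter names below are illustrative, not taken from the project): each output sample combines a carrier oscillator whose phase is perturbed by a modulator oscillator.

```javascript
// FM synthesis, one sample at a time:
//   y(t) = amplitude * sin(2π·carrier·t + modIndex · sin(2π·modFreq·t))
function fmSample(t, carrier, modFreq, modIndex, amplitude) {
  const modulator = Math.sin(2 * Math.PI * modFreq * t);
  return amplitude * Math.sin(2 * Math.PI * carrier * t + modIndex * modulator);
}

// Render a buffer of samples at a given sample rate (e.g. 44.1 kHz).
function renderFm(seconds, sampleRate, carrier, modFreq, modIndex, amplitude) {
  const n = Math.floor(seconds * sampleRate);
  const samples = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    samples[i] = fmSample(i / sampleRate, carrier, modFreq, modIndex, amplitude);
  }
  return samples;
}
```

In the browser, p5.sound exposes the same idea through oscillator objects rather than raw sample loops, but the relationship between the four parameters is the one shown here.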
### Video
As for the video, we will use segmentation to separate the body from the background, which will then be replaced with images and animations, such as the waveform of the sound currently being generated.
### Technologies
To create this application, we will use machine learning models that run in real time in the browser. To facilitate and speed up development, we will use ml5.js, which lets us integrate BodyPix (for body segmentation) and PoseNet (for body tracking) from the TensorFlow framework. At the same time, the p5.js JavaScript library will be used to generate sound.
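The glue between tracking and synthesis is a range mapping from tracked keypoint coordinates to synthesis parameters. A minimal, clamped version of this mapping (the function name and the example keypoint are our assumptions; p5.js offers a similar built-in `map()`):

```javascript
// Map a value from one range to another, clamped to the output range,
// e.g. a PoseNet wrist y-coordinate (pixels) to a frequency (Hz).
function mapRange(value, inMin, inMax, outMin, outMax) {
  const t = (value - inMin) / (inMax - inMin);
  const clamped = Math.max(0, Math.min(1, t));
  return outMin + clamped * (outMax - outMin);
}

// Hypothetical usage: raising the right wrist in a 480px-tall frame
// raises the carrier frequency from 110 Hz toward 880 Hz.
// const carrier = mapRange(pose.rightWrist.y, 0, 480, 880, 110);
```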
## Applications and use cases
One of the main use cases for this application will be installation in galleries or museums focused on audiovisual art and technology. It could also be adapted for educational settings.