https://github.com/jprjr/rtmp-janus
RTMP server that forwards audio/video into Janus videorooms.
- Host: GitHub
- URL: https://github.com/jprjr/rtmp-janus
- Owner: jprjr
- License: mit
- Created: 2020-07-27T23:11:52.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2020-09-28T12:48:05.000Z (almost 5 years ago)
- Last Synced: 2024-04-17T17:16:16.755Z (about 1 year ago)
- Language: Go
- Size: 31.3 KB
- Stars: 0
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# RTMP to Janus Videoroom
## Warning: I am not a Go programmer
This is very much in the work-in-progress/proof-of-concept phase!
I really don't know Go; I learned just enough to make use of the [`pion/webrtc`](https://github.com/pion/webrtc) library.
This is adapted from the Janus example in Pion's [`example-webrtc-applications`](https://github.com/pion/example-webrtc-applications) repo.
## Usage
```bash
rtmp-janus <listen-address> ws://janus-host:port
# example: rtmp-janus :1935 ws://127.0.0.1:8188
```

## What does this do?
When launched, this app:
* Starts listening for incoming RTMP sessions.
* Connects to the Janus gateway via websocket and establishes a session.

When an RTMP connection is received, it parses the RTMP "key" for a room ID.
Example: `rtmp://127.0.0.1:1935/live/1234` - the `1234` part of that URL becomes the room ID.
The `live` part of the URL (the application name) can be whatever you'd like, it's ignored.
The app then joins the Janus videoroom and establishes a WebRTC session.
I'm assuming all incoming RTMP sessions use H264 for video and AAC for audio.
H264 data is re-packed from FLV format into Annex-B format, but otherwise passed
through as-is (no decoding/encoding).

AAC audio is resampled to 48 kHz stereo and encoded to Opus using ffmpeg.
## TODO
* Figure out what events from Janus I should handle (I just blast audio/video).
* Check more sources for timing info (like the SPS NAL).
* Support more audio samplerates (maybe?).
* Generalize audio decoding, support more input audio codecs.

## LICENSE
MIT (see `LICENSE`)