This is an attempt to stream video sources through WebRTC using a simple mechanism.
It embeds an HTTP server that implements the API and serves a simple HTML page that uses it through AJAX.
The WebRTC signaling is implemented through HTTP requests:
- /api/call : send offer and get answer
- /api/hangup : close a call
- /api/addIceCandidate : add a candidate
- /api/getIceCandidate : get the list of candidates
The list of HTTP APIs is available through /api/help.
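For illustration, a signaling exchange over these endpoints could look like the sketch below; the peerid and url query parameters are assumptions, check /api/help for the exact interface:

```shell
# Hypothetical signaling session against a local webrtc-streamer.
# The endpoint names come from the list above; the 'peerid' and 'url'
# query parameters are assumptions - check /api/help for the real interface.
SERVER="http://127.0.0.1:8000"
PEERID="demo1"
STREAM="rtsp://pi2.local:8554/unicast"

CALL_URL="$SERVER/api/call?peerid=$PEERID&url=$STREAM"     # POST the SDP offer, the reply is the SDP answer
ADDICE_URL="$SERVER/api/addIceCandidate?peerid=$PEERID"    # POST a local ICE candidate
GETICE_URL="$SERVER/api/getIceCandidate?peerid=$PEERID"    # GET the remote ICE candidates
HANGUP_URL="$SERVER/api/hangup?peerid=$PEERID"             # GET to close the call

# e.g.: curl -s -X POST "$CALL_URL" -d @offer.json
echo "$CALL_URL"
```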
There are currently 3 builds on CircleCI:
- for x86_64 on Ubuntu Xenial
- for armv7, cross-compiling with gcc-linaro-arm-linux-gnueabihf-raspbian-x64 (this build runs on Raspberry Pi 2 and NanoPi NEO)
- for armv6+vfp, cross-compiling with gcc-linaro-arm-linux-gnueabihf-raspbian-x64 (this build runs on Raspberry Pi B and should run on a Raspberry Pi Zero)
The WebRTC stream name can be:
- an alias defined using the -n argument; the corresponding -u argument will then be used to create the capturer
- an "rtsp://" URL that will be opened by an RTSP capturer based on live555
- a "screen://" URL that will be opened by webrtc::DesktopCapturer::CreateScreenCapturer
- a "window://" URL that will be opened by webrtc::DesktopCapturer::CreateWindowCapturer
- a V4L2 capture device name
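As a sketch, several of these source kinds can be mixed on a single command line; the host name and device below are illustrative:

```shell
# Illustrative command line mixing the source kinds listed above
# (pi2.local and /dev/video0 are made-up examples).
ARGS="-n cam -u rtsp://pi2.local:8554/unicast"   # alias 'cam' backed by an RTSP capturer
ARGS="$ARGS screen://"                           # full-screen capture
ARGS="$ARGS /dev/video0"                         # V4L2 capture device
echo "./webrtc-streamer $ARGS"
```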
It is based on WebRTC, which must be fetched and built first:
mkdir ../webrtc
pushd ../webrtc
fetch webrtc
gn gen out/Release --args='is_debug=false use_custom_libcxx=false rtc_use_h264=true ffmpeg_branding="Chrome" rtc_include_tests=false rtc_include_pulse_audio=false use_sysroot=false is_clang=false treat_warnings_as_errors=false'
ninja -C out/Release
popd
make live555
make WEBRTCROOT=<path to WebRTC> WEBRTCBUILD=<Release or Debug>
where WEBRTCROOT and WEBRTCBUILD indicate where to find WebRTC:
- $WEBRTCROOT/src should contain the sources (default is $(pwd)/../webrtc)
- $WEBRTCROOT/src/out/$WEBRTCBUILD should contain the libraries (default is Release)
These 3 directories should be placed at the same level: live, webrtc and webrtc-streamer.
How to build Google's WebRTC on Windows: https://chromium.googlesource.com/chromium/src/+/master/docs/windows_build_instructions.md
This will also install the other tools used here (e.g. git).
Then do:
cd src
gn gen out/Release --args="is_debug=false use_custom_libcxx=false rtc_use_h264=true ffmpeg_branding=\"Chrome\" rtc_include_tests=false rtc_include_pulse_audio=false use_sysroot=false is_clang=false treat_warnings_as_errors=false"
ninja -C out/Release webrtc
Download the latest live555 from http://www.live555.com/liveMedia/public/ (I used live.2018.08.28a.tar.gz). The directory should be placed at the same level as webrtc-streamer and should be called live. Unzip VS_live.zip into the live directory; a directory named VisualStudio should be created under live.
Open the solution live\VisualStudio\live555.sln with Visual Studio 2017 and compile. The following 4 files will be created under the directory live\lib64: BasicUsageEnvironment.lib, groupsock.lib, liveMedia.lib and UsageEnvironment.lib.
Get the webrtc-streamer with: git clone --recurse-submodules https://github.com/mpromonet/webrtc-streamer.git
Open webrtc-streamer\VisualStudio\webrtc-streamer.sln with Visual Studio 2017 and compile.
If it does not compile, you might need to manually edit the file /inc/SessionSink.h: delete the line static uint8_t H26X_marker[] = { 0, 0, 0, 1}; (as it is declared outside the class) and add the line uint8_t H26X_marker[] = { 0, 0, 0, 1}; immediately after this line: if ( (strcmp(mime, "video/H264") == 0) || (strcmp(mime, "video/H265") == 0) ) {
This version allows playing raw H.264 files. To do so, run: webrtc-streamer.exe file:d:\camd\webrtc-streamer\videos\airshow.264, or use the debugger to run it.
./webrtc-streamer [-H http port] [-S[embedded stun address]] [-v[v]] [url1]...[urln]
./webrtc-streamer [-H http port] [-s[external stun address]] [-v[v]] [url1]...[urln]
./webrtc-streamer -V
-v[v[v]] : verbosity
-V : print version
-H [hostname:]port : HTTP server binding (default 0.0.0.0:8000)
-S[stun_address] : start an embedded STUN server bound to the address (default 0.0.0.0:3478)
-s[stun_address] : use an external STUN server (default stun.l.google.com:19302)
-t[username:password@]turn_address : use an external TURN relay server (default disabled)
-a[audio layer] : specify the audio capture layer to use (default:3)
-n name -u url : register a name for a URL
[url] : URL to register in the source list
The argument of '-H' is forwarded to the listening_ports option of civetweb, so it is possible to use the civetweb syntax like '-H8000,9000' or '-H8080r,8443s'.
webrtc-streamer rtsp://217.17.220.110/axis-media/media.amp \
rtsp://85.255.175.241/h264 \
rtsp://85.255.175.244/h264 \
rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov
You can access the WebRTC stream coming from an RTSP URL using the webrtcstreamer.html page with the RTSP URL as argument, for example:
https://rtsp2webrtc.herokuapp.com/webrtcstreamer.html?rtsp://217.17.220.110/axis-media/media.amp
Instead of using the internal HTTP server, it is easy to display a WebRTC stream in an HTML page served by another HTTP server. The URL of the webrtc-streamer to use should be given when creating the WebRtcStreamer instance:
var webRtcServer = new WebRtcStreamer(<video tag>, <webrtc-streamer url>);
A short sample HTML page using a webrtc-streamer running locally on port 8000:
<html>
<head>
<script src="lib/request.min.js" ></script>
<script src="webrtcstreamer.js" ></script>
<script>
var webRtcServer = new WebRtcStreamer("video",location.protocol+"//"+window.location.hostname+":8000");
window.onload = function() { webRtcServer.connect("rtsp://pi2.local:8554/unicast") }
window.onbeforeunload = function() { webRtcServer.disconnect() }
</script>
</head>
<body>
<video id="video"></video>
</body>
</html>
A simple way to publish a WebRTC stream to a Janus Gateway Video Room is to use the JanusVideoRoom interface:
var janus = new JanusVideoRoom(<janus url>, <webrtc-streamer url>)
A short sample publishing WebRTC streams to a Janus Video Room could be:
<html>
<head>
<script src="lib/request.min.js" ></script>
<script src="janusvideoroom.js" ></script>
<script>
var janus = new JanusVideoRoom("https://janus.conf.meetecho.com/janus", null);
janus.join(1234, "rtsp://pi2.local:8554/unicast","pi2");
janus.join(1234, "rtsp://217.17.220.110/axis-media/media.amp","media");
</script>
</head>
</html>
This way, the communication between the Janus API and the WebRTC Streamer API is implemented in JavaScript running in the browser.
The same logic can be implemented in Node.js using the same JS API:
global.request = require('then-request');
var JanusVideoRoom = require('./html/janusvideoroom.js');
var janus = new JanusVideoRoom("http://192.168.0.15:8088/janus", "http://192.168.0.15:8000")
janus.join(1234,"mmal service 16.1","video")
A simple way to publish a WebRTC stream to a Jitsi Video Room is to use the XMPPVideoRoom interface:
var xmpp = new XMPPVideoRoom(<xmpp server url>, <webrtc-streamer url>)
A short sample publishing WebRTC streams to a Jitsi Video Room could be:
<html>
<head>
<script src="libs/strophe.min.js" ></script>
<script src="libs/strophe.muc.min.js" ></script>
<script src="libs/strophe.disco.min.js" ></script>
<script src="libs/strophe.jingle.sdp.js"></script>
<script src="libs/jquery-1.12.4.min.js"></script>
<script src="libs/request.min.js" ></script>
<script src="xmppvideoroom.js" ></script>
<script>
var xmpp = new XMPPVideoRoom("meet.jit.si", null);
xmpp.join("testroom", "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov","Bunny");
</script>
</head>
</html>
You can start the application using the Docker image:
docker run -p 8000:8000 -it mpromonet/webrtc-streamer
You can expose V4L2 devices from your host using:
docker run --device=/dev/video0 -p 8000:8000 -it mpromonet/webrtc-streamer
The container entry point is the webrtc-streamer application, so you can:
- get the help using:
docker run -p 8000:8000 -it mpromonet/webrtc-streamer -h
- run the container registering an RTSP URL using:
docker run -p 8000:8000 -it mpromonet/webrtc-streamer -n raspicam -u rtsp://pi2.local:8554/unicast
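The same container can also be described in a compose file; this is a sketch built only from the options shown above (the V4L2 device line is optional and assumes /dev/video0 exists on the host):

```yaml
# docker-compose.yml sketch: image, port, device and arguments all come
# from the docker run commands above; adapt the RTSP url to your camera.
version: "3"
services:
  webrtc-streamer:
    image: mpromonet/webrtc-streamer
    ports:
      - "8000:8000"
    devices:
      - /dev/video0                # optional: expose a host V4L2 device
    command: ["-n", "raspicam", "-u", "rtsp://pi2.local:8554/unicast"]
```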