Audio sources are added to the server as `source` entries in the `[stream]` section of the configuration file `/etc/snapserver.conf`. Every source must be fed with a fixed sample format, which can be configured per stream (e.g. `48000:16:2`).
The following notation is used in this paragraph:

- `<angle brackets>`: the whole expression must be replaced with your specific setting
- `[square brackets]`: the whole expression is optional and can be left out
- `[key=value]`: if you leave this option out, "value" will be the default for "key"
The general format of an audio source is:
```
TYPE://host/path?name=<name>[&codec=<codec>][&sampleformat=<sampleformat>][&chunk_ms=<chunk ms>][&controlscript=<control script filename>]
```
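As a filled-in illustration (the name `MyStream`, the pipe path, and the chosen codec are placeholders, not defaults):

```
pipe:///tmp/snapfifo?name=MyStream&codec=flac&sampleformat=48000:16:2
```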
Within the `[stream]` section there are some global parameters valid for all `source`s:

- `sampleformat`: Default sample format of the stream source, e.g. `48000:16:2`
- `codec`: The codec to use to save bandwidth, one of:
  - `flac` [default]: lossless codec, mean codec latency ~26 ms
  - `ogg`: lossy codec
  - `opus`: lossy low-latency codec; it only supports 48 kHz, so if your stream has a different sample rate, automatic resampling will be applied, introducing further latency
  - `pcm`: lossless, uncompressed, no latency
- `chunk_ms`: Default source stream read chunk size [ms]. The server will continuously read this number of milliseconds from the source into a buffer before this buffer is passed to the encoder (the `codec` above)
- `buffer`: Buffer [ms]. The end-to-end latency, from capturing a sample on the server until the sample is played out on the client
- `send_to_muted`: `true` or `false`: Send audio to clients that are muted
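Putting the global parameters together, the start of a `[stream]` section could look like this (all values are illustrative, not recommendations):

```
[stream]
sampleformat = 48000:16:2
codec = flac
chunk_ms = 20
buffer = 1000
send_to_muted = false
```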
`source` parameters have the form `key=value`; they are concatenated with an `&` character.
Supported parameters for all source types:

- `name`: The mandatory source name
- `codec`: Override the global codec
- `sampleformat`: Override the global sample format
- `chunk_ms`: Override the global `chunk_ms`
- `dryout_ms`: Supported by non-blocking sources: when no new data is read from the source, send silence to the clients
- `controlscript`: Script to control the stream source and to read and provide metadata, see stream_plugin.md
Available audio source types are:
Captures audio from a named pipe
```
pipe:///<path/to/pipe>?name=<name>[&mode=create][&dryout_ms=2000]
```
`mode` can be `create` or `read`. Sometimes your audio source might insist on creating the pipe itself, so the pipe creation mode can be changed to "not create, but only read" by setting the `mode` option to `read`.
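A minimal pipe source entry, assuming the commonly used path `/tmp/snapfifo` and the placeholder name `Pipe`:

```
[stream]
source = pipe:///tmp/snapfifo?name=Pipe&mode=create
```

Any player that writes raw PCM in the configured sample format into `/tmp/snapfifo` will then be heard on all clients.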
Launches librespot and reads audio from stdout
```
librespot:///<path/to/librespot>?name=<name>[&dryout_ms=2000][&username=<my username>&password=<my password>][&devicename=Snapcast][&bitrate=320][&wd_timeout=7800][&volume=100][&onevent=""][&normalize=false][&autoplay=false][&cache=""][&disable_audio_cache=false][&killall=false][&params=extra-params]
```
Note that you need to have the librespot binary on your machine and that the sampleformat will be set to `44100:16:2`.
Parameters used to configure the librespot binary (see librespot-org options):
- `username`: Username to sign in with
- `password`: Password
- `devicename`: Device name
- `bitrate`: Bitrate (96, 160 or 320). Defaults to 320
- `volume`: Initial volume in %, once connected [0-100]
- `onevent`: The path to a script that gets run when one of librespot's events is triggered
- `normalize`: Enables volume normalisation for librespot
- `autoplay`: Autoplay similar songs when your music ends
- `cache`: Path to a directory where files will be cached
- `disable_audio_cache`: Disable caching of the audio data
- `params`: Optional string appended to the librespot invocation. This allows arbitrary flags to be passed to librespot, for instance `params=--device-type%20avr`. The value has to be properly URL-encoded.
Parameters introduced by Snapclient:
- `killall`: Kill all running librespot instances before launching librespot
- `wd_timeout`: Restart librespot if it doesn't create log messages for x seconds
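A librespot source entry might look like this (the binary path, stream name, and option values are placeholders; adjust them to your setup):

```
[stream]
source = librespot:///usr/bin/librespot?name=Spotify&devicename=Snapcast&bitrate=320&killall=true
```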
Launches shairport-sync and reads audio from stdout
```
airplay:///<path/to/shairport-sync>?name=<name>[&dryout_ms=2000][&devicename=Snapcast][&port=5000][&password=<my password>]
```
Note that you need to have the shairport-sync binary on your machine and that the sampleformat will be set to `44100:16:2`.
Parameters used to configure the shairport-sync binary:
- `devicename`: Advertised name
- `port`: RTSP listening port
- `password`: Password
- `params`: Optional string appended to the shairport-sync invocation. This allows arbitrary flags to be passed to shairport-sync, for instance `params=--on-start=start.sh%20--on-stop=stop.sh`. The value has to be properly URL-encoded.
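An AirPlay source entry, assuming shairport-sync is installed at the path shown and advertised as `Snapcast`:

```
[stream]
source = airplay:///usr/bin/shairport-sync?name=Airplay&devicename=Snapcast&port=5000
```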
Reads PCM audio from a file
```
file:///<path/to/PCM/file>?name=<name>
```
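The file must contain raw PCM samples matching the stream's sample format. As a sketch, one second of a 440 Hz sine tone in `48000:16:2` format could be generated like this (the output path, tone, and amplitude are arbitrary examples, not anything Snapcast requires):

```python
import math
import struct

RATE, BITS, CHANNELS = 48000, 16, 2        # matches sampleformat 48000:16:2
AMPLITUDE = int(0.3 * (2 ** (BITS - 1) - 1))  # 30% of full scale

frames = bytearray()
for n in range(RATE):  # one second of audio
    sample = int(AMPLITUDE * math.sin(2 * math.pi * 440 * n / RATE))
    # little-endian signed 16-bit, same sample on both stereo channels
    frames += struct.pack("<hh", sample, sample)

with open("/tmp/sine.pcm", "wb") as f:
    f.write(frames)
```

The resulting file could then be served with `file:///tmp/sine.pcm?name=Sine`.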
Launches a process and reads audio from stdout
```
process:///<path/to/process>?name=<name>[&dryout_ms=2000][&wd_timeout=0][&log_stderr=false][&params=<process arguments>]
```
- `wd_timeout`: Kill and restart the process if there was no message logged to stderr for x seconds (0 = disabled)
- `log_stderr`: Forward stderr log messages to Snapclient logging
- `params`: Params to start the process with
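A process source entry might look like this (the binary and its arguments are placeholders; note that `params` must be URL-encoded):

```
[stream]
source = process:///usr/local/bin/myplayer?name=Process&log_stderr=true&params=--some-flag
```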
Receives audio from a TCP socket (acting as server)
```
tcp://<listen IP, e.g. 127.0.0.1>:<port>?name=<name>[&mode=server]
```
The default for `port` (if omitted) is 4953; the default for `mode` is `server`.
A Mopidy configuration would look like this (running GStreamer in client mode):
```
[audio]
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! tcpclientsink host=127.0.0.1
```
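On the Snapserver side, the matching listening source could be declared like this (IP, port, and name are examples):

```
[stream]
source = tcp://127.0.0.1:4953?name=TCP&mode=server
```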
Receives audio from a TCP socket (acting as client)
```
tcp://<server IP, e.g. 127.0.0.1>:<port>?name=<name>&mode=client
```
A Mopidy configuration would look like this (running GStreamer in server mode):
```
[audio]
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! tcpserversink host=127.0.0.1
```
Captures audio from an alsa device
```
alsa://?name=<name>&device=<alsa device>[&send_silence=false][&idle_threshold=100][&silence_threshold_percent=0.0]
```
- `device`: alsa device name or identifier, e.g. `default` or `hw:0,0` or `hw:0,0,0`
- `idle_threshold`: switch stream state from playing to idle after receiving `idle_threshold` milliseconds of silence
- `silence_threshold_percent`: percent (float) of the max amplitude to be considered as silence
- `send_silence`: forward silence to clients when stream state is `idle`
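An alsa source entry capturing a hypothetical line-in device `hw:2,0` (the device identifier and name are placeholders for your own hardware):

```
[stream]
source = alsa://?name=Line-In&device=hw:2,0&idle_threshold=500
```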
The output of any audio player that uses alsa can be redirected to Snapcast by using an alsa loopback device:
- Set up the alsa loopback device by loading the kernel module:

  ```
  sudo modprobe snd-aloop
  ```

  The loopback device can be created during boot by adding `snd-aloop` to `/etc/modules`
- The loopback device should show up in `aplay -l`:

  ```
  $ aplay -l
  **** List of PLAYBACK Hardware Devices ****
  card 0: Loopback [Loopback], device 0: Loopback PCM [Loopback PCM]
    Subdevices: 8/8
    Subdevice #0: subdevice #0
    Subdevice #1: subdevice #1
    Subdevice #2: subdevice #2
    Subdevice #3: subdevice #3
    Subdevice #4: subdevice #4
    Subdevice #5: subdevice #5
    Subdevice #6: subdevice #6
    Subdevice #7: subdevice #7
  card 0: Loopback [Loopback], device 1: Loopback PCM [Loopback PCM]
    Subdevices: 8/8
    Subdevice #0: subdevice #0
    Subdevice #1: subdevice #1
    Subdevice #2: subdevice #2
    Subdevice #3: subdevice #3
    Subdevice #4: subdevice #4
    Subdevice #5: subdevice #5
    Subdevice #6: subdevice #6
    Subdevice #7: subdevice #7
  card 1: Intel [HDA Intel], device 0: CX20561 Analog [CX20561 Analog]
    Subdevices: 1/1
    Subdevice #0: subdevice #0
  card 1: Intel [HDA Intel], device 1: CX20561 Digital [CX20561 Digital]
    Subdevices: 1/1
    Subdevice #0: subdevice #0
  card 2: CODEC [USB Audio CODEC], device 0: USB Audio [USB Audio]
    Subdevices: 0/1
    Subdevice #0: subdevice #0
  ```

  In this example the loopback device is card 0 with devices 0 and 1, each having 8 subdevices. The devices are addressed with `hw:<card idx>,<device idx>,<subdevice num>`, e.g. `hw:0,0,0`. If a process plays audio using `hw:0,0,x`, then the audio will be looped back to `hw:0,1,x`
- Configure your player to use a loopback device

  For Mopidy (GStreamer) in `mopidy.conf`:

  ```
  output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! alsasink device=hw:0,0,0
  ```

  For mpd, in `mpd.conf`:

  ```
  audio_output_format "48000:16:2"
  audio_output {
      type            "alsa"
      name            "My ALSA Device"
      device          "hw:0,0,0"      # optional
  #    auto_resample   "no"
  #    mixer_type      "hardware"     # optional
  #    mixer_device    "default"      # optional
  #    mixer_control   "PCM"          # optional
  #    mixer_index     "0"            # optional
  }
  ```
- Configure Snapserver to capture the loopback device:

  ```
  [stream]
  source = alsa://?name=SomeName&device=hw:0,1,0
  ```
Read and mix audio from other stream sources
```
meta:///<name of source#1>/<name of source#2>/.../<name of source#N>?name=<name>
```
Plays audio from the active source with the highest priority, with `source#1` having the highest priority and `source#N` the lowest.
Use `codec=null` for stream sources that should only serve as input for meta streams.
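A meta source mixing two hypothetical inputs, where the AirPlay stream takes precedence over the pipe stream (all names and paths are examples):

```
[stream]
source = pipe:///tmp/snapfifo?name=Pipe&codec=null
source = airplay:///usr/bin/shairport-sync?name=Airplay&codec=null
source = meta:///Airplay/Pipe?name=Meta
```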