Running Frigate in Docker in a Proxmox LXC with a remote NAS share (CIFS) #1111
46 comments · 106 replies
-
I think this has changed a bit with Proxmox 7; this is what it took to get it working for me. Note: I'm running Proxmox on Dell 620/720-era hardware, which has one of the first-generation QuickSync implementations, so I've not bothered with any hardware video decoding (renderD128 device), just the USB Coral passthrough.
lxc.conf
Key lines:
Other notes:
With this, you should be able to run something closer to the recommended Docker Compose configuration, e.g.:
If you have issues, I would recommend following the "Get started with the USB Accelerator" guide inside the LXC container before even trying to get Frigate to work. If you can't run the test model, Frigate will not work either.
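The lxc.conf contents themselves didn't survive the copy/paste above; as a rough sketch of what USB Coral passthrough usually needs (not the author's verbatim file, and the cgroup prefix depends on your Proxmox/cgroup version):

# Sketch only: allow USB character devices (major 189) and bind-mount the USB bus.
# Proxmox 7+ generally runs cgroup v2, so the prefix is lxc.cgroup2; on an older
# cgroup v1 host it would be lxc.cgroup instead.
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir 0, 0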
-
Hello! I'm looking into installing Frigate in an LXC. I'm actually using LXD on Debian rather than Proxmox, but it's the same concept. I wonder, though, why we need to run Docker inside the LXC at all... can we install Frigate straight on the container without Docker?
-
How can I set these up in Portainer?
I set it up.
-
When I try to start Docker I get an error. Hopefully you may know why?
Nov 04 21:38:58 Docker systemd[1]: docker.service: Consumed 114ms CPU time.
root@Docker ~# docker ps
-
May I ask why you run it in an LXC when you can run Docker directly on Proxmox and not have to struggle with the LXC passthrough?
-
Hello, I am trying to install Frigate in a Proxmox 7.1-7 LXC. I have an Odyssey Blue with a Coral M.2 card. I have been following this guide, and also some other posts I have seen on Reddit, but so far I am unable to get it to work. I am posting what I have done so far in the hope that someone can help me, and maybe it can then be turned into another guide (using PCI/M.2 instead of USB).
First I downloaded the Debian 11 standard template from the local storage templates list. Then I created a new CT container, named it "frigatelxc", unchecked "unprivileged container", and created a password. After creation I checked "Nesting" under Options -> Features. Then I opened a shell and edited the container conf (in my case nano /etc/pve/lxc/101.conf). This is what I entered:
Then I started the container and ran the following commands to install Docker:
I then created a CIFS mount to a public SMB share on my Unraid server by running:
Then I entered the following into the file:
Then I ran:
Then I installed Portainer:
Then I created the Docker container using:
I changed the db location to a local folder I created, because pointing it at my SMB share resulted in a "database is locked" error in Frigate later on. I then created config.yml on my Unraid SMB share. In it I entered:
I also set up MQTT and a couple of cameras. Frigate then runs fine with the CPU detector. However, after changing to:
I get an error on startup:
I have then tried to test out the Coral within the LXC by following this guide: https://coral.ai/docs/m2/get-started/#2a-on-linux. I went through every step in the guide successfully (I had to install some of the tools like pip or lspci). Specifically, running "lspci -nn | grep 089a" returns "03:00.0 System peripheral: Device 1ac1:089a", and running "ls /dev/apex_0" returns "/dev/apex_0" as expected. I then installed the PyCoral library following the steps. Everything goes fine until the very last step, when I run:
I get the following error:
Any idea what I should try next? Thank you!
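For anyone else stuck at this point: the detector and compose snippets above got lost in the copy/paste, but as a hedged sketch (not the poster's exact files), a PCIe/M.2 Coral generally has to be mapped into the Frigate container and selected as a pci EdgeTPU detector, roughly like this (it assumes /dev/apex_0 is already visible inside the LXC):

# docker-compose.yml excerpt (sketch):
services:
  frigate:
    devices:
      - /dev/apex_0:/dev/apex_0

# Frigate config.yml excerpt (sketch):
detectors:
  coral:
    type: edgetpu
    device: pci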
-
To clarify, did you install
-
Working LXC config for an M.2 Coral in Proxmox.
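The config itself didn't make it into this copy of the thread; as a hedged sketch of what M.2 (PCIe) Coral passthrough usually looks like (the major:minor is an example, and gasket-dkms must already be installed on the Proxmox host so that /dev/apex_0 exists):

# Check the real device numbers on the Proxmox host first -- often, but not always, 120:0:
#   ls -l /dev/apex_0
# Then allow and bind-mount the device into the container (cgroup v2 syntax):
lxc.cgroup2.devices.allow: c 120:0 rwm
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file 0, 0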
-
I am now in the same boat as above... Frigate keeps crashing due to no USB Coral (I think). I get:
and this:
-
Ah, got it working... I had the wrong pointers for the USB device:
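A note for anyone else chasing the same problem (my addition, not from the post above): the Coral's USB bus/device can be confirmed on the Proxmox host before writing the bind mount, keeping in mind that it enumerates as "Global Unichip Corp." before the first inference and as "Google Inc." afterwards:

# On the Proxmox host, find which bus the Coral sits on:
lsusb
#   e.g.  Bus 002 Device 003: ID 18d1:9302 Google Inc.
# Then point the lxc.mount.entry at that bus (002 here is just an example):
lxc.mount.entry: /dev/bus/usb/002/ dev/bus/usb/002/ none bind,optional,create=dir 0, 0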
-
Isn't there any way to run Frigate without Docker? I don't like the idea of running a container inside a container. I want to switch to Frigate and run it in an LXC, but I don't see a standalone install; it's all Docker.
-
Why don't you just run Docker directly on the host? There's not REALLY a need for an LXC. I found Frigate more stable just running Docker on the host and trashing the LXC.
With best regards,
Aleksander Lyse
-
I prefer to have every service in its own environment to simplify management and backup, but I get your point.
-
For me, I just have everything on my NAS (files, config, clips) and just docker-compose on the "server", so there's nothing to back up. Everything is backed up on the NAS regardless.
With best regards,
Aleksander Lyse
-
Is there an install method with only LXC, without Docker?
-
OK, I'm really struggling to get this to work, so I hope someone can help. Single Coral PCIe, Proxmox, LXC (Alpine).
Proxmox shows it fine: Alpine also looks to show it fine: If I try to add /dev/apex_0 under devices in Portainer on the Frigate container, Frigate won't start, which obviously means: What am I missing or doing wrong here? I tried adding this as mentioned in other posts, but no difference: Any help? Also, if I reboot Proxmox, it doesn't work on the Proxmox host until I run
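The command that fixes it after a reboot didn't survive the copy/paste, but the usual cause is the Coral PCIe driver not loading automatically; a hedged sketch of making it persistent on the Proxmox host (module names come from the gasket-dkms package, and the coral.conf filename is just an example):

# Load the PCIe Coral driver now...
modprobe gasket
modprobe apex
# ...and on every boot, so /dev/apex_0 exists without manual intervention:
printf "gasket\napex\n" > /etc/modules-load.d/coral.conf
ls -l /dev/apex_0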
-
Does anyone have experience installing Docker in a Proxmox LXC using the bash script from https://tteck.github.io/Proxmox/ ?
-
Hi all, how can I upgrade to 0.12? I have Frigate in an LXC in Proxmox and installed it using the method in this thread. Thanks.
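Not an authoritative answer, but if Frigate was set up the way this thread describes (Docker inside the LXC), the upgrade is usually just pulling the newer image and recreating the container; back up your config and read the 0.12 release notes for breaking changes first:

# With docker compose (use docker-compose if you have the older standalone binary):
docker compose pull frigate
docker compose up -d frigate

# With a plain docker run setup, roughly:
docker pull ghcr.io/blakeblackshear/frigate:stable
docker stop frigate && docker rm frigate
# ...then re-run your original docker run command against the new image.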
-
Hello to all, has anyone already upgraded to Proxmox v8 with the new kernel? Thanks in advance. Best regards
-
I wanted to share something I was struggling with, in the hope that Google leads you here as you install Frigate (or possibly any other container trying to do iGPU passthrough). For reference, I'm using an Intel NUC 11.

Issue:
I was pulling my hair out trying to get this working on my Intel NUC 11 with iGPU passthrough. This is the error I kept getting:

2023-07-26 00:03:32.912652443 [2023-07-26 00:03:32] ffmpeg.Sideyard.detect ERROR : [AVHWDeviceContext @ 0x5607dc9409c0] No VA display found for device /dev/dri/renderD128.
2023-07-26 00:03:32.912654178 [2023-07-26 00:03:32] ffmpeg.Sideyard.detect ERROR : Device creation failed: -22.
2023-07-26 00:03:32.912655278 [2023-07-26 00:03:32] ffmpeg.Sideyard.detect ERROR : [h264 @ 0x5607dc6ee300] No device available for decoder: device type vaapi needed for codec

My settings (before the issue):
I had these settings up to this point:

arch: amd64
cores: 4
features: mount=nfs,nesting=1
hostname: frigate
memory: 1024
net0: name=eth0,bridge=vmbr0,firewall=1,gw=<MY-GATEWAY-IP>,hwaddr=<MY-MAC-ADDY>,ip=<MY-FRIGATE-IP>/23,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-101-disk-0,size=8G
swap: 1024
lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm
lxc.cgroup.devices.allow: c 29:0 rwm
lxc.cgroup.devices.allow: c 189:* rwm
lxc.apparmor.profile: unconfined
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0, 0
lxc.mount.entry: /dev/bus/usb/002/ dev/bus/usb/002/ none bind,optional,create=dir 0, 0
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.cap.drop:

Solution:
What I didn't know was that there's a new version 2 of the cgroup device syntax. I had to change

lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm
lxc.cgroup.devices.allow: c 29:0 rwm
lxc.cgroup.devices.allow: c 189:* rwm

to cgroup version 2, which is what my Proxmox VE 8 uses:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup2.devices.allow: c 189:* rwm

Result:
And now the error is gone! Additionally, I had to do a few other things to get here:
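A quick way to confirm which cgroup version your Proxmox host is actually running (and therefore which prefix the LXC config needs) is to check the cgroup filesystem type; a small sketch:

# On the Proxmox host: "cgroup2fs" means unified cgroup v2 (use lxc.cgroup2.*),
# while "tmpfs" indicates the legacy/hybrid v1 hierarchy (lxc.cgroup.*).
stat -fc %T /sys/fs/cgroup/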
-
Thanks guys, I followed these directions and things seem to be actually working Coral-wise... I haven't dug into the vainfo stuff yet, but I thought I'd add another success story here.
I also added a few more debug steps which might be helpful to people. In the LXC container, to test whether the TPU was working, I ran the following (which is basically the Coral getting started guide):

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
apt-get install python3-pycoral -y
mkdir coral && cd coral
git clone https://github.com/google-coral/pycoral.git
cd pycoral
bash examples/install_requirements.sh classify_image.py
python3 examples/classify_image.py \
  --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels test_data/inat_bird_labels.txt \
  --input test_data/parrot.jpg

Once that was verified, I ran the Frigate container but used it to run this same script:

docker run \
  --rm \
  --shm-size="128mb" \
  --device=/dev/bus/usb:/dev/bus/usb \
  -v /etc/localtime:/etc/localtime \
  -v /frigate:/config:rw \
  -v /frigate/clips:/media/frigate/clips:rw \
  -v /frigate/recordings:/media/frigate/recordings:rw \
  --privileged \
  -p 5000:5000 \
  -p 1935:1935 \
  -it \
  -e FRIGATE_RTSP_PASSWORD="altiods" \
  --entrypoint bash \
  ghcr.io/blakeblackshear/frigate:stable

This spins up the container so you can poke around. Next I added two things:

apt-get update && apt-get install usbutils git
ls /dev/bus/usb/002/ -l

And then I ran the test code again:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
apt-get install python3-pycoral -y
mkdir coral && cd coral
git clone https://github.com/google-coral/pycoral.git
cd pycoral
bash examples/install_requirements.sh classify_image.py
python3 examples/classify_image.py \
  --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels test_data/inat_bird_labels.txt \
  --input test_data/parrot.jpg
-
This guide was written in 2021 using Debian 10 and an older Frigate. Is there a newer, up-to-date guide available somewhere?
-
Maybe this is useful: https://youtu.be/DmbFq5dMsFo
-
Thank you, @burnsie-la: your post (basically starting with the tteck Docker script and walking through it easily) was helpful and the best way I found to install Frigate 12 standalone (not part of Home Assistant, connecting to HA separately) on Proxmox 8.0.3. I chose Ubuntu 22.04 as my preference. I had to take two other steps:
1) Add to the LXC config: lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
1a) Optionally update the LXC config based on preference: swap: 1024
No other LXC changes were needed, as the tteck script to create a privileged Docker LXC adds everything else needed. Notably, it includes lxc.cgroup2.devices.allow: a, which covers all hardware.
2a) Remember that sometimes a reboot may lose the proper USB port and rebooting a second time is needed.
Info that may help others: I've seen no loss in inference speed (7.63 ms) vs natively installing on the host. Testing python3-pycoral on the LXC host is NOT needed or desired, as the Frigate Docker image has what's needed; I had errors about missing dependencies when trying, and there was no reason to figure that out. I suggest others also install within Docker:
version: "3"
version: '3.3'
-
@jfradkin33
-
Full unprivileged config working for me.
LXC config:
Note: fuse is for running Docker with fuse-overlayfs.
See this for calculating the idmap: https://bookstack.swigg.net/books/linux/page/lxc-gpu-access
Only on the LXC host:
Both on the LXC host and the Docker host:
docker-compose:
If you'd like to test whether the TPU is working on the LXC host or the Docker host, you can use this script. But in order to run it you have to install the "gasket-dkms" and "libedgetpu1-std" packages on the system you want to test as well.
-
I also have issues with getting my M.2 Coral to work in a Proxmox LXC with Docker/Portainer. When I type On the host I get With Where could my error be? The Frigate log:
compose file:
LXC file:
Frigate config:
-
Hi, I think I have a strange issue. I managed to get the USB Coral working on the first try, but I cannot get HW accel working, following this guide: Can someone please point me in the right direction? This is my docker compose:
and this is my LXC config:
Thanks!
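A hedged pointer rather than a definitive fix, since the compose and LXC files above were lost in the copy/paste: for VAAPI acceleration the render node has to be passed into the container and Frigate has to be told to use it, for example:

# docker-compose excerpt (sketch):
devices:
  - /dev/dri/renderD128:/dev/dri/renderD128

# Frigate config.yml excerpt (the preset exists in Frigate 0.12+):
ffmpeg:
  hwaccel_args: preset-vaapi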
-
Hello! At Proxmox -> LXC container -> Docker/Portainer I use the image ghcr.io/blakeblackshear/frigate:master-e3eae53-tensorrt (I have NVIDIA GPU passthrough). Where can I find the latest version of the Frigate TensorRT image in order to update?
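For what it's worth (not an official answer): the release builds are published on GHCR alongside the normal images, so instead of pinning a master commit you can usually track the stable TensorRT tag and recreate the container from it:

# Pull the current stable TensorRT build (tag name per the Frigate docs):
docker pull ghcr.io/blakeblackshear/frigate:stable-tensorrt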
-
Sorry to bump this up again. What would I add to the LXC config for multiple PCIe TPUs?
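A hedged sketch rather than a tested answer: a dual-TPU card (or two cards) shows up as /dev/apex_0 and /dev/apex_1 on the host, so the allow/mount pair is repeated per device, and Frigate then gets one edgetpu detector per TPU:

# LXC config (verify the real major:minor of each device with: ls -l /dev/apex_*):
lxc.cgroup2.devices.allow: c 120:0 rwm
lxc.cgroup2.devices.allow: c 120:1 rwm
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file 0, 0
lxc.mount.entry: /dev/apex_1 dev/apex_1 none bind,optional,create=file 0, 0

# Frigate config.yml -- one detector per PCIe device:
detectors:
  coral1:
    type: edgetpu
    device: pci:0
  coral2:
    type: edgetpu
    device: pci:1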
-
Below is my journey of running "it" on a Proxmox machine with:
I give absolutely no warranty/deep support on what I write below. This write-up is a mix of all kinds of information from all over the internet, like (but not limited to):
I have an Intel NUC8i5 with a Samsung 980 Pro SSD and 16 GB RAM.
Proxmox: pve-manager/6.4-5/6c7bf5de (running kernel: 5.4.106-1-pve)
Disclaimer: I AM NO EXPERT AT ALL and I might not be able to help with hard questions!
What I did:
In Proxmox, go to local storage and download TurnKey Core Linux:
Create a new CT (LXC Container):
Untick "unprivileged".
The password you choose here is the one you can later use to log in via the Proxmox shell/SSH with username root and the chosen password.
The rest is up to you :-).
After creation, do NOT start the container; go to Options -> Features and select Nesting:
Then via the Proxmox host shell go to
/etc/pve/lxc
and edit the container file via nano 10x.conf
(choose the right number of your LXC container). Put this in (attention, this is mine; the rest is up to you):
Go to options of the LXC container and select "start on boot"
Start the LXC container and go to the shell and use "next - next - install".
Then ctrl-c to go to the prompt and do:
For CIFS:
Make a new file in /etc called fstab with the command nano fstab
and put this in (adapted to your needs):
//192.168.1.1/frigate/clips /shared/frigate/clips cifs username=frigate,password=frigate,rw,users,dir_mode=0777,file_mode=0777 0 0
mount -a
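A quick sanity check (my addition, not part of the original steps) that the share actually mounted and is writable before pointing Frigate at it:

# List active CIFS mounts and try a throwaway write on the clips share:
mount -t cifs
touch /shared/frigate/clips/.write-test && rm /shared/frigate/clips/.write-test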
I am no Docker expert and thus I installed Portainer on the LXC. Go to the LXC shell and type:
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
Then you should be able to go to http://[ip of LXC container]:9000
The database of Frigate needs to run separately, and thus you need to put this in your Frigate config.yml:
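The exact snippet didn't survive here; a hedged example of what such a database section could look like, with the path chosen to match the /media/frigate/db mount used in the docker run command below:

# config.yml excerpt (sketch):
database:
  path: /media/frigate/db/frigate.db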
Above is thus the path inside the Docker container.
Then several links between the Docker container and the LXC container need to be there (they are called mounts).
in the LXC container do:
docker run --name frigate --privileged --shm-size=1g --mount type=tmpfs,target=/tmp/cache,tmpfs-size=2000000000 -v /shared/frigate/config:/config:ro -v /dev/bus/usb:/dev/bus/usb -v /etc/localtime:/etc/localtime:ro -v /shared/frigate/clips:/media/frigate/clips:rw -v /shared/frigate/db:/media/frigate/db:rw --device-cgroup-rule="c 189:* rmw" --device=/dev/dri/renderD128 -d -p 5000:5000 -e FRIGATE_RTSP_PASSWORD='password' blakeblackshear/frigate:0.8.4-amd64
This will create the frigate container in docker running on an LXC container on Proxmox --> incepted
I then go to portainer to
and then I deploy the docker container with this:
sorry, I don't know how to get this in the docker run command (tried but no luck).
Then "your" config.yml frigate config file need to be created in the LXC container (at least that's what I did):
nano /shared/frigate/config/config.yml
Then the hard part (at least for me) begins: creating your YAML Frigate config. Every time you edit this you can restart Frigate via Portainer or a docker command.
I have 5 very high-res cams running this way, with 6 other VMs running on Proxmox, and so far it runs fine.
I must admit that sometimes I have very vague Proxmox issues; I did some firmware updates on the SSD and the NUC itself, and this seems to have helped.
Also this helped (I think/hope):
https://forum.proxmox.com/threads/random-crashes-reboots-mit-proxmox-ve-6-1-auf-ex62-nvme-hetzner.63597/page-3
Good luck!