I want to use Jellyfin on Proxmox, if that is a thing. After reading a post here where most people recommended Debian as the host OS, I want to make a VM running Debian and install Jellyfin Server there.
Now I have a few questions:
- I see many people install Jellyfin via docker. Does that have any advantages? I would prefer to avoid docker, as it adds a level of complexity for me.
- Where do I save my media? I have a loose plan to run a second VM running openMediaVault, where all my HDDs are passed through, and then use NFS to mount a folder on the Jellyfin VM. Is that a sane path?
- What do I have to consider on Proxmox to get the best performance out of Jellyfin? Do I need some special passthrough magic to get it running smoothly? I don't have a dedicated GPU; does that make the configuration easier?
Just run it in an lxc? I’ve installed it using: https://tteck.github.io/Proxmox/
Thank you for your answer. I may want to add some features in the future, like all those *arr programs. Wouldn't it be easier to have everything in one VM instead of many LXCs?
Use LXC unless that's for some reason not possible. It has less overhead than VMs. How many services you put into one container is for you to decide. I have one for jellyfin and one for the arrs and download client. Splitting everything into more containers can be beneficial if something stops working: you can then fix or restore a backup of that one thing without affecting the other services.
Unless you want to use docker. Then, as others have mentioned, make one VM and put all your dockers there.
I have the arrs in LXC too; I just map a folder from the host into the LXC containers. It's working flawlessly, plus it's quite flexible.
I also have a few things running in docker, but if I can get it in lxc I do that.
And it's so easy to do with the scripts from the page I linked you :)
Another benefit to LXC is you can map devices, including GPU, to multiple LXC while keeping them accessible to the host. For my home setup I currently have 3 LXCs with access to the iGPU: 1 for jellyfin+caddy via nested podman, 1 for moonfire-nvr via nested podman, and 1 where I've been trying to figure out hardware transcoding with owncast through multiple install methods, but no luck so far. I've also been playing with mapping rtl-sdr v3 devices, a zigbee stick, a zwave stick, and a Coral USB for a variety of projects lately.
edit: I forgot to answer the question and went straight to ranting, lol. An LXC is like a bare-metal VM. You can install and run multiple things in it like a normal VM, including podman or docker.
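For reference, the iGPU mapping for a privileged container looks roughly like this (container ID 101 is made up; an unprivileged container additionally needs the render group's GID mapped, which I'm leaving out here):

    # append to the container's config on the Proxmox host (example ID: 101)
    cat >> /etc/pve/lxc/101.conf <<'EOF'
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    EOF
    # major 226 = the DRM devices (card0 / renderD128); the same lines can go into
    # several containers' configs, and the host keeps access to /dev/dri as well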
you can map devices, including GPU, to multiple LXC while keeping them accessible to the host
You can do this with the iGPU for VMs too, using either GVT-g for older Intel iGPUs or SR-IOV for newer ones. I’m using my iGPU in a Windows VM as well as in Docker containers on the host.
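Rough sketch of the GVT-g route on the Proxmox host (works roughly from Broadwell up to Comet Lake; VM ID 100 and the mdev type below are examples, the real type names show up in /sys):

    # 1. enable GVT-g in the host kernel cmdline, then reboot:
    #    intel_iommu=on i915.enable_gvt=1
    # 2. list which mediated-device types the iGPU offers (names vary by hardware):
    ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
    # 3. hand a virtual slice of the iGPU to a VM (type name picked from step 2):
    qm set 100 -hostpci0 0000:00:02.0,mdev=i915-GVTg_V5_4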
I'd highly recommend taking a deeper look into Docker. While it might look complicated at first, it really isn't. Once you get the gist of it, your setup life will be much simpler in the future.
In a nutshell: Say you need to run jellyfin (or whatever)
Generally, you'd need to install jellyfin from the repos or download its binary, etc. Then you'd have to dig through the configuration process, where files are scattered all across the system. Probably, in some cases, you'd have to copy/move/symlink media files around, etc.
With Docker however, you just spin up jellyfin as a container and bind the necessary configuration and media files to that container, which is usually a one-liner.
So instead of having scattered config files all around the place, you can have something like ~/Docker/configs/jellyfin and bind that folder (or file) to the container's /etc/jellyfin. And you can use the same approach to have your media files in ~/Movies and bind that to jellyfin's /data folder. These are just examples; you'll just have to look where the docker containers expect the files to be, which is usually well documented.
And the final step is to bind the ports of the container to the host, so you can interact with the service as if it was running on the host.
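For example, with the official jellyfin/jellyfin image the whole thing boils down to something like this (the host-side paths are just examples, reusing the ones above):

    # -p publishes the web UI port on the host, -v binds host folders (left) to container paths (right)
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v ~/Docker/configs/jellyfin:/config \
      -v ~/Movies:/media \
      jellyfin/jellyfin
    # then point a Jellyfin library at /media in the web UI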
+1 for using Docker.
I run AdGuard Home, Plex, UniFi Controller and WireGuard on a Raspberry Pi. When I upgraded from a Pi 3 to a Pi 4, I just had to plug my portable HDD into the new Pi, copy over the docker-compose.yml, and configure the disk to mount on boot. No messing around having to install and reconfigure each of the apps. No need for Plex to redownload all its metadata as it used to when I migrated in the past.
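The whole migration was roughly just this (the disk UUID and paths are placeholders; older installs use docker-compose instead of docker compose):

    # make the old HDD mount on boot on the new Pi
    sudo mkdir -p /mnt/usb
    echo 'UUID=<disk-uuid>  /mnt/usb  ext4  defaults,nofail  0 2' | sudo tee -a /etc/fstab
    sudo mount -a
    # bring everything back up from the copied compose file
    cd /mnt/usb/docker && docker compose up -d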
So I run Jellyfin in an Ubuntu container; just wanting to note that while the config files live somewhere on the system, you don't actually need to touch them. All configuration can be performed via the web interface, so it's all abstracted away. It's not any easier to use Docker in that respect at all. What you're describing with bind mounts means that your Docker host also needs access to the files/folders; you then map them into the container via bind mounts.
Same thing in my case: I make sure that Proxmox has access to the files, map the folder into the container, and then Jellyfin can access it directly. No fiddling around with Jellyfin configs.
If you're using NFS, I'd argue it's easier not to use Docker. Just install Jellyfin, set up the NFS client to mount the folder, and then configure Jellyfin to find the folder. Job done.
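A minimal sketch of that, assuming the OMV VM exports /export/media at 192.168.1.10 (both made up):

    apt install nfs-common
    mkdir -p /mnt/media
    echo '192.168.1.10:/export/media  /mnt/media  nfs  defaults,_netdev  0 0' >> /etc/fstab
    mount -a
    # install Jellyfin from their official repo, then add /mnt/media as a library in the web UI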
Docker containerizes jellyfin. If you use proxmox you don’t need to containerize in a “container”.
That makes sense, docker is off the table.
Edit: or is it? Not decided yet.
My setup: jellyfin in a Debian 12 LXC, installed normally following the official documentation for Debian (no docker). My media is on a different drive than the OS; I just add a mount point to the container, although this needs to be done via the CLI (you can avoid the CLI if it's on the same drive, I think, not sure).
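For reference, the mount point is a single command on the Proxmox host (container ID and paths made up):

    # bind-mount a host folder into LXC 101; the part before the comma is the host path,
    # mp= is where it shows up inside the container
    pct set 101 -mp0 /mnt/media,mp=/mnt/media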
Jellyfin is very conveniently packaged in docker, so while it may seem daunting, I highly recommend at least trying that route.
Running an NFS mount, docker or not, should be perfectly fine. Jellyfin just uses normal storage, so it won't care if it's NFS. No real special considerations with Proxmox either, especially without a dedicated GPU to worry about. Just spin up a Debian guest and go.
The other comment made sense to me, why contain a container. But you are right, I will learn more about docker, it seems like a great tool.
Thank you for the confirmation on NFS. I just read about it yesterday while searching for an alternative to Samba, which all the Windows users seem to use.
You “contain the container” because the VM provides storage and compute for docker (the docker container needs to run “somewhere”).
I use a VM on Proxmox to run a jellyfin container. The VM mounts the needed NFS dirs for config and media. Then a systemd service starts/stops the container.
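In case it's useful, the unit can be as small as something like this (container and unit names are examples; a Docker restart policy would do the job too):

    cat > /etc/systemd/system/jellyfin-container.service <<'EOF'
    [Unit]
    Description=Jellyfin Docker container
    Requires=docker.service
    After=docker.service remote-fs.target

    [Service]
    # 'docker start -a' keeps the container in the foreground so systemd can track it
    ExecStart=/usr/bin/docker start -a jellyfin
    ExecStop=/usr/bin/docker stop jellyfin
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now jellyfin-container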
I understand that I can use a VM to run docker, but:
Wouldn't an LXC make more sense than a VM with docker inside? And what are the advantages of running jellyfin in a container instead of a normal installation? The VM is already kind of a container; what benefits do I get from yet another container inside? I am curious to learn more!
Jellyfin is also conveniently packaged as a .deb, and they provide a repo for Ubuntu/Debian. It's pretty easy to spin up a Debian container, add the repo, and apt install jellyfin; IMHO easier than doing the same thing with a VM, then docker…
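Inside the Debian LXC that's roughly the following (the repo.jellyfin.org details below are from memory, so double-check the current install instructions on jellyfin.org):

    apt update && apt install -y curl gnupg
    # Jellyfin's helper script sets up their apt repo and installs the package on Debian/Ubuntu
    curl -fsSL https://repo.jellyfin.org/install-debuntu.sh | bash
    # or add the repo by hand following their docs, then simply:
    apt update && apt install -y jellyfin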