pyrosis

joined 5 months ago
[–] [email protected] 1 points 1 month ago

When I was experimenting with this it didn't seem like you had to distribute the cert to the service itself, as long as the internal service was listening on an HTTPS port. The certificate management was still happening on the proxy.

The trick was more about getting the hostnames right and pointing hostname resolution at the proxy.

Either way, IP addresses are much easier, but it is nice to observe a stream being passed through completely. I'm sure it takes a load off the proxy and stabilizes connections.

[–] [email protected] 0 points 1 month ago (2 children)

This would be correct if you are terminating SSL at the proxy and it's just passing the traffic on over plain HTTP. However, if you can enable SSL on the service itself, it's possible to take advantage of full passthrough, if you care about such things.

[–] [email protected] 1 points 4 months ago (1 children)

Another thing to keep in mind with ZFS is that underlying VM disks will perform better if the pool is a mirror or a stripe of mirrors. RAIDZ1/RAIDZ2-type pools are better suited to media and file storage. VM disk IO improves dramatically on the mirror-style layouts. Just passing on what I've learned over time optimizing systems.

[–] [email protected] 1 points 4 months ago (1 children)

Bookmark this if you utilize zfs at all. It will serve you well.

https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

You will be amazed by the ZFS performance possible in Proxmox thanks to all the tuning you can do. If this is going to be an existing ZFS pool, keep in mind it's easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple of options. Make sure ashift is at least 12 during the install, or 13 if you are using some crazy-fast SSD as the Proxmox disk for the rpool.

It needs to be 12 for a modern-day spinner (4K sectors), and 12 is probably a good setting for most SSDs too. Do not go over 12 if it's a spinning disk.
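If you end up creating an additional pool by hand later, ashift is set at pool creation time and can be checked afterward. A minimal sketch, assuming a simple two-disk mirror (pool name and device paths are placeholders; use /dev/disk/by-id paths in practice):

zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb   # 4K alignment
zpool get ashift tank                                     # verify it took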

Now, beyond that, assuming you have an existing ZFS pool, you can directly import it into Proxmox with a single import command.
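Roughly, the import looks like this (the pool name is a placeholder; the pvesm line just registers the pool as Proxmox storage afterward):

zpool import                                   # list pools visible on attached disks
zpool import mediapool                         # import the pool by name
pvesm add zfspool mediapool --pool mediapool   # make it usable as VM/CT storage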

In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.

You should consider tweaking a couple of things from the guide I linked to really improve performance.

Proxmox VMs/zvols live in their own dataset. Before you get too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default the record size for all datasets is 128k. qcow2, raw, and even zvols will benefit from a record size of 64k because it tends to improve the performance of the underlying guest filesystems like ext4, XFS, even UFS. Imo it's silly to create VM filesystems like btrfs when your VM is already sitting on top of a CoW filesystem.
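A sketch of those record size tweaks (names shown are the installer defaults, adjust if yours differ; note that zvols use volblocksize rather than recordsize, and on Proxmox that is controlled by the storage's blocksize setting when the zvol is created):

zfs set recordsize=64K rpool/data       # datasets holding qcow2/raw images
pvesm set local-zfs --blocksize 64k     # applies to newly created zvol-backed disks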

Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer zstd is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks. Honestly it's just a good default to specify for the entire pool; you can select other compression for datasets with more static data.
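Something like this, set at the pool level so new datasets inherit it (pool/dataset names are placeholders):

zfs set compression=lz4 rpool            # fast default for everything, incl. VM disks
zfs set compression=zstd tank/archive    # heavier compression only for static data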

If you have a media dataset full of files like music, vids, and pics, setting a record size of 1M will heavily improve disk IO.
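For example (dataset name is a placeholder; keep in mind recordsize only affects newly written files, existing files keep the record size they were written with):

zfs set recordsize=1M tank/media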

In Proxmox, ZFS will default to grabbing up to half of your memory for the ARC. Make sure you change that after install. It's a config file that defines arc_max in bytes. Set the max to something more reasonable if you have 64 gigs of memory. You can also define arc_min.
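On Proxmox that's a modprobe option file, roughly like this (the 16 GiB max / 4 GiB min are just example values):

echo "options zfs zfs_arc_max=17179869184 zfs_arc_min=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all    # needed so the limits also apply at boot when root is on ZFS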

Some other huge improvements? If you are using an SSD for your Proxmox install, I highly recommend installing log2ram on your hypervisor. It will stop all those constant log writes from hitting your SSD, and it syncs them to disk on a timer and at shutdown/reboot. It's also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
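The tmpfs move is just a couple of fstab entries, something like this (the size caps are assumptions, size them to your RAM):

# /etc/fstab
tmpfs  /tmp      tmpfs  defaults,noatime,nosuid,nodev,size=2G  0 0
tmpfs  /var/tmp  tmpfs  defaults,noatime,nosuid,nodev,size=1G  0 0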

So many knobs to turn. I hope you have fun playing with this.

[–] [email protected] 0 points 4 months ago (4 children)

I like using docker networks, but that's me. They get created for every service and it's easy to target the gateway. Just make sure DNS is correct for your hostnames.

Lately I've been optimizing remote services for reverse proxy passthru. Did you know that it can break streams momentarily and make your proxy work a little harder if your host names don't match outside and in?

So, in other words, if you want full passthrough of a TCP or UDP stream to your server, without the proxy terminating it and opening a new stream, you have to make sure the internal and external networks are using the same FQDN for the service you are targeting.

It can actually break SNI-based passthrough if they don't use the same hostname, and cause a slight delay. Kinda matters for things like streaming video, especially if you are using a reverse proxy and the service supports QUIC or HTTP/2.

So a reverse proxy entry that simply passes the stream without breaking it and resending it might look like...

Obviously you would need to get the HTTPS port working on Jellyfin and have IPv6 working with internal DNS in this example.

server {
    listen 443 ssl;
    listen [::]:443 ssl;  # Listen on IPv6 address

    server_name jellyfin.example.net;

    ssl_certificate /path/to/ssl_certificate.crt;
    ssl_certificate_key /path/to/ssl_certificate.key;

    location / {
        proxy_pass https://jellyfin.example.net:8920;  # Use FQDN
        ...
    }
}
[–] [email protected] 0 points 4 months ago (5 children)

Yup, you can. In fact you likely should, and you will probably find disk IO improving dramatically compared to what you originally had in mind. It's better, in my opinion, to let the hypervisor manage disk operations, which means it should also share files over SMB and NFS, especially if you are already considering NAS-type operations.

Since Proxmox supports ZFS out of the box, along with btrfs and even XFS, you have a myriad of options. Combine that with Cockpit and you have a nice management interface.

I went the ZFS route because I'm familiar with it and I appreciate the native sharing options built into the filesystem. It's cool to have the option to create a new dataset off the pool and pass it directly into a new LXC container.
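Roughly like this (pool path, dataset, and container ID are placeholders):

zfs create tank/shares/media                          # new dataset on the existing pool
pct set 101 -mp0 /tank/shares/media,mp=/mnt/media     # bind-mount it into LXC 101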

[–] [email protected] 0 points 4 months ago (8 children)

Have you considered the increase in disk IO, and that hypervisors prefer to be in control of all the hardware? Including disks...

If you are set on Proxmox, consider that it can directly share your data itself. This can be made easy with Cockpit and the ZFS plugin; the plugin helps if you have existing pools. Both can be installed directly on Proxmox and present a separate web UI with different options for system management.
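Getting the base Cockpit UI onto the Proxmox host is quick; the ZFS and file sharing plugins are separate add-ons installed on top of it (treat this as a sketch):

apt update
apt install cockpit --no-install-recommends   # avoids pulling in extras like NetworkManager
# Cockpit then serves its own web UI on port 9090, separate from the Proxmox UI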

The safe things to use there are the file sharing and pool management operations. Basically, use the Proxmox web UI for everything it permits first.

Either way have fun.

[–] [email protected] 1 points 4 months ago

Setups for hardware decoding depend on the underlying OS. A quite common example is Docker on Debian or Ubuntu. You will need to pass the appropriate /dev/ directories, and at times individual device files, into your Jellyfin docker container with the devices option. Commonly that would be /dev/dri.
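As a rough sketch with plain docker run (the volume paths are placeholders, the image is the official one):

docker run -d \
  --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin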

It gets more complicated with a VM, because you are likely going to be passing the hardware directly into the VM, which will prevent anything outside the VM from using it.

You can get around this by placing Docker directly on the OS, or by placing Docker in a Linux container with the appropriate permissions and the same devices passed into that container. In this manner, the host and other services will still have access to the video card.
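On Proxmox, passing /dev/dri into an LXC container roughly means appending something like this to the container's config (the container ID is a placeholder; 226 is the standard major number for DRM devices, and unprivileged containers may also need uid/gid mapping so the container user can actually open the device):

cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF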

All this to say: how you pass the hardware into Jellyfin depends on your setup and where you have Docker installed. Either way, Jellyfin on Docker will need you to pass the video card into the container with the devices option, and Docker itself will need to see the device to be able to do that.