tvcvt

joined 1 year ago
[–] [email protected] 1 points 1 month ago

I’m making some assumptions, namely that you’re using an unprivileged LXC container and the mount point is a bind mount.

Unprivileged LXC containers shift user ID numbers so that an escape won’t result in root access to the host. The root user (uid 0) inside the container is actually uid 100000 from the perspective of the Proxmox host.

What I usually do is set ownership of my bind mounts to that high-numbered ID (so something like chown -R 100000:100000 /path/to/bind/mount) from Proxmox. Then the root user in the container will be able to set whatever permissions you need directly.
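For a concrete picture, here’s roughly what that looks like on the Proxmox host (the container ID 101 and the paths are just placeholders):

    # on the Proxmox host: hand ownership of the bind-mount source
    # to the container's shifted root user (uid/gid 100000)
    chown -R 100000:100000 /tank/share

    # the bind mount itself lives in the container's config,
    # e.g. /etc/pve/lxc/101.conf:
    # mp0: /tank/share,mp=/mnt/share

From inside the container that directory then shows up as owned by root, and you can chown/chmod it to whatever users the container knows about.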

[–] [email protected] 3 points 4 months ago (1 children)

There was a recent conversation on the Practical ZFS discourse site about poor disk performance in Proxmox (https://discourse.practicalzfs.com/t/hard-drives-in-zfs-pool-constantly-seeking-every-second/1421/). Not sure if you’re seeing the same thing, but it could be that your VMs are running into the same issue: the too-small default volblocksize that PVE uses when it creates zvols for its VMs under ZFS.

If that’s the case, the solution is pretty easy. In your PVE datacenter view, go to storage and create a new ZFS storage pool. Point it to the same zpool/dataset as the one you’ve already got and set the block size to something like 32k or 64k. Once you’ve done that, move the VM’s disk to that new storage pool.
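If you’d rather do it from the shell, it’s something along these lines (the storage name, pool path, and VM/disk IDs here are made up):

    # create a second storage entry pointing at the same dataset,
    # but with a larger volblocksize for newly created zvols
    pvesm add zfspool tank-64k --pool tank/vmdata --blocksize 64k --content images,rootdir

    # move an existing VM disk onto the new storage so it gets recreated
    # with the larger block size (older PVE releases call this "qm move_disk")
    qm move-disk 100 scsi0 tank-64k --delete 1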

Like I said, not sure if you’re seeing the same issue, but it’s a simple thing to try.

[–] [email protected] 7 points 4 months ago (2 children)

My go-to for this is a plain Debian or Ubuntu container with Cockpit and the 45Drives file sharing plugin. It’s straightforward to set up and works well.
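The setup on a fresh Debian container is roughly this (the plugin package name is from memory, so double-check the 45Drives docs for their repo setup first):

    # Cockpit itself comes straight from the distro repos
    apt update && apt install -y cockpit

    # the 45Drives file sharing plugin ships as "cockpit-file-sharing"
    # from their own package repo (see the 45Drives docs/GitHub for adding it)
    apt install -y cockpit-file-sharing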

[–] [email protected] 6 points 5 months ago

You can set maintenance schedules in Uptime Kuma and alerts won’t be sent out during those times. I use that for the window when my backup routines run each night. It seems like a decent cross-platform workaround.

[–] [email protected] 5 points 6 months ago (1 children)

You’ve got some decent answers already, but since you’re getting interested in ZFS, I wanted to make sure you know about discourse.practicalzfs.com. It’s the successor to the ZFS subreddit and it’s a great place to get expert advice.

[–] [email protected] 1 points 7 months ago

Is this urbackup-docker in a VM or an LXC? If the latter, you don’t need to add it in storage at all; you can bind mount the folder and use it directly. Here’s some info on that. If it’s in a VM and you want to use the directory directly (as in, not just making a disk image inside the directory to pass through as a block device), you’ll need to share the directory to the VM over the network (NFS, SMB, or similar).
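As a sketch of the bind-mount route (container ID and paths are placeholders):

    # on the Proxmox host: attach a host directory to LXC 105
    pct set 105 -mp0 /tank/backups,mp=/srv/urbackup

    # for an unprivileged container, remember to chown the source
    # directory to the shifted uid first, e.g.
    # chown -R 100000:100000 /tank/backups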

[–] [email protected] 2 points 8 months ago

It sounds like you’ve got your solution already, but just in case someone stumbles on this later, I thought I’d mention autofs.

I’m coming to prefer it over fstab entries because it handles disconnections nicely and attempts to reconnect. Worth checking out for those who haven’t played with it.
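A minimal example of what I mean, for an NFS share (server name and paths are made up):

    # /etc/auto.master -- hand the /mnt/nas directory to autofs,
    # unmounting shares after 60s of inactivity
    /mnt/nas  /etc/auto.nas  --timeout=60

    # /etc/auto.nas -- map file: mount-point key, options, source
    media  -fstype=nfs4,rw  nas.lan:/export/media

    # then restart the service: systemctl restart autofs

After that, /mnt/nas/media gets mounted on first access and quietly dropped when idle, instead of hanging the boot if the server is away.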

[–] [email protected] 1 points 8 months ago

Could be. If that’s the case, it’s nothing I’ve noticed. I’ve got a 32 GB VM and I’m running a bunch of LXC and Docker containers on it without issue.

[–] [email protected] 2 points 9 months ago (2 children)

I’ve never heard anyone else mention them, but I’ve had really good luck with https://www.ssdnodes.com for the past several years. I don’t recall ever using their support, but I did have a policy question before buying when I first signed up and they were pretty quick to reply. I think I found them on LowEndBox.

[–] [email protected] 11 points 9 months ago

I second mailcow. It’s what I’ve been using for years and it’s pretty great.

One thing I’ll add: before you take the plunge, make sure your VPS’s IP address isn’t on a block list somewhere. Pay a visit to mxtoolbox.com and you should find some resources there.
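Beyond the web tools, you can also spot-check the big DNS block lists straight from the VPS; the IP below is just an example, and it gets reversed in the query:

    # check 203.0.113.25 against Spamhaus ZEN: reverse the octets and
    # append the DNSBL zone; any answer back means the IP is listed
    dig +short 25.113.0.203.zen.spamhaus.org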

[–] [email protected] 2 points 9 months ago (1 children)

I’m a fan of the UniFi and Omada lines, but for your use case, I’d be looking for any AP that could run OpenWRT. That’s a super-powerful Linux-based router OS that meets all your needs and will present a nice web interface for each AP, no controller needed.

Check the project’s site for hardware compatibility, but I’ve had good luck with the GL.iNet travel routers and I bet some of their bigger models would do the trick for you.

[–] [email protected] 3 points 9 months ago

I completely agree with this. Seems like a stellar use for either Cloudflare Tunnels or Tailscale’s similar Funnel feature.

Connect it only to the gramos deployment and that will be the only piece of your setup available publicly.
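For the Cloudflare Tunnel side, the rough shape is below (tunnel name, hostname, and port are placeholders); Tailscale Funnel is a similarly short incantation pointed at a local port.

    # authenticate, create a named tunnel, and publish one local service
    cloudflared tunnel login
    cloudflared tunnel create my-public-app
    cloudflared tunnel route dns my-public-app app.example.com
    cloudflared tunnel run --url http://localhost:8080 my-public-app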
