Selfhosted

38707 readers
677 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report them using the report flag.

Questions? DM the mods!

founded 1 year ago
1

Hello, it's me again. I read a lot about how unreliable micro SD cards are if you use your RPi to selfhost some stuff. Now I wanted to ask if some of you might have recommendations for cheap but reliable external SSDs. I did some research on Amazon, but there are some brands I've never heard of before (Intenso, SSK, Netac, etc.) and I don't know if they can be trusted.

2

It's a bad title, but I'm trying to figure out how to describe what I want.

First, I got my PhotoPrism instance working through Cloudflare. Now I would like an email address on the same domain.

So mysite.com gets routed through Cloudflare to, let's say, 56.654.234.12, such that an outside user never sees my IP. But mail.mysite.com is different: Cloudflare doesn't proxy email, so if you do a reverse lookup you can find the origin IP.

I heard about tunnels, so I stupidly signed up for that, only to learn that a tunnel just lets you into an internal network. So an SMTP server can't receive email from outside that way.

Ideally, I could somehow set up one user at Gmail or Proton Mail, then set up the same or a different [email protected], and use Mailu, Mailcow, or some mail Docker image to house my [email protected], which routes mail through Gmail or Proton Mail. I know all this makes little sense, because I don't know the proper way; that's my question for you smart people who have done this twice over. Could someone point me to the best way of setting up a local mail server that routes through Cloudflare but is not easily reverse-looked-up? Is that even a problem at all?
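As far as I understand it, the leak is visible right in DNS: Cloudflare can proxy the web records, but MX targets have to stay DNS-only, so their A record is public. A hypothetical zone (placeholder IPs):

    ; hypothetical zone, addresses are placeholders
    mysite.com.        A   203.0.113.10    ; proxied ("orange cloud"): visitors only see Cloudflare
    mail.mysite.com.   A   198.51.100.20   ; DNS-only: MX targets can't be proxied, so this IP is public
    mysite.com.        MX  10 mail.mysite.com.

From what I've read, the usual workaround is to let an external box (a cheap VPS, or the mail provider itself) be the MX and forward mail to the home server over a VPN or tunnel, so the home IP never appears in DNS. Is that the right direction?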

3

Not sure if there's a pre-existing solution to this, so I figured I'd just ask to save myself some trouble. I'm running out of space in my Gmail account and switching email providers isn't something I'm interested in. I don't want to pay for Google Drive and I already self-host a ton of other things, so I'm wondering if there is a way to basically offload the storage for the account.

It's been like 2 decades since I last set up an email server, but it's possible to have an email client download all the messages from Gmail and remove them from the server. I would like to set up a service on my servers that does that and then acts as the mail server for my clients. Gmail would still be the outgoing relay and the always-on remote mailbox, but emails would eventually be stored locally, where I have plenty of space.

All my clients are VPN'd together with Tailscale, so the lack of external access is not an issue. I'm sure I could slap something roughshod together with Linux packages but if there's a good application for doing this out there already, I'd rather use it and save some time.

Any suggestions? I run all my other stuff in Kubernetes, so if there's one with a Helm chart already I'd prefer it. Not opposed to rolling my own image if needed though.
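To make the idea concrete, the fetch side I have in mind is what fetchmail/getmail classically do; a minimal sketch, assuming fetchmail handing off to a local Dovecot (the username, app password and MDA path are placeholders):

    # hypothetical ~/.fetchmailrc; Gmail needs IMAP enabled and an app password
    poll imap.gmail.com protocol IMAP
      user "myaccount" password "app-password-here"
      ssl
      no keep                              # delete from Gmail once fetched
      mda "/usr/lib/dovecot/dovecot-lda"   # hand off to the local IMAP server

Dovecot (or similar) would then serve the messages to my clients over Tailscale, with Gmail still configured as the outgoing SMTP relay in each client.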

4

TL;DR:

  • I can't decide between Debian and the new "immutable" Fedora server variants
  • Currently I use Debian with pretty much everything being containerised, and it works fine.
  • I'm neither very good at what I'm doing, nor do I want to spend my weekends troubleshooting. Opting for something new could cause some headaches, I guess?
  • How did you set up CoreOS? Are there simple ways?
  • Would you recommend me something different?

My backstory with Debian

I will soon set up a new home server and need your opinion and experiences.

I'm using Debian as the OS for my current one.
While it doesn't match my "taste" perfectly, as I slightly prefer RedHat stuff, I really don't have much preference, since I don't interact with the host much anyway.
Everything is containerised via Docker, and I don't even know why I like Rocky/Alma more. I tried Alma once and it just clicked better; I can't explain it...
But that doesn't mean I dislike Debian, not at all!

Still, at that time I decided to go with Debian, since it's the standard for most selfhosters, has the best software support, and is completely community-run, as opposed to RHEL and its clones.

At that time I didn't know about Distrobox/Toolbx, and I really wanted to install CasaOS (basically a simplified Cockpit + Portainer for less techy people), because I was a total noob back then and didn't want to do everything via CLI.

Nowadays, I found alternatives, like Cockpit, and I also do more via the terminal.
And if I want to install something that doesn't support my host OS, then I just enter my Toolbx and install it there.

Still, I absolutely don't regret going for Debian. It was a good choice. It's solid and doesn't get in my way.


What has changed in the last year(s)

In the last year, I've really begun to enjoy using image-based distros, especially Fedora Atomic.
I really love Atomic as a desktop distro, because it is pretty close to upstream while still being stable (as in: how often things change).

For a desktop workstation that's great, because DEs, for example, only get better with each update imo, and I want to be as close to upstream as possible without sacrificing reliability, like on a rolling release.
The cycle of two major releases per year is great for that.

But for a server, even with the more stable kernel that CoreOS uses from what I've heard, I think that might be too unstable?

I think Debian is less maintenance, because it doesn't change as often, and also doesn't require rebooting after each transaction.

But, on the other hand, I wouldn't lose much to the "immutability", because I use containers for everything anyway.
Having way better security (sane SELinux setup, rootless containers, an untampered OS, etc.) and the ability to roll back in case something doesn't work, while self-updating, sounds very promising.


Setting up CoreOS; FCOS vs FIOT

The major thing that's keeping me away from CoreOS/uCore is all the Ignition/Butane stuff.
From what I've heard, it's needlessly complicated for home use, and FCOS is best suited for fleets/clusters of servers, not just one.
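From the little I've seen, though, the single-server case needs much less Butane than the fleet-oriented docs suggest; a minimal sketch (the spec version and key are placeholders, check the current FCOS docs):

    # config.bu, compiled with: butane --pretty --strict config.bu > config.ign
    variant: fcos
    version: 1.5.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... my-key-here

But maybe I'm underestimating the rest of it.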

Fedora IOT seems to be simpler, but doesn't have the same great defaults and features as uCore, since there isn't an IOT variant of uBlue.
But hey, at least I have my Anaconda installer.

What do you think about installing IOT, and then rebasing to uCore?
Or, do you think FCOS is just not the right thing for my use case?

In general, do you think that it is worth it, compared to plain old Debian?


Pros vs. cons

Anyway. I've been thinking about all of this for a long time now, and I can't decide.

On the one side, it all sounds promising and great.
But, on the other hand, selfhosting isn't a primary hobby of mine. I just want a solid setup I don't have to maintain much after setting everything up. Image-based server OSs are still very new and often unheard of, and being an early adopter might cause a lot of headaches when it comes to servers.


The "right" use case?

Just in case no one has tried FCOS or FIOT here, I will continue using Debian for my main server, and only use Fedora IOT for my Octoprint server, which only gets turned on sporadically, and would greatly benefit from that.

But if there are positive experiences, then I might give it a try.


Alternatives

Or, would you recommend me something entirely different?

NixOS for example sounds great in theory, but is way too complicated for me personally.

Or, would you recommend me to give Alma another try?

Is there something even better?

5

Not my blog, just a good community share. The authors are on Mastodon: @[email protected]

6

Hello everybody, Daniel here!

We're excited to be back with some new updates that we believe the community will love!

As always before we start, we’d like to express our sincere thanks to all of our Cloud subscription users. Your support is crucial to our growth and allows us to continue improving. Thank you for being such an important part of our journey. 🚀

What's New?


🛠️ Code Refactoring and Optimization

The first thing you'll notice is that Linkwarden is now faster and more efficient.[^1] The app also shows a skeleton placeholder while data is being fetched, instead of saying "you have no links", which makes it feel more responsive.

🌐 Added More Translations

Thanks to our collaborators, we've added Chinese and French translations to Linkwarden. If you'd like to help translate Linkwarden into your language, check out #216.

✅ And more...

Check out the full changelog below.

Full Changelog: https://github.com/linkwarden/linkwarden/compare/v2.6.2...v2.7.0


If you like what we’re doing, you can support the project by either starring ⭐️ the repo to make it more visible to others or by subscribing to the Cloud plan (which helps the project, a lot).

Feedback is always welcome, so feel free to share your thoughts!

Website: https://linkwarden.app

GitHub: https://github.com/linkwarden/linkwarden

Read the blog: https://blog.linkwarden.app/releases/2.7

[^1]: This took a lot more work than it should have since we had to refactor the whole server-side state management to use react-query instead of Zustand.

7

I want to have a local mirror/proxy for some repos I'm using.

The idea is to have something I can point my reads at, so that I'm free to migrate my upstream repositories whenever I want, and so that my stuff doesn't stop working if one of the jankier third-party repos I use disappears.

I know the various Forgejo/Gitea/GitLab/... options (well, at least some of them - I didn't check the specifics) have pull mirroring, but I'm looking for something simpler... ideally something with a single config file where I list what to mirror and how often to update, and which then allows anonymous read access over the network.

Does anything come to mind?
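To illustrate the level of simplicity I mean, something in the spirit of this cron-driven sketch (the paths and the list file are made up):

    # update-mirrors.sh: one bare mirror per upstream listed in mirror-list.txt
    while read -r url; do
      name=$(basename "$url" .git)
      if [ -d "/srv/git/$name.git" ]; then
        git -C "/srv/git/$name.git" remote update --prune   # refresh existing mirror
      else
        git clone --mirror "$url" "/srv/git/$name.git"      # first-time clone
      fi
    done < /srv/git/mirror-list.txt

plus something like "git daemon --base-path=/srv/git --export-all" for anonymous read-only access. If a ready-made tool does exactly this, even better.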

8

First of all, thank you so much for your great answers under my post from yesterday! They were really really helpful!

I've now decided that I will not use something with USB. It really doesn't seem to be reliable enough for constant read/write tasks, and I don't wanna risk any avoidable data loss and headache.

Also, it just doesn't seem to be very future-proof. It would be pretty expensive, only to get replaced soon and become obsolete. It just seemed like a band-aid solution tbh. So: no USB hard drive bay, no huge external hard drive, and no NAS just for that purpose.


A few people asked me about the hardware.

My server is a mini-PC/thin client I bought used for 50 bucks. I've used it for about two years now, and it had even more years of usage under its belt with its former owner. Imo that's a very sustainable solution, and it has worked pretty well until now.

I used it almost exclusively for Nextcloud (AIO), with all the data being stored in the internal 1 TB SSD.

For those who are interested, here are all the hardware details:

<hwinfo -short>

Thing is, I want to get more into selfhosting. For that, my main goals are to:
a) Replace Nextcloud with individual (better) services, like Immich and Paperless-ngx.
NC-AIO was extremely simple to set up and worked pretty well, but I always found it bloated and a bit wonky, and, mainly, the AIO eats up all my network and resources. I just want something better, you understand that for sure :)
b) Get more storage. I'm into photography, and all those RAW photos take up SO MUCH SPACE! The internal 1 TB just isn't future-proof for me.
c) Maybe rework my setup, both in software and maybe in hardware. Originally I didn't plan to scrap everything, but I think it might be better that way. The setup isn't bad at all, but now that I have more experience, I just want it to be more solid. I'm not sure about doing that tbh, since it really isn't a lost cause.


As someone already mentioned in the last post, I really don't have a million bucks to create my own data center. I'm not completely broke, but almost :D
Therefore, I just want to make the best out of my already existing hardware if possible.

Because I decided against USB, and because I don't know if there are any slots on the mainboard that can be repurposed for additional storage, I need your advice on whether there are any options to achieve that, e.g. via a PCIe slot + adapter, if I have one free.
I saw one SATA III port, but that alone really isn't enough, especially for extendability.
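In case it helps you help me, I can check what the board actually exposes with something like:

    # what expansion the board actually has (run on the server)
    sudo dmidecode -t slot   # lists PCIe/M.2 slots and whether they're in use
    sudo lspci               # what's already attached to the PCIe bus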

Here are the photos from both the front and back side:


My thought was: instead of buying one hella expensive 3+ TB SSD, just screw it and build something better from scratch.

So, unless you guys give me a silver-bullet solution, aka "you can use this slot and plug in 4 more drives", I will probably have to build my own "perfect" device, with a great case, silent fans, many storage slots, and more.

Btw, do you have any recommendations for that? (What mainboard, which case, etc.) Preferably stuff that I can buy already used.

Thank you so much!

9

/edit: did a firmware upgrade of the AP and can't replicate it anymore. Thanks all for the input, much appreciated. In case it happens again I will use your tips.

I have a very weird issue. I've got a relatively simple network setup:

  • router connected to ONT (Fibre)
  • 10 port switch A connected to router, cables to various places in house
  • 4 port switch B connected to switch A, with TV & Xbox connected
  • UniFi WiFi AP connected to switch A, with both 2.4 GHz and 5 GHz networks

That works well. However, when I connect the WiFi AP to switch B, I'm having issues. Initially it all works well, but after ~30 minutes the WiFi stops working; I can no longer ping e.g. the router. It only happens to one of the WiFi networks (2.4 GHz or 5 GHz), not both. A reboot of the AP solves it, but then it stops working again after ~30 minutes.

Both switch A and B are gigabit switches, with zero issues on other devices.

Any idea what I can try?

10

My Linksys router died this morning - fortunately, I had a spare Netgear one lying around, but manually recreating all the DHCP reservations (security cameras, user devices, network devices, specific IoT devices) and port forwarding rules was a tedious pain. I needed a quick solution; my job is remote, so I factory reset the Netgear (I wasn't sure what settings were already on it) and applied the most important settings to get the job done.

I'm looking for recommendations for either a more mature setup, a backup solution, or another approach. Currently, my internet is provided by an AT&T ONT, which has almost everything disabled (DHCP included) and was passing through to my Linksys router. That acted as the router and DHCP server, and provided a direct connection to an 8-port switch, which split off into devices and 2 more routers acting as access points (one for the other side of the house, one for the detached garage; DHCP disabled on both).

If going the route of a backup solution: is it feasible to install OpenWRT on all of my devices, with the expectation that I can do some sort of automated backup of all settings and configurations, and restore them in case a router dies?
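From what I've read so far, OpenWRT at least makes the backup part scriptable; a sketch, assuming stock OpenWRT (filenames are placeholders):

    # dump the whole config as a tarball (easy to run from cron and copy off-box)
    sysupgrade -b /tmp/backup-$(date +%F).tar.gz
    # restore onto a replacement device that's already running OpenWRT
    sysupgrade -r /tmp/backup-2024-01-01.tar.gz

But I'd like to hear whether that works as well in practice as it sounds.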

If going the route of a smarter solution, I'm not sure what to consider, so I'd love to hear some input. I think having so many devices using DHCP reservations might not be the way to go, but it's the best way I've been able to provide organization and structure to my growing collection of network devices.

If going with a more mature setup, I'm not sure what to consider for a fair ballpark budget / group of devices for a home network. I've been eyeing the Ubiquiti Cloud Gateway + 3 APs for a while (to replace my current 1 router / 2 routers-in-AP-mode setup), but am wondering if the selfhosted community has any better recommendations.

I'm happy to provide more information - I understand that selfhosting / home network setup is not a one-size-fits-all.

Edit: Forgot to mention! Another minor gripe I have is that my current 1-router / 2-routers-as-APs solution isn't meshed, so my devices have to be aware of all 3 networks as I walk across my property. It's a pain that I know can be solved by buying dedicated access points (...right?), but I'd like to hear others' experiences with this, either with OpenWRT or other network solutions!

Edit 2: Thanks for the suggestions and discussion everybody, I appreciate hearing everybody's recommendations and different approaches. I think I'm leaning towards the Ubiquiti UCG Ultra and a few Ubiquiti APs, they seem to cover my needs well. If in a few years that bites me in the ass, I think my next choices will be Mikrotik, OPNsense, or OpenWRT.

11

I am not looking for something like Permify, but rather something like Snipe-IT, only for permissions and roles given to users.

Like an overview of which systems, software etc. a user has access to.

Does something like that exist?

12

I'm planning to upgrade my home server and need some advice on storage options. I already researched quite a bit and heard so many conflicting opinions and tips.

Sadly, even after asking all those questions to GPT and browsing countless forums, I'm really not sure what I should go with, and need some personal recommendations, experience and tips.

What I want:

  • More storage: Right now, I only have 1 TB, which is just the internal SSD of my thin client. This amount of storage will not be sufficient for personal data anymore in the near future, and it already isn't for my movies.
  • Splitting the data: I want to use the internal drive just for stuff that actively runs, like the host OS, configs and Docker container data. Those are in one single directory and will be backed up manually from time to time. It wouldn't matter that much if they get lost, since I didn't customize a lot and mostly used defaults for everything. The personal data (documents, photos, logs), backups and movies should each get their own partition (or subvolume).
  • Encryption at rest: The personal data is currently unencrypted, and I feel very uneasy about that. It definitely has to be encrypted at rest, so that somebody with physical access can't just plug the drive in and see all my sensitive data in plain text. Backups are already encrypted as is. And for the rest, like movies, astrophotography projects (huge files!), and the host, I absolutely don't care.
  • Extendability: If I notice one day that my storage is becoming insufficient, I want to just plug in another drive and extend my current space.
  • Redundancy: At least for the most important data, a hard drive failure shouldn't be a mess. I back it up regularly to an external drive (with Borg), and sometimes manually by just copying the files plainly. Right now the problem is: if the single drive fails, which it might, it would be very annoying. I wouldn't lose much data, since it all gets synced to my devices and I could just copy it back, and I have two offline backups available just in case, but it would still cause quite some headache.

So, here are my questions:

Best option for adding storage

My Mini-PC sadly has no additional ports for more SATA drives. The only option I see is using the 4 USB 3.0 ports on the backside. And there are a few possibilities for how I can do that.

  • Option 1: just using "classic" external drives. With that, I could add up to 4 drives. One major drawback is the price. Disks with more than 1 TB are very expensive, so I would hit my limit at 4 TB if I don't want to spend a fortune. Also, I'm not sure about the power supply and the stability of the connection. If one drive fails, a big portion of my data is lost with it. I could also combine them into a RAID setup, which would halve my already limited storage space even further, and then the space wouldn't be sufficient or extendable anymore. And of course, it would just look very janky too...
  • Option 2: The same as above, but with USB hubs. That way, I could theoretically add up to 20 drives with 5-slot hubs. That would of course be very suboptimal, because I highly doubt a single USB port can handle the power demand and the bandwidth/data integrity of that huge number of drives. In reality, I of course wouldn't add that many. Maybe only two per hub, set up as RAID. That would make 4x2 drives.
  • And, option 3: buy a specialized hard drive bay, like this simpler one with two slots or this more expensive one with 4 slots and active cooling. With those, I can just plug in up to 4 drives per bay and connect it via USB. The drives get their power not from the USB port but from their own power supply. They also get cooled (either passively via the case, if I choose one that fits only two drives, or actively with a fan), and there are options for different storage modes, for example built-in RAID. That would make the setup quite a bit simpler, but I'm not sure if I would lose control over formatting the drives how I want if they're managed by the bay.

What would you recommend?

File system

File system type

I will probably choose BTRFS if that is possible. I thought about ZFS too, but since it isn't included by default, and BTRFS does everything I want, I will probably go with BTRFS. It would give me subvolumes (some of which can be encrypted), compression, deduplication, RAID or merged drives, and it seems future-proof without any disadvantages for me. My host OS (Debian) is installed with Ext4, because it came like that by default, and that's fine for me. But for storage, something other than Ext4 seems to be the superior choice.
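The extendability part is actually what draws me to BTRFS; as far as I understand it, growing onto a new disk is just this (device names and mountpoint are made up):

    sudo btrfs device add /dev/sdX /mnt/storage   # attach the new drive to the filesystem
    sudo btrfs balance start -d /mnt/storage      # optional: spread existing data across devices

Please correct me if that's naive.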

Encryption

Encrypting drives with LUKS is relatively straightforward. Are there simple ways to do that other than via the CLI? Do Cockpit, CasaOS or other web interface tools support it? Something similar to GNOME's Disk Utility, for example, where setting it up is just a few clicks.

How can I unlock the drives automatically when certain conditions are met, e.g. when the server is connected to the home network, or by adding a TPM chip to the mainboard? Unlocking the volume every time the server reboots would be very annoying.
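For the TPM route, from what I've read the systemd way would be something like this (assuming a LUKS2 volume and a reasonably recent systemd; the device name is a placeholder):

    sudo systemd-cryptenroll --tpm2-device=auto /dev/sdX3   # bind a keyslot to the TPM
    # then add tpm2-device=auto to that volume's options in /etc/crypttab

Is that the sane approach, or is there something simpler?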

That would of course compromise the security aspect quite a bit, but it doesn't have to be super secure. Just secure enough that a malicious actor (e.g. angry ex-GF, police raid, someone breaking in, etc.) can't see all my photos by just plugging the drive in. For my threat model, anything that takes more than 15 minutes of guessing unlock options is more than enough. I could even choose "Password123" as the password, and that would be fine.

I just want the files to be accessible after unlocking, so the "encrypt after upload" option that Nextcloud or, for example, Cryptomator offer isn't an option.

RAID?

From what I've read, RAID is quite a controversial topic. Some people say it's not necessary, and some say one should never go without it. I know that it is NOT a backup solution and does not replace proper 3-2-1 backups.

Thing is, I can't assess how often drives fail, and I would lose half of my available storage, which is limited, especially by $$$. For now I would only add 1 or at most 2 TB, and then upgrade later when I really need it. And at that point, paying 150 € vs. 400 € is a huge difference.

13

I'm asking here since I assume a lot of you guys are running homelabs on Lenovo hardware.

I have two M720q Tinys that came with one 8 GB 2400 MHz SODIMM each (see picture). I upgraded one to 2x 32 GB RAM and want to reuse the original sticks in the other one, for 16 GB in total and dual-channel use. They have different FRU numbers (see picture), but I assume it's the same marketing part number, so I guess my plan will work?

Can anyone confirm? I'm just unsure about dual channel since it's not exactly the same RAM and I'm used to using exact pairs.

Thanks!

14
15

Basically, the title. After years of inactivity, I'll be taking music (cello) lessons again, with my teacher of yesteryear, from whom I've moved half a country away.

She has suggested Zoom but is open to alternatives. I don't particularly like Zoom, plus I have a feeling better quality can be had through a custom solution - but I'm at a bit of a loss as to what exactly would be a good fit for this project.

Maybe Jitsi? Does someone here have experience with it and could tell me if it's possible to set something like a "target" audio quality?
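From skimming the docs, jitsi-meet's config.js does seem to expose some knobs in that direction; a sketch from memory (verify the option names against your deployed config.js before relying on them):

    // excerpt for config.js (inside the config object); names to be double-checked
    audioQuality: {
        stereo: true,                   // don't downmix to mono
        opusMaxAverageBitrate: 510000   // raise the Opus cap, in bps
    },
    disableAP: true,                    // keep noise suppression from eating the cello

But I have no first-hand experience with how much difference it makes.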

For hardware, I basically have two options. Both are already in use, for different things, and have sufficient processing capabilities - albeit no GPU:

  • host everything at home. Plus: lowest possible latency from me to the server. Not sure how much that's worth, though.
  • a root server in the Hetzner cloud: much faster network speed. Again though, not sure how beneficial that is; the ultimate bottleneck will always be my upload speed (40 Mbit)

OK, I realize this post is a bit of a random assortment of thoughts. I'd be really happy about suggestions and/or hearing about others' experiences with similar use cases!

16

Hi all!

I am looking for a WiFi digital photo frame that is not dependent on a cloud storage provider I don't know and don't trust. So the device should either have lots of internal storage (at least 64 GB), or storage can be added via SD cards or USB sticks.

Ideally there would be an app that I can use to upload pictures to the device over my home WiFi, but SFTP/Samba would also be OK.

Any recommendations?

edit: up to 200 EUR budget
edit: at least 10 inches in size

Thanks a lot!

17

I am currently serving a PhotoPrism instance for myself and the wife. I want to expand to have everyone's home folder on the server, so we would have 5 home folders, all Linux boxes. Anyway, I'm looking at some old servers that actually look pretty good.
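For the home-folder part, my assumption is that plain NFS would do; a sketch of the server side (the subnet and paths are placeholders):

    # /etc/exports on the server
    /srv/home  192.168.1.0/24(rw,sync,no_subtree_check)
    # each client then mounts it, e.g.: mount -t nfs server:/srv/home /home

Happy to be told there's a better way.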

HPE ProLiant DL360 Gen9

I've been comparing it with other servers, and it seems to be the easiest to use for the semi-intrepid admin wannabe that I am. Is there anything better in the sub-$300 range?

18

I am moving from a Debian server (ODROID) to a Proxmox server. I have a 2 TB SSD for some media in my Proxmox box, so this is what I did:

  1. I mounted the Samba share from my old server in Proxmox (not in the LXC "audiobookshelf")
  2. I moved the data from the old server to the LXC mountpoint "audiobooks" on my Proxmox

This worked, but now I'm having trouble with permissions. In Proxmox I can edit the permissions, but there is no user "audiobookshelf" on the Proxmox host. In the LXC I have the user "audiobookshelf", but there I have no rights to edit the permissions.

Question: What is the best way to move data to LXC mountpoints with regard to permissions? Should I use a system-wide user or group? Or should I mount the Samba share from the old server inside the LXC?

audiobookshelf is only the beginning; SABnzbd and Jellyfin will follow, so I'm asking in general... ;)
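For what it's worth, the workaround I've pieced together so far (please correct me): in an unprivileged LXC, container UIDs are shifted on the host by 100000 by default, so if "audiobookshelf" is UID 1000 inside the container, the host-side owner has to be 101000:

    # on the Proxmox host; the path is a placeholder, and check the UID with
    # "id audiobookshelf" inside the container first
    chown -R 101000:101000 /mnt/data/audiobooks

Is that the intended way, or is a system-wide group cleaner?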

19

See this post from another website for more context

A new version (1.32.0) of Vaultwarden is out with security fixes:

This release has several CVE Reports fixed and we recommend everybody to update to the latest version as soon as possible.

CVE-2024-39924 Fixed via #4715

CVE-2024-39925 Fixed via #4837

CVE-2024-39926 Fixed via #4737

Release page
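If you run the stock container image, updating should just be the usual pull-and-recreate (a sketch, assuming a compose service named vaultwarden):

    docker compose pull vaultwarden
    docker compose up -d vaultwarden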

20

Here we are: the 3600, which was still being manufactured 2-3 years ago, is not getting patched. Shame on you, AMD, if it's true.

21

The toddler loves having Kodi full of all their faves, but I haven't been able to iron out all the buffering I get when streaming from my mini-PC's NFS-mounted shares to the Pi 4 running LibreELEC, hooked up via Ethernet in the living room. Everything is wired, so I wouldn't think that would be an issue, but here I am, about to put down a couple hundred dollars for a Synology router that looks like the monolith from 2001. Is this going to do the trick, you think? Is there another router recommended to keep a distributed little homelab (some 10 TB spread between various USB HDDs, Raspberry Pis and mini-PCs, all hosting a variety of containers and services) running smoothly? Budget-wise I'm hoping to stay under 300, and the lower the better, but a happy toddler and buttery smooth streaming over LAN is the priority.
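Before I order it: is there a sanity check I should run first? My plan was to measure the raw link with iperf3 (the hostname is a placeholder):

    iperf3 -s                      # on the mini-PC that serves the NFS shares
    iperf3 -c mini-pc.lan -t 30    # on the Pi; a healthy wired GbE link shows ~940 Mbit/s

If that already shows full gigabit, I guess the router isn't the problem?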

22

I'm pretty new to self-hosting, and the NAS I'm using right now has been a pain since the moment I bought it. The Synology DS220+ just doesn't have enough CPU power for my needs, and I've recently used up all the disk space I installed, so I'm looking for a new server.

Unfortunately, all the options I've found online prioritize storage space over CPU, and I haven't had much luck finding anything that fits my needs.

Requirements:

  • CPU: Intel Core i3 or higher, but preferably Core i5
  • GPU: not needed
  • RAM: min 16 GB, max 64 GB
  • Storage: min 10 TB, max 32 TB
  • Network: 10 Gb SFP+
  • Price: max 6K CAD, preferred 3K CAD

I'm hoping to run TrueNAS Scale with Plex and Nextcloud installed, and my media library isn't likely to get larger than 5 TB, so CPU is really the main limiter of my current NAS.

As an example of something almost perfect: the TrueNAS Mini X+ and R varieties would work excellently, but they don't meet the CPU requirement. I wanted to look at the other systems on offer from TrueNAS, but they don't list CPU specs for anything more advanced than the Mini line.

Of the Lenovo stuff (one of the few websites with a filterable picker), the ThinkSystem SR630 V2 was the closest to fitting my requirements. It comes up short on the CPU, though, and is verging on the price limit too. I also don't need 12 TB of RAM or 1.2 PB of storage.

What do you use? Can you recommend any websites I can go to find something that fits my needs better?

23

After seeing that my wireless speeds were much faster than the speeds I was getting over Ethernet, I decided to invest in some new cables. I didn't know it before, but while swapping them out I saw that my current cables were Cat 5e. While putting my network together, I had just been grabbing whatever cables I could find in my scrap drawers. Now I have Cat 8 cables, and my speeds jumped from 7 MB/s to an average of over 40 MB/s. It's a much bigger improvement than I expected, especially for such a small investment.

24

I don't consider myself very technical. I've never taken a computer science course and don't know Python. I've learned some things like Linux, the command line, Docker and networking/pfSense because I value my privacy. My point is that anyone can do this, even if you aren't technical.

I tried both LM Studio and Ollama, and I prefer Ollama. You then download models and use them to have your own private, personal GPT. I access it on my local machine through the command line, and I also installed Open WebUI in a Docker container so I can access it from any device on my local network (I don't expose services to the internet).

Having a private AI/GPT is pretty cool. You can download and test new models, and it stays private. Yes, there are ethical concerns about how the models were trained, and I'm not minimizing those. But if you want your own AI/GPT assistant, give it a try. I set it up in a couple of hours, and as I said, I'm not even that technical.
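For anyone who wants to try, the rough steps I followed (commands from memory; check ollama.com and the Open WebUI README for the current versions):

    curl -fsSL https://ollama.com/install.sh | sh    # install Ollama on Linux
    ollama run llama3                                # pull a model and chat in the terminal
    # Open WebUI in Docker, talking to the host's Ollama:
    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui \
      ghcr.io/open-webui/open-webui:main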

25

Hi guys, for those of you who use Pi-hole (or similar solutions like AdGuard Home) together with WireGuard: how far away can you be from your WireGuard/Pi-hole server before latency becomes a major issue?

Also, on a side note: how many milliseconds of latency would you consider too slow?

Edit: I meant DNS latency, sorry for not mentioning that.
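For measuring it, my plan is just to time queries from the remote end (placeholder IP for the Pi-hole):

    dig @10.8.0.1 example.com | grep "Query time"    # run over the tunnel, then on the LAN, and compare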
