Self-Host and Tech Independence: The Joy of Building Your Own (ssp.sh)
457 points by articsputnik 2 days ago | 215 comments
kassner 1 days ago [-]
Warning: shameless plug ahead

Self-hosting doesn’t mean you have to buy hardware. After a few years, low-end machines are borderline unusable with Windows, but they are still plenty strong for a Linux server. It’s quite likely you or a friend has an old laptop lying around, which can be repurposed. I’ve done this with an i3 from 2011 [1] for two users, and in 2025 I see no signs that I need an upgrade.

Laptops are also quite power efficient at idle, so in the long run they make more sense than a desktop. If you are just starting, they are a great first server.

(And no, laptops don’t have an inbuilt UPS. I recommend everyone remove the battery before running it plugged in 24x7.)

1: https://www.kassner.com.br/en/2023/05/16/reusing-old-hardwar...

godelski 14 hours ago [-]
On topic, this is how I got into computing and Linux. I moved out as soon as I graduated high school, and the only computer I had was a gen 1 Mac mini and a tiny netbook with a blazing 1 GHz single-core Intel Atom (32-bit). Even XP ran slow. Couldn't install Vista or the relatively new Windows 7.

A friend told me about Linux. So I thought I had nothing to lose. What I didn't know is what I had to gain.

Ended up getting hooked. Grabbed computers out of the dumpster at my local community college and was able to piece together a few mildly decent machines. And even to this day I still recycle computers into random servers. Laptops and phones are usually great. They can't do everything but that's not the point. You'd be surprised what a 10 yo phone can still do.

I'm not trying to brag, but I do want people to know that it's very possible to do a lot with absolutely nothing. I was living paycheck to paycheck at the time. It's not a situation I want anyone to go through, but there is a lot more free hardware out there than you think. People throw out a lot of stuff. A lot of stuff that isn't even broken! Everything I learned on was at least 5 years old at the time. You don't need shiny things, and the truth is that you don't get a lot of advantages from them until you get past the noob stage. It's hard, but most things start hard. The most important part is just learning how to turn it into play.

neepi 11 hours ago [-]
Yep same. Amazing what you can pull out of the skip these days and run for nothing. I lifted a couple of dead Lenovo P720 workstations out and managed to get a working dual Xeon Silver 32-core machine with 64 GB of ECC RAM.

Uses a bunch of power, but two orders of magnitude less in cash than buying another ECC RAM desktop over 3 years.

If it blows up it cost me nothing other than an hour of part swapping.

godelski 6 hours ago [-]

  > If it blows up it cost me nothing other than an hour of part swapping.
I think this is part of the magic sauce.

When you're poor you're probably less willing to take risks with expensive stuff, and the threshold for what counts as expensive is low.

But if it was a dumpster find... who cares?

neepi 3 hours ago [-]
Having been poor for most of my life, it's generally what you didn't pay for that keeps you afloat. If it blows up, I will move to the Lenovo M70s I found without a hard disk or RAM the other week. I put an 8 GB stick in it and it works, so I'll get some more off eBay.

I have a fairly high end M4 Macbook Pro but prefer to live as if I don't most of the time. All of us can take a big fall in life so it makes sense to keep one foot in both worlds.

nntwozz 23 hours ago [-]
old comment: https://news.ycombinator.com/item?id=41150483

Where I live (a 250-apartment complex in Sweden) people throw old computers in the electronics trash room. I scavenge the room multiple times a day when I take my dog out for a walk, like some character out of Mad Max. I mix and match components from various computers, drop Debian on them, then run Docker containers for various purposes. I've given my parents, cousins and friends Frankenstein servers like this. You'd be amazed at what people throw away; it's not uncommon to find working laptops with no passwords that log straight into Windows, filled with all kinds of family photos. Sometimes unlocked iPhones from 5 years ago. It's a sick world we live in. We deserve everything that's coming for us.

LaurensBER 14 hours ago [-]
I'm not sure if that's a sign of the coming apocalypse.

I hope it reflects the fact that most people don't have a great understanding of IT and cyber security rather than a sign of a sick world ;)

safety1st 1 days ago [-]
I'm posting right now from a 13 year old Acer laptop running Linux Mint XFCE. I always feel bad about throwing away old tech so when the time came to buy a new laptop I hooked this one up to my living room TV via HDMI, bought a $25 Logitech K400+ wireless keyboard/trackpad combo, and it's still trucking along just fine. Surfs the web, handles YouTube, Netflix with no problems, I occasionally pop open VS Code or Thunderbird to check into something work-related. Even runs a couple indie games on Steam with gamepad support.

I bet Framework laptops would take this dynamic into overdrive, sadly I live in a country that they don't ship to.

em-bee 21 hours ago [-]
same here, using the old laptops until they are physically so damaged that they can't be used anymore and the cost to repair exceeds the cost to replace them. got one on its last breaths. working fine mostly, but the keyboard is badly damaged, so it needs an external keyboard to be useful. for work of course i need something stronger, but when i need to replace my work laptop my kids get an "upgrade" :-)
kassner 14 hours ago [-]
> I bet Framework laptops would take this dynamic into overdrive

It’s on my (long-term) TODO list to build my own enclosure for a Framework motherboard, to make a portable server to carry around during long trips. Something compact that carries the punch of an i7. One day…

Infernal 9 hours ago [-]
Similar to this? https://frame.work/products/cooler-master-mainboard-case
kassner 8 minutes ago [-]
Yes, but custom to my needs (disks and connections).
agumonkey 12 hours ago [-]
what are the specs? I use a 10yo thinkpad with a core i3 and an arch-based desktop, sometimes the web is too heavy (discord or similar webapps) but it's mostly fine.

it's true that with a bit of education, you can get pretty far with old machines

Abishek_Muthian 2 hours ago [-]
> I recommend everyone to remove the battery before using it plugged 24x7

I agree, but not all laptops can run with the battery removed. I use an Acer E5 575 as a home lab and it can’t run without the battery installed; interestingly, the laptop decided to bypass the battery completely after it died. Operating systems detect no battery, but it’s still there, and without it the laptop won’t boot.

dd_xplore 2 hours ago [-]
There would be some sort of internal ‘switch’ that can be bypassed, I believe.
philjohn 14 hours ago [-]
The best bang for buck at the moment seems to be tiny mini micro machines https://www.servethehome.com/introducing-project-tinyminimic...

Typically available regularly via ebay (or similar) as businesses rotate them out for new hardware.

The other week I picked up an i5 9400T Lenovo m720q with 16GB of memory for £100 delivered.

They practically sip power, although that's less true now I've shoved a 10Gb dual SFP NIC in there.

xcrunner529 7 hours ago [-]
Yep. I have bought 3 or 4 for different uses. So perfect as servers. I run Plex and lots of Docker containers, development, etc. All of those machines are so useful for console Linux and containers.
thatspartan 22 hours ago [-]
Speaking of laptop batteries as a UPS source, some laptops come with battery management features that keep the battery healthy even when plugged in full time, usually exposed as a setting in the BIOS/UEFI. I've found that business/enterprise type laptops like Thinkpads and Probooks have this as standard, for example Thinkpads from 2010 already had this, assuming you're lucky enough to find one with a usable battery of course.
cguess 20 hours ago [-]
Macbooks do this as well automatically if kept plugged in for a certain period of time.
kassner 20 hours ago [-]
Is there something for Linux/Debian? I’m assuming this is part of the OS and wouldn’t work on a MacBook with Linux.
seszett 16 hours ago [-]
It's managed by the OS when it's awake, and by the BIOS (or UEFI or whatever) when it's sleeping.

Both methods work under Asahi Linux on the ARM macs.

mac-attack 17 hours ago [-]
Look up TLP's charging thresholds. Just set mine up on Debian.
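For reference, a rough sketch of what that looks like in TLP's config (threshold support depends on the laptop; the file path and values here are just an example):

    # /etc/tlp.conf -- only honored on hardware with charge-threshold support (e.g. many ThinkPads)
    START_CHARGE_THRESH_BAT0=75   # resume charging below 75%
    STOP_CHARGE_THRESH_BAT0=85    # stop charging at 85%

Apply with `sudo tlp start` after editing.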
KronisLV 21 hours ago [-]
> Self-hosting doesn’t mean you have to buy hardware. After a few years, low-end machines are borderline unusable with Windows, but they are still plenty strong for a Linux server. It’s quite likely you or a friend has an old laptop lying around, which can be repurposed. I’ve done this with an i3 from 2011 [1] for two users, and in 2025 I see no signs that I need an upgrade.

My homelab servers have Athlon 200GE CPUs in them: https://www.techpowerup.com/cpu-specs/athlon-200ge.c2073

They're x86 so most software works, AM4 socket so they can have the old motherboards I had in my PC previously, as well as the slower RAM from back then. At the same time they were dirt cheap on AliExpress, low TDP so I can passively cool them with heatsinks instead of fans and still powerful enough for self-hosting some software and using them as CI runners as well. Plus, because the whole setup is basically a regular PC with no niche components, the Linux distros I've tried on them also had no issues.

Honestly it's really cool that old components can still be of use for stuff like that.

mkayokay 1 days ago [-]
I can also recommend Lenovo ThinkCentre MiniPCs or similar brands. Those can often be found cheap when companies upgrade their hardware. These machines are also power efficient when idling, use even less space than a laptop, and the case fan is very quiet (which can be an annoyance with laptops under load).

I'm currently running Syncthing, Forgejo, Pihole, Grafana, a DB, Jellyfin, etc... on a M910 with an i5 (6th or 7th Gen) without problems.

huuhee3 1 days ago [-]
Yeah, I would recommend this too. I've only used the Dell OptiPlex Micro series, no issues so far. They use an external PSU similar to those in laptops, which helps with power efficiency.

Something with an 8th gen i5 can be had for about 100-150 USD from eBay, and that's more than powerful enough for nearly all self-hosting needs. Supports 32-64 GB of RAM and two SSDs.

glitchcrab 23 hours ago [-]
I second this, I have a 4-node Proxmox cluster running on MFF OptiPlexes and it's been great. 32 GB of RAM in each and a second USB NIC (bonded with the built-in NIC) makes for a powerful little machine with low power draw in a convenient package.
philjohn 14 hours ago [-]
The OptiPlexes look nice, but I went with the Lenovo m720q's for the PCIe slot ... 10Gb dual SFP+ NICs are cheap as chips on eBay, and being able to migrate VMs faster between Proxmox nodes is a nice quality-of-life improvement.
zer00eyz 17 hours ago [-]
> M910 with an i5

These are great and the M920q is also nice.

At $100 to $160 used these are a steal, just test the disks before you commit to long-term projects with them (some have a fair bit of wear). Its newer cousins quickly climb in price to the $300+ range (still refurb/used).

The bleeding edge of this form factor is the Minisforum MS-01. At almost 500 bucks for the no-RAM/storage part, it's a big performance jump for a large price jump. That isn't a terrible deal if you need dual SFP+ ports (and you might) and a free PCIe slot, but it is a large price jump.

kassner 15 hours ago [-]
> M920q

I’m pissed at Lenovo for making the perfect machine for a home server, and then cheaping out by not adding the $0.50 M.2 connector on the back of the board. 2xM.2 + 1xSATA requires upgrading to “Tall” Intel NUCs if you want 3 disks.

philjohn 14 hours ago [-]
If you want 2 M.2 slots you want the P330, same form factor as the m720q [1]

[1]https://www.ebay.co.uk/itm/116583724775

kassner 12 hours ago [-]
Thank you! I thought only ThinkCentres came in the 1-liter form factor.
m-localhost 23 hours ago [-]
I've got an old Mac Mini 2012 lying around. It was a gift; I never wanted to switch to Mac on this solid but not very powerful machine. Over Xmas last year I booted the thing, and it was unbearably slow, even with the original version of the OS on it. After a macOS update, it was unusable. I put an SSD in (thanks YouTube for the guidance), booted it with Debian, and on top of that installed CasaOS (a web-based home server OS/UI). Now I can access my music (thanks Navidrome) from on the road (thanks WireGuard). Docker is still a mystery to me, but I've already learned a lot (mapping paths).
kassner 23 hours ago [-]
I have a 2009 MacBook Pro (Core 2 Duo) which I wanted to give a similar fate, but unfortunately it idles at 18W on Debian.

I hope Asahi for Mac Mini M4 becomes a thing. That machine will be an amazing little server 10 years from now.

detourdog 19 hours ago [-]
My domain has been running on a Mac Mini 2012 since new using Mac OS. Internet services are generally constrained by the available bandwidth and don't need much processing.
cherryteastain 1 days ago [-]
Yes but arguably anything below the equivalent of RAID6/RAIDZ2 puts you at a not inconsiderable risk of data loss. Most laptops cannot do parity of any sort because of a lack of SATA/M.2 ports so you will need new hardware if you want the resilience offered by RAID. Ideally you will want that twice on different machines if you go by the "backups in at least 2 different physical locations" rule.
PhilipRoman 23 hours ago [-]
To be honest I never understood the purpose of RAID for personal use cases. RAID is not a backup, so you need frequent, incremental backups anyway. It only makes sense for things where you need that 99.99% uptime. OK, maybe if you're hosting a service that many people depend on then I could see it (although I suspect downtime would still be dominated by other causes) but then I go over to r/DataHoarder and I see people using RAID for their media vaults which just blows my mind.
darkwater 17 hours ago [-]
Convenience. If you lose a disk you can just replace it and don't need to reinstall/restore the backup.

Also, because it's fun, and probably many self-hosters had racked servers and plugged disks in noisy, cold big chambers and want to relive the fun part of that.

paldepind2 22 hours ago [-]
RAID is not backup, but in some circumstances it's better than a backup. If you don't have RAID and your disk dies you need to replace it ASAP and you've lost all changes since your last backup. If you have RAID you just replace the disk and suffer 0 data loss.

That being said, the reason why I'm afraid of not using RAID is data integrity. What happens when the single HDD/SSD in your system is near its end of life? Can it be trusted to fail cleanly, or might it return corrupted data (which then propagates to your backup)? I don't know and I'd be happy to be convinced that it's never an issue nowadays. But I do know that with a btrfs or ZFS RAID and the checksumming done by these file systems, you don't have to trust the specific consumer-grade disk in some random laptop, but can instead rely on data integrity being ensured by the FS.

haiku2077 18 hours ago [-]
You should not propagate changes to your backup in a way that overwrites previous versions. Otherwise a ransomware attack will also destroy your backup. Your server should be allowed to only append the data for new versions without deleting old versions.

Also, if you're paranoid about drive behavior, run ZFS. It will detect such problems and surface them at the OS level (ref "Zebras All The Way Down" by Bryan Cantrill).
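A minimal sketch of what that looks like day to day (the pool name "tank" is just an example):

    zpool scrub tank       # read every block and verify it against its checksum
    zpool status -v tank   # report any devices or files with detected corruption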

mikeocool 13 hours ago [-]
RAID isn’t backup - but in my years running computers at my house I’ve been lucky enough to lose zero machines to theft, water damage, fire, etc. but I have had many hard drives fail.

Way more convenient to just swap out a drive than to swap out a drive and restore from backup.

PhilipRoman 12 hours ago [-]
Interesting, I've had the exact opposite experience. My oldest HDD from 2007 is still going strong. Haven't had even a single micro SD card fail in a RPI. I built some fancy backup infrastructure for myself based on a sharded hash addressed database but so far have only used the backups to recover from "Layer 8" issues :)

I had a look at my notes and so far the only unexpected downtime has been due to 1x CMOS battery running out after a true power off, 1x VPS provider randomly powering off my reverse proxy, 2x me screwing around with link bonding (connections always started to fail a few hours later, in the middle of the night).

em-bee 21 hours ago [-]
i use mirror raid on my desktop. the risk of a disk dying is just too high. i even made sure to buy disks from two different vendors to reduce the chance of them dying at the same time. for the laptop i run syncthing to keep the data in sync with the desktop and a remote server. if the laptop dies i'll only be a few minutes out. when travelling i sync to a USB drive frequently.

for the same reason i don't buy laptops with soldered SSD. if the laptop dies, chances are the SSD is still ok, and i can recover it easily.

j45 7 hours ago [-]
It's incredibly valuable. It makes redundancy really affordable.

This means nothing until the need to replace a drive arises, and that's not an if, it's a when.

No downtime with RAID 5; you can swap out one drive as needed while the rest keeps running just fine.

xcrunner529 6 hours ago [-]
I like SnapRAID for media drives. As long as it's something without lots of deletes and changes, I get more space, can use mixed drives, and get a bit of a backup too, since it's a manual sync that creates or updates the "parity". And there's the added advantage that if any drive is taken out or dies, you can still read any of the content on the other drives at any time.
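For anyone curious, a rough sketch of a snapraid.conf for that kind of setup (mount points and disk names are illustrative):

    # /etc/snapraid.conf
    parity  /mnt/parity1/snapraid.parity
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2

Then `snapraid sync` after adding files, and `snapraid scrub` now and then to verify the parity.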
kassner 1 days ago [-]
Absolutely!

> if you want the resilience offered by RAID

IMHO, at that stage, you are knowledgeable enough to not listen to me anymore :P

My argument is more along the lines of using an old laptop as a gateway drug to the self-hosting world. Given enough time everyone will have a 42U rack in their basement.

geraldhh 24 hours ago [-]
> Most laptops cannot do parity of any sort because of a lack of SATA/M.2 ports

raid is NOT media or connection dependent and will happily do parity over mixed media and even remote blockdevs

washadjeffmad 18 hours ago [-]
Nodes don't need to store data, and they can be PXE booted if they have a little RAM, so they only need redundant devices for their system partitions if you want to boot them locally (how often will they really be rebooted, though?). A hard drive plus a flash / USB drive would be plenty.

Consumer NASes have been around for 20 years, now, though, so I think most people would just mount or map their storage.

anotherpaul 1 days ago [-]
Glad I am not alone in this. Old laptops are much better than Raspberry Pis and often free and power efficient.
imrejonk 1 days ago [-]
And: they have a crash cart (keyboard, mouse and display) and battery backup built-in. An old laptop is perfect for starting a homelab. The only major downside I can think of, and as another commenter already mentioned, is the limited storage (RAID) options.
HPsquared 23 hours ago [-]
A lot of older 17" laptops had dual HDD slots.
kassner 23 hours ago [-]
Or DVD drives in which you could add a disk caddy.
HPsquared 23 hours ago [-]
Ah yes, optical drives were very common for a while.
Onavo 1 days ago [-]
> free and power efficient

Free yes. Power efficient no. Unless you switch your laptops every two years, it's unlikely to be more efficient.

kassner 1 days ago [-]
My laptop from 2011 idles at 8W, with two SATA SSDs. I have an Intel 10th-gen mini PC that idles at 5W with one SSD. 3W is not groundbreaking, but for a computer you paid $0 for, it would take many years to offset the $180 paid for a mini PC.
HPsquared 23 hours ago [-]
Say power costs 25¢/kWh. That's $2 per year per watt of standby power. Adjust to your local prices.

So that'd take 30 years to pay back. Or, with discounted cash flow applied... Probably never.
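Spelled out with the numbers from the comments above (3W difference, $180 mini PC, 25¢/kWh):

    3 W x 8760 h/yr = 26.3 kWh/yr
    26.3 kWh/yr x $0.25/kWh ≈ $6.60/yr saved
    $180 / $6.60/yr ≈ 27 years to break even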

motorest 1 days ago [-]
> My laptop from 2011 idles at 8W, with two SATA SSDs.

Some benchmarks show the Raspberry Pi 4 idling below 3W and consuming a tad over 6W under sustained high load.

Power consumption is not an argument that's in favor of old laptops.

dd_xplore 1 hours ago [-]
The RPi is amazing for IoT tasks because it's pretty portable, but not for running general-purpose server tasks; you'd get better performance per watt with used gear.
kassner 1 days ago [-]
> tad over 6W

That is the key. The RPi works for idling, but anything else gets throttled pretty bad. I used to self host on the RPi, but it was just not enough[1]. Laptops/mini-PCs will have a much better burstable-to-idle power ratio (6/3W vs 35/8W).

1: https://www.kassner.com.br/en/2022/03/16/update-to-my-zfs-ba...

motorest 24 hours ago [-]
> That is the key. The RPi works for idling, but anything else gets throttled pretty bad.

I don't have a dog in this race, but I recall that RPi's throttling issues when subjected to high loads were actually thermal throttling. Meaning, you picked up a naked board and started blasting benchmarks until it overheated.

You cannot make sweeping statements about RPi's throttling while leaving out the root cause.

kassner 23 hours ago [-]
amd64 processors will have lots of hardware acceleration built in. I couldn’t get past 20MB/s over SSH on the Pi4, vs 80MB/s on my i3. So while they can show similar geekbench results, the experience of using the Pi is a bit more frustrating than on paper.
shawabawa3 23 hours ago [-]
Why do you recommend removing the battery? Risk of fire?

I would have thought any reasonably recent laptop would be fine to leave plugged in indefinitely. Not to mention many won't have an easily removable battery anyway

christophilus 20 hours ago [-]
Not the guy you’re asking, but I’d say risk of fire, yes. The laptop will be safer without a battery than it is with one, regardless of safeguards.
kassner 15 hours ago [-]
As said by others, mostly the fire risk. They can catch fire, although it's rare, and a bad contact or a flaky power source could put the battery through many charge/discharge cycles in a short period of time. Batteries also degrade faster when they're too warm; cheap laptops often have terrible thermals, and you might also shove the thing in a closet. A combination of those increases the fire risk.

Also when using an old laptop, the battery could be pretty beaten up (too many cycles or prolonged exposure to heat) or it could have been replaced by a cheap non-compliant alternative, making it harder to trust wrt fire risk. And if you have to buy a brand-new one to reduce that risk, it immediately changes all the economic incentives to use an old laptop (if you are gonna spend money, might as well buy something more suitable).

> many won't have an easily removable battery

That’s true, although I’d guess the majority can still have the battery disconnected once you get access to the motherboard.

netfortius 18 hours ago [-]
I wish I'd taken a picture of my mid-2015 MacBook Pro, which happens to be the server for my home-hosted stuff, before I changed its battery. As it was just sitting in a corner, almost forgotten, I only noticed the problem one day while cleaning, when it started wobbling as I moved the piece of furniture it was sitting on. Once I gave it to a guy who disposes of such things, he told me I was lucky it didn't explode.
yb6677 22 hours ago [-]
Also interested in the answer to this.
xcircle 18 hours ago [-]
I use an old ThinkPad with Linux. There you can set a charging stop at e.g. 85%. Then you don’t need to remove the battery.
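On reasonably recent kernels this is also exposed directly through sysfs (battery name and support vary by model; this is just a sketch for ThinkPads):

    # stop charging at 85%
    echo 85 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold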
mdaniel 17 hours ago [-]
As a counterpoint my Lenovo X1 that was fresh from the factory had a battery swell so bad it cracked the case. So I think the risk being addressed was that, unless you're looking at the device every single day, the battery poses a fire/explosion risk that isn't worth it to some people
PeterStuer 22 hours ago [-]
If you are not afraid of shopping the used market: I'm currently building a Proxmox node with a 3rd gen Threadripper (32 cores/64 threads), 256 GB RAM, 2x10G, 2x2.5G and a dedicated 1G IPMI management interface, 64 PCIe gen 4 lanes, all for less than 2k Euro.
briHass 20 hours ago [-]
I highly recommend anyone going this route to use Proxmox as your base install on the (old) hardware, and then use individual LXCs/VMs for the services you run. Maybe it's just me, but I find LXCs to be much easier to manage and reason about than Docker containers, and the excellent collection of scripts maintained by the community: https://community-scripts.github.io/ProxmoxVE/scripts makes it just as easy as a Docker container registry link.

I try to use LXCs whenever the software runs directly on Debian (Proxmox's underlying OS), but it's nice to be able to use a VM for stuff that wants more control like Home Assistant's HAOS. Proxmox makes it fairly straightforward to share things like disks between LXCs, and automated backups are built in.
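As a rough illustration of how lightweight an LXC is to stand up from the Proxmox shell (the container ID, template name and storage below are all just placeholders):

    pct create 110 local:vztmpl/debian-12-standard_amd64.tar.zst \
      --hostname myservice --cores 2 --memory 2048 \
      --rootfs local-lvm:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 110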

esskay 1 hours ago [-]
I think it depends on the usecase (and budget) as to what's 'best' really. I've got Proxmox running on a couple of servers and it's great.

But I also run Unraid on the main NAS server purely for its ZFS drive setup. Being able to throw in a bunch of drives of various sizes and brands on a home machine is pretty valuable and saves a huge amount of money.

internet101010 1 hours ago [-]
Yep my entire set up is based on Proxmox VE and Proxmox Backup Server. Even my gaming PC is Proxmox with GPU being passed through to a VM. Couldn't be happier.
leosanchez 20 hours ago [-]
I use lxd to manage lxc containers. Am I missing out on anything?
nullwarp 20 hours ago [-]
A handy mostly straightforward UI with built in backup/restore and other useful tools.

It's hardly a requirement but if someone is just starting to learn, proxmox has lots of documentation on how to do things and the UI keeps you from footgunning yourself copy/pasting config code off websites/LLM too much.

briHass 19 hours ago [-]
Personally, I didn't want to manage my management/virtualization layer. I wanted something that was an all-in-one ISO that wouldn't tempt me to configure the host at all. I wanted to be able to restore just my container backups to a new Proxmox install without worrying about anything missing at the host (to the extent possible).

I also like that Proxmox can be fully managed from the web UI. I'm sure most of this is possible with LXD on some distro, but Proxmox was the standard at the time I set it up (LXD wasn't as polished then).

aucisson_masque 1 days ago [-]
I get why you want to self host, although I also get why you don’t want.

Self-hosting is a pain in the ass: you need to keep Docker updated, things break sometimes, sometimes it's only you and not anyone else so you're left searching for the solution alone, and even when it works it's often a bit clunky.

I have an extremely limited list of self-hosted tools that just work and save me time (first on that list would be Firefly), but god knows I wasted quite a bit of time setting up stuff that eventually broke and that I just abandoned.

Today I’m very happy with paying for stuff if the company respects privacy and has decent pricing.

zdw 1 days ago [-]
> docker

There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.
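For the single-binary case, a minimal systemd unit is usually all you need. A sketch, assuming a Go binary such as Navidrome installed to /usr/local/bin and a dedicated user (paths and names are assumptions):

    # /etc/systemd/system/navidrome.service
    [Unit]
    Description=Navidrome music server
    After=network.target

    [Service]
    User=navidrome
    WorkingDirectory=/var/lib/navidrome
    ExecStart=/usr/local/bin/navidrome
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Enable it with `systemctl enable --now navidrome`.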

bluGill 1 days ago [-]
Maybe. There are pros and cons. Docker means you can run two+ different things on the same machine and update them separately. This is sometimes important when one project releases a feature you really want, while a different one just did a major update that broke something you care about. Running on the OS often means you have to update both.

Single binary sometimes works, but means you need more memory and disk space. (granted much less a concern today than it was back in 1996 when I first started self hosting, but it still can be an issue)

zdw 1 days ago [-]
How can running a single binary under systemd need more memory/disk space than having that identical binary with supporting docker container layers under it on the same system, plus the overhead of all of docker?

Conflicting versions, I'll give you that, but how frequently does that happen, especially if you mostly source from upstream OS vendor repos?

The most frequent conflict is if everything wants port 80/443, and for most self-hosted services you can have them listen on internal ports and be fronted by a single instance of a webserver (take your pick of apache/nginx/caddy).
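A hedged sketch of that fronting pattern with nginx (the port 4533 and hostname are arbitrary; TLS directives omitted):

    server {
        listen 80;
        server_name music.example.com;

        location / {
            proxy_pass http://127.0.0.1:4533;
            proxy_set_header Host $host;
        }
    }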

bluGill 1 days ago [-]
I didn't mean the two paragraphs to imply that they are somehow opposites (though in hindsight I obviously did). There are tradeoffs: a single binary sits between Docker and a binary that uses shared libraries. What is right depends on your situation. I use all three in my self-hosted environment - you probably should too.
Lvl999Noob 1 days ago [-]
If you are using docker, do you save anything by using shared libraries? I thought docker copies everything. So every container has its own shared libraries and the OS running all those containers has its own as well.
kilburn 1 days ago [-]
Not necessarily. You are still running within the same kernel.

If your images use the same base container then the libraries exist only once and you get the same benefits of a non-docker setup.

This depends on the storage driver though. It is true at least for the default and most common overlayfs driver [1]

[1] https://docs.docker.com/engine/storage/drivers/overlayfs-dri...

zdw 1 days ago [-]
The difference between a native package manager provided by the OS vendor and Docker is that a native package manager allows you to upgrade parts of the system under the applications.

Let's say some Heartbleed (which affected OpenSSL, primarily) happens again. With native packages, you update the package, restart a few things that depend on it with shared libraries, and you're patched. OS vendors are highly motivated to do this update, and often get pre-announcement info around security issues so it tends to go quickly.

With Docker, someone has to rebuild every container that contains a copy of the library. This will necessarily lag and be delivered in a piecemeal fashion - if you have 5 containers, all of them need their own updates, which, if you don't self-build and self-update, can take a while and is substantially more work than `apt update && apt upgrade && reboot`.

Incidentally, the same applies for most languages that prefer/require static linking.

As mentioned elsewhere in the thread, it's a tradeoff, and people should be aware of the tradeoffs around update and data lifecycle before making deployment decisions.

motorest 1 days ago [-]
> With docker, someone has to rebuild every container that contains a copy of the library.

I think you're grossly overblowing how much work it takes to refresh your containers.

In my case, I have personal projects which have nightly builds that pull the latest version of the base image, and services are just redeployed right under your nose. All it took to do this was to add a cron trigger to the same CI/CD pipeline.

zdw 18 hours ago [-]
I'd argue that the number of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is a very small percentage. Most probably YOLO `docker pull` it once and never think about it again.

TBH, a slower upgrade cycle may be tolerable inside a private network that doesn't face the public internet.

motorest 15 hours ago [-]
> I'd argue that the number of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is a very small percentage.

What? You think the same guys who take an almost militant approach to how they build and run their own personal projects would somehow fail to be technically inclined to automate tasks?

rootnod3 1 days ago [-]
There are more options than docker for that. FreeBSD jails for example.
dgb23 14 hours ago [-]
I don’t understand why you would need docker for that.
sunshine-o 1 days ago [-]
I would agree with that.

Docker has a lot of use cases but self hosting is not one of them.

When self-hosting you wanna think long term, and the fact is you will lose interest in the fiddling after a while. So sticking with software packaged in a good distribution is probably the way to go. This is the forgotten added value of a Linux or BSD distribution: a coherent system with maintenance and an easy upgrade path.

The exception are things like Umbrel which I would say use docker as their package manager and maintain everything, so it is ok.

magicalhippo 1 days ago [-]
I feel the exact opposite. Docker has made self-hosting so much easier and painless.

Backing up relevant configuration and data is a breeze with Docker. Upgrading is typically a breeze as well. No need to suffer with a 5-year old out of date version from your distro, run the version you want to and upgrade when you want to. And if shit hits the fan, it's trivial to roll back.

Sure, OS tools should be updated by the distro. But for the things you actually use the OS for, Docker all the way in my view.

KronisLV 21 hours ago [-]
> Docker has made self-hosting so much easier and painless.

Mostly agreed, I actually run most of my software on Docker nowadays, both at work and privately, in my homelab.

In my experience, the main advantages are:

  - limited impact on host systems: uninstalling things doesn't leave behind trash, limited stability risks to host OS when running containers, plus you can run a separate MariaDB/MySQL/PostgreSQL/etc. instance for each of your software package, which can be updated or changed independently when you want
  - obvious configuration around persistent storage: I can specify which folders I care about backing up and where the data that the program operates on is stored, vs all of the runtime stuff it actually needs to work (which is also separate for each instance of the program, instead of shared dependencies where some versions might break other packages)
  - internal DNS which makes networking simpler: I can refer to containers by name and route traffic to them, running my own web server in front of everything as an ingress (IMO simpler than the Kubernetes ingress)... or just expose a port directly if I want to do that instead, or maybe expose it on a particular IP address such as only 127.0.0.1, which in combination with port forwarding can be really nice to have
  - clear resource limits: I can prevent a single software package from acting up and bringing the whole server to a standstill, for example, by allowing it to only spike up to 3/4 CPU cores under load, so some heavyweight Java or Ruby software starting up doesn't mean everything else on the server freezing for the duration of that, same for RAM which JVM based software also loves to waste and where -Xmx isn't even a hard limit and lies to you somewhat
  - clear configuration (mostly): environment variables work exceedingly well, especially when everything can be contained within a YAML file, or maybe some .env files or secrets mechanism if you're feeling fancy, but it's really nice to see that 12 Factor principles are living on, instead of me always needing to mess around with separate bind mounted configuration files
There's also things like restart policies, with the likes of Docker Swarm you also get scheduling rules (and just clustering in general), there's nice UI solutions like Portainer, healthchecks, custom user/group settings, custom entrypoints, and the whole idea of a Dockerfile saying exactly how to build an app and on top of what it needs to run is wonderful.
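To make the resource-limit and configuration points above concrete, a minimal Compose sketch (the image name, values and variables are illustrative, and the exact limit keys vary a bit between Compose versions):

    services:
      app:
        image: some-heavyweight-app:latest
        cpus: "3"          # leave a core free for everything else
        mem_limit: 2g      # a hard cap, unlike -Xmx
        environment:
          - DATABASE_URL=postgres://app:secret@db:5432/app
        volumes:
          - ./data:/var/lib/app   # the one path that needs backing up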

At the same time, things do sometimes break in very annoying ways, mostly due to how software out there is packaged:

https://blog.kronis.dev/blog/it-works-on-my-docker

https://blog.kronis.dev/blog/gitea-isnt-immune-to-issues-eit...

https://blog.kronis.dev/blog/docker-error-messages-are-prett...

https://blog.kronis.dev/blog/debian-updates-are-broken

https://blog.kronis.dev/blog/containers-are-broken

https://blog.kronis.dev/blog/software-updates-as-clean-wipes

https://blog.kronis.dev/blog/nginx-configuration-is-broken

(in practice, the amount of posts/rants wouldn't change much if I didn't use containers, because I've had similar amounts of issues with things that run in VMs or on bare metal; I think that most software out there is tricky to get working well, not to say that it straight up sucks)

tacker2000 1 days ago [-]
What are you talking about?

Docker is THE solution for self hosting stuff since one often has one server and runs a ton of stuff on it, with different PHP, Python versions, for example.

Docker makes it incredibly easy to run a multitude of services on one machine, however different they may be.

And if you ever need to move to a new server, all you need to do is move the volumes (if even necessary) and run the containers on the new machine.

So YES, self hosting stuff is a huge use case for docker.

sunshine-o 14 hours ago [-]
I think your view shows the success of Docker, but also the over-hype and a generation that only knows how to do things with Docker (and so thinks everything is easier with it).

But before Docker there was the virtualisation hype, when people swore every piece of software/service needed its own VM. VMs or containers, we end up with Frankenstein systems with dozens of images on one machine. And with Docker we probably lost a lot of security.

So this is fine I guess in the corporate world, because things are messy anyway and there are many other constraints (hence the success of containers).

But in your home, serving a few apps for a few users you actually don't need that gas factory.

If you wanna run everything on your home lab with Docker or Kubernetes because you wanna build a skillset for work or reuse your professional skills, fine go for it. But everything you think is easy with Docker is actually simpler and easier with raw Linux or BSD.

cowmix 18 hours ago [-]
OTOH, no.

Been self-hosting for 35+ years. Docker's made the whole thing 300% easier — especially when thinking long term.

phito 1 days ago [-]
Oh my god no, docker is so damn useful I will never return to package managers/manual installation.
motorest 1 days ago [-]
>>Oh my god no, docker is so damn useful I will never return to package managers/manual installation.

This. These anti-containerisation comments read like something someone oblivious to containers would say if they were desperately grabbing onto tech from 30 years ago and refused to even spend 5 minutes exploring anything else.

ndriscoll 21 hours ago [-]
Or they have explored other options and find docker lacking. I've used docker and k8s plenty professionally, and they're both vastly more work to maintain and debug than nixos and systemd units (which can optionally easily be wrapped into containers if you want on nixos, but there you're using containers for their isolation features, not for the ability to 'docker pull', and for many purposes you can probably e.g. just use file permissions and per-service users instead of bind-mounts into containers).

Containers as practiced by many are basically static linking and "declarative" configuration done poorly because people aren't familiar with dynamic linking or declarative OS config done well.

motorest 21 hours ago [-]
> Or they have explored other options and find docker lacking.

I don't think so. Containerization solves about 4 major problems in infrastructure deployment as part of its happy path. There is a very good reason why the whole industry pivoted towards containers.

> . I've used docker and k8s plenty professionally, and they're both vastly more work to maintain and debug than nixos and systemd units (...)

This comment is void of any credibility. To start off, you suddenly dropped k8s into the conversation. Think about using systemd to set up a cluster of COTS hardware running a software-defined network, and then proclaim it's easier.

And then, focusing on Docker, think about claiming that messing with systemd units is easier than simply running "docker run".

Unbelievable.

ndriscoll 21 hours ago [-]
I mentioned k8s because when people talk about the benefits of containers, they usually mean the systems for deploying and running containers. Containers per se are just various Linux namespace features, and are unrelated to e.g. distribution or immutable images. So it makes sense to mention experience with the systems that are built around containers.

The point is when you have experience with a Linux distribution that already does immutable, declarative builds and easy distribution, containers (which are also a ~2 line change to layer into a service) are a rather specific choice to use.

If you've used these things for anything nontrivial, yes systemd units are way simpler than docker run. Debugging NAT and iptables when you have multiple interfaces and your container doesn't have tcpdump is all a pain, for example. Dealing with issues like your bind mount not picking up a change to a file because it got swapped out with a `mv` is a pain. Systemd units aren't complicated.

motorest 21 hours ago [-]
> I mentioned k8s because when people talk about the benefits of containers, they usually mean the systems for deploying and running containers.

No, it sounds like a poorly thought through strawman. Even Docker supports Docker swarm mode and many k8s distributions use containerd instead of Docker, so it's at best an ignorant stretch to jump to conclusions over k8s.

> Containers per se are just various Linux namespace features, and are unrelated to e.g. distribution or immutable images. So it makes sense to mention experience with the systems that are built around containers.

No. Containers solve many operational problems, such as ease of deployment, setting up software-defined networks, ephemeral environments, resource management, etc.

You need to be completely in the dark to frame containerization as Linux namespace features. It's at best a naive strawman, built upon ignorance.

> If you've used these things for anything nontrivial, yes systemd units are way simpler than docker run.

I'll make it very simple for you. I want to run postgres/nginx/keycloak. With Docker, I get everything up and running with a "docker run <container image>".

Now go ahead and show how your convoluted way is "way simpler".

ndriscoll 21 hours ago [-]
Containers do not do deployment (or set up software defined networks). docker or kubernetes (or others) do deployment. That's my point.

nix makes it trivial to set up ephemeral environments: make a shell.nix file and run `nix-shell` (or if you just need a thing or two, do e.g. `nix-shell -p ffmpeg` and now you're in a shell with ffmpeg. When you close that shell it's gone). You might use something like `direnv` to automate that.

Nixos makes it easy to define your networking setup through config.

For your last question:

    services.postgresql.enable = true;
    services.nginx.enable = true;
    services.keycloak.enable = true;
If you want, you can wrap some or all of those lines in a container, e.g.

    containers.backend = {
        config = { config, pkgs, lib, ... }: {
            services.postgresql.enable = true;
            services.keycloak.enable = true;
        };
    };
Though you'd presumably want some additional networking and bind mount config (e.g. putting it into its own network namespace with a bridge, or maybe binding domain sockets that nginx will use plus your data partitions).
turtlebits 11 hours ago [-]
Find any self hosted software, the docker deployment is going to be the easiest to stand up/destroy and migrate.
aucisson_masque 1 days ago [-]
I run Debian on my machine, so packages are not really up to date, and I would be stuck, unable to update my self-hosted software because some dependencies were too old.

And then, some software would require older one and break when you update the dependencies for another package.

Docker is a godsend when you are hosting multiple tools.

For the limited stuff I host (Navidrome, Firefly, nginx, ...) I have yet to see a single-binary package. It doesn’t seem very common in my experience.

zdw 14 hours ago [-]
FWIW, Navidrome has bare binaries, packages (apt, rpm, etc.) and docker container options: https://github.com/navidrome/navidrome/releases
motorest 1 days ago [-]
> There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

None of your points make any sense. Docker works beautifully well as an abstraction layer. It makes it trivially simple to upgrade anything and everything running on it, to the point where you don't even consider it a concern. Your assertions are so far off that you managed to get all your points entirely backwards.

To top things off, you get clustering for free with Docker swarm mode.

> If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.

I have news for you. In fact, you should be surprised to learn that nowadays you can even get full-blown Kubernetes distributions up and running on Linux after a quick snap package install.

movedx 23 hours ago [-]
Absolutely everything they said makes sense.

Everything you're saying is complete overkill, even in most Enterprise environments. We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.

> I have news for you.

I have news for _you_: using Docker to run anything that doesn't need it (i.e. it's the only officially supported deployment mechanism) is like putting your groceries into the boot of your car, then driving your car onto the tray of a truck, then driving the truck home because "it abstracts the manual transmission of the car with the automatic transmission of the truck". Good job, you're really showing us who's boss there.

Operating systems are easy. You've just fallen for the Kool Aid.

dd_xplore 1 hours ago [-]
I self-host Jellyfin, qbit, Authentik, Budibase, Pihole, Nginx Proxy Manager, Immich, Jupyter... the list goes on. These applications have tons of dependencies; installing everything directly on a single OS would break them! Containers are a godsend for self-hosting and for anyone deploying many tools at a time!
motorest 21 hours ago [-]
> Absolutely everything they said makes sense.

Not really. It defies any cursory understanding of the problem domain, and you must go way out of your way to ignore how containerization makes everyone's job easier and even trivial to accomplish.

Some people in this discussion even go to the extreme of claiming that messing with systemd to run a service is simpler than typing "docker run".

It defies all logic.

> Everything you're saying is complete overkill, even in most Enterprise environments.

What? No. Explain in detail how being able to run services by typing "docker run" is "overkill". Have you ever gone through an intro to Docker tutorial?

> We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.

You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.

movedx 21 hours ago [-]
> you must go way out of your way to ignore how containerization makes everyone's job easier and even trivial to accomplish

You'd have to go out of your way to ignore how difficult they are to maintain and secure. Anyone with a few hours of experience trying to design an upgrade path for other people's containers; security scanning of them; reviewing what's going on inside them; trying to run them with minimal privileges (internally and externally), and more, will know they're a nightmare from a security perspective. You need to do a lot of work on top of just running the containers to secure them [1][2][3][4] -- they are not fire and forget, as you're implying.

This one is my favourite: https://cheatsheetseries.owasp.org/cheatsheets/Kubernetes_Se... -- what an essay. Keep in mind someone has to do that _and_ secure the underlying hosts themselves, since there is an operating system there too.

And then this bad boy: https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR... -- again, you have to do this kind of stuff _again_ for the OS underneath it all _and_ anything else you're running.

[1] https://medium.com/@ayoubseddiki132/why-running-docker-conta...

[2] https://wonderfall.dev/docker-hardening/

[3] https://www.isoah.com/5-shocking-docker-security-risks-devel...

[4] https://kubernetes.io/docs/tasks/administer-cluster/securing...

They have their place in development and automated pipelines, but when the option of running on "bare metal" is there you should take it (I actually heard someone call it that once: it's "bare metal" if it's not in a container these days...)

You should never confuse "trivial" with "good". ORMs are "trivial", but often a raw SQL statement (done correctly) is best. Docker is "good", but it's not a silver bullet that just solves everything. It comes with its own problems, as seen above, and they heavily outweigh the benefits.

> Explain in detail how being able to run services by running "docker run" is "overkill". Have you ever went through an intro to Docker tutorial?

Ah! I see now. I don't think you work in operations. I think you're a software engineer who doesn't have to do the Ops or SRE work at your company. I believe this to be true because you're hyper-focused on the running of the containers but not the management of them. The latter is way harder than managing services on "bare metal". Running services via "systemctl" commands, Ansible Playbooks, Terraform Provisioners, and so many other options, has resulted in some of the most stable, cheap to run, capable, scalable infrastructure setups I've ever seen across three countries, two continents, and 20 years of experience. They're so easy to use and manage, the companies I've helped have been able to hire people from University to manage them. When it comes to K8s, the opposite is completely true: the hires are highly experienced, hard to find, and very expensive.

It blows my mind how people run so much abstraction to put x86 code into RAM and place it on a CPU stack. It blows my mind how few people see how a load balancer and two EC2 Instances can absolutely support a billion dollar app without an issue.

> You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.

Sure, OK. I find you hostile, so I'll let you sit there boiling your own blood.

feirlane 16 hours ago [-]
What is your opinion on Podman rootless containers? In my mind, running rootless containers as different OS users for each application I'm hosting was an easy way of improving security and making sure each of those services could only mess with their own resources. Are there any known issues with that? Do you have experience with Podman? Would love to hear your thoughts.
movedx 11 hours ago [-]
That sounds like a great option to me. The more functionality you can get out of a container without giving up privileges, the better. Podman is just a tool like any other - I'd happily use it if it's right for the job.

All I would say is: can you run that same thing without a containerisation layer? Remember that with things like ChatGPT it's _really_ easy to get a systemd unit file going for just about any service these days. A single prompt and you have a running service that's locked down pretty heavily.

feirlane 8 hours ago [-]
Yeah, I could run them as regular systemd daemons themselves, but I would lose the easy isolation between the different services and the main OS. It feels easier to limit what the services have access to in the host OS by running them in containers.

I do run the containers as systemd user services however, so everything starts up at boot, etc.
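For reference, the rough shape of that setup (the container name is illustrative; newer Podman versions prefer Quadlet files over `podman generate systemd`):

    podman generate systemd --new --name myapp --files
    mkdir -p ~/.config/systemd/user && mv container-myapp.service ~/.config/systemd/user/
    systemctl --user enable --now container-myapp.service
    loginctl enable-linger $USER   # keep user services running without an active login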

eddythompson80 1 days ago [-]
I completely disagree.

> Docker adds indirection on storage, networking, etc.,

What do you mean by "indirection"? It adds OS level isolation. It's not an overhead or a bad thing.

> makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

Literally the entire self-hosted stack can be updated and redeployed with:

      docker compose pull
      docker compose build
      docker compose up -d
Self hosting with something like docker compose means that your server is entirely describable in 1 docker-compose.yml file (or a set of files if you like to break things apart) + storage.

You have a clean separation between your applications/services and their versions/configurations (docker-compose.yml), and your state/storage (usually a NAS share or a drive mount somewhere).

Not only are you no longer dependent on a particular OS vendor (wanna move your setup to a cheap instance on a random VPS provider but they only have CentOS for some reason?), but the clean separation of all the parts also allows you to very easily scale individual components as needed.

There is 1 place where everything goes. With an OS vendor package, every time you need to check: is it in a systemd unit? Is it a config file in /etc/? wth?

Then next time you're trying to move the host, you forget the random /etc/foo.d/conf change you made. With docker-compose, that change has to be stored somewhere for the docker-compose to mount or rebuild, so moving is trivial.

It's not NixOS, sure, but it's much, much better than a list of APT or dnf or yum packages and scripts to copy files around.
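In practice the "moving hosts is trivial" part looks roughly like this (paths are illustrative):

    # old host
    docker compose down
    rsync -a ~/server/ newhost:~/server/   # compose file, configs and bind-mounted data
    # new host
    cd ~/server && docker compose up -d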

zdw 1 days ago [-]
Tools like Ansible exist and can do everything you mention on the deploy side and more, and are also cross platform to a wider range of platforms than Linux-only Docker.

Isolation technologies are also available outside of docker, through systemd, jails, and other similar tools.
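For comparison, a hedged sketch of the Ansible side (package and service names are placeholders, assuming the software is available in a repo):

    - hosts: homeserver
      become: true
      tasks:
        - name: Install the service from the distro/vendor repo
          ansible.builtin.apt:
            name: myservice
            state: latest
        - name: Ensure it is running and starts on boot
          ansible.builtin.systemd:
            name: myservice
            state: started
            enabled: true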

motorest 1 days ago [-]
> Tools like Ansible exist and can do everything you mention on the deploy side and more (...)

Your comment is technically correct, but factually wrong. What you are leaving out is the fact that, in order to do what Docker provides out of the box, you need to come up with a huge custom Ansible script to even implement the happy path.

So, is your goal to self host your own services, or to endlessly toy with the likes of Ansible?

princevegeta89 1 days ago [-]
Why do you need to update Docker? I kept my box running for more than a year without upgrading Docker. I upgrade my images, but that hardly takes 15 minutes, let's say, a month.

> if the company is respecting privacy

It's very rare to see companies doing it, and moreover it is hard to trust them to even maintain a unique stance as years pass by.

buran77 1 days ago [-]
It doesn't matter if you upgrade Docker or not. All tech, self hosted or not, fails for three reasons:

1) You did something to it (changed a setting, upgraded software, etc.)

2) You didn't do something to it (change a setting, upgrade a software, etc.)

3) Just because.

When it does you get the wonderful "work-like" experience, frantically trying to troubleshoot while the things around your house are failing and your family is giving you looks for it.

Self host but be aware that there's a tradeoff. The work that used to be done by someone else, somewhere else, before issues hit you is now done by you alone.

mr_mitm 24 hours ago [-]
And if you're security conscious like me and want to do things the "right way" just because you can (or should be able to), you now have to think about firewall rules, certificate authorities, DNS names, notifications, backup strategies, automating it in Ansible, managing configs with git, using that newfangled IPv6, ... the complexity piles up quickly.

Coincidentally, I just decided to tackle this issue again on my Sunday afternoon: https://github.com/geerlingguy/ansible-role-firewall/pull/11...

Sometimes it's not fun anymore.

ndriscoll 8 hours ago [-]
Besides 2 of my hard drives failing over the last 30 years, I can't recall ever encountering 2) or 3). I also can't really even imagine the mechanism by which a self-hosted solution could fail without you touching it or without a hardware failure. Software does not rot.
aucisson_masque 1 days ago [-]
> if the company is respecting privacy It's very rare to see companies doing it, and moreover it is hard to trust them to even maintain a unique stance as years pass by.

Indeed, no one can predict the future, but there are companies with bigger and stronger reputations than others. I pay for iCloud, for instance, because it's E2E encrypted in my country and the pricing is fair; it's been like that for years, so I don't have to set up a Baikal server for calendars, something for file archiving, something else for photos, and so on.

I'd be surprised if Apple willingly did something damaging to user privacy, for the simple reason that they have spent so much on advertising about privacy; they would instantly lose a lot of credibility.

And even with stuff you self-host, yes, you can let it be and not update it for a year, but I wouldn't do that because of security issues. Take something like Navidrome (a music player): it's accessible from the web, and no one wants to launch a VPN every time they listen to music, so it has to be updated or you may get hacked. And no one can say that the Navidrome maintainer will still be there in the coming years; they could stop the project, get sick, die… there's no guarantee that others will take over the project and provide security updates.

motorest 1 days ago [-]
> Why do you need to update docker?

For starters, addressing security vulnerabilities.

https://docs.docker.com/security/security-announcements/

> I kept my box running for more than 1 year without upgrading docker.

You inadvertently raised the primary point against self-hosting: security vulnerabilities. Apparently you might have been running software with known CVEs for over a year.

danparsonson 9 hours ago [-]
Much of this risk is mitigated by hiding everything behind Wireguard or similar. None of my self-hosted stuff is publicly exposed but I can reach it from anywhere. You can go one step further and run some kind of gateway OS (e.g. opnSense) on a separate cheap VPS, route everything through that, then firewall your main server off completely.
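
As a rough sketch of that approach (keys, addresses, and names below are placeholders, not a complete guide):

      # generate a key pair (one on the server, one per client)
      wg genkey | tee server.key | wg pubkey > server.pub

      # /etc/wireguard/wg0.conf on the server
      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <contents of server.key>

      [Peer]
      # a laptop/phone allowed to reach the self-hosted services
      PublicKey = <client public key>
      AllowedIPs = 10.8.0.2/32

      # bring it up with: wg-quick up wg0

Only the WireGuard UDP port is exposed to the internet; everything else stays reachable solely over the tunnel.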
BLKNSLVR 1 days ago [-]
> if the company is respecting privacy and has decent pricing.

Also an extremely limited list.

Larrikin 1 days ago [-]
What project did you run into issues with? I've found that any project that has gotten to the point of offering a Docker Compose file seems to just work.

Plus I've found nearly every company will betray your trust in them at some point so why even give them the chance? I self host Home Assistant, but they seem to be the only company that actively enacts legal barriers for themselves so if Paulus gets hit by a bus tomorrow the project can't suddenly start going against the users.

znpy 17 hours ago [-]
> Selfhosting is a pain in the ass

I use RHEL/Rocky Linux exactly because of this. I don't need the latest software on my home server, and I am reasonably sure I can run yum update without messing up my system.

Most of the time, when people complain about system administration while self-hosting, it's because they're using some kind of meme distro that inevitably breaks (which is something you don't want on a server, whether it's at work or at home).

Bonus point: I can run rootless containers with Podman (orchestrated via docker-compose).

And i get professionally curated software (security patches backported, selinux policies, high-quality management and troubleshooting tooling).
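
For context, the usual way to point docker-compose at rootless Podman is roughly this (a sketch; the socket path can vary by distro):

      # enable the rootless Podman API socket for your user
      systemctl --user enable --now podman.socket

      # tell docker-compose to talk to it instead of the Docker daemon
      export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
      docker-compose up -d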

czhu12 11 hours ago [-]
I was able to replicate some of this by building my own hosting platform (https://canine.sh) that can deploy a Github repo to anywhere -- from a Kubernetes cluster to a home raspberry pi server.

I've built tons of stuff in my career, but building the thing that can host all of it for myself has been hugely rewarding (instead of relying on hosting providers that inevitably start charging you)

I now have almost 15 apps hosted across 3 clusters:

https://imgur.com/a/RYg0wzh

It's one of the most cherished things I've built, and I find myself constantly coming back to improve and update it out of love.

whitefang 34 minutes ago [-]
With self-hosting, my biggest pain point is disaster recovery. I would like to know how you handle it.
snowstormsun 2 minutes ago [-]
Redundant Backups
sunshine-o 1 days ago [-]
I self-host most of what I need but I recently faced the ultimate test when my Internet went down intermittently.

It raised some interesting questions:

- How long can I be productive without the Internet?

- What am I missing?

The answer for me was that I should archive more documentation, and that NixOS is unusable offline if you do not host a cache (which is pretty bad).

Ultimately I also found that self-hosting most of what I need and being offline really improves my productivity.

elashri 1 days ago [-]
I find that self-hosting devdocs [1] and having Zeal (on Linux) [2] solves a lot of these offline-documentation problems.

[1] https://github.com/freeCodeCamp/devdocs

[2] https://zealdocs.org/

teddyh 1 days ago [-]
For offline documentation, I use these in order of preference:

• Info¹ documentation, which I read directly in Emacs. (If you have ever used the terminal-based standalone “info” program, please try to forget all about it. Use Emacs to read Info documentation, and preferably use a graphical Emacs instead of a terminal-based one; Info documentation occasionally has images.)

• Gnome Devhelp².

• Zeal³

• RFC archive⁴ dumps provided by the Debian “doc-rfc“ package⁵.

1. https://www.gnu.org/software/emacs/manual/html_node/info/

2. https://wiki.gnome.org/Apps/Devhelp

3. https://zealdocs.org/

4. https://www.rfc-editor.org/

5. https://tracker.debian.org/pkg/doc-rfc

AstroBen 1 days ago [-]
I've taken this as far as I can. I love being disconnected from the internet for extended periods - they're my most productive times

I have a bash alias to use wget to recursively save full websites

yt-dlp will download videos you want to watch

Kiwix will give you a full offline copy of Wikipedia

My email is saved locally. I can queue up drafts offline

SingleFile extension will allow you to save single pages really effectively

Zeal is a great open source documentation browser

kilroy123 1 days ago [-]
Could you share the bash alias? I would love this too.
AstroBen 1 days ago [-]
https://srcb.in/nPU2jIU5Ca

Unfortunately it doesn't work well on single page apps. Let me know if anyone has a good way of saving those
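
For anyone who just wants the general shape of it, a recursive mirror with wget usually looks something like this (the exact alias behind that link may differ):

      wget --mirror --convert-links --adjust-extension --page-requisites \
           --no-parent --wait=1 https://example.com/docs/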

sunshine-o 1 days ago [-]
The only way I know of is preprocessing with a web browser and piping it to something like monolith [0]

So you end up with something like this [1]:

> chromium --headless --window-size=1920,1080 --run-all-compositor-stages-before-draw --virtual-time-budget=9000 --incognito --dump-dom https://github.com | monolith - -I -b https://github.com -o github.html

- [0] https://github.com/Y2Z/monolith

- [1] https://github.com/Y2Z/monolith?tab=readme-ov-file#dynamic-c...

BLKNSLVR 1 days ago [-]
Each downtime is an opportunity to learn the weaknesses of your own system.

There are certain scenarios you have no control over (upstream problems), but others have contingencies. I enjoy working out these contingencies and deciding whether the costs are worth it given the likelihoods - and even if they're not, that doesn't necessarily mean I won't cater for them.

ehnto 1 days ago [-]
When my rental was damaged by a neighbouring house fire, we were kicked out of the house the next day. This was a contingency I hadn't planned well for.

I have long thought that I need my homelab/tools to be in hard cases and to be low-power and modular. Now I am certain of it. Not that I need first-world technology hosting in emergency situations, but I am now staying with family for at least a few weeks, maybe months, and it would be amazing to just plonk a few hard cases down and be back in business.

ehnto 1 days ago [-]
> and NixOS is unusable offline if you do not host a cache (so that is pretty bad).

I think a cache or other repository backup system is important for any software using package managers.

Relying on hundreds if not thousands of individuals to keep their part of the dependency tree available and working is one of the wildest parts of modern software development to me. For end-user software I much prefer a discrete package with all dependencies bundled. That's what sits on the hard drive in practice either way.

ndriscoll 20 hours ago [-]
NixOS is perfectly usable without an Internet connection. I've never encountered an issue, and in fact I've joked with my wife that, considered as an overall end-to-end system (i.e. including the Internet dependency), my Jellyfin instance gets better uptime than something like Spotify would.

You can't install or update new software that you'd pull from the web, but you couldn't do that with any other system either. I can't remember specifically trying but surely if you're just e.g. modifying your nginx config, a rebuild will work offline?

sunshine-o 17 hours ago [-]
So this is what I thought for a long time, and I tested it several times successfully.

But surprisingly, the day I needed to change a simple network setting without the internet, I got stuck! I still can't explain why.

So I now feel we are rolling the dice a bit with an offline NixOS.

bombcar 1 days ago [-]
https://kiwix.org/en/ and some Jellyfin setups are great offline resources.

But yeah, things like NixOS and Gentoo get very unhappy when they don't have Internet access for installing more things. And mirroring all the packages ain't usually an option.

hansvm 1 days ago [-]
I'm not too familiar with NixOS, but I've been running Gentoo for ages and don't know why you'd need constant internet. Would you mind elaborating?
bombcar 21 hours ago [-]
For installing new things - they assume a working Internet.

Ubuntu and CentOS at least HAD the concept of a "DVD" source, though I doubt it is used much anymore.

XorNot 1 days ago [-]
You can reverse resolve Nix back down to just the source code links though, which should be enough to build everything if those URLs are available on your local network.
larodi 1 days ago [-]
Having a .zip of the world also helps, even if it's a lossy one. I mean: always have one of the latest models around, ready to spin up. We can easily argue that LLMs are killing the IT sphere, but they are also reasonable insurance against doomsday.
itsafarqueue 1 days ago [-]
If by doomsday you mean “power out for a few hours”, sure.
larodi 23 hours ago [-]
Or a few days. But I can also imagine being power-independent, with your own robotry to sustain even longer outages. But you'll also need to be very well hidden, as society would likely collapse in a matter of days if this ever happened.
sdf4j 1 days ago [-]
> I always say to buy a domain first.

You can only rent a domain. The landlord is merciless if you miss a payment, you are out.

There are risks everywhere, and it depresses me how fragile our online identity is.

1vuio0pswjnm7 1 days ago [-]
"You can only rent a domain."

If ICANN-approved root.zone and ICANN-approved registries are the only options.

As an experiment I created my own registry, not shared with anyone. For many years I have run my own root server, i.e., I serve my own custom root.zone to all computers I own. I have a search experiment that uses a custom TLD that embeds a well-known classification system. The TLD portion of the domain name can categorise any product or service on Earth.

ICANN TLDs are vague, ambiguous, sometimes even deceptive.
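
For readers curious what this looks like mechanically, a heavily simplified sketch with BIND might be something like the following (names, addresses, and the TLD are invented; the actual setup described above may differ):

      # named.conf: be authoritative for the root zone yourself
      zone "." {
          type master;
          file "/etc/bind/root.zone";
      };

      ; root.zone: the usual apex records plus your own private TLDs
      .            3600 IN SOA ns.internal. hostmaster.internal. 1 3600 900 604800 3600
      .            3600 IN NS  ns.internal.
      ns.internal. 3600 IN A   192.0.2.53
      shoes.       3600 IN NS  ns.internal.   ; a custom classification TLD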

eximius 3 hours ago [-]
Is there any difference here from running a normal DNS server?

Any of your special domains will be ones your server claims as authoritative, so I don't understand why you need a root server?

iampims 1 days ago [-]
You should write something about this…
coldfoundry 18 hours ago [-]
This sounds like a wonderful project, do you have any documentation of the process you wouldn't mind sharing? Would love to play around with something similar to what you did, almost like a mini-internet.
znpy 17 hours ago [-]
> The landlord is merciless if you miss a payment, you are out.

That’s a skill issue though.

I have a domain that I used to pre-pay for years in advance.

For my current main domain I had prepaid nine years in advance and it was paid up to 2028. A couple of years ago I topped it up and now it's prepaid up to 2032.

It's not much money (when I prepaid for nine years I spent 60€ or so), and you usually save because you're locking in the price, skipping price hikes, inflation, etc.

hobs 12 hours ago [-]
Host the wrong content? You are out. Get sued because of someone else's trademark on your domain? You are out. Registrar dissolved or doing something weird? Out.
XorNot 1 days ago [-]
It's something of a technical limitation though: there's no reason all my devices - the consumers of my domain name - couldn't just accept that anything signed with some key is actually XorNot.com or whatever...but good luck keeping that configuration together.

You could very reasonably replace the whole system with just "lists of trusted keys mapped to names" if the concept had enough popular technical support.

klabb3 1 days ago [-]
I propose a slightly different boundary: not ”to self-host” but ”ability to self-host”. It simply means that you can if you want to, but you can let someone else host it. This is a lot more inclusive, both to those who are less technical and those who are willing to pay for it.

People who don't care ("I'll just pay") are especially affected, and they are the ones who should care the most. Why? Because today, businesses are more predatory, preying on the future technical dependence of their victims. Even if you don't care about FOSS, it's incredibly important to be able to migrate providers. If you are locked in, they will exploit that. Some do it so systematically that they aren't interested in any other kind of business.

crabmusket 1 days ago [-]
This sounds like the "credible exit" idea Bluesky talk about.

Also, shout-out to Zulip for being open source and self-hostable, with a cloud-hosted service and transfer between the two setups.

arjie 1 days ago [-]
Tooling for self-hosting is quite powerful nowadays. You can start with hosted components and swap various things in for a self-hosted bit. For instance, my blog is self-hosted on a home-server.

It has Cloudflare Tunnel in front of it, but I have previously used nginx+letsencrypt+public_ip. It stores data on Cloudflare R2, but I've stored on S3, or I could store on a local NAS (since I access R2 through FUSE it wouldn't matter that much).

You have to rent:

* your domain name - and it is right that this is not a permanent purchase

* your internet access

But almost all other things now have tools that you can optionally use. If you turn them off the experience gets worse but everything still works. It's a much easier time than ever before. Back in the '90s and early 2000s, there was nothing like this. It is a glorious time. The one big difference is that email anti-spam is much stricter but I've handled mail myself as recently as 8 years ago without any trouble (though I now use G Suite).

kldg 14 hours ago [-]
SBCs are great for public web servers and can save you quite a bit in energy costs. I've used a Raspberry Pi 4B for about 5 years with around 10k human visitors (~5k bots) per year just fine. I'd like to try a RISC-V SBC as a server, but maybe I have a few more years to wait.

I don't run into resource issues on the Pi4B, but resource paranoia (like range anxiety in EVs) keeps me on my toes about bandwidth use and encoding anyway. I did actually repurpose my former workstation and put it in a rackmount case a couple weeks ago to take over duties and take on some new ones, but it consumes so much electricity that it embarrasses me and I turned it off. Not sure what to do with it now; it is comically over-spec'd for a web server.

The most helpful thing to have is a good router; networking is a pain in the butt, and there's a lot to do when you host your own services and start serving Flask apps or whatever. Mikrotik has made more things doable for me.

ravetcofx 14 hours ago [-]
How are you tracking visitors and differentiating them from bots?
kldg 13 hours ago [-]
Crudely. The apache2 logs are parsed every 5 minutes. If the IP address already exists in the post-processed database, ignore the entry; if it doesn't, a script parses the user-agent string and checks it against a whitelist of known "consumer" browsers. If it matches, we assume they're human. We then delete the detailed apache2 logs and put just the IP address, when we first saw them (date, not datetime), and whether they were deemed human or bot into the database. Faking user-agent strings or using something like Playwright would confuse the script, but the browser list will also inherently not contain every existing "consumer" browser.

Every day, a script checks all IP addresses in the post-processed database to see if there are "clusters" on the same subnet. I think the rule is: if we see 3 visitors on the same subnet, we consider it a likely bot and retroactively mark those entries as bots in the database. Without taking in millions of visitors, I think this is reasonable, but it can introduce errors, too.
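
A very rough shell sketch of the user-agent whitelist idea (log path and browser list are placeholders; the real script clearly does more):

      # IPs whose user agent matches a "consumer" browser -> assume human
      grep -E 'Firefox/|Chrome/|Safari/|Edg/' /var/log/apache2/access.log \
          | awk '{print $1}' | sort -u > human-ips.txt

      # everything else -> assume bot
      awk '{print $1}' /var/log/apache2/access.log | sort -u \
          | comm -23 - human-ips.txt > bot-ips.txt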

larodi 2 days ago [-]
This can definitely become a trend, given how many devs are out there and how much AI can let them produce at home, albeit of arbitrary code quality…
Havoc 1 days ago [-]
Ever since Arch got an installer, I'm not sure I'd consider it hard anymore. It still dumps you into a command line, sure, but it's a long way from the days of trying to figure out arcane partition block math.
MarcelOlsz 1 days ago [-]
RIP "I use arch btw"
bombcar 1 days ago [-]
Hello, I'm "I use gentoo btw"
ryandrake 1 days ago [-]
> The premise is that by learning some of the fundamentals, in this case Linux, you can host most things yourself. Not because you need to, but because you want to, and the feeling of using your own services just gives you pleasure. And you learn from it.

Not only that, but it helps to eliminate the very real risk that you get kicked off of a platform that you depend on without recourse. Imagine if you lost your Gmail account. I'd bet that most normies would be in deep shit, since that's basically their identity online, and they need it to reset passwords and maybe even to log into things. I bet there are a non-zero number of HN commenters who would be fucked if they so much as lost their Gmail account. You've got to at least own your own E-mail identity! Rinse and repeat for every other online service you depend on. What if your web host suddenly deleted you? Or AWS? Or Spotify or Netflix? Or some other cloud service? What's your backup? If your answer is "a new cloud host" you're just trading identical problems.

whartung 1 days ago [-]
My singular issue with self-hosting email specifically is not setting it up; there is lots of documentation on setting up an email server.

But running it is a different issue. Notably, I have no idea, and have not seen a resource talking about, troubleshooting and problem solving for a self-hosted service, particularly with regard to interoperability with other providers.

As a contrived example, if Google blackballs your server, who do you talk to about it? How do you even know? Do they put contact addresses, or procedures for resolution, in the error messages you get when talking with them?

Or the other global IP-ban lists.

I'd like to see a troubleshooting guide for email. Not so much for the protocols like DKIM, or setting DNS up properly, but for dealing with these other actors that can impact your service even if it's, technically, according to Hoyle, set up and configured properly.

mjrpes 1 days ago [-]
> But running it is different issue. Notably, I have no idea, and have not seen a resource talking about troubleshooting and problem solving for a self hosted service. Particularly in regards with interoperability with other providers.

It's nearly impossible to get 100% email deliverability if you self-host and don't use an SMTP relay. It might work if all your contacts are with a major provider like Google, but otherwise you'll get 97% deliverability, and then that one person using sbcglobal/att won't get your email for a four-week period, or that company using Barracuda puts your email in a black hole. You put in the effort to get your email server whitelisted, but many email providers don't respond or only give you a temporary fix.

However, you can still self-host most of the email stack, most importantly the storage of your email, by using an SMTP relay like AWS SES, Postmark, or Mailgun. It's quick and easy to switch SMTP relays if the one you're using doesn't work out. In Postfix you can choose to use a relay only for certain domains.
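
A rough sketch of the Postfix side (relay hostnames are placeholders; SASL credentials and TLS settings are omitted):

      # route all outbound mail through the relay
      postconf -e 'relayhost = [smtp.relay.example]:587'

      # or relay only specific destination domains via a transport map
      postconf -e 'transport_maps = hash:/etc/postfix/transport'
      echo 'fussy-destination.example smtp:[smtp.relay.example]:587' >> /etc/postfix/transport
      postmap /etc/postfix/transport
      systemctl reload postfix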

baobun 1 days ago [-]
IME the communities around packaged open-source solutions like Mail-in-a-Box, Mailcow, and Mailu tend to help each other out with stuff like this, and the shared bases help. Maybe camp in a few chatrooms and forums and see if any fits your vibe.
boplicity 1 days ago [-]
Most services, including email providers, spam databases, and "IP-ban sites", have clear documentation on how to get on their good side if needed, and it is often surprisingly straightforward to do so. Often it's as simple as filling out a relatively simple form.
dantodor 1 days ago [-]
Have you ever tried to use it? Because I fought for about two months with both Google and Microsoft, trying to self-host my mail server, with no success. The only answer was along the lines of "your server does not have enough reputation", even though it was perfectly configured: DKIM, DMARC, etc. Now imagine a business not being able to send a message to anyone hosted on Gmail or Outlook, probably 80-90 percent of the companies out there.
kassner 23 hours ago [-]
I feel you. I had my email on OVH for a while, but they handle abuse so badly that Apple just blanket-banned the /17 my IP was in. And I was lucky that Apple actually answered my emails and explained why I was banned; I doubt Microsoft or Google would give you any useful information.
bluGill 1 days ago [-]
They claim that, but every small operator I know who self-hosted email has discovered that the forms don't do anything. I switched to Fastmail 15 years ago and my email got a lot better, because they are big enough that nobody dares ignore them. (Maybe the forms work better today than 15 years ago, but enough people keep complaining about this issue that I doubt it.)
JoshTriplett 1 days ago [-]
Own your own domain, point it to the email hosting provider of your choice, and if something went horribly wrong, switch providers.

Domains are cheap; never use an email address that's email-provider-specific. That's orthogonal to whether you host your own email or use a professional service to do it for you.
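
In DNS terms that's just a couple of records in your zone pointing at whichever provider you currently use (hostnames here are placeholders):

      example.com.   3600 IN MX  10 mx1.mailhost.example.
      example.com.   3600 IN TXT "v=spf1 include:mailhost.example ~all"

Switching providers later means changing those records, not your address.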

doubled112 1 days ago [-]
This is my plan.

I will lose some email history, but at least I don’t lose my email future.

However, you can't own a domain, you are just borrowing it. There is still a risk that it gets shut down too, but I don't think that is super common.

danillonunes 1 days ago [-]
As for the domain risks, my suggestion is to stick with .com/.net/.org or something common in your country and avoid novelty ones such as .app, .dev, etc., even if you can't get the shortest and simplest name. And if you have some money to spare, just renew it for 10 years.
data-ottawa 1 days ago [-]
Even if you renew for 10 years, set a calendar reminder annually to check in and make sure your renewal info is still good.
spencerflem 13 hours ago [-]
You can also top it up every year. Two for one :)
JoshTriplett 1 days ago [-]
> I will lose some email history, but at least I don’t lose my email future.

I back up all my email every day, independent of my hosting provider. I have an automatic nightly sync to my laptop, which happens right before my nightly laptop backups.

noAnswer 1 days ago [-]
Why should you lose some email history? Just move the mails to a different folder.

I self-host my mail but still use a freemail address as the contact address for my providers. No chicken-and-egg problem for me.

weikju 1 days ago [-]
If doing so, I'd also recommend not using the same email or domain for the registrar and for your email host… If you are locked out of one, you'd want to be able to access the other to change things.
teeray 1 days ago [-]
Agreed. I’ve had the same email address for a decade now but cycled through the registrar’s email, Gmail, and M365 in that time. Makes it easy to switch.
ozim 1 days ago [-]
Self-hosting at home: which is the higher risk, your HDD dying or losing your Gmail account?

Oh, now you don't only self-host; now you have to have space to keep the gear, plan backups, and install updates, and it would be good to test updates so some bug doesn't mess up your system.

Oh, you know, it would be bad to have a power outage while installing updates or running backups; now you need a UPS.

Oh, you know what, my UPS turned out to be faulty and it f-ed up the HDD in my NAS.

No, I don't have time to deal with any of it anymore; I have other things to do with my life ;)

layoric 1 days ago [-]
Different strokes for different folks. The motivation for me has been a combination of independence and mistrust. Every single one of the larger tech companies has shown that it prioritizes growth over making good products and services and over not being directly user-hostile. Google search is worse now than it was 10 years ago. Netflix has ads with a paid subscription; so does YouTube. Windows is an absolute joke; more and more we see user-hostile software. Incentives aren't aligned at all. As someone who works in software, I get not wanting to do this stuff at home as well. But honestly I'm hoping for a future where a lot of these services can legitimately be self-hosted by technical people for their local communities. Mastodon is doing this really well, IMO. Self-hosted software is also getting a lot easier to manage, so I'm quite optimistic that things will keep heading this way.

Note: I've got all the things you mentioned, down to the UPSes, set up in my garage, as well as multiple levels of backups. It's not perfect, but it works for me without much time input versus the utility it provides. To each their own.

ozim 1 days ago [-]
Well, I hope we don't keep on discussing Google vs. self-hosting hardware at home.

There are alternatives that should be promoted.

deadbabe 1 days ago [-]
If your trust is violated, typically the worst that happens is you are fed a couple more relevant ads or your data is used for some commercial purpose that has little to no effect on your life.

Is it really worth going through so much effort to mitigate that risk?

layoric 1 days ago [-]
Again, it's a value judgement, so the answer is largely personal. For me, yes. The social license we give these larger companies after all the violated trust doesn't make sense. If the local shop owner you talked to every day and exchanged pleasantries with most weeks had the same attitude towards you when you went shopping, people would confront them about their actions, and that shop wouldn't last long. We have created this disconnect for convenience and tried to ignore the level of control these companies have over our day-to-day lives if they are so inclined, or instructed, to change their systems.

Cloud is just someone else's computer. These systems aren't special. Yes, they are impressively engineered to deal with the scale they deal with, but when systems are smaller, they can get a lot simpler. I think as an industry we have conflated distributed systems with really hard engineering problems, when what really matters for downstream complexity is the level of abstraction at which the distribution happens.

deadbabe 1 days ago [-]
The cloud is someone else’s computer and an apartment is just someone else’s property.

How far do we take this philosophy?

spencerflem 13 hours ago [-]
Lots of people don't like landlords :)
II2II 1 days ago [-]
The risk may be real, but is it likely to happen to many people?

The reason I bring this up is that many early adopters of Gmail switched to it, or grew to rely upon it, because the alternatives were much worse. The account through your ISP: gone as soon as you switched to another ISP, and that switch may have been necessary if you moved somewhere the ISP did not service. University email address: gone soon after graduation. Employer's email address: gone as soon as you switched employers (and risky to use for personal purposes anyhow). Another dedicated provider: I suspect most of those dedicated providers are now gone.

Yep, self-hosting can sort of resolve the problem, the key words being "sort of". Controlling your identity doesn't mean terribly much if you don't have the knowledge to set up and maintain a secure email server. If you know how to do it, and no one is targeting you in particular, you'll probably be fine. Otherwise, all bets are off. And you don't have total control anyhow; you still have the domain name to deal with, after all. You should be okay if you do your homework and stay on top of renewals, almost certainly better off than you would be with Google, but again it is only as reliable as you are.

There are reasons why people go with Gmail and a handful of other providers. In the end, virtually all of those people will be better off in both the short and mid term.

weitendorf 1 days ago [-]
It introduces some pretty important risks of its own, though. If you accidentally delete or forget a local private key, or lose your primary email domain, there is no recourse. It's significantly easier to set up 2FA and account recovery on a third-party service.

Note that I'm not saying you shouldn't self-host email or anything else. But it's probably more risky for 99% of people compared to just making sure they can recover their accounts.

elashri 1 days ago [-]
I have seen many more stories about people losing access to their Gmail because of a comment flagged somewhere else (e.g. YouTube) than about people losing access to their domains (it is hard to miss all the reminders about renewal, and you shouldn't wait until then anyway, so that's something under your control).

And good luck getting anyone from Google to solve your problem assuming you get to a human.

jeffbee 1 days ago [-]
> losing access to their Gmail because

Google will never comment on the reasons they disable an account, so all you've read are the unilateral claims of people who may or may not be admitting what they actually did to lose their accounts.

owl_vision 6 hours ago [-]
For the past 20-odd years, old hardware with tweaked, custom-compiled FreeBSD and NetBSD builds has served me and my few customers quite well. There is a lot of joy in it. Recently, I started modifying open source software to be self-hostable. Some of it does not work well when the internet is not accessible, for example FarmOS.
davidcalloway 21 hours ago [-]
While I like the article and agree with the sentiment, I do feel it would have been nice to at least mention the GNU project and not leave the impression that we have free software only thanks to Linus Torvalds.
budududuroiu 16 hours ago [-]
I’m almost done with my switch away from a fully Apple ecosystem and I feel great about my Framework laptop, GrapheneOS Pixel and cluster of servers in my closet.

I can’t help but wonder if mainstream adoption of open source and self hosting will cause a regulatory backlash in favour of big corpo again (thinking of Bill Gates’ letter against hobbyists)

nodesocket 1 days ago [-]
I run a 4x Raspberry Pi Kubernetes cluster and an Intel N150 mini PC, both managed with Portainer, in my homelab. The following open-source ops tools have been a game changer. All the tools below run in containers.

- kubetail: Kubernetes log viewer for the entire cluster. Deployments, pods, statefulsets. Installed via Helm chart. Really awesome.

- Dozzle: Docker container log viewing for the N150 mini PC, which just runs Docker, not Kubernetes. Portainer manual install.

- UptimeKuma: Monitoring and alerting for all servers, http/https endpoints, and even PostgreSQL. Portainer manual install.

- Beszel: Monitoring of server CPU, memory, disk, network, and Docker containers. Can be installed into Kubernetes via Helm chart. Also installed manually via Portainer on the N150 mini PC.

- Semaphore UI: UI for running Ansible playbooks, with support for scheduling as well. Portainer manual install.

johnea 1 days ago [-]
Nice article!

It's heartening in the new millennium to see some younger people show awareness of the crippling dependency on big tech.

Way back in the stone ages, before Instagram and TikTok, when the internet was new, anyone having a presence on the net was rolling their own.

It's actually only gotten easier, but the corporate candy has gotten exponentially more candyfied, and most people think it's the most straightforward solution to getting a little corner on the net.

Like the fluffy fluffy "cloud", it's just another shrink-wrap of vendor lockin. Hook 'em and gouge 'em, as we used to say.

There are many ways to stake out your own little piece of virtual ground. Email is another whole category. It's linked to in the article, but still uses an external service to access port 25. I've found it not too expensive to have a "business" ISP account that allows connections on port 25 (and others).

Email is much more critical than having a place to blag on, and port 25 access is only the beginning of the "journey". The modern email "reputation" system is a big tech blockade between people and the net, but it can, and should, be overcome by all individuals with the interest in doing so.

johnea 1 days ago [-]
Just for reference, take a look at this email system using FreeBSD:

https://www.purplehat.org/?page_id=1450

P.S. That was another place the article could have mentioned a broader scope; there are always the BSDs, not just Linux...

NicoSchwandner 14 hours ago [-]
Nice post, very inspiring! It's definitely addictive to self-host your services! And with modern LLMs, this gets much easier!
Onavo 1 days ago [-]
No love for Pangolin?

https://www.reddit.com/r/selfhosted/comments/1kqrwev/im_addi...

PeterStuer 22 hours ago [-]
I'm going with Pangolin on a small hosted VPS on Hetzner to front my homelab. It takes away many of the complications of serving securely directly from the home LAN.
buildItN0w_ 16 hours ago [-]
Self-hosting my own things helped me gain so much knowledge!

Great read!

Yeul 19 hours ago [-]
As someone who recently had to install Windows on a new PC I am convinced Microsoft wants to turn computers into terminals.

Which is not exactly what you want from a gaming PC.

igtztorrero 13 hours ago [-]
PostalServer is also great open-source software for sending transactional email at scale. https://github.com/postalserver/install/
9283409232 16 hours ago [-]
I hope we can make hosting open source on VPS much more accessible to the average person. Something like Sandstorm[0] or Umbrel[1].

[0] https://sandstorm.org [1] https://umbrel.com/

carlosjobim 16 hours ago [-]
Hosting on VPS has recently become much better for the average person with the introduction of Fastpanel. I know that people here are going to hate it because it's not open source, but it is free, user friendly, and very easy to use while still being powerful. It's a total win for me.
holoduke 1 days ago [-]
I've spent quite a few years with Linux systems, but I am using LLMs for configuring systems a lot these days. Last week I set up a server for a group of interns. They needed a Docker/Kubernetes setup with some other tooling. I would have spent at least a day or two to set it up normally; now it took maybe an hour. All the configurations, commands, and a few issues were solved with the help of ChatGPT. You still need to know your stuff, but it's like having a super tool at hand. Nice.
haiku2077 1 days ago [-]
Similarly, I was reconfiguring my home server and having Claude generate systemd units and timers was very handy. As you said, you do need to know the material to fix the few mistakes and to know what to ask for. But it can do the busywork of turning "I need this backup job to run once a week" into the .service and .timer file syntax for you to tweak, instead of writing it from scratch.
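
For reference, the kind of output meant here is roughly the following (paths and names are placeholders; a sketch rather than anything actually generated):

      # /etc/systemd/system/backup.service
      [Unit]
      Description=Weekly backup job

      [Service]
      Type=oneshot
      ExecStart=/usr/local/bin/backup.sh

      # /etc/systemd/system/backup.timer
      [Unit]
      Description=Run the backup once a week

      [Timer]
      OnCalendar=weekly
      Persistent=true

      [Install]
      WantedBy=timers.target

      # enable with: systemctl daemon-reload && systemctl enable --now backup.timer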
SoftTalker 1 days ago [-]
Isn't depending on Claude to administer your systems rather divergent from the theme of "Self-Host and Tech Independence?"
iforgotpassword 1 days ago [-]
I think it's just a turbo mode for figuring things out. Like posting to a forum and getting an answer immediately, without all those idiots asking you why you even want to do this, how software X is better than what you are using etc.

Obviously you should have enough technical knowledge to do a rough sanity check on the reply, as there's still a chance you get stupid shit out of it, but mostly it's really efficient for getting started with some tooling or programming language you're not familiar with. You can perfectly well do without it; it just takes longer. Plus, you're not dependent on it to keep your stuff running once it's set up.

chairmansteve 1 days ago [-]
Not in this case. It's a learning accelerator, like having an experienced engineer sitting next to you.
haiku2077 1 days ago [-]
I would describe it as the opposite- like having an inexperienced but very fast engineer next to you.
jeffbee 1 days ago [-]
And using a hosted email service is like having hundreds of experienced engineers managing your account around the clock!
layoric 1 days ago [-]
Claude and others are still in the adoption phase, so the services are good and not user-hostile, as they will be in the extraction phase. Hopefully by then there will be some agreement on how to set up RAG systems over actual human-constructed documentation, making these systems way more accessible and giving good results with much smaller self-hosted models. IMO, this is where I think/hope the LLMs' value to the average person will land long term: search, but better at understanding the query. Sadly, they will also be used for a lot of user-hostile nonsense.
haiku2077 1 days ago [-]
No. I've been a sysadmin before and know how to write the files from scratch. But Claude is like having a very fast intern I can tell to do the boring part for me and review the work, so it takes 30 seconds instead of 5 minutes.

But if I didn't know how to do it myself, it'd be useless- the subtle bugs Claude occasionally includes would be showstopper issues instead of a quick fix.