I saw this post today on Reddit and was curious to see if views are similar here as they are there.

  1. What are the best benefits of self-hosting?
  2. What do you wish you would have known as a beginner starting out?
  3. What resources do you know of to help a non-computer-scientist/engineer get started in self-hosting?
  • schizo@forum.uncomfortable.business · ↑62 · 4 months ago

    The big thing for #2 would be to separate out what you actually need vs what people keep recommending.

    General guidance is useful, but there’s a lot of ‘You need ZFS!’ and ‘You should use K8s!’ and ‘Use X software!’

    My life got immensely easier when I figured out I did not need any features ZFS brought to the table, and I did not need any of the features K8s brought to the table, and that less is absolutely more. I ended up doing MergerFS with a proper offsite backup method because, well, it’s shockingly low-complexity.

    And I ended up doing Docker with a bunch of compose files and bind mounts, because it’s shockingly low-complexity. And it’s just running on Debian, instead of some OS that has a couple of layers of additional software to make things “easier” because, again, it’s low-complexity.

    I can re-deploy the entire stack on new hardware in about 10 minutes (I’ve tested this a few times just to make sure my backup scripts work), and there’s basically zero vendor tie-in or dependencies that you’d have to get working first, since it’s just a pile of tarballs and packages from the distro’s package manager on, well, ANY distro.
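
    Roughly, the redeploy boils down to something like this (a sketch only: the package names, bucket, and /stacks layout are illustrative stand-ins, not an exact copy of my scripts):

    ```bash
    #!/usr/bin/env bash
    # Sketch of redeploying the whole stack on a fresh Debian install.
    set -euo pipefail

    # Docker straight from the distro packages (exact package names vary by distro/release).
    sudo apt-get update
    sudo apt-get install -y docker.io docker-compose

    # Pull the latest per-service tarballs back down from object storage
    # (using the AWS CLI as a stand-in; any S3-compatible client works).
    aws s3 sync s3://example-backup-bucket/latest/ /tmp/restore/

    # Unpack each service back into /stacks/<service>.
    sudo mkdir -p /stacks
    for tarball in /tmp/restore/*.tar.gz; do
        sudo tar -xzf "$tarball" -C /stacks
    done

    # Bring every stack back up from its compose file.
    for dir in /stacks/*/; do
        (cd "$dir" && sudo docker compose up -d)   # or `docker-compose up -d` with the older v1 tool
    done
    ```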

    • Last@reddthat.com · ↑8 · 4 months ago

      My life got immensely easier when I figured out I did not need any features ZFS brought to the table, and I did not need any of the features K8s brought to the table, and that less is absolutely more.

      Same here. Sometimes I get carried away, but overall, a very basic setup is more than fine. Nearly all of my devices run Ubuntu/Debian, and only the work-related stuff gets over-engineered.

      It’s helpful for me to have something like a home lab where I can get hands-on experience with many different technologies. I’ve worn many hats, from developer to sysadmin, so a certain segment of my network tends to be built like Fort Knox. However, overall, 90% of my installs are minimalist with common best practices applied.

      • schizo@forum.uncomfortable.business · ↑6 ↓1 · 4 months ago

        IMO a homelab for learning and a server that you’re self-hosting services on really aren’t the same thing and maybe shouldn’t be treated that way, if you can swing it.

        I’d rather my password manager or jellyfin or my peertube instance or whatever not be relying on a tech stack I don’t entirely understand and might not be able to easily fix if it breaks.

        I guess a lot of it is the new-to-this vs greybeard split, since the longer I’ve done sysadmin work, the less I care about the cool new thing and the more I prefer old, stable, documented, bugfixed, supported software with a clear roadmap.

        I should probably get a job doing sysadmin work for a bank, lmao.

        • Last@reddthat.com · ↑3 · edited · 4 months ago

          If they’re a beginner, what better way is there to learn? My home lab and their Windows laptop running VirtualBox are two different things. The topic of security is too deep to cover now, but if they don’t open it up to the world, there shouldn’t be much risk. Local access only should be safe enough, and they might try a dozen different services before settling on one—or none at all.

          Edit: Sysadmin is boring; I need to create. DevOps or some other automation role would be perfect, IMO.

          • schizo@forum.uncomfortable.business · ↑3 ↓1 · 4 months ago

            This is going to be a bit of my grumpy-greybeard side showing, but again: if you’re learning, then something like Docker and docker-compose is much simpler and less prone to fuckups than a full K8s setup.

            If you don’t know ANYTHING about what you’re doing, starting with the simplest tools and then deciding if you want to learn the more complicated ones is probably a less insane path than jumping right into the configuration-as-code DevOps pipeline.

            And, at that point, you should have your “production” and “testing” environments set up in such a way they won’t eat each other when you do an oops.

            • Last@reddthat.com · ↑1 · edited · 4 months ago

              Oh ok, we’re talking about two very different things then. That’s a very strong opinion for a simple question. I understand what you mean a little better now. Docker is better, but Windows had some weirdness going on with Docker Desktop the last time I tried using it. WSL + Docker might be even better, to avoid the VM stuff altogether.

    • ChapulinColorado@lemmy.world · ↑4 · 4 months ago

      I have made that migration myself, going from a Raspberry Pi 4 to an N100-based NAS. It was 10 minutes for the software stack, as you said. That’s not counting the media migration, which was done in the background over a few hours on WiFi (I had everything on an external hard drive at the time).

      That last part is the only thing I would change about my self-hosting solution. Yes, the NAS has a nice form factor, is power efficient, and has so far been very good for my needs (no lag like the RPi 4). However, I have seen that they don’t really sell motherboards or parts to repair them; they want you to replace it with another one. Reason 2 is vendor lock-in: depending on the options you select when creating the storage groups/pools (whatever they are called), you could be stuck needing something from the same vendor to read your data if the device stops working but the disks are salvageable. Reason 3 is that they’ve had security incidents, so there are a lot of “features” I would never recommend using, to avoid exposing your data to ransomware over the internet. I don’t trust their competitors either; I know how commercial software is made, with the smallest amount of care for security best practices.

      • schizo@forum.uncomfortable.business · ↑3 · 4 months ago

        Yeah, I just use plain boring desktop hardware. (Oh no! I’m experiencing data corruption due to the lack of ECC!) It’s cheap, it’s available, it’s trivial to upgrade and expand, and there are very few gotchas in there: you get pretty much exactly what it looks like you get.

        Also nice is that you can have a Ship of Theseus NAS by upgrading what needs upgrading as you go along, and you aren’t tied into entire platform swaps unless it makes sense - my last big rebuild was 3 years ago, but this is basically a 10-year-old NAS at this point.

      • schizo@forum.uncomfortable.business · ↑2 · 4 months ago

        elaborate

        It’s a really simple script.

        Everything is deployed with a docker compose file, all the docker volume data are bind mounts, and, for example, a Jellyfin install would have everything in /stacks/jellyfin.

        The backup script makes a tarball of each service individually (and stops the stack if there’s anything in there doing database things, or anything else that might end up inconsistent if you just archive the filesystem), and uploads them to an S3 storage provider AND burns them to a Blu-ray.

        The recovery script does the opposite: it downloads and unarchives the data.

        As long as you’re on Linux and have Docker, it should just magically work.
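
        For anyone curious, a minimal sketch of that kind of backup script (the service names, bucket, and the “stop these before archiving” list are assumptions, and the Blu-ray step is left out):

        ```bash
        #!/usr/bin/env bash
        # Sketch: one tarball per service under /stacks, pushed to object storage.
        set -euo pipefail

        STACKS_DIR=/stacks
        BUCKET="s3://example-backup-bucket/$(date +%F)"
        NEEDS_STOP="jellyfin nextcloud"   # anything doing database things gets stopped first

        for dir in "$STACKS_DIR"/*/; do
            name=$(basename "$dir")

            if echo "$NEEDS_STOP" | grep -qw "$name"; then
                (cd "$dir" && docker compose stop)   # quiesce anything that could be mid-write
            fi

            # One tarball per service, relative to /stacks so restores land in the same place.
            tar -czf "/tmp/${name}.tar.gz" -C "$STACKS_DIR" "$name"

            if echo "$NEEDS_STOP" | grep -qw "$name"; then
                (cd "$dir" && docker compose start)
            fi

            # Offsite copy (AWS CLI as a stand-in; any S3-compatible client works).
            aws s3 cp "/tmp/${name}.tar.gz" "$BUCKET/${name}.tar.gz"
        done
        ```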

          • schizo@forum.uncomfortable.business · ↑2 · 4 months ago

            If you write the script yourself, just make sure you test it a couple of times, and preferably with different datasets from different runs.

            I found some edge-case stuff that would have prevented a restore even after I had tested it successfully a couple of times (some permission issues due to changes in containers and whatnot were resulting in less data being archived and restored than expected).

    • Eximius@lemmy.world · ↑3 ↓2 · 4 months ago

      btrfs with its send/receive (incremental fs-level backups) is already stable enough for most everything (it just has some issues with RAID 5/6), and it is much more performant than ZFS. It is also in the Linux kernel tree (hugely useful). That is, of course, if more ZFS-like functionality is what you’re looking for.
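
      For anyone who hasn’t seen it, the send/receive workflow looks roughly like this (run as root; the paths and snapshot names are made up):

      ```bash
      mkdir -p /mnt/data/.snaps

      # Day 1: take a read-only snapshot and send the whole thing to the backup filesystem.
      btrfs subvolume snapshot -r /mnt/data /mnt/data/.snaps/data-1
      btrfs send /mnt/data/.snaps/data-1 | btrfs receive /mnt/backup/snaps

      # Day 2: snapshot again and send only the delta against the previous snapshot.
      btrfs subvolume snapshot -r /mnt/data /mnt/data/.snaps/data-2
      btrfs send -p /mnt/data/.snaps/data-1 /mnt/data/.snaps/data-2 | btrfs receive /mnt/backup/snaps
      ```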

      • blackstrat@lemmy.fwgx.uk · ↑7 ↓2 · 4 months ago

        “Already stable enough”

        1. No, it isn’t.
        2. It fucking should be; it’s been around 15 years!
        • spechter@lemmy.ml · ↑3 · edited · 4 months ago

          My only experience with btrfs was when trying out openSUSE Tumbleweed. Within a couple of days my home partition was busted; the next time, it was another partition. No idea if the problems could be fixed, as these were fairly new installations to give openSUSE a try, and I couldn’t be bothered to fix a system that was giving me trouble from the very beginning.

          Of all the options that just work™, btrfs is the one I’ve learned to stay away from.

          EDIT: that was four or five years ago

        • thomasloven@lemmy.world · ↑1 ↓1 · edited · 4 months ago

          And I’ve been using it for six of those 15 years in RAID 5/6 with zero issues, so YMMV I guess. Sorry you experienced problems.

      • schizo@forum.uncomfortable.business · ↑4 · 4 months ago

        Honestly it’s not; BTRFS has been in my ‘that’s neat, but it’s still got a non-zero chance of deciding to light everything on fire because it’s bored’ list for, uh, a decade now?

        The NAS build is old enough to more or less predate BTRFS being usable (closing in on a decade since I did the initial OS install, jeez), and none of the features matter for what I’m storing: if every drive in my NAS died today, I’d be very annoyed for a couple of hours during the rebuild, and would lose terabytes of linux ISOs that I can just download again, if I wanted to use Jellyfin to install them a 2nd time. (Any data I care about is pulled offsite at least once a day, so I’ve got pretty comprehensive backups minus the ISOs.)

        I know EXT4 and mergerfs and snapraid are not cool and don’t have shiny features, but I’ve also had zero problems with them over the last decade, even across Ubuntu upgrades (16.04, 18.04, 20.04, 22.04), hardware platform upgrades (6600k, 8700k, 10950k), the entire replacement of all the system drives (hdd -> ssd -> nvme), and the expansion and replacement of dead HDDs of varying sizes (4tb drives to 8tb drives to 16tb drives to some 20tb drives).

        It all just… worked, and at no point was I concerned about the filesystem not working if I replaced or upgraded or changed something, which is not something ZFS or BTRFS would have guaranteed during that same time window.
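
        For reference, that kind of mergerfs + snapraid setup is only a couple of moving parts. A rough sketch (the drive paths, options, and config are illustrative, not my exact config):

        ```bash
        # Pool a few data disks into one mount point (this can also live in /etc/fstab).
        mergerfs -o defaults,allow_other,use_ino,category.create=mfs \
            /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/storage

        # /etc/snapraid.conf (sketch): parity on a dedicated disk, each data disk listed.
        #   parity /mnt/parity1/snapraid.parity
        #   content /mnt/disk1/snapraid.content
        #   data d1 /mnt/disk1
        #   data d2 /mnt/disk2
        #   data d3 /mnt/disk3

        # Then run these periodically (cron or a systemd timer):
        snapraid sync          # update parity after files change
        snapraid scrub -p 10   # verify a slice of the array against parity each run
        ```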

      • lemmyvore@feddit.nl · ↑2 · 4 months ago

        IMHO 99% of the time btrfs features are used as a band-aid for things that would be much better done otherwise, generally by using a stable distro and a decent backup solution (like Debian + Borg). And you get to use a truly stable, proven, boring fs like ext4 or xfs.
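
        As a sketch of what that looks like with Borg (the repo location and backed-up paths are placeholders):

        ```bash
        # One-time: create the repository on the offsite/backup host.
        borg init --encryption=repokey ssh://backup@offsite.example/./repo

        # Nightly (cron or a systemd timer): archive the stuff you care about.
        borg create --stats --compression zstd \
            ssh://backup@offsite.example/./repo::'{hostname}-{now}' \
            /etc /home /srv

        # Thin out old archives so the repo doesn't grow forever.
        borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
            ssh://backup@offsite.example/./repo
        ```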

        • Eximius@lemmy.world · ↑4 · 4 months ago

          Stable, yes, but with no protection from bitrot, and the journal of ext4 is the band-aid, compared to a CoW fs like ZFS or btrfs.
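
          To put that concretely: on ext4 you have to bolt bitrot detection on yourself, with something like a checksum manifest (paths are illustrative):

          ```bash
          # Record checksums once (or after each sync)...
          find /mnt/storage -type f -print0 | xargs -0 sha256sum > ~/storage.sha256

          # ...and re-verify later; any mismatch is silent corruption or an untracked change.
          sha256sum --check --quiet ~/storage.sha256
          ```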

          • lemmyvore@feddit.nl · ↑1 · 4 months ago

            You can protect important data with backups, which you should do anyway, and in practice I feel like the added complexity of BTRFS and ZFS is not worth the COW.

            BTRFS is cool, but they tried to cram way too much into it too fast; it added a ton of complexity, and it’s still not 100% done after all these years. A COW mode for ext4 would have been adopted much faster.