  • When you don’t even know where to begin:

    • Arch Wiki search
    • StackExchange search
    • Sometimes other distro wikis like Gentoo and Debian have some pointers
    • Forum search (reddit)

    That should give you one or more possible solutions involving commands. Don’t just run them. If they involve new packages you need to install, you can check some basic package metadata, like the website URL, either via your distro’s web interface or the package manager itself:

    pacman -Si packagename      # Arch: show metadata before installing  
    apt-cache show packagename  # Debian/Ubuntu equivalent  
    

    Once installed, hopefully a man page shows up for the man command. If not, that or some other reference docs should be available on the web. Many, but not all, commands will give you some usage explanation when passed --help. Any flags/parameters you found in solutions should be explained there. Try to understand the solution/example you were given and what you should expect it to do. Maybe you want to change, add, or remove some arguments for your scenario.
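
    For example (rsync here is just a stand-in for whatever command your solution involves):

    man rsync     # full reference documentation, if installed  
    rsync --help  # quick usage summary and flag descriptions  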

    If any files are mentioned, you can open and read them in a text editor. If the command is expected to change anything, or you need to edit config files, you can back those up before you go to town.
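
    Something as simple as a copy works (the path is just an illustration):

    sudo cp /etc/example.conf /etc/example.conf.bak  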


    A CA can be an encrypted volume on a live USB stick. It’s mostly for the CRLs that you might want something online, and a static HTTP server where you manually dump revocations is enough for that.

    Unless you do TOFU (which some do, and btw, how often do you actually verify the github.com ssh fingerprint when connecting from a new host?), you need to add the trust root in some way, just as with any other method discussed. But that’s no more work than doing the same with individual host keys.
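
    For SSH, a minimal sketch of what that looks like (domain, file names, and key are placeholders):

    # On each client, trust the CA for matching hosts (~/.ssh/known_hosts):  
    @cert-authority *.example.com ssh-ed25519 AAAAC3Nza... host-ca  

    # On the (offline) CA, sign a host key:  
    ssh-keygen -s host_ca -I web01 -h -n web01.example.com ssh_host_ed25519_key.pub  

    # Generate/update the KRL you dump on that static HTTP server:  
    ssh-keygen -k -f revoked_keys compromised_key.pub  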

    And what’s the alternative? Are you saying it’s less painful to log in and manually change passwords for every single server/service when you need to rotate?


  • kumi@feddit.online to Selfhosted@lemmy.world · Anyone using Revolt?

    The website and marketing!
    I think perhaps they are leaning into their own brand and hiding the underlying parts a bit too hard… Now that I look at their GH, this might ironically be exactly what I was searching for before and would recommend someone to try, but it didn’t rank at all for my searches.

    Thanks for setting the record straight. I will have to look closer at Movim again.


  • kumi@feddit.online to Selfhosted@lemmy.world · Anyone using Revolt?

    Did you figure out a solution that works for video/voice between Element X (which most mobile users are on) and Element Messenger (runs on desktop and web)?

    I got the impression that they moved to a different protocol with EX, and nobody implemented the same for the non-mobile clients, so iPhone users and Linux users can’t VC with each other. But I could be misinformed.


  • I’m guilty of a few of these and sorry not sorry but this is not changing.

    Often these are written with local dev and testing in mind, and the expectation is that self-hosters will look through them and probably customize them - and in any case be responsible for their own firewalls and proxies - before deploying them to a public-facing server. Larger deployments sometimes have internal load balancers on separate machines, so even when reflecting a production deployment, exposing on 0.0.0.0 or running with network=host might be normal.
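
    As a sketch of the kind of customization I mean (image and ports made up):

    # Dev-friendly default: publishes on 0.0.0.0, and Docker may punch through host firewall rules  
    docker run -d -p 8080:80 example/app  

    # Before exposing the host: bind to loopback and let your reverse proxy handle the rest  
    docker run -d -p 127.0.0.1:8080:80 example/app  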

    Never just run third-party compose files for user services on a machine directly exposed to untrusted networks like the internet.


  • One related story: I did have the arguable pleasure to operate a stateful Websockets/HTTP2-heavy horizontally scaled “microservice” API with Rails and even more Ruby, as well as gRPC written in other stuff. Pinning of instances based on auth headers and sessions, weighting based on subpaths, stuff like that. It was originally deployed with Traefik. When it went from “beta” stage to having to handle heavier traffic consistently and reliably on the public internet, Traefik did not cut it anymore, and after a few rounds of evaluation we settled on HAProxy, which we never regretted, IIRC. My friend’s company had HAProxy in front of one of the country’s busiest online services at the time, a pipeline largely built in PHP. I have seen similar patterns play out at other times in other places.

    Outside of $work I’ve had them all running side by side or layered (should consolidate some but ain’t nobody got time for that) over 5+ years so I think I have a decent feel for their differences.

    I’m not saying HAProxy is perfect, always the best pick, has the most features, or is without tradeoffs. It does take a lot more upfront learning and tweaking to get what you need from it. But I can’t square your claims with lived experience, especially when you specifically contrast it with Traefik, which I would say is easy to get started with, has popular first-class support for containers, and is loved by small teams - but breaks at scale and when you hit more advanced use-cases.

    Not that any of the things either of us has mentioned so far is relevant whatsoever for a budding homelabber asking how to do domain-based HTTP routing.

    I think you are just baiting now.


  • Please don’t recommend UFW.

    One main problem with UFW, besides being based on legacy iptables (instead of modern nftables, which is easier to learn and manage), is the config format. Keeping track of your changes over time is hard, and even with tools like ansible it easily becomes a mess where things can fall out of sync with what you expect.

    Unless you need it for some legacy system or have a weird fetish for it, you don’t need to learn iptables today. On modern Linux systems, iptables isn’t a kernel module anymore but a CLI shim that actually talks to the nftables backend.
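
    You can check this yourself on most current distros:

    iptables --version  # prints something like “iptables v1.8.9 (nf_tables)”  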

    It is also full of footguns. Getting pwned because of misconfigured UFW is very common. For example, with default settings, Docker will bypass UFW completely for incoming traffic.

    I strongly recommend firewalld, or rawdogging nftables, instead of ufw.
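
    For reference, a minimal nftables sketch (default-drop inbound, allow ssh; adapt before relying on it):

    nft add table inet filter  
    nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'  
    nft add rule inet filter input ct state established,related accept  
    nft add rule inet filter input iif lo accept  
    nft add rule inet filter input tcp dport 22 accept  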

    There used to be limitations with firewalld, but policies maturing and replacing the deprecated “direct” rules, together with other general improvements, have made it a good default choice by now.


  • Firewalld

    sudo apt-get install firewalld  
    sudo systemctl enable --now firewalld  # ssh on port 22 stays open, but otherwise most things are blocked by default  
    sudo firewall-cmd --get-active-zones  
    sudo firewall-cmd --info-zone=public  
    sudo firewall-cmd --zone=public --add-port=1234/tcp  
    sudo firewall-cmd --runtime-to-permanent  # persist runtime changes  
    

    There are some decent guides online. Also take a look in /etc/firewalld/firewalld.conf and see if you want to change anything. Pay attention to the part about Docker.

    You need to know about zones, ports, and interfaces for the basics. Services are optional. Policies are more advanced.
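
    For example, assigning an interface to a zone (the interface name is made up):

    sudo firewall-cmd --zone=internal --change-interface=eth1  
    sudo firewall-cmd --runtime-to-permanent  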

    I suggest it for your laptop, too.


  • The right nginx config will do this. Since you already have Nginx Proxy Manager, you shouldn’t need to introduce another proxy in the middle just for this.

    Most beginners find Caddy a lot easier to learn and configure compared to Nginx, BTW.

    Another thing that I rarely see mentioned is that since the SNI (domain name) is unencrypted for https (unless ECH is used, which is still not common), you can proxy and route https requests based on domain without terminating TLS or involving http at all. sniproxy is a proxy for just that and is available in the Debian repos. If all you really need is passing requests through to downstream proxies or a service terminating TLS itself, it works nicely.
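
    You can watch the SNI cross the wire in cleartext yourself (interface and domain are examples):

    sudo tcpdump -i eth0 -A 'tcp port 443' | grep --line-buffered example.com  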

    https://github.com/ameshkov/sniproxy



  • You could self-host a shared “source of truth” git repo that you access over ssh or filesystem. That can be anything from a USB thumb drive, a small clean server or a container on your existing desktop with ssh access, to an entire Forgejo deployment. Then you only need the “secret zero” of an ssh key to get everything set up and syncable.
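
    The minimal version looks something like this (host and paths hypothetical):

    # On the “source of truth” (server, container, or mounted thumb drive):  
    git init --bare /srv/git/dotfiles.git  

    # On each machine:  
    git clone ssh://user@truth.example/srv/git/dotfiles.git ~/dotfiles  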

    If fresh setups are more common, you probably have other parts, like package installation and network configuration, that you also want to automate. Enter configuration management like ansible or salt, image builders like packer or archiso, or “immutable” solutions like Nix or rpm-ostree. Once you get there, you typically manage that in git anyway, and you could put your dotfiles repo in as a submodule and copy the files over as part of OS setup.
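
    Roughly (URL and path hypothetical):

    git submodule add ssh://user@truth.example/srv/git/dotfiles.git dotfiles  
    git submodule update --init  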

    If it’s just for once in a blue moon, manual ad-hoc copying gets you pretty far.

    No matter how you slice it, I think you have to either frequently spend time syncing changes or just accept the drift and divergence between machines and the sources.