Raspberry Pi 4 Docker:- gluetun (qBittorrent, Prowlarr, FlareSolverr), Tailscale (Jellyfin, Jellyseerr, Mealie), Radarr/Readarr/Sonarr, Pi-hole, Unbound, Portainer, Watchtower.

Raspberry Pi 3 Docker:- Pi-hole, Unbound, Portainer.
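
The gluetun and tailscale groupings above mean those containers share the gluetun or tailscale container's network namespace. A minimal compose sketch of the gluetun pattern (the provider, port, and images here are assumptions, not my exact stack):

    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          - VPN_SERVICE_PROVIDER=custom   # assumption: set to your actual VPN provider
        ports:
          - 8080:8080                     # qBittorrent's web UI is published on gluetun
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent
        network_mode: "service:gluetun"   # all qBittorrent traffic goes through the VPN
        depends_on:
          - gluetun

The port is published on the gluetun service because qBittorrent has no network stack of its own in this mode.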

  • 2 Posts
  • 57 Comments
Joined 1 year ago
Cake day: June 26th, 2023


  • My initial plan for this box was to have it request a static IP so I knew “box.ip”. Then tape something like this:

    Box.ip Service1:port Service2:port …

    Onto the case. Then, in NPM, have it proxy requests for “box.ip:8096” to “tailscale.ip:8096”. But alas, I couldn’t figure it out. I could get one service to work, but not multiple.

    I couldn’t ask someone to write the config for me, but if you’re certain it’s doable then I’ll learn to write one myself. Thank you for the offer. I’m guessing that, for each service, I tell nginx to “listen” on that service’s port instead of only listening on ports 80, 443, and 81 (roughly like the sketch at the end of this comment).

    mDNS seems like an interesting solution, though; I’m going to read about that now, actually. Thank you for highlighting it. If I could get that working, that would be ideal. I’ll have to check whether the expected devices are compatible, but it would make everyone’s life easier if I could just set up a cron job on startup.
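
    For reference, my “listen on each port” guess as a raw nginx config (the tailscale IP and ports are placeholders, and I haven’t verified NPM can generate this):

        # One server block per service, each listening on that service's port
        # instead of only 80/443/81. 100.64.0.1 stands in for the tailscale IP.
        server {
            listen 8096;                            # Jellyfin
            location / {
                proxy_pass http://100.64.0.1:8096;
                proxy_set_header Host $host;
            }
        }
        server {
            listen 5055;                            # Jellyseerr
            location / {
                proxy_pass http://100.64.0.1:5055;
            }
        }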


  • Thank you for the reading material; it’ll be tonight’s project. I think I’m just going to tell people that if they want to join the family Immich/Mealie/etc. they’ll have to let me into their router. They’ll get memorable addresses out of it, and ad blocking too. I’m pretty sure that setup is comfortably within my skill set. I thought long and hard about opening ports, but the security needed is beyond me currently. The downside is cost, and that I’ll be managing a bunch of boxes. But I can fold updating them into the monthly maintenance, and if/when they come back they can be repurposed into other projects.


    I tried /locations, but my service would rewrite the URL and break itself: I’d navigate to “box.ip/immich” and Immich would change the address to “box.ip/login” and hang.

    I’d need to learn how to have NPM keep the “box.ip/immich” prefix and let Immich append “/login”. I’ll leave my test VM up and just chip away at it. I think I need the “rewrite” flag (my current understanding is sketched below), but I’m getting dangerously close to just learning how to write an nginx config instead of having NPM do it for me.

    Thanks again for the pointers
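
    In case it helps anyone later, the sub-path attempt as a raw nginx sketch (the port and paths are assumptions, and I haven’t confirmed Immich supports sub-path serving at all):

        location /immich/ {
            proxy_pass http://127.0.0.1:2283/;   # trailing slash strips the /immich prefix
            proxy_set_header Host $host;
            proxy_redirect / /immich/;           # rewrites redirect headers like /login to /immich/login
        }

    The catch, as I understand it, is that proxy_redirect only fixes redirect headers, not the links the app generates inside its pages, which is why sub-domains are usually the recommended route.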




  • Oh, routing. I remember watching an “off-site backup” video where they set up iptables, or IP forwarding, or some such, so that when their parents tried to access Jellyfin locally, the traffic was routed over Tailscale. Maybe I’m misremembering, though; I’m not confident enough to start thinking about it seriously, so I logged it as “that’s possible” and moved on (my best guess at what it was is sketched at the end of this comment).

    That way I’d only have to keep one instance of Jellyfin/Immich/etc. up to date. It’s all a bit beyond my ken currently, but it’s the direction I’m trying to head. At least until I learn a better way.

    Ideally, I give someone a Pi all set up. They plug it in, go to service.domain.xyz, and it routes to me. Or even IP:port would be fine; I’ll write them down and stick them to their fridge.

    My parents and I run each other’s off-site backups (Tailscale + Syncthing), but their photo and media services are independent from mine. I just back up their important data and they return the favour; we can’t access or share anything.

    Guides like yours are great for showing what’s possible. I often find myself not knowing what I don’t know, so I don’t really know where to start learning what I need to learn.
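
    If I’ve pieced it together right, the video was probably using Tailscale’s subnet-router feature. A sketch of what I think it was (the LAN range is a placeholder, and I haven’t tried this):

        # On the home server side: advertise the home LAN into the tailnet
        sudo tailscale up --advertise-routes=192.168.1.0/24
        # (then approve the route in the Tailscale admin console)

        # On the remote device: accept advertised routes
        sudo tailscale up --accept-routes

    After that, the remote device should be able to reach 192.168.1.x addresses as if it were on the home LAN.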


  • What a write-up; thank you for documenting this.

    I understand a lot of people in this hobby do it professionally too, so a lot is assumed to be common knowledge that us outsiders just don’t have.

    While my system of using Tailscale’s MagicDNS to reach lxc:port works fine for my fiancée and me, expanding this into a family-wide system would prove challenging.

    So this guide is the next step. I could send my fiancée to <home.domain.xyz> and it’ll take her to Homarr, or to <jellyseerr.domain.xyz>.

    The ultimate dream would be to give family members a Pi Zero and a <home.domain.xyz>, and then run a family Jellyfin/Immich.



  • As a beginner in self-hosting, I like plugging the random commands I find online into an LLM. I ask it what the command does, what I’m trying to achieve, and whether it would work…

    It acts like a mentor. I don’t trust what it says entirely, so I’m constantly sanity-checking it, but it gets me where I want to go with some back and forth. I’m doing some of the problem solving, so there’s that exercise, and it teaches me what commands do and how the flags alter them. It’s also there to stop me making really stupid mistakes that I would otherwise have learned about the hard way.

    My last project was adding an HDD to my zpool as a mirror. I found the “attach” command online with a bunch of flags, drafted what I thought was my solution, and asked ChatGPT. It corrected some things (I hadn’t included the name of my zpool), then gave me a procedure to do it properly.

    In that procedure I noticed an inconsistency between how I was naming drives and how my zpool was naming them. I asked ChatGPT again and was essentially told I was a dumbass: if that’s the pool’s naming convention, I should probably use it instead of mine (I was using /dev/sdc and the zpool was using /dev/disk/by-id/). It explained why the zpool might have been configured that way, which was a teaching moment: I’m using USB drives, and the zpool wants to protect itself if the setup gets shuffled around. I corrected the names and rewrote the command (well, really ChatGPT was updating the command as we went)… Boom, I have mirrored my drives. I’ve made all my stupid mistakes in private and away from production; life is good. (The rough shape of the final command is below.)
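
    For anyone finding this later, the general shape of it (the pool and disk names here are made up):

        # Check the current layout and note the existing disk's by-id name
        zpool status tank

        # zpool attach <pool> <existing-device> <new-device>
        sudo zpool attach tank \
            /dev/disk/by-id/usb-Old_Disk-0:0 \
            /dev/disk/by-id/usb-New_Disk-0:0

        # Watch the resilver run
        zpool status tank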


  • A good general suggestion. The WAF criteria I follow are “reasonable” expense, reasonable form factor, and a physical investment. I floated the idea of a VPS, and that’s when I learned of the third criterion. It is what it is.

    I just started on this 8 TB HDD, so it isn’t very full right now; I could raise the ratio limits. But I worry about filling the HDD, and part of me worries about hundreds of torrents on an N100 that’s doing other things too. So I’m keeping the habit from my Pi 4 + 1 TB days of deleting media once we’re done with it and keeping the torrent count low.

    I justify it as self-managing, though: popular ISOs are on and then off my hard drive fairly quickly, but the ones that need me will sit and wait until they hit the ratio of 3, however long that takes. I would like to do “3 + (get that last seeder to 100%)”, but I don’t know how, or whether, it’s possible to automate that through Prowlarr.







  • I guessed it was a “once bitten, twice shy” kind of thing. This is all a hobby to me, so the cost-benefit calculus, I think, is vastly different: nothing on my setup is critical. Keeping all those records, staying up to date on what version everything is on, when updates are available, what those updates do, and so on… sounds like a whole lot of effort when my efforts are currently better spent in other areas.

    In my arrogance I just installed Watchtower and accepted that it can all come crashing down. When that happens, I’ll probably realise it’s not so much effort after all.

    That said, I’m currently learning, so if something is going to break my stuff, it’s probably going to be me and not an update. Not to discredit your comment; it was informative and useful.


  • Fedegenerate@lemmynsfw.com to Selfhosted@lemmy.world · What’s the deal with Docker?

    When I asked this question, this was the answer I got:

    So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.

    • The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
    • Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and its tag (never use latest). You will never ask yourself again, “What did I need to do to install this? Run some random install.sh script off a GitHub URL?”
    • Networking with Docker is a bit hit and miss, but the big thing is that you can have software running on any port inside the container and expose it on a different port on the host. E.g. two apps both run on port 8080 natively, so one of them will fail to start because the port is taken. In containers you can keep them running on their preferred ports, but expose one on 18080 and the other on 19080 (see the sketch after this list).
    • You keep your host simple and empty of installed software and packages. This is less of a problem with apps that come packaged as native executables, but some languages require you to install a runtime on the host to start the app. Think .NET or Java, but there is also Python, which requires you to install it and keep versions compatible (there are virtual environments for that, but I’m going into too much detail already).
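
    A sketch of that port-remapping point (the image names are made up):

        # Both apps listen on 8080 inside their containers; the host maps
        # them to different external ports so they don't collide.
        docker run -d --name app1 -p 18080:8080 example/app1
        docker run -d --name app2 -p 19080:8080 example/app2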

    I am also new to self-hosting (check my bio and post history for a giggle at how new I am), but I have taken advantage of all these points. I do use “latest”, though; looking forward to seeing how that burns me later on.

    But to add one more:- my system is robust, in that I can really break my containers (and I do), and recovery is a couple of clicks in Portainer. Then I can try again, no harm done.