

it was just a joke about the “one more dashboard” part :D it’s fine


soo… servers your router doesn’t like, for whatever reason, get blocked for everyone else? with gov ID checks? why would we want that?
and how is this a dashboard idea?


wonderful! now somebody needs to rewrite this in Rust and we are done!


A time limit after disasters would be necessary. It’s difficult to think of a proper time limit though, as even a month might not be enough time if your entire house burns down.
and also accounting for low bandwidth connections… what’s more, some shitty providers even have monthly data caps
Maybe a payment system could be set up to where, if your server doesn’t ping for a week, your credit card is automatically charged (after pinging you with many emails).
yeah, that would almost be a necessary feature: being able to hold on to the backup when you really can’t restore yet.


such a system would need a strict time limit for restoration after the catastrophe. Otherwise leeching would be too easy.


better would be something that can just eat a zfs send stream, but I guess for an emergency it’s fine. but I would still want to encrypt everything somehow.
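for the encryption part, one possible sketch: pipe the send stream through gpg on the way out, and through gpg --decrypt into zfs receive on the way back. dataset, host and path names here are all made up, and symmetric gpg is just one option.

```shell
# illustrative only — "tank/data" and "backup@remote" are made up
zfs snapshot tank/data@offsite
zfs send tank/data@offsite \
  | gpg --symmetric --cipher-algo AES256 \
  | ssh backup@remote 'cat > backups/data@offsite.zfs.gpg'

# restore: fetch, decrypt, and replay the stream
ssh backup@remote 'cat backups/data@offsite.zfs.gpg' \
  | gpg --decrypt \
  | zfs receive tank/restored
```

with natively encrypted datasets, `zfs send -w` (raw send) is another option: the stream stays encrypted with the dataset’s own key, so the remote side never sees plaintext and no extra tool is needed.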


the images are not loading for us, maybe it’s because of the maintenance mode?


but wouldn’t you also need to verify the matrix/signal contact? both of them give you the option to verify the other party, but it’s very rarely used. so you need either an already verified secure channel, or to meet on the street.
but then again we don’t actually know each other. so if we meet, how would you know it’s actually me, and not someone impersonating me?


yes, but these stats highlight those who only participate in or create heated debates: users who are consistently downvoted, but also users who give out lots of downvotes


a firewall can be used to filter incoming traffic by its properties. most consumer home routers don’t expose the firewall settings
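when you do get access (on a router that exposes it, or on the server itself), the rules typically look something like this nftables fragment — the ports and names are just examples:

```
# illustrative nftables ruleset
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;  # drop everything by default
    ct state established,related accept              # allow replies to our own traffic
    iif "lo" accept                                  # allow loopback
    tcp dport { 22, 443 } accept                     # allow ssh and https only
  }
}
```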


as I understand it, it does not hide negatively voted comments. these are stats for moderators, to help with moderation decisions. it’s not automatic.


they are visible, they do matter, just not as much as on reddit
the startup costs of federation are high, but that was a technical choice.
that says a lot.


oh! I don’t know how nix containers work, but I would look into creating a shared network between the containers that is separate from the normal network.


oh, I see what you mean!
they do that for the sake of providing an example that works instantly, but in the long term it’s not a good idea. if you intend to keep using a service, you are better off connecting it to a postgres db that’s shared across all services. once you get used to it, you’ll do that even for services you are just quickly trying out.
the way I do this: I have a separate docker compose file that runs a postgres and a mariadb, and these are attached to a docker network that is created once with a command, rather than in a compose file. in every compose file where the databases are needed, this network is specified as an “external” network. this way containers across separate compose files can communicate.
my advice is to also mark this network as “internal”, which is a weird name, but the gist is that this network by itself won’t provide access to your LAN or the internet, while other attached networks can still do that if you want.
basically the setup is a simple command like “docker network create something something”, and then like 3 lines in each compose file. you would also need to transfer the data from the separate postgreses to a central one, but that’s a one-time process.
let me know if you are interested, and I’ll help with commands and what you need. I don’t mind if you only get around to this months later, it’s fine! just reply or send a message
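a minimal sketch of that setup, with made-up network/service names (not the exact files I run):

```yaml
# run once, outside of any compose file:
#   docker network create --internal dbnet

# the databases compose file: postgres joins the shared network
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    networks: [dbnet]

networks:
  dbnet:
    external: true   # compose won't create or delete it

# every other compose file that needs the db repeats the same
# "networks:" lines, and its containers can then reach "postgres:5432"
```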


just to be clear, are you saying that most beginners just copy paste the example docker compose from the project documentation, and leave it that way?
I guess that’s understandable. we should have more starter resources that explain things like this. how would they know? not everyone goes in with the curiosity to look up how certain components are supposed to be run


almost every self hosted service needs a database. and what do you mean by “another” database? are you keeping a separate postgres for each service that wants one? one of the most important features of postgres is that a single database server can hold multiple databases, with permissions and whatnot
it’s interesting because even fecesbook has a setting for this
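the “multiple databases on one server” part, as a rough SQL sketch — the role and database names are made up:

```sql
-- run once against the single postgres server
CREATE ROLE nextcloud LOGIN PASSWORD 'change-me';
CREATE DATABASE nextcloud OWNER nextcloud;

CREATE ROLE gitea LOGIN PASSWORD 'change-me-too';
CREATE DATABASE gitea OWNER gitea;

-- each service connects to its own database with its own role,
-- and has no privileges on the other service's tables
```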


I think it depends. when you run many things for yourself and most services are idle most of the time, you need more RAM, and CPU performance is not that important. a slower CPU just makes the services slower, but RAM is a hard limit on what you can run. 8 GB is indeed a comfortable amount when you don’t need to run a desktop environment and a browser on it besides the services, but with things like Jellyfin and maybe even Immich, which hoard memory for caching, it’s not that comfortable anymore.
ok, but who is the target audience for that? I am interested now