

Or it’s a .ml instance lol


Right. This is why you CAN do video, audio, messaging, 3D object data, etc. using the same backend servers. It does require some complexity on the dev side too because of that.


A Discord “server” is a community space running on Discord’s servers. There are forum areas managed by the community owner, including permanent voice/video channels. DMs and group chats are enabled through shared community involvement or direct friend requests, but are otherwise external to the community’s management. Moderation is handled on a per-community basis, barring server ToS violations.


An IPFS backend and some automated pinning system for PeerTube would go a long way for me


Are Chinese devs incapable of this, in your mind?


Maybe check https://www.softwareheritage.org/ if you remember the names?


In 2026 I will continue to use the Linux desktop. That’s my current prediction and I’m sticking to it
https://matrix.org/ecosystem/bridges/
Services that link Matrix servers to other chat services.
My LUG uses Discord :/ but we are working on a Matrix bridge.
Bridges IMHO seem like the way to go.


I was using Odoo at home for a while


Where I live there are already a few coop/consignment stores with an online storefront. I wonder how hard a warehouse/fulfillment coop would be.


Sweet! Great place showing off the projects and companies that are truly working towards respects-your-freedom computing!
I would add Oxide to the server list; they are definitely in this space


Was macOS at work, now a Linux dev machine. It’s a big upgrade.
To be honest, all of those are web apps now, shrug. Zoom, Slack, Teams, Docs, Sheets, <insert word-named app here>, all open in the browser, so IDC what the OS is for them. Linux zero-touch deployments are still a work in progress IMHO, so I get why they aren’t here yet for a lot of offices, but we are closer now than ever (thanks, atomic OSs!).


Definitely overkill lol. But I like it. Haven’t found a more complete solution that doesn’t feel like a comp sci dissertation yet.
The goal is pretty simple: make as much of it as possible Helm values, k8s manifests, Tofu, Ansible, or cloud-init, in that order of preference, because as you go up the stack you get more state management for “free”. Stick that in git, and test and deploy from that source as much as possible. Everything else is just about getting there as fast as possible, and keeping the 3-2-1 rule alive and well for it all (3 copies, 2 different media, 1 off-site).
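To make that concrete, the repo ends up shaped something like this (directory names here are just for illustration, not my actual layout):

```
homelab/
├── helm-values/   # per-chart values files, highest preference
├── manifests/     # raw k8s YAML for anything without a chart
├── tofu/          # OpenTofu for what can't live in the cluster
├── ansible/       # playbooks for hosts that can't be declared any other way
└── cloud-init/    # first-boot configs, last resort
```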


Fleet from Rancher to deploy everything to k8s. Bare-metal management with Tinkerbell and Metal3 to manage my OS deployments to bare metal from k8s. Harvester is the OS/k8s platform, and all of its configs can be delivered at install time or as cloud-init k8s objects. Ansible for the switches (as KubeOVN gets better in Harvester, the separate dedicated hardware might be removed); I’m not brave enough to plan that crossover yet. For backups I use Velero and shoot that into the cloud encrypted, plus some nodes that I leave offline most of the time except to do backups and update them. I use Hauler manifests and a kube CronJob to grab images, Helm charts, RPMs, and ISOs into a local store. I use SOPS to store the secrets I need to bootstrap in git. OpenTofu for application configs that are painful in Helm. Ansible for everything else.
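For a rough idea of what the Fleet and Velero bits look like (names, namespaces, and URLs below are placeholders, not my actual configs):

```yaml
# Fleet GitRepo: Rancher/Fleet watches this repo and applies everything under ./manifests
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: homelab
  namespace: fleet-default
spec:
  repo: https://git.example.com/homelab.git   # placeholder repo URL
  branch: main
  paths:
    - manifests
  targets:
    - clusterSelector: {}                     # all clusters registered with Fleet
---
# Velero Schedule: nightly backup shipped off to encrypted object storage
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly
  namespace: velero
spec:
  schedule: "0 3 * * *"
  template:
    includedNamespaces:
      - "*"
    ttl: 720h
```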
For total rebuilds I take all of that config and load it into a cloud-init script that I stick on a Rocky or SLES ISO which, assuming the network is up enough to configure, rebuilds everything from scratch; then I have a manual step to restore lost data.
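The cloud-init that goes on the ISO is basically just “pull the repo and run the bootstrap”, roughly this shape (the URL and script name are placeholders):

```yaml
#cloud-config
# First-boot config baked into the Rocky/SLES rebuild ISO
packages:
  - git
runcmd:
  # assumes the network came up; pull the infra repo and kick off the rebuild
  - git clone https://git.example.com/homelab.git /opt/homelab   # placeholder URL
  - /opt/homelab/bootstrap.sh                                    # placeholder entry point
```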
That covers everything infra-related but the physical layout in a git repo. Just got a PiKVM v4 on order along with a PiKVM switch, so hopefully I can get more of the junk onto Metal3 for proper power control too, and fewer iPXE shenanigans.
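Getting a box under Metal3 power control mostly means giving it a BareMetalHost with a BMC it can talk to, something like this (the address format will depend on what the PiKVM ends up exposing; everything here is a placeholder):

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-01
  namespace: metal3-system
spec:
  online: true                          # Metal3 powers the machine on/off to match this field
  bootMACAddress: "aa:bb:cc:dd:ee:ff"   # placeholder MAC for PXE/provisioning
  bmc:
    address: redfish://10.0.0.50/redfish/v1/Systems/1   # placeholder BMC endpoint
    credentialsName: node-01-bmc-secret                 # Secret holding the BMC username/password
```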
Next steps for me are CI/CD pipelines for deploying a mock version of the lab into Harvester as VMs, running integration tests, and, if they pass, merging the staged branch into prod. I do a little of that manually already but would really like to automate it. Once I do that, I’ll start running Renovate to grab the latest stable versions of stuff for me.
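The pipeline I have in mind is roughly this shape (stage names and scripts are hypothetical, not something I run today):

```yaml
# .gitlab-ci.yml sketch for the staged branch: stand up the mock lab, test it, gate the merge
stages:
  - deploy-mock
  - test

deploy-mock-lab:
  stage: deploy-mock
  script:
    - kubectl apply -f mock-lab/            # placeholder: VM definitions applied into Harvester
  rules:
    - if: '$CI_COMMIT_BRANCH == "staged"'

integration-tests:
  stage: test
  script:
    - ./tests/run-integration.sh            # placeholder integration test entry point
  rules:
    - if: '$CI_COMMIT_BRANCH == "staged"'
```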


GitLab CI is my go-to for git-repo-based things (unit tests, integration tests, etc.). Fleet through Rancher for real deployments (it manages and maintains state, because Kubernetes). Tekton is my in-between catch-all.
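A Tekton Task for that catch-all role stays pretty small; a minimal sketch (the image and the step body are just placeholders):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: integration-check
spec:
  steps:
    - name: run-checks
      image: alpine:3.19        # placeholder image
      script: |
        #!/bin/sh
        echo "placeholder for the actual checks"
```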


https://github.com/agabani/tor-operator I’ve kept wanting to add something like this to a cluster and host those services behind a Tor proxy
Or it’s a .ml instance, and moderation of certain topics is very active lol