I’ve been kicking around the idea of running a server for games and chat with some of my friends, but I worry about everyone getting cut off when there’s a disruption.

I’ve started looking into Kubernetes out of curiosity, and it seems like we could potentially set up a cluster with master nodes at 3+ locations to host whatever game server or chat server we want with 100% uptime, solving my concerns.

Am I misunderstanding the Kubernetes documentation, and this is just a terrible idea? Or am I on the right track?

  • NowThatsWhatICallDadRock@slrpnk.net
    1 day ago

That would be a load balancer, but it is not integral to how Kubernetes works. I wouldn’t consider Kubernetes unless you have a need for autoscaling; it’s a lot of overhead for such a limited use case.

You can front any three un-clustered nodes with a load balancer to the same effect.
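As a rough sketch of what that looks like in practice (hostnames and ports here are made-up placeholders, not anything from the thread), an nginx config fronting three independent nodes might be:

```nginx
# Hypothetical example: nginx round-robin load balancing across
# three un-clustered nodes. Hostnames and ports are placeholders.
http {
    upstream chat_backends {
        server node1.example.net:8080;
        server node2.example.net:8080;
        server node3.example.net:8080;
    }

    server {
        listen 80;
        location / {
            # Each incoming request is handed to one of the three
            # backends in turn (round-robin is nginx's default).
            proxy_pass http://chat_backends;
        }
    }
}
```

Your friends would then connect to the machine running nginx rather than to any individual node.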

    • mnemonicmonkeys@sh.itjust.worksOP
      4 hours ago

You can front any three un-clustered nodes with a load balancer to the same effect.

Good to hear. Are there specific examples you could point me to? I’d like to learn more.

      • NowThatsWhatICallDadRock@slrpnk.net
        3 hours ago

        https://www.cloudflare.com/learning/performance/what-is-load-balancing/

        https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/

I would start here. Most off-the-shelf proxies can do it. Once it’s set up, your friends will just connect to the load balancer via IP or DNS hostname. For anything behind a residential connection, I would recommend either tunneling out or setting up DDNS (dynamic DNS), since residential IPs can change every few days. Take a look at load-balancing strategies as well.

For the game server you’ll probably want failover instead, which most proxies can also provide, because a load balancer could route players to different instances. I would set up save syncs between the three nodes so that if your primary instance becomes unhealthy, you can simply reconnect to the same address and the proxy will route you to the secondary node. This obviously requires health checks. When the primary node becomes healthy again, new connections will be initiated there.
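A hedged sketch of that failover setup using nginx’s stream module with passive health checks (addresses and the port are invented placeholders; note that `backup` isn’t available with the hash balancing methods):

```nginx
# Hypothetical TCP failover for a game port. The primary node gets
# all traffic; after max_fails failed connection attempts within
# fail_timeout, nginx sends new connections to the backup instead.
stream {
    upstream game_server {
        server primary.example.net:25565 max_fails=2 fail_timeout=30s;
        server secondary.example.net:25565 backup;
    }

    server {
        listen 25565;
        proxy_pass game_server;
    }
}
```

Active health checks (probing the backend on a timer rather than waiting for client connections to fail) are an NGINX Plus feature; the open-source build only marks a server unhealthy after real connections fail.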

Both of these add latency, though, because you are adding a network hop. You could also look into DNS failover (pointing directly at each node) to avoid this.
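For the DNS-failover route, one hedged sketch (the name and IPs below are invented; 203.0.113.x and 198.51.100.x are documentation ranges): publish a low-TTL A record and have either a DNS provider with health-checked failover, or your own monitoring script, rewrite the record when the primary dies.

```
; Hypothetical zone fragment: a low TTL (60s) so clients pick up
; a change quickly after the record is flipped to another node.
game.example.net.  60  IN  A  203.0.113.10   ; primary node
; on failure, the record is updated to point at a secondary, e.g.:
; game.example.net.  60  IN  A  198.51.100.20
```

The trade-off is that clients connect directly to a node (no extra hop), but failover is only as fast as the TTL plus whatever delay your health check or provider has before updating the record.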