• 0 Posts
  • 117 Comments
Joined 1 year ago
Cake day: June 20th, 2023



  • Dran@lemmy.world to Linux@lemmy.ml · Which distro?
    edited 2 months ago

    I run Ubuntu Server’s headless base install with a self-curated minimal set of GUI packages on top of it (X11, awesome, pulse, thunar), but there’s no reason you couldn’t install KDE with Wayland instead. Building the system up yourself gets you really far in the anti-bloatware department (rough sketch of the curation step at the end of this comment), and the breadth of wiki/Google/GPT knowledge around Debian/Ubuntu means you can figure out just about any issue. I do this on a random old ~$200 Dell from eBay plus a 3050 6GB (slot power only).

    For lighter gaming I’ll use the Ubuntu PC directly, but for anything heavier I have a Windows 11 PC in the basement whose only job is to pipe Steam over Sunshine/Moonlight.

    It is the best of both worlds.
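
    That curation step is basically just one careful apt call on top of the server install. A rough sketch (the package names are assumptions matching the stack above; check them against your release):

        # Rough sketch of curating a minimal GUI on top of Ubuntu Server.
        # Package names (xorg, awesome, pulseaudio, thunar) are assumptions;
        # verify them for your release before running.
        import subprocess

        packages = ["xorg", "awesome", "pulseaudio", "thunar"]
        subprocess.run(
            ["sudo", "apt-get", "install", "--no-install-recommends", *packages],
            check=True,  # --no-install-recommends keeps the anti-bloat spirit
        )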




  • Vyatta and Vyatta-based devices (EdgeRouter, etc.) are, I would say, good enough for the average consumer. If we’re deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs. Tailscale, I think we’re certainly past accepting a budget consumer router as acceptably meeting these and other needs.

    Also, you don’t need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place to switch WireGuard profiles based on the network SSID (rough sketch at the end of this comment). At home, all traffic is routed locally; outside of my network, everything goes through DDNS/port forwarding.

    If you’re really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself, using an external VPS as a relay. That way you don’t have to open anything directly, and internal traffic still routes when you don’t have an internet connection at home. It’s basically what Tailscale is, except in this case you control the keys and have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
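
    The SSID-based switching is nothing exotic. Here’s a rough laptop-side sketch of the idea (the SSID, profile names, and the nmcli/wg-quick details are assumptions about your particular setup):

        # Rough sketch: pick a WireGuard profile based on the current Wi-Fi SSID.
        # Assumes NetworkManager (nmcli) and wg-quick profiles named "home" and
        # "roaming" in /etc/wireguard/; adjust names to your own setup.
        import subprocess

        HOME_SSID = "MyHomeNetwork"  # assumption, replace with your SSID

        def current_ssid() -> str:
            out = subprocess.run(
                ["nmcli", "-t", "-f", "active,ssid", "dev", "wifi"],
                capture_output=True, text=True, check=True,
            ).stdout
            for line in out.splitlines():
                active, _, ssid = line.partition(":")
                if active == "yes":
                    return ssid
            return ""

        def switch_profile() -> None:
            profile = "home" if current_ssid() == HOME_SSID else "roaming"
            other = "roaming" if profile == "home" else "home"
            # check=False: it's fine if the other profile wasn't up yet
            subprocess.run(["sudo", "wg-quick", "down", other], check=False)
            subprocess.run(["sudo", "wg-quick", "up", profile], check=False)

        if __name__ == "__main__":
            switch_profile()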



  • Fail2ban and containers can be tricky because, under the hood, the container engine’s rules will often insert themselves above your host rules in iptables. The Docker documentation has a good write-up on how to solve it for their implementation (rough example at the end of this comment):

    https://docs.docker.com/engine/network/packet-filtering-firewalls/

    For your use case specifically: if you’re using VMs only, you could run it within any VM that exposes traffic, but for containers you’ll have to run fail2ban on the host itself. I’m not sure how LXC handles this, but I assume it’s probably similar to Docker.

    The simplest solution would be to just put something physical between your hypervisor and the internet (a Raspberry Pi-based firewall, etc.).
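
    To make the ordering point concrete: the approach in those Docker docs is to put your ban rules in the DOCKER-USER chain, which gets evaluated before Docker’s own forwarding rules. A rough sketch of the mechanism (not a drop-in fail2ban action, just the idea):

        # Rough sketch: ban an address ahead of Docker's forwarding rules by
        # inserting a DROP rule into the DOCKER-USER chain, per the Docker
        # packet-filtering docs linked above.
        import subprocess

        def ban(ip: str) -> None:
            # -I puts the rule at the top of DOCKER-USER, so it is hit before
            # Docker's accept rules for published container ports.
            subprocess.run(
                ["sudo", "iptables", "-I", "DOCKER-USER", "-s", ip, "-j", "DROP"],
                check=True,
            )

        def unban(ip: str) -> None:
            subprocess.run(
                ["sudo", "iptables", "-D", "DOCKER-USER", "-s", ip, "-j", "DROP"],
                check=True,
            )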



    I’m sorry, but this is just a fundamentally incorrect take on the physics at play here.

    You unfortunately can’t ever prevent further breakdown. Every time you run any voltage through any CPU, you are slowly breaking down its gate oxides. This is a normal, non-thermal failure mode of consumer CPUs. The issue is that the breakdown is non-linear: as it progresses, it increases resistance inside the die, which in turn requires higher minimum voltages to remain stable. That higher voltage accelerates the rate of damage (even at idle), making time disproportionately more damaging the more damaged a chip already is (toy illustration at the end of this comment).

    If you want to read more on these failure modes, I’d recommend the following papers:

    L. Shi et al., “Effects of Oxide Electric Field Stress on the Gate Oxide Reliability of Commercial SiC Power MOSFETs,” 2022 IEEE 9th Workshop on Wide Bandgap Power Devices & Applications

    Y. Qian et al., “Modeling of Hot Carrier Injection on Gate-Induced Drain Leakage in PDSOI nMOSFET,” 2021 IEEE International Conference on Integrated Circuits, Technologies and Applications
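
    As a toy illustration of that feedback loop (illustrative form and constants only, not taken from those papers): let D be accumulated oxide damage, suppose the minimum stable voltage rises with damage, and assume the breakdown rate scales roughly exponentially with applied voltage. Then

        V_{\min}(D) = V_0 + \alpha D, \qquad \frac{dD}{dt} \propto e^{\gamma V_{\min}(D)} = e^{\gamma (V_0 + \alpha D)}

    so the damage rate itself grows as damage accumulates: runaway wear, not a constant wear rate.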



    The “problem” is that the more you understand the engineering, the less you believe Intel when they say they can fix it in microcode. Without writing an entire essay, the TL;DR is that the instability gets worse over time, and the only way that happens is if applied voltages are breaking down dielectric barriers within the chip. That damage is irreparable; 100% of these chips in the wild are irreparably damaging themselves over time.

    Even if Intel can slow the bleeding with microcode, they can’t repair the damage, and every chip that has ever run under the bad microcode will have a measurably shorter lifespan. For the average gamer, that lifespan has sometimes been shorter than the warranty period.



  • That is usually more incompetence than malice. They write a game that requires different handling on AMD vs. Nvidia devices and basically write something like:

    If Nvidia: do X; else if AMD: do Y; else: crash;

    The idea being that if the AMD/Nvidia check fails, there must be an issue with the check function itself; the developers didn’t consider the possibility of a non-AMD/Nvidia card. This was especially true of old games. There are a lot of 1990s-2000s titles that won’t run on modern cards or modern Windows because the developers didn’t program a failure mode of “just try it”.
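
    To make that failure mode concrete, a rough sketch (hypothetical vendor strings and stub paths, not any specific game’s code):

        # Rough sketch of the anti-pattern described above, with hypothetical stubs.
        def nvidia_path() -> str: return "nvidia-specific setup"
        def amd_path() -> str: return "amd-specific setup"
        def generic_path() -> str: return "vendor-agnostic setup"

        def pick_path_brittle(vendor: str) -> str:
            if vendor == "NVIDIA":
                return nvidia_path()
            if vendor == "AMD":
                return amd_path()
            # Unknown vendor is treated as "detection is broken", so Intel/other GPUs crash here.
            raise RuntimeError("GPU detection failed")

        def pick_path_forgiving(vendor: str) -> str:
            if vendor == "NVIDIA":
                return nvidia_path()
            if vendor == "AMD":
                return amd_path()
            return generic_path()  # "just try it" on unrecognized hardware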