Alt account of @Badabinski

Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.

  • 0 Posts
  • 142 Comments
Joined 1 year ago
Cake day: June 9th, 2024

  • Was this done according to proper clean-room design principles? If so, then imo the GPL is still working as intended. The company had to spend a fuckton of money and time getting one engineer to read the source and describe to the other engineers what it does, and then ensuring that engineer never, ever worked on the project again.

    If they didn’t do that then they violated the GPL and someone should report them to the SFLC.



  • I wrote a program at work that gets deployed to hundreds of thousands of systems and is very hard to fully test or instrument. This program recently had a bug that was hard to track down. Using the command line, I connected to one of these boxes over ssh and ran a series of commands to detect the bug and dump details of what happened. Then, I took all those commands and turned them into a one-liner that I could pass in over ssh, so I could get everything I needed from an individual machine. I then used xargs to run that command in parallel over every single one of the systems my code was running on, and in the end I was left with a nice directory of files, each named for the IP of an affected system and filled with useful information. I started by manually running commands over ssh, but the composable nature of the shell let me turn that into a script in a matter of minutes.
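
    Roughly what the fan-out looked like (a sketch from memory; the real diagnostic commands are elided, and hosts.txt / results/ are names I just made up):

     mkdir -p results
     # hosts.txt holds one affected IP per line; "diagnostic-one-liner" stands in for the real commands
     xargs -P 32 -I{} sh -c 'ssh {} "diagnostic-one-liner" > "results/{}"' < hosts.txt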

    I provided a more residential example of why I exclusively use the terminal for file management in a different top-level comment.


  • My work and personal computers typically have two applications open—a web browser and a terminal (well, really a shitload of terminals). I don’t have a desktop, I have a terminal. I don’t have a graphical file manager, I have a terminal. I’m not doing this because it’s cool, I do it because it’s efficient as all fuck and makes it trivial to fire off one-liners to automate shit.

    Like, I stream a certain video game competitively, and I need to keep recordings if I want to submit runs. I started off recording my gameplay using x264, and the file sizes were too damn big. I tested various av1 options out using ffmpeg on a small sample clip, and when I was done it was simplicity itself to just do this:

     # I'm typing this on my phone so I'm not going to write out the ffmpeg args
     for file in recordings/*.mp4; do ffmpeg -i "$file" "${some_args[@]}" "${file%.mp4}.av1.mkv"; done
    

    I didn’t have to learn some stupid GUI batch processing thing. I didn’t have to install any extra tools (since I already had ffmpeg). I just took my command, substituted the input and output files for variable names, and looped that shit.

    I feel that the command line is the most efficient interface for a huge number of tasks. Discoverability is awful (although improved with good tab completion and just reading the fucking manual), but the efficiency and composability of a CLI built in the Unix tradition is hard to overstate imo.








  • I wrote and maintained a lot of sysvinit scripts and I fucking hated them. I wrote Upstart scripts and I fucking hated them. I wrote OpenRC scripts and I fucking hated them. Any init system that relies on one of the worst languages in common use nowadays can fuck right off. Systemd units are well documented, consistent, and reliable.

    From my 30 seconds of looking, I actually like nitro a bit more than OpenRC or Upstart, but it does seem like it'd struggle with daemons the way sysvinit scripts used to. If a service daemonizes itself, you have to write your own process supervisor to track when the daemonized process dies so that your wrapper can exit and tell nitro (which is, ofc, itself a process supervisor), and it looks like logging gets trickier in that case too. I fucking hate services that background themselves, but they do exist, and systemd does a great job of handling them. Nitro also doesn't do any form of dependency management AFAICT, which is a more serious flaw.
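
    For contrast, here's roughly how systemd copes with a self-backgrounding daemon (a hypothetical unit; the service name and paths are made up):

     [Unit]
     Description=Hypothetical daemon that forks into the background
     After=network.target

     [Service]
     # systemd tracks the real process via the PID file, so no wrapper
     # supervisor is needed even though the daemon backgrounds itself
     Type=forking
     PIDFile=/run/exampled.pid
     ExecStart=/usr/sbin/exampled --daemonize --pid-file /run/exampled.pid
     Restart=on-failure

     [Install]
     WantedBy=multi-user.target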

    Nitro seems like a good option for some use cases (although I cannot conceive why you’d want to run a service manager in a container when docker and k8s have robust service management built into them), but it’s never touching the disk on any of the tens of thousands of boxes I help administrate. systemd is just too good.




  • Do you have any sources for the 10x memory thing? I’ve seen people who have made memory usage claims, but I haven’t seen benchmarks demonstrating this.

    EDIT: glibc-based images wouldn’t be using service managers either. PID 1 is your application.

    EDIT: In response to this:

    > There’s a reason a huge portion of docker images are alpine-based.

    After months of research, my company pushed thousands and thousands of containers away from alpine for operational and performance reasons. You can get small images using glibc-based distros. Just look at chainguard if you want an example. We saved money (many, many dollars a month) and had fewer tickets once we finished banning alpine containers. I haven’t seen a compelling reason to switch back, and I just don’t see much to recommend Alpine outside of embedded systems where disk space is actually a problem. I’m not going to tell you that you’re wrong for using it, but my experience has basically been a series of events telling me to avoid it. Also, I fucking hate whoever decided musl’s resolver wasn’t going to handle search domains properly or do DNS over TCP.
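
    A sketch of what I mean (a multi-stage build; the chainguard image name is from memory, so double-check it against their registry):

     # build stage: an ordinary glibc toolchain image
     FROM golang:1.22 AS build
     WORKDIR /src
     COPY . .
     # CGO on so the binary actually links against glibc
     RUN CGO_ENABLED=1 go build -o /app .

     # runtime stage: tiny glibc-based image, no shell, no package manager
     FROM cgr.dev/chainguard/glibc-dynamic:latest
     COPY --from=build /app /app
     # PID 1 is just your application, no service manager involved
     ENTRYPOINT ["/app"]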


  • Debian is superior for server tasks. musl is designed to optimize for smaller binaries on disk. Memory is a secondary goal, and CPU time is a non-goal. musl isn’t meant to be fast, it’s meant to be small and easily embedded. Those are great things if you need to run in a network- or disk-constrained environment, but for a server? Why waste CPU cycles on a libc that is, by design, less time-efficient?

    EDIT: I had to fight this fight at my job. We had hundreds of thousands of Alpine containers running, and switching them to glibc-based containers resulted in quantifiable cloud-spend savings. I’m not saying musl (or alpine) is bad, just that it’s horses for courses.


  • Is it? I thought the thing that musl optimized for was disk usage, not memory usage or CPU time. It’s been my experience that alpine containers are worse than their glibc counterparts because glibc is damn good. It’s definitely faster in many cases. I think this is fixed now, but I remember when musl made the python interpreter run like 50-100x slower.

    EDIT: musl is good at what it tries to be good at. It’s not trying to be the fastest, it’s trying to be small on disk or over the network.
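
    If anyone wants to sanity-check the interpreter gap themselves, something like this is enough (a rough micro-benchmark using the official image tags; numbers will vary by host):

     # same CPython version, glibc base vs musl base; lower per-loop time is better
     for img in python:3.12-slim python:3.12-alpine; do
       echo "== $img =="
       docker run --rm "$img" python -m timeit 'sum(i*i for i in range(10_000))'
     done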





  • True! I just wonder how much energy they’d realistically be able to store for a given amount of resources. Like, does this have the same issues as lifted-weight storage, where the energy density just doesn’t really make sense once you get right down to it? I don’t know the relevant math to determine how much water, and at what pressures, would be required to scale this up to the 500 MWh/1 GWh range. It might be perfectly fine.
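
    Back-of-envelope (crude, and it ignores how the pressure is actually maintained): treating the system as an ideal constant-pressure hydraulic store, E ≈ P·ΔV. 1 GWh is 3.6×10¹² J, so at 100 bar (10⁷ Pa) you’d need to displace ΔV = 3.6×10¹² / 10⁷ = 360,000 m³ of water, roughly 140 Olympic pools’ worth. The volumes get big fast.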

    EDIT: fuck man I’m not writing well today. edited to make me sound like less of a cretin