Hoo boy, you weren’t kidding. I find it amazing how quickly this went from “the kernel team is enforcing sanctions” to an unfriendly abstract debate about the definition of liberalism. I shouldn’t be, really, but I still am.
Oh yeah, the equation completely changes for the cloud. I’m only familiar with local usage where you can’t easily scale out of your resource constraints (and into budgetary ones). It’s certainly easier to pivot to a different vendor/ecosystem locally.
By the way, AMD does have one additional edge locally: They tend to put more RAM into consumer GPUs at a comparable price point – for example, the 7900 XTX competes with the 4080 on price but has as much memory as a 4090. In systems with one or few GPUs (like a hobbyist mixed-use machine) those few extra gigabytes can make a real difference. Of course this leads to a trade-off between Nvidia’s superior speed and AMD’s superior capacity.
These days ROCm support is more common than a few years ago so you’re no longer entirely dependent on CUDA for machine learning. (Although I wish fewer tools required non-CUDA users to manually install Torch in their venv because the auto-installer assumes CUDA. At least take a parameter or something if you don’t want to implement autodetection.)
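To illustrate, an installer could do something as crude as this before deciding which Torch wheel to pull. A minimal sketch only: the detection heuristics (checking for nvidia-smi and the default /opt/rocm prefix) and the exact wheel-index URLs are my assumptions based on current PyTorch packaging conventions, not anything a specific tool actually does.

```python
# Crude backend autodetection sketch -- heuristics and URLs are assumptions,
# not a definitive implementation.
import shutil
from pathlib import Path

def torch_index_url() -> str:
    if shutil.which("nvidia-smi"):                 # NVIDIA driver present
        return "https://download.pytorch.org/whl/cu124"
    if Path("/opt/rocm").exists():                 # default ROCm install prefix
        return "https://download.pytorch.org/whl/rocm6.2"
    return "https://download.pytorch.org/whl/cpu"  # CPU-only fallback

print(f"pip install torch --index-url {torch_index_url()}")
```

Even something this naive would beat silently installing the CUDA build on a ROCm box.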
Nvidia’s Linux drivers generally are a bit behind AMD’s; e.g. driver versions before 555 tended not to play well with Wayland.
Also, Nvidia’s drivers tend not to give any meaningful information in case of a problem. There’s typically just an error code for “the driver has crashed”, no matter what reason it crashed for.
Personal anecdote for the last one: I had a wonky 4080 and tracing the problem to the card took months because the log (both on Linux and Windows) didn’t contain error information beyond “something bad happened” and the behavior had dozens of possible causes, ranging from “the 4080 is unstable if you use XMP on some mainboards” through “some BIOS setting might need to be changed” and “sometimes the card doesn’t like a specific CPU/PSU/RAM/mainboard” to “it’s a manufacturing defect”.
Sure, manufacturing defects can happen to anyone; I can’t fault Nvidia for that. But the combination of useless logs and 4000-series cards having so many things they can possibly (but rarely) get hung up on made error diagnosis incredibly painful. I finally just bought a 7900 XTX instead. It’s slower but I like the driver better.
Speak for yourself. I’m going to migrate all of my 22-bit RSA keys to a longer key length. And not 24 bits, either, given that they’re probably working on a bigger quantum computer already. I gotta go so long that no computer can ever crack it.
64-bit RSA will surely be secure for the foreseeable future, cost be damned.
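For scale, here’s a toy sketch of how long a 22-bit modulus survives even the dumbest possible attack (the primes are made up for illustration):

```python
# Toy "attack" on a 22-bit RSA modulus; primes made up for illustration.
from math import isqrt

n = 1931 * 2003  # 3,867,793 -- a 22-bit modulus

def factor(n: int) -> tuple[int, int]:
    # Trial division by odd candidates up to sqrt(n); instant at this size.
    for p in range(3, isqrt(n) + 1, 2):
        if n % p == 0:
            return p, n // p
    raise ValueError("no odd factor found")

print(factor(n))  # (1931, 2003), in well under a millisecond
```

(And 64-bit moduli fall to Pollard’s rho in milliseconds, so good luck with that foreseeable future.)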
I also only used v2 but it’s the extra stuff in it that slightly annoys me. Like how turbo mode (brighter than the regular maximum but typically time-limited to avoid overheating) is only available when the full UI is unlocked. Or how there’s a stepped ramp mode that I have to remember to disable whenever I swap out the battery. Or how I can accidentally enter one of the more exotic modes if for some reason I press the button too often.
Anduril is way overengineered. I like this UI that some of my lights have:
While off:
While on:
That’s pretty easy to learn and gives you all the functions you’d reasonably need (plus that strobe) without a lot of clutter.
We used to have one: “Solang das Deutsche Reich besteht, wird jede Schraube rechts gedreht.” (“As long as the German Empire persists, every screw is turned to the right.”)
Given that the German Empire failed spectacularly, this sentence isn’t very popular anymore.
True, although that has happened with F/OSS as well (like with xz or the couple times people put Bitcoin miners into npm packages). In either case it’s a lot less likely than the software simply ceasing to be supported, becoming gradually incompatible with newer systems, and rotting away.
Except, of course, that I can pick up the decade-old corpse of an open source project and try to make it work on modern systems, despite how painful it is to try to get a JavaFX application written for Java 7 and an ancient version of Gradle to even compile with a recent JDK. (And then finally give up and just run the last Windows release with its bundled JRE in Wine. But in theory I could’ve made it work!)
Note that this specifically talks about proprietary platforms. Locally-run proprietary freeware has entirely different potential issues, mostly centered around the developer no longer maintaining it. Locally-run F/OSS has similar issues, actually, but lessened by the fact that someone might later pick up the project and continue it.
Admittedly, platforms are very common these days because the web is an easily accessible cross-platform GUI toolkit and SaaS is more easily monetized.
And this is why stuff should be defined in terms of days’ earnings to provide scaling. If an ultra-rich person gets jailed and has to post 20 billion dollars in bail, they can’t treat jail as a minor inconvenience.
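To make the scaling concrete, a back-of-the-envelope sketch (all numbers made up):

```python
# Day-fine-style scaling, with entirely made-up numbers.
daily_earnings = 50_000_000   # hypothetical ultra-rich person's take per day
bail_in_days = 400            # penalty defined in days of earnings, not dollars
bail = daily_earnings * bail_in_days
print(f"bail: ${bail:,}")     # bail: $20,000,000,000
```

The same 400 days costs a normal person a painful-but-survivable sum and costs the billionaire 20 billion; the deterrent finally scales.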
You forgot the degaussing sound for those screens that had that feature. Like turning them on but louder.
*KLONK*
Oh, right. Fast Boot. I forgot about that bundle of joy.
But that wasn’t the only instance of an NTFS volume suddenly being broken. Another favorite was when I shrunk a volume on one disk from Linux (and then remembered that Windows could’ve done it better) and rebooted to have it fixed, and Windows proceeded to repair a volume on a different disk instead.
NTFS feels rock solid if you use only Windows and extremely janky if you dual-boot. Linux currently can’t really fix NTFS volumes and thus won’t mount them if they’re inconsistent.
As it happens, they’re inconsistent all the time. I’ve had an NTFS volume become dirty after booting into Windows and then shutting down. Not a problem for Windows but Linux wouldn’t touch the volume until I’d booted into Windows at least once.
I finally decided to use a storage upgrade to move most drives to Btrfs, save for the Windows system volume and a shared data partition that’s now on exFAT because that’s good enough for it.
I gotta be honest, I haven’t used a dedicated sound card since the Vista/7 era when EAX stopped being a thing and onboard sound could handle 5.1 output just fine. The last one I had was a SoundBlaster Audigy.
These days the main use for a dedicated sound interface is when you need something like XLR in/out, and then you’ll probably go with something USB anyway.
Port 220.
IRQ 5, port 220h, DMA 1 was what I used for my SoundBlaster 2.
Later I used IRQ 5, port 220h, DMA 1, high DMA 5 for my SoundBlaster 16.
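For the record, those are exactly the values the classic BLASTER variable in AUTOEXEC.BAT encoded, so games could find the card. For the SB16 setup above it would’ve looked something like this (T6 being the card-type code for an SB16, if memory serves):

```
SET BLASTER=A220 I5 D1 H5 T6
```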
Why not go straight for the Ultimate Warrior, get him in a debate with Trump, and make the host cry?
If you want a snake and a pie chart, at least have the snake do something with it like carrying the chart in its mouth.
Perhaps you can do the biblical scene of the snake tempting Adam and Eve but this time it’s the snake tempting managers with a useless pie chart.
Mind you, the real winner is of course Android. It has a consistent, easy-to-learn interface and a wide range of applications that integrate nicely.
And we don’t need to speculate; it has already won and is the true face of Linux for the masses. Plenty of young people don’t even own traditional computers anymore and do everything on their smartphones or tablets.
And that’s why this entire discussion is really just a form of fan wank; we don’t need to find a unified UI for Linux because it has already been found and has a massive market share. You may not like it but this is what peak performance looks like.
Everything else can be as complicated, janky, or exotic as it wants because it doesn’t matter.
Honestly, if you want one simple DE for everyone it should probably be XFCE. Dead simple to use, feels vaguely familiar to Windows users, not overly complicated.
KDE is heavily customizable, Gnome is very opinionated, and tiling WMs don’t adhere to orthodox UI patterns. Those are all suboptimal if you want something usable by the absolute widest range of users.
I’d argue that unfun design elements can be useful in games if used with care and purpose. For instance, “suddenly all of the characters you’re attached to are dead” is not exactly fun but one of the Fire Emblem games used it to great dramatic effect at the midway point.
Of course the line between an event or mechanic that players love to hate and one they just hate is thin.