And of course they had to shoehorn some AI bullshit into it.
(Why I installed this driver: because I can remap the two extra buttons as copy/paste.)
The sad reality of the end of Windows dominance.
I get what you're saying, and this is definitely a factor, but I think the bigger influence was mobile adoption. As soon as smartphones took off, it was inevitable that we would see a surge in cross-platform frameworks/libraries.
The fact that we tackled this problem by shifting everything to web apps was also inevitable, given the simpler deployment requirements and lower maintenance costs of a website versus a native application.
I feel like I am shouting into the void when I talk about how unbelievably bad the performance of modern software is.
Yeah, I can see how it ended up like that, and it would at least be nice if Windows accepted that and shipped one copy of the browser, rather than every app installing its own just in case of breaking changes.
And it would also be really nice if it only clogged the system when it needs to show a UI, but I've got a ton of background processes that are also running a browser just in case today is the day that I finally need to see them. Looking down Task Manager now at some suspiciously large processes, I can see a Razer "mouse driver", Epic, Discord, Steam, Nvidia, Oculus, NordVPN, Signal…
None of these things need to be running a browser while I’m not looking at them.
But hey, let's throw another 32GB of RAM in there, and another dozen cores, and maybe we can achieve the dream of running each of them in their own fucking operating system as well…
Yeah, and unfortunately it's going to get worse when AI agents are also always running in the background (which is inevitable, let's be honest).
Proton proves that you don't need to run on a web browser for cross-platform compatibility. Turing-complete platforms are equivalent in their capabilities; it's just a matter of adding a translation layer, which doesn't need to be as heavy as a browser DOM (at least for going between Windows and Linux on x64).
I’m not 100% convinced that an emulation layer isn’t as heavy as a browser.
We had things like Java and Qt, and none of it really took off. Apple is probably to blame here as well, for wanting everything to be native to iOS and ignoring the reality that developers don't want to make five different versions of their software.
It's generally not as heavy, because the layer is just reinterpreting API calls while the user code still runs natively. A browser running JavaScript puts an engine between every line of code and the CPU. Depending on the specifics, it could be doing string processing for each operation, though in practice it parses the source once and converts it into something it can execute faster (bytecode, or JIT-compiled machine code).
Like if you want to add two variables, a compiled program would do it in about 4 CPU instructions, assuming the values need to be loaded from memory and the result saved back to memory. Or maybe 7 if everything had a layer of indirection (e.g. pointers).
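Roughly like this (just a sketch; the exact instruction count depends on the compiler and target):

    /* What the add looks like compiled; codegen varies, but it's on the
       order of: load, load, add, store. */
    void add(const int *a, const int *b, int *out) {
        *out = *a + *b;   /* ~4 instructions, a few more with extra indirection */
    }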
A scripting language needs to parse the statement (which alone will take on the order of dozens of CPU instructions, if not hundreds), then look up the variables in a map, which can be fast but not as fast as a memory load or two, then do the add, and store the result with another map lookup. Not to mention all of the type handling done at run time: figuring out what the variables are, what an add of those types even means, plus any necessary conversions.

I understand that JavaScript can be compiled and that TypeScript is a thing, but the compiled code still needs to reproduce all of the same behaviour the scripting language has, so generic functions can still be more complex, to handle calling and return conventions and to make sure they work on all possible types that can be provided. And if the code uses eval statements (or whatever the mechanism is for processing dynamically generated code), then it's back to string processing.
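To make that overhead concrete, here's a toy sketch in C of what a dynamic-language add has to do. None of these names are from a real engine (it's nothing like how V8 etc. actually work, they're far more optimized), it's just the shape of the per-operation work:

    /* Toy dynamic "add": lookups, type checks, conversions, and then
       finally the add that compiled code does in ~4 instructions. */
    #include <stdio.h>
    #include <string.h>

    typedef enum { TYPE_INT, TYPE_DOUBLE } Type;
    typedef struct {
        const char *name;
        Type type;
        union { long i; double d; } as;
    } Var;

    static Var vars[8];
    static int nvars;

    /* variables live in a name->value map, so every access is a lookup,
       not a single load instruction */
    static Var *lookup(const char *name) {
        for (int i = 0; i < nvars; i++)
            if (strcmp(vars[i].name, name) == 0)
                return &vars[i];
        return NULL;
    }

    static void interp_add(const char *dst, const char *lhs, const char *rhs) {
        Var *a = lookup(lhs), *b = lookup(rhs), *r = lookup(dst);
        if (a->type == TYPE_INT && b->type == TYPE_INT) {  /* type dispatch */
            r->type = TYPE_INT;
            r->as.i = a->as.i + b->as.i;                   /* the actual add */
        } else {                                           /* conversion path */
            r->type = TYPE_DOUBLE;
            r->as.d = (a->type == TYPE_INT ? (double)a->as.i : a->as.d)
                    + (b->type == TYPE_INT ? (double)b->as.i : b->as.d);
        }
    }

    int main(void) {
        vars[nvars++] = (Var){ "x", TYPE_INT,    { .i = 2 } };
        vars[nvars++] = (Var){ "y", TYPE_DOUBLE, { .d = 0.5 } };
        vars[nvars++] = (Var){ "z", TYPE_INT,    { .i = 0 } };
        interp_add("z", "x", "y");
        printf("z = %g\n", lookup("z")->as.d);   /* z = 2.5 */
        return 0;
    }

And that's with the parsing already done; an engine pays for that up front too.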
Plus the UI itself is all HTML and CSS, and the JavaScript interacts with it as such, which limits optimizations that would convert it into another format for faster processing. The GPU doesn't render HTML and CSS directly; it all needs to be processed for each update.
For D3D to Vulkan, the GPU handles the repetitive work, while any data conversion only needs to happen once per pass through the API (e.g. at load time).
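The shape of that kind of layer is roughly this (hypothetical names, not DXVK's actual internals; the vkCmdDraw stub is only there so the sketch compiles standalone):

    /* Hypothetical sketch of a D3D->Vulkan shim. The point: translating
       a call is a handful of instructions of argument remapping, and the
       heavy lifting still happens natively in the driver and on the GPU. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct VkCommandBuffer_T *VkCommandBuffer;  /* stand-in handle */

    /* stub standing in for the real Vulkan entry point; the signature
       matches the actual vkCmdDraw */
    static void vkCmdDraw(VkCommandBuffer cb, uint32_t vertexCount,
                          uint32_t instanceCount, uint32_t firstVertex,
                          uint32_t firstInstance) {
        (void)cb; (void)instanceCount; (void)firstInstance;
        printf("vkCmdDraw(%u vertices from %u)\n",
               (unsigned)vertexCount, (unsigned)firstVertex);
    }

    typedef struct {
        VkCommandBuffer cb;   /* Vulkan state built once, at load time */
    } ShimContext;

    /* shim for ID3D11DeviceContext::Draw(VertexCount, StartVertexLocation):
       remap the arguments and forward */
    static void shim_Draw(ShimContext *ctx, uint32_t VertexCount,
                          uint32_t StartVertexLocation) {
        vkCmdDraw(ctx->cb, VertexCount, 1, StartVertexLocation, 0);
    }

    int main(void) {
        ShimContext ctx = { 0 };
        shim_Draw(&ctx, 36, 0);   /* the game thinks it's calling D3D */
        return 0;
    }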
That browser rendering can all be done pretty quickly on today's hardware, so it's generally usable, but native stuff is still orders of magnitude faster, and the way Proton works is much closer to native than to a browser.
It's going to be quite a bit heavier than that if you run it on a different CPU architecture, though. And even if you're not running on mobile, Apple still opened that can of worms a few years back. Linux too, I guess.
Honestly, I don't mind HTML for a UI. It resizes nicely to fit a large number of devices, and it looks pretty much the same no matter what you're running it on. But it should just be that: a UI layer. Otherwise the solution you were looking for was a website, not a dozen 500MB chunks of Chrome installed around my PC.