

Who’s in the overlap of this Venn diagram between “uses some kind of custom OS on their phone such that their camera app doesn’t automatically read QR codes” and “doesn’t know how to install or use software that can read QR codes”?
I don’t have a phone that can scan QR codes.
QR codes are a plain text encoding scheme. If you can screenshot it, you have access to FOSS software that can decode it, and you can paste that URL into your browser.
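For example, a quick sketch in Python, assuming the screenshot is saved as screenshot.png (a placeholder name) and you’ve installed Pillow plus pyzbar, a wrapper around the FOSS zbar decoder:

```python
from PIL import Image
from pyzbar.pyzbar import decode  # pyzbar wraps the zbar library

# "screenshot.png" is just a placeholder filename
for symbol in decode(Image.open("screenshot.png")):
    # QR payloads are plain bytes -- usually a URL or text
    print(symbol.data.decode("utf-8"))
```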
“I don’t want it to be like the butt of today’s jokes, I want it to be like the butt of 30-year-old jokes”
Seamless suspend/sleep on MacBooks is like 25% of why my personal daily driver is MacOS. Another 50% is battery life, in which their sleep/suspend management plays a part. I’ve played around with Linux on Apple hardware, but it’s just never quite been there on power management or sleep/wake functionality. That’s mostly Apple’s fault for poor documentation and support for other OSes, but it is what it is, and I got sick of fighting it.
Thread is a bit more power efficient, which matters for battery powered devices that aren’t connected to permanent power and don’t need to transmit significant data, like door locks, temperature/humidity sensors, things like that. A full wifi networking chip would consume a lot more power for an always-on device.
I’m not sure that would work. Admins need to manage their instance users, yes, but they also need to look out for the posts and comments in the communities hosted on their instance, and be one level of appeal above the mods of those communities. Including the ability to actually delete content hosted in those communities, or cached media on their own servers, in response to legal obligations.
They’re actually only about 48% accurate, meaning that they’re more often wrong than right, and just guessing would make you 2 points more likely to get the right answer.
Wait, what are the Bayesian priors? Are we assuming that the baseline is 50% true and 50% false? And what is its error rate for false positives versus false negatives? Because all of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.
Put another way, imagine a stupid device that just says “true” literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device to a person who only lies 5% of the time, it’s still 95% accurate.
So what do you mean by 48% accurate? That’s not enough information to do anything with.
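To make the point concrete, here’s a quick Bayes’ theorem sketch; the rates below are made up purely for illustration, not taken from any real study:

```python
def p_lie_given_flag(base_rate_lie, p_flag_if_lie, p_flag_if_truth):
    """Bayes' theorem: P(actually lying | detector flags a lie)."""
    p_flag = (p_flag_if_lie * base_rate_lie
              + p_flag_if_truth * (1 - base_rate_lie))
    return p_flag_if_lie * base_rate_lie / p_flag

# Same made-up detector, different priors, very different answers:
print(p_lie_given_flag(0.50, 0.48, 0.52))  # ~0.48
print(p_lie_given_flag(0.05, 0.48, 0.52))  # ~0.05
```

A single “accuracy” number collapses the prior and both error rates into one figure, which is exactly why it isn’t enough to work with.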
Yeah, from what I remember of what Web 2.0 was, it was services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through HTTP POST. “Ajax” was a hot buzzword among web/tech companies.
Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the graphical elements it needed on demand.
Or maybe Web 2.0 included the ability to layer state on top of the stateless HTTP protocol. You could log into a page and it would only show you the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL.
Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected users to each other through its design was kinda beside the point.
Wouldn’t a louder room raise the noise floor, too, so that any quieter signal couldn’t be extracted from the noisy background?
If we were to put a microphone and recording device in that room, would any amount of audio processing be able to extract the sound of the small server from the background noise of all the bigger servers? Because if not, then that’s not just an auditory processing problem, but a genuine example of destruction of information.
Was that in 2000? My own vague memory was that Linux started picking up some steam in the early 2000’s and then branched out to a new audience shortly after Firefox and Ubuntu hit the scene around 2004, and actually saw some adoption when Windows XP’s poor security and Windows Vista’s poor hardware support started breaking things.
So depending on the year, you could both be right.
Do you have a source for AMD chips being especially energy efficient?
I remember reviews of the HX 370 commenting on that. Problem is that chip was produced on TSMC’s N4P node, which doesn’t have an Apple comparator (M2 was on N5P and M3 was on N3B). The Ryzen 7 7840U was N4, one year behind that. It just shows that AMD can’t get on a TSMC node even within a year or two of Apple.
Still, I haven’t seen anything really putting these chips through their paces and actually measuring real-world energy usage across a variety of benchmarks. And benchmarks themselves only correlate with specific ways that computers are used and aren’t necessarily supported on all hardware or OSes, so it’s hard to get a real comparison.
SoCs are inherently more energy efficient
I agree, but that’s a separate issue from the instruction set. The AMD HX 370 is an SoC (well, technically a SiP, since the pieces are all packaged together but not actually printed on the same piece of silicon).
And in terms of actual chip architectures, as you allude to, the design dictates how specific instructions are processed. That’s why the RISC versus CISC distinction is basically obsolete. Chip designers make engineering choices about how much silicon area to devote to specific functions, based on their modeling of how the chip might be used: multithreading, different cores optimized for efficiency or performance, speculative execution, specialized blocks for hardware-accelerated video or cryptography or AI or whatever else, and so on, and then decide how that fits into the broader chip design.
Ultimately, I’d think the main reason something like x86 would die off is licensing, not anything inherent to the instruction set architecture.
it’s kinda undeniable that this is where the market is going. It is far more energy efficient than an Intel or AMD x86 CPU and holds up just fine.
Is that actually true, when comparing node for node?
In the mobile and tablet space, Apple’s A-series chips have always been a generation ahead of Qualcomm’s Snapdragon chips in performance per watt, and Samsung’s Exynos has lagged even further behind. That’s obviously not an instruction set issue, since all three lines are on ARM.
Much of Apple’s advantage has been a willingness to pay for early runs on each new TSMC node, and a willingness to dedicate a lot of square millimeters of silicon to their gigantic chips.
But when comparing node for node, last I checked, AMD’s lower-power chips designed for laptop TDPs have similar performance and power draw to the Apple chips on the same TSMC node.
Honestly, this is an easy way to share files with non-technical people in the outside world, too. Just open up a port for that very specific purpose, send the link to your friend, watch the one file get downloaded, and then close the port and turn off the http server.
It’s technically not very secure, so it’s a bad idea to leave it unattended, but you can always send an encrypted zip file and let that file-level encryption kinda make up for the lack of network-level encryption. And as a one-off thing, you should close up your firewall/port forwarding when you’re done.
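If it helps, here’s a minimal sketch of that one-off server in Python; the port and serving the current directory are placeholder choices, and it’s no more secure than running `python -m http.server` yourself:

```python
# Serve the current directory on port 8000 just long enough for the
# one download, then Ctrl+C and close the firewall/port forward.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
try:
    server.serve_forever()
except KeyboardInterrupt:
    pass
finally:
    server.server_close()  # release the socket when you're done
```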
Yeah, if OP can already run rsync against the server from the command line, then the server is already configured to allow remote access over SSH or NFS or SMB or FTP or whatever. Setting up a mounted folder through whatever file browser (including the default Windows Explorer on Windows or Finder on MacOS) over the same protocol should be trivial, and shouldn’t require any additional server-side configuration.
Yeah, I mean I do still use rsync for the stuff that would take a long time, but for one-off file movement I just use a mounted network drive in the normal file browser, including on Windows and MacOS machines.
What if I told you that there are really stupid comments on Lemmy as well
That’s why I think the history of the U.S. phone system is so important. AT&T had to be dragged into interoperability by government regulation nearly every step of the way, but ended up needing to invent and publish the technical standards that made federation/interoperability possible, after government agencies started mandating them. The technical infeasibility of opening up a proprietary network has been overcome before, with much more complexity at the lower OSI layers, including defining new open standards regarding the physical layer of actual copper lines and switches.
I’d argue that telephones are the original federated service. There were fits and starts to getting the proprietary Bell/AT&T network to play nice with devices or lines not operated by them, but the initial system for long distance calling over the North American Numbering Plan made it possible for an AT&T customer to dial non-AT&T customers by the early 1950’s, and set the groundwork for the technical feasibility of the breakup of the AT&T/Bell monopoly.
We didn’t call it spam then, but unsolicited phone calls have always been a problem.
But the big one here is the characteristic word. By adding Fenyx Rising, it could be argued that, in addition to the material differences between the products, there is enough separation to ensure there is no risk of confusion among audiences. There are also multiple Immortals trademarks, which could make that word in and of itself less defensible depending on the potential conflict.
That’s basically it right there. The word “immortal” has multiple dictionary definitions tracing back long before any trademark, including a [prominent ancient military unit](https://en.wikipedia.org/wiki/Immortals_(Achaemenid_Empire\)) so any trademark around that word isn’t strong enough to prevent any use of the word as a normal word, or even as part of another trademark when used descriptively.
The strongest trademark protection comes for words that are totally made up for the purpose of the product or company. Something like Hulu or Kodak.
Next up are probably coined mashups or modifications of existing words that remain distinct in their own right, like GeForce or Craisins.
Next up, words that have meaning but are completely unrelated to the product itself, like Apple (computers) and Snickers (the candy bar) or Tide (the laundry detergent).
Next up are suggestive marks where the trademark relies on the meaning to convey something about the product itself, but still retains some distinctiveness: InSinkErator is a brand of in-sink disposal, Coffee Mate is a non-dairy creamer designed for mixing into coffee, Joy-Con is a controller designed to evoke joy, etc.
Some descriptive words don’t get trademark protection until they enter the public consciousness as a distinct indicator of origin or manufacturer. Name-based businesses often fall into this category, like a restaurant named after the owner, and don’t get protection until they’re well known enough (McDonald’s is the main example).
It can get complicated, but the basic principle underlying all of it is that if you choose a less unique word as the name of your trademark, you’ll get less protection against others using it.
I think any kind of reputation score should be community specific. There are users whose commenting style fits one community but not another, and their overall reputation should be understood in the context of which communities actually like them rather than some kind of global average.
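Roughly what I have in mind, as a sketch only (the names and in-memory storage are hypothetical, just to show scores keyed per community instead of one global number):

```python
from collections import defaultdict

# (user, community) -> score, rather than a single global tally
reputation = defaultdict(int)

def record_vote(user: str, community: str, delta: int) -> None:
    reputation[(user, community)] += delta

def reputation_in(user: str, community: str) -> int:
    # How this particular community receives the user, not an
    # average across unrelated communities
    return reputation[(user, community)]
```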