

So, just for show? It sounds possible but implausible IMO; I don’t think YouTube cares about that cesspool of its own comments, not even enough to set a smoke screen up.
Disgusting.
This would be sad and highly unethical if coming from some small, relatively new company struggling to keep up. But since it’s coming from a large, monopolistic, 30-year-old corporation, it becomes way worse.
I’m glad I stopped buying their cards. My last one is an AMD, and I’m going Chinese for the next one (a decade or so from now).
Probably not, because that 30% is an average across different resources.
For example, let’s say you have two resources: A, of which you’re using 10% of what’s available, and B, of which you’re using 50%.
Both average out to 30%. If you multiply the population by 3, you still have a surplus of A (now at 30%), but there isn’t enough B (you’d need 150% of it).
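If it helps, here’s that toy arithmetic as a quick sketch (the 10%/50% split is purely an illustrative assumption on my part, not real data):

```python
# Toy numbers showing how a 30% average usage can hide a shortage.
# The 10% / 50% split is an assumption for illustration only.
usage_percent = {"A": 10, "B": 50}   # how much of each resource is consumed today

average = sum(usage_percent.values()) / len(usage_percent)
print(f"average usage: {average:.0f}%")             # 30% -> looks like plenty of headroom

tripled = {resource: 3 * used for resource, used in usage_percent.items()}
print(f"after tripling the population: {tripled}")  # {'A': 30, 'B': 150} -> B is overdrawn
```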
Another concern is that increasing the population so much would force unsustainable approaches to resource extraction. In other words: 30 billion people living fine and dandy for a generation or two, and then their descendants living in a hellhole.
Yes, it is expensive. But most of that cost is not because of simple applications, like in my example with grammar tables. It’s because those models have been scaled up to a bazillion parameters and “trained” with a gorillabyte of scraped data, in the hopes they’ll magically reach sentience and stop telling you to put glue on pizza. It’s because of meaning (semantics and pragmatics), not grammar.
Also, natural languages don’t really have nonsensical rules; sure, sometimes you see some weird stuff (like Italian genderbending plurals, or English question formation), but even those are procedural: “if X, do Y”. LLMs are actually rather good at regenerating those procedural rules based on examples from the data.
But I wish it had some broader use that would justify its cost.
I wish they’d cut down the costs based on the current uses: small models for specific applications, dirt cheap in both training and running costs.
(In both our cases, it’s about matching cost vs. use.)
I’d go further: you won’t reach AGI through LLM development. It’s like randomly throwing bricks on a construction site, no cement, and hoping that you’ll get a house.
I’m not even sure if AGI is cost-wise feasible with the current hardware; we’d probably need cheaper calculations per unit of energy.
Why not quanta? Don’t you believe in the power of the crystals? Quantum vibrations of the Universe from negative ions from the Himalayan salt lamps give you 153.7% better spiritual connection with the soul of the cosmic rays of the Unity!
…what makes me sadder about the generative models is that the underlying tech is genuinely interesting. For example, for languages with a large presence online they get the grammar right, so stuff like “give me a [declension | conjugation] table for [noun | verb]” works great, and in any application where accuracy isn’t a big deal (like “give me ideas for [thing]”) you’ll probably get some interesting output. But it certainly won’t give you reliable info about most stuff, unless it’s directly copied from elsewhere.
The whole thing can be summed up as the following: they’re selling you a hammer and telling you to use it with screws. Once you hammer the screw, it trashes the wood really bad. Then they’re calling the wood trashing “hallucination”, and promising you better hammers that won’t do this. Except a hammer is not a tool to use with screws dammit, you should be using a screwdriver.
An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates.
So he’s suggesting that the models are producing less accurate results… because they have higher rates of less accurate results? This is a tautological pseudo-explanation.
AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades over the past months
When are people going to accept the fact that large “language” models are not general intelligence?
ideally to make them better at giving us answers we can trust
Those models are useful, but only a fool trusts (read: is gullible towards) their output.
OpenAI says the reasoning process isn’t to blame.
Just like my dog isn’t to blame for the holes in my garden. Because I don’t have a dog.
This is sounding more and more like model collapse - models perform worse when trained on the output of other models.
inb4 sealions asking what’s my definition of reasoning in 3…2…1…
If anything, printers today are worse than they used to be in the 90s. For example, I don’t remember chips preventing you from using third party ink being a thing back then. So I believe the printing industry mafia has been spending those decades adding antifeatures to their designs.
And IMO it highlights how much we [society in general] need open hardware.
If I need to prove something stupid and immoral, and it relies on the assumption that 2+2=5, then 2+2=4 is woke propaganda. Simple as.
And one of the muppets behind Reddit, kn0thing.
As I mentioned in another post about the same topic, he’s tying one sinking ship to another, so both can sink together.
Musk said the combined company will “build a platform that doesn’t just reflect the world but actively accelerates human progress.”
The funniest part is that this might not be a lie - I wouldn’t be surprised if Musk genuinely believed that.
…let’s get real. xAI’s main product is Grok, a text and image generator. Twitter is basically a blog platform for the sort of people who whine “WAAAH! TL;DR!”. Merge both and you’ll get what? Automated shitposting!
Which is also how Digg v4 ended up: brands as the content submitters.
Exactly. Almost like Reddit’s decision makers know how Digg died, and yet they’re unable to avoid following in its footsteps.
Cordwell and Barker expect user growth for Reddit to stall in 2025 and, as a result, see revenue growth becoming more reliant on making the platform’s proposition more attractive for advertisers.
This won’t be even remotely fun for the people still using that platform, because “making the platform’s proposition more attractive to advertisers” boils down to either more ads, or ads that are more obnoxious, more disguised as content, more targeted.
Etymologically “agent” is just a fancy borrowed synonym for “doer”. So an AI agent is an AI that does. Yup, it’s that vague.
You could instead restrict the definition further, and say that an AI agent does things autonomously. Then the concept is mutually exclusive with “assistant”, as the assistant does nothing on its own, it’s only there to assist someone else. And yet look at what Pathak said - that she understood both things to be interchangeable.
…so might as well say that “agent” is simply the next buzzword, since people aren’t so excited about the concept of artificial intelligence any more. They’ve used those dumb text gens, gave them either a six-fingered thumbs up or a thumbs down, but they’re generally aware that it doesn’t do a fraction of what they believed it would.
I’m not surprised. And I heavily recommend that people ask those assistants questions about a topic they know well; they’ll notice how much crap the bots output. Now consider that the bot is also bullshitting about the things that you don’t know.
If a billionaire slapped someone’s face, I’d expect Forbes to narrate how the second person cruelly hurt the billionaire’s hand with their face.
If we (people in general) do it, we’re being filthy thieves and the reason why everything is bad. But when it’s a megacorpo, it’s suddenly a-OK?
Screw this shit. Information should be like the air, free for everyone. Not free for the GAFAM caste and paid for us untouchables.
No, I only saw it after I solved the problem.
Initially I simplified the problem to one prisoner. The best way to reduce uncertainty was to split the bottles into two sets of 500 bottles each; the prisoner drinks from one set, and if he dies, the poisoned wine is there; otherwise it’s in one of the leftover 500 bottles.
Then I added in a second prisoner. The problem doesn’t give me enough time to wait for the first prisoner to die before knowing which set had the poisoned wine, so I had to have the second prisoner drinking at the same time as the first, from a partially overlapping set. This means splitting the bottles into four sets instead: “both drink it”, “only #1 drinks it”, “only #2 drinks it”, “neither drinks it”.
Extending this reasoning further to 10 prisoners, I’d have 2¹⁰=1024 sets. That’s enough to uniquely identify which bottle has poison. Then the binary part is just about keeping track of who drinks what.
Number all bottles in binary, starting from 0000000000. Then the Nth prisoner drinks all the wines where the Nth digit is “1”.
So, for example, if you had 8 bottles and 3 prisoners (exact same logic):

Bottle 000: nobody drinks it
Bottle 001: only prisoner #3 drinks it
Bottle 010: only prisoner #2 drinks it
Bottle 011: prisoners #2 and #3 drink it
Bottle 100: only prisoner #1 drinks it
Bottle 101: prisoners #1 and #3 drink it
Bottle 110: prisoners #1 and #2 drink it
Bottle 111: all three drink it

If nobody dies, the poisoned wine is the one numbered 000. And if all three die, it’s the 111.
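For anyone who wants to poke at it, here’s a minimal sketch of that labelling scheme in Python (my own illustration; I’m indexing prisoners by bit position from the least significant digit, which is just a convention choice):

```python
# Minimal sketch of the binary labelling scheme described above.
# 1000 bottles (numbered 0..999), 10 prisoners, one prisoner per bit position.
NUM_PRISONERS = 10          # 2**10 = 1024 >= 1000, enough to label every bottle

def drinkers(bottle: int) -> list[int]:
    """Prisoners (bit positions) who drink from this bottle."""
    return [p for p in range(NUM_PRISONERS) if bottle & (1 << p)]

def poisoned_bottle(dead_prisoners: set[int]) -> int:
    """Reconstruct the poisoned bottle's number from who died."""
    return sum(1 << p for p in dead_prisoners)

# Example: if bottle 613 (binary 1001100101) is the poisoned one, exactly the
# prisoners sitting on its set bits die, and their deaths point back to 613.
assert drinkers(613) == [0, 2, 5, 6, 9]
assert poisoned_bottle({0, 2, 5, 6, 9}) == 613
```

Everyone drinks in a single round, and the pattern of deaths reads off the bottle’s number directly.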
If you don’t hold it, you’ll eventually lose it. Plus sharing is loving, and if you don’t have it you can’t share it.