Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.
In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.
But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise not to release AI models if Anthropic can't guarantee proper risk mitigations in advance.
*Misanthropic
When a company makes a pledge, it's either enforced by a court order or abandoned the second it's no longer useful. It is known.
When Google dropped literally "don't be evil", something I would assume is supposed to be a given, I lost hope for all corporations.
It really was the final symbolic stamp on the overhaul we saw across Silicon Valley.
They could probably never actually guarantee that in the first place. It seems a trained model is some big, mysterious thing that nobody really understands: they take maths so complicated that barely anyone can follow it, feed it all the data they can possibly lay their hands on, then pump insane amounts of computational power through it. It's the modern-day equivalent of Frankenstein's monster.
Greed will one day be our end!
Article dated 2/24/26
What the hell is going on in here.
I’m shocked!
surprised_pikachu.jpg