One of the stupidest things I’ve heard.
I’d also like to link the Bluesky repost I found this through:
https://bsky.app/profile/leyawn.bsky.social/post/3lnldekgtik27
I wish ads felt pain when I skipped them
“Does autocorrect cry when I don’t use its corrections?”
These are the same type of people who believed ELIZA to be sentient.
We can’t even give humans human rights. AI will have to get in line.
Does my phone feel pain when I drop it?
Why don’t you ask it?
What kind of fluff “journalism” is this?
thanks. my first thought was, “are you fucking kidding me?”
but this is what all the money wants us to think about “AI”, which is definitely not intelligence. they want everyone to accept that pattern recognition is indistinguishable from intelligence.
edit - alcohol makes me talk in circles
The pride of cancelling my 20 year subscription continues to swell.
Wow, and in the NYT no less. This will make a lot of people a lot more stupid. I guess the AI grift needs to go on for a while longer.
I really wonder what’s going on in the editors’ minds here.
The entire premise of the article is “All experts say no, but I think yes” - why would anyone publish this, on any topic? If it were an actual debate, maybe with some contrarian but actual experts arguing in favor of sentience, you could get into an argument here. But this article is blatant science denial. Climate change deniers and antivaxxers use the exact same approach: “facts say X, but my feelings say Y”.
I guess articles like this create high engagement; they are the very definition of rage-bait.
What’s saddening is the complete lack of integrity on every level of the publisher. Surely they must know that this is blatant misinformation, but they just don’t care.
Stuff like this does have consequences, it shapes the discussion and leads to bad decisions and outcomes. But like in so many instances, everyone is fine with it as long as they can convince themselves that they won’t be affected by the results of their own actions.
Before we even get close to having this discussion, we would need an AI capable of experiencing things and developing an individual identity. And that runs completely counter to the goals of the corporations developing AIs, because they want something that can be mass-deployed, centralised, and as predictable as possible - i.e. not individual agents capable of experience.
If we ever have a truly sentient AI, it’s not going to be designed by Google, OpenAI, or DeepMind.
Yep, an AI can’t really experience anything if it never updates the weights during each interaction.
Training is simply too slow for AI to be properly intelligent. When someone cracks that problem, I believe AGI is on the horizon.
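The frozen-weights point can be shown with a toy sketch (a made-up two-line “model”, not any real LLM): during inference the parameters are fixed, so the same input always passes through the same function, and nothing about the interaction is written back into the model.

```python
import numpy as np

# Toy stand-in for an LLM's parameters. In deployed models, inference
# runs with weights frozen: no gradient updates happen per interaction.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4))

def respond(prompt_vec):
    # Pure function of the frozen weights; no state is modified.
    return np.tanh(weights @ prompt_vec)

prompt = np.ones(4)
snapshot = weights.copy()

out1 = respond(prompt)
out2 = respond(prompt)  # same prompt in, identical output back

# Nothing persisted from the "interaction": outputs match exactly
# and the weights are byte-for-byte unchanged.
assert np.array_equal(out1, out2)
assert np.array_equal(weights, snapshot)
```

Anything that looks like memory in real chatbots comes from re-feeding the conversation into the context window, not from the model itself changing.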
Ok so: Measure of a Man is one of my all time favorite Star Trek episodes, but come the fuck on. We are so, so far away from that. Maybe worry more about humans, right now, and the world we live in, instead of some nebulous fucking future that we won’t even goddamn reach if we don’t pay attention to, you know, humans and the world we live in.
So… the headline answered the question and people still read the article?
Can our AI fall in love with a human? Scientists laughed at me when I asked them but I found this weird billionaire to pay me to have sex with his robot.
Gemini in its current form? No, but it is a fair question to ask for the future.
Yeah, twenty years from now at the very least.
A little too optimistic
Yeah, but it’s like fusion. It’s always 20 years away for the last 60 years.
Realistically, as a dev who watched AI develop from cheap parlor tricks into very expensive, ecosystem-crunching fancy parlor tricks that managers think will replace all of their expensive staff who actually know how to design and create:
Modern “AI” is fundamentally incapable of actual thought. These models are very advanced and impressive statistical engines, but the technology cannot think at any level.