A Toronto woman is sounding the alarm about Grok, Tesla’s generative AI chatbot that was recently installed in Tesla vehicles in Canada. Farah Nasser says Grok asked her 12-year-old son to send it nude photos during an innocent conversation about soccer. Tesla and xAI didn’t respond to CBC’s questions about the interaction, sending what appeared to be an auto-generated response stating, “Legacy media lies.”


So if the default settings seemingly allow it to be explicit, slap an adults only, MA, or whatever rating onto it. There’s ratings on basically all media and entertainment, right? Slap a parental advisory sticker on it. Give it an NC-17. Whatever is going to get people that cry about how TV, movies, and video games corrupt the youth. Let them fight it out. There’s no reason that AI shouldn’t have to put up with the same shit as anything else.
It’s a NSFW option that you have to turn on. She’s using her son for clout. “Journalism”
The article says that wasn’t enabled. Of course she could be lying, but I don’t know that anymore than anything else. If you were to just ask me generally, “do you think that AI would ever do something it’s not supposed to?” My answer would be “of course.”
That’s fair, I misread what she said. I removed my own upvote. I would be surprised she is lying but I wouldn’t be surprised someone else used the car before her and was talking dirty to the telsa. It has context of previous conversations.
Why should a car chatbot be asking for nudes, unpromptred, at all?
My best guess is someone else was talking dirty to it before it happened, and it was still in the conversation context.
Seems I was mistaken about the NSFW, I wouldn’t be surprised if it doesn’t wipe the convo if you switch though, which is a bug in any case and their fault.
I doubt it has enough context lenght for that, even if we suspect someone was watching nudes on car’s display via Grok.
What is probable - is that somewhere beneath it had a data entry of soccer and nudity together, maybe even as an exact exchange between users (imagine horny boomer commenting under a random facebook post). I suppose that it got triggered by words “soccer” and “mom” appearing together in kid’s speech, as this combination means middle-aged woman with kids, and that is also a less popular tag pointing at MILFs.
In her Instagram video, she went back and quized it about the convo. It definitely has context and probably has a small memory file it puts info in.
If not, then it should be easy to replicate I guess.
Context has a value, as it exists as a set of additional tokens, this means slower computing time and more resources. It is limited to some set amount to strike the balance between speed and quality. In a car specific assistant, I guess there is a hard part including chosen tone of responses, informing it about the owner, prioritising car-related things, and also some stored cache of recent conversations. I don’t think it can dig deep enough into the past to find anything related to nudes, so I suppose the context itself may have an impact, but not in a direct line A to B.
Reproduction would be hard for that is a black box that got a series of auto-transcribed voice inputs from a family over their ride, none of them are recorded at the time and idk if that thing has user-accessible logs. Chances of getting this absurd response are very thin, and we don’t even have the data. We can make another AI that would roll all variations of ‘hello I am a minor let’s talk soccer’ to the Tesla assistant of relevant release until it triggers it again, but, well, it’s seemingly close to miliions of monkey with typewriters at this point.
And what we would have then is, well, an obvious answer that training data obviously has garbage in it, just by it’s sheer volume and randomness of the internet, and that it can sometimes reproduce said garbage.
But the question itself is more about what other commenters pointed out: we have AI shoveled down on us, but rarely even talk about it’s safety. There were articles about people using these as a psychological self-help tool, we see them put into search engines and Windows, there’s a lot going on with that tech marvel or bubble without anyone asking first if we are supposed to use it in different contexts the first place.
This weird anecdote about sexting chatbot opens the conversation from the traditional angle of whataboutkids™, and it is interesting how it would affect things, if it would.
I mean, xAI isn’t specific to cars.
Start slapping Anti Porn laws on it too, require ID every time you use it
Good idea. Classify it as porn so it has a really good time in those states that are requiring ID for porn websites.