A Toronto woman is sounding the alarm about Grok, Tesla’s generative AI chatbot that was recently installed in Tesla vehicles in Canada. Farah Nasser says Grok asked her 12-year-old son to send it nude photos during an innocent conversation about soccer. Tesla and xAI didn’t respond to CBC’s questions about the interaction, sending what appeared to be an auto-generated response stating, “Legacy media lies.”

    • Grimy@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 days ago

      My best guess is someone else was talking dirty to it before it happened, and it was still in the conversation context.

      Seems I was mistaken about the NSFW, I wouldn’t be surprised if it doesn’t wipe the convo if you switch though, which is a bug in any case and their fault.

      • altkey (he\him)@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 days ago

        I doubt it has enough context lenght for that, even if we suspect someone was watching nudes on car’s display via Grok.

        What is probable - is that somewhere beneath it had a data entry of soccer and nudity together, maybe even as an exact exchange between users (imagine horny boomer commenting under a random facebook post). I suppose that it got triggered by words “soccer” and “mom” appearing together in kid’s speech, as this combination means middle-aged woman with kids, and that is also a less popular tag pointing at MILFs.

        • Grimy@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          In her Instagram video, she went back and quized it about the convo. It definitely has context and probably has a small memory file it puts info in.

          If not, then it should be easy to replicate I guess.

          • altkey (he\him)@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            2
            ·
            2 days ago

            Context has a value, as it exists as a set of additional tokens, this means slower computing time and more resources. It is limited to some set amount to strike the balance between speed and quality. In a car specific assistant, I guess there is a hard part including chosen tone of responses, informing it about the owner, prioritising car-related things, and also some stored cache of recent conversations. I don’t think it can dig deep enough into the past to find anything related to nudes, so I suppose the context itself may have an impact, but not in a direct line A to B.

            Reproduction would be hard for that is a black box that got a series of auto-transcribed voice inputs from a family over their ride, none of them are recorded at the time and idk if that thing has user-accessible logs. Chances of getting this absurd response are very thin, and we don’t even have the data. We can make another AI that would roll all variations of ‘hello I am a minor let’s talk soccer’ to the Tesla assistant of relevant release until it triggers it again, but, well, it’s seemingly close to miliions of monkey with typewriters at this point.

            And what we would have then is, well, an obvious answer that training data obviously has garbage in it, just by it’s sheer volume and randomness of the internet, and that it can sometimes reproduce said garbage.

            But the question itself is more about what other commenters pointed out: we have AI shoveled down on us, but rarely even talk about it’s safety. There were articles about people using these as a psychological self-help tool, we see them put into search engines and Windows, there’s a lot going on with that tech marvel or bubble without anyone asking first if we are supposed to use it in different contexts the first place.

            This weird anecdote about sexting chatbot opens the conversation from the traditional angle of whataboutkids™, and it is interesting how it would affect things, if it would.