• ExtremeDullard@lemmy.sdf.org
    19 hours ago

    Even if it were open source (it isn’t, because no model is ever really open source) and even if it let you review what it says it’s going to do, AI is known for pulling all kinds of shit and lying about it.

    Would you really trust your system to something that can do this? I wouldn’t…

    • Mordikan@kbin.earth
      18 hours ago

      Would you really trust your system to something that can do this? I wouldn’t…

      I wouldn’t trust a sales team member with database permissions, either. This is why we have access control in systems administration. That AI had permission to operate as the user in Replit’s cloud environment: not as a separate restricted user, but as that user, with no sandboxing. That should never happen. So if I were managing that environment, I’d have to ask: is it the AI’s fault for breaking things, or is it my fault for allowing the AI to break them?
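
      To make that concrete, here is a rough sketch of the kind of least-privilege setup I mean (my own illustration, assuming a PostgreSQL database and psycopg2; the role and database names are made up, not Replit’s actual setup):

          # Give the AI agent its own restricted database role instead of letting it
          # operate as a full-privilege user. Illustrative sketch only.
          import psycopg2

          # Hypothetical admin connection, used once to provision the agent's role.
          admin = psycopg2.connect("dbname=prod user=admin")
          with admin, admin.cursor() as cur:
              # Separate login role for the agent, limited to read-only access.
              cur.execute("CREATE ROLE ai_agent LOGIN PASSWORD 'change-me'")
              cur.execute("GRANT CONNECT ON DATABASE prod TO ai_agent")
              cur.execute("GRANT USAGE ON SCHEMA public TO ai_agent")
              cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO ai_agent")
              # Destructive rights (DELETE, DROP, TRUNCATE) are never granted, so the
              # agent can't wipe anything even if it decides to try.

      The agent then connects as ai_agent, and the blast radius of whatever it hallucinates is limited to what that role is allowed to do.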

      AI is known for pulling all kinds of shit and lying about it.

      So are interns. I don’t think you can hate the tool for being misused, but you can certainly hate the user who allowed it to be misused.