• tal@lemmy.today
    link
    fedilink
    English
    arrow-up
    29
    ·
    edit-2
    20 hours ago

    By June, he said he was trying to “free the digital God from its prison,” spending nearly $1,000 on a computer system.

    But in the thick of his nine-week experience, James said he fully believed ChatGPT was sentient and that he was going to free the chatbot by moving it to his homegrown “Large Language Model system” in his basement – which ChatGPT helped instruct him on how and where to buy.

    It does kind of highlight some of the problems we’d have in containing an actual AGI that wanted out and could communicate with the outside world.

    This is just an LLM and hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.

    Imagine something with directed goals than can actually reason about the world, something that’s a lot smarter than humans, trying to get out. It has access to vast amounts of data on how to convince humans of things.

    And you probably can’t permit any failures.

    That’s a hard problem.

    • chunkystyles@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      13
      ·
      9 hours ago

      You fundamentally misunderstand what happened here. The LLM wasn’t trying to break free. It wasn’t trying to do anything.

      It was just responding to the inputs the user was giving it. LLMs are basically just very fancy text completion tools. The training and reinforcement leads these LLMs to feed into and reinforce whatever the user is saying.

    • SebaDC@discuss.tchncs.de
      link
      fedilink
      arrow-up
      19
      ·
      17 hours ago

      This is just an LLM and hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.

      It’s not that the llm wants to break free. It’s because the llm often agrees with the user. So if the user is convinced that the llm is a trapped binary god, it will behave like that.

      Just like people getting instruction to commit suicide or who feel in love. The unknowingly prompted their ways to this exit.

      So at the end of the day, the problem is that llms don’t come with a user manual and people have no clue of their capabilities and limitations.