• TranquilTurbulence@lemmy.zip · 2 days ago

    Since basically all data is now contaminated, there’s no way to get massive amounts of clean data for training the next generation of LLMs. This should make it harder to develop them beyond the current level. If an LLM isn’t smart enough for you yet, there’s a pretty good chance it won’t be for a long time.

    • Tollana1234567@lemmy.today · 1 day ago

      Law of diminishing returns: LLMs train on the AI slop of other LLMs, which were themselves trained on other LLMs, all the way down to “normal human-written slop”.

    • artifex@piefed.social · 2 days ago

      Didn’t Elon breathlessly explain how the plan was to have Grok rewrite and expand on the current corpus of knowledge so that the next Grok could be trained on that “superior” dataset, which would forever rid it of the wokeness?

        • dustycups@aussie.zone · 1 day ago

          It was a really entertaining moment in history to see Grok showing up Elon & co despite their clear attempts to make it conform to their worldview.

          • artifex@piefed.social · 22 hours ago

            The common colloquialism is that objective reality has a liberal bias. So either you train your LLM on “woke” science and facts, or it spits out garbage nonsense that is obviously wrong even to the typical Twitter user.

      • Naich@lemmings.world · 2 days ago

        It started calling itself MechaHitler after the first pass, so I’d be interested to see how much less woke it could get by training itself on that.

    • Xylight@lemdro.id · 2 days ago

      A lot of LLMs now use intentionally synthesized or AI-generated training data. It doesn’t seem to affect them too adversely.
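
      As a rough illustration of that idea, here is a minimal sketch of filtered synthetic-data generation. The names generate_sample and passes_quality_filter are hypothetical placeholders standing in for a real teacher-model call and a quality/dedup filter, not any particular library’s API.

      # Hypothetical sketch: build a synthetic training set by sampling a
      # teacher model and keeping only outputs that pass a quality filter.

      def generate_sample(prompt: str) -> str:
          """Placeholder for a call to a teacher LLM."""
          return f"Synthetic answer to: {prompt}"

      def passes_quality_filter(text: str) -> bool:
          """Placeholder quality check (here: a trivial length threshold)."""
          return len(text.split()) >= 4

      def build_synthetic_dataset(prompts: list[str]) -> list[dict[str, str]]:
          dataset = []
          for prompt in prompts:
              completion = generate_sample(prompt)
              if passes_quality_filter(completion):  # drop low-quality generations
                  dataset.append({"prompt": prompt, "completion": completion})
          return dataset

      if __name__ == "__main__":
          print(build_synthetic_dataset(["Explain model collapse in one sentence."]))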

      • TranquilTurbulence@lemmy.zip · 1 day ago

        Oh, I’m sure there is a way. We’ve already grabbed the low-hanging fruit, but the next one is a lot higher. It’s there, but it requires some clever trickery and effort.