When I tried it in the past, I kinda didn’t take it seriously because everything was confined to its instance, but now, there’s full-featured global search and proper federation everywhere? Wow, I thought I heard there were some technical obstacles making it very unlikely, but now it’s just there and works great! I asked ChatGPT and it says this feature was added 5 years ago! Really? I’m not sure how I didn’t notice this sooner. Was it really there for so long? With flairs showing the original instance a video comes from and everything?

  • deranger@sh.itjust.works
    18 hours ago

    At least some editor will usually make sure Wikipedia is correct. There’s nobody ensuring ChatGPT is correct.

    • agamemnonymous@sh.itjust.works
      14 hours ago

      Just using the “information” it regurgitates isn’t very useful, which is why I didn’t recommend doing that. Whether the information summarized by Wikipedia and ChatGPT is accurate really isn’t important; you use those tools to find primary sources.

      • deranger@sh.itjust.works
        11 hours ago

        I’d argue that it’s very important, especially since more and more people are using it. Wikipedia is generally correct and people, myself included, edit incorrect things. ChatGPT is a black box and there’s no user feedback. It’s also stupid to waste resources to run an inefficient LLM that a regular search and a few minutes of time, along with like a bite of an apple’s worth of energy, could easily handle. After all that, you’re going to need to check all those sources ChatGPT used anyway, so how much time is it really saving you? At least with Wikipedia I know other people have looked at the same things I’m looking at, and a small percentage of those people will actually correct errors.

        Many people aren’t using it as a valid research aid like you point out; they’re just pasting directly out of it onto the internet. This is the use case I dislike the most.

        • agamemnonymous@sh.itjust.works
          11 hours ago

          > It’s also stupid to waste resources to run an inefficient LLM that a regular search and a few minutes of time, along with like a bite of an apple’s worth of energy, could easily handle.

          From what I can tell, running an LLM isn’t really all that energy-intensive; it’s the training that takes loads of energy. And it’s not like regular searches don’t use loads of energy to initially index web results.

          And this also ignores the gap between having a question and knowing how to search for the answer. You might not even know where to start. Maybe you can search a vague question, but you’re essentially hoping that somewhere in the first few results is a relevant discussion to get you on the right path. GPT, I find, is more efficient for getting from vague questions to more directed queries.

          > After all that, you’re going to need to check all those sources ChatGPT used anyway, so how much time is it really saving you? At least with Wikipedia I know other people have looked at the same things I’m looking at, and a small percentage of those people will actually correct errors.

          I find this attitude much more troubling than responsible LLM use. You should not be trusting tertiary sources, no matter how good their track record; you should be checking the sources used by Wikipedia too. You should always be checking your sources.

          > Many people aren’t using it as a valid research aid like you point out; they’re just pasting directly out of it onto the internet.

          That’s beyond the scope of my argument, and not really much worse than pasting directly from any tertiary source.