• BolexForSoup@kbin.social
    7 months ago

    I don’t mind so long as all results are vetted by someone qualified. Zero tolerance for unfiltered AI in this kind of context.

    • Skua@kbin.social
      7 months ago

      If you need someone qualified to examine the case anyway, what’s the point of the AI?

        • Skua@kbin.social
          7 months ago

          In the test here, it literally only handled text. Doctors can do that. And if you need a doctor to check its work in every case, it has saved zero hours of work for doctors.

            • Skua@kbin.social
              7 months ago

              how high processing power computers with AI/LLM’s can assist in a lab and/or hospital environment

This is an enormously broader scope than the situation I actually responded to, which was LLMs making diagnoses and then having their work checked by a doctor.

        • Skua@kbin.social
          7 months ago

Usually to do work that needs doing but does not need the direct attention of the more skilled person. The assistant can do that work by themselves most of the time. In the example above, the assistant is doing all of the most challenging work and the doctor is then checking all of it.

        • Skua@kbin.social
          7 months ago

          In the example you provided, you’re doing it by hand afterwards anyway. How is a doctor going to vet the work of the AI without examining the case in as much detail as they would have without the AI?

          • BolexForSoup@kbin.social
            7 months ago

            Input symptoms and patient info -> spits out odds they have x, y, or z -> doctor looks at that as a supplement to their own work or to look for more unlikely possibilities they haven’t thought of because they’re a bit unusual. Doctors aren’t gods, they can’t recall everything perfectly. It’s as useful as any toxicology report or other information they get.
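The flow described above could be sketched roughly like this; the condition names, symptom sets, and scoring are entirely invented for illustration, not any real diagnostic model:

```python
# Hypothetical sketch of "symptoms in -> ranked odds out -> doctor reviews".
# The knowledge base and overlap scoring below are made up for illustration.

def rank_conditions(symptoms):
    """Return (condition, score) pairs, highest rough overlap score first."""
    knowledge_base = {
        "condition_x": {"fever", "cough"},
        "condition_y": {"fever", "rash"},
        "condition_z": {"fatigue", "rash", "cough"},
    }
    scores = {
        name: len(signs & set(symptoms)) / len(signs)
        for name, signs in knowledge_base.items()
    }
    # The doctor reads this ranked list as a supplement, not a verdict.
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_conditions(["fever", "cough"]))
```

The point of the sketch is the shape of the tool: it surfaces unlikely-but-possible candidates for a human to weigh, the same way a toxicology report does.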

            I am not doing my edits by hand. I am not using a blade tool and spooling film. I am not processing it. My computer does everything for me, I simply tell it what to do and it spits out the desired result (usually lol). Without my eyes and knowledge the inputs aren’t good and the outputs aren’t vetted. With a person, both are satisfied. This is how all computer usage basically works, and AI tools are no different. Input->output, quality depends on the computer/software and who is handling it.

            TL;DR: Garbage in, garbage out.