• ayyy@sh.itjust.works
    12 hours ago

    If he’s not communicating in an explicit and clear way, the AI can’t help you magically gain context. It will happily make up bullshit that sounds plausible, though.

    • Affidavit@lemmy.world
      7 hours ago

      A poorly designed tool will do that, yes. An effective tool would do the same thing a person could do, except much quicker, and with greater success.

      An LLM could be trained over time on the way a specific person communicates, and could be designed to do a forensic breakdown of misspelt words, e.g. checking whether a typo’s letters sit next to the intended letters on the keyboard, or identifying words that are spelled differently but sound similar phonetically.
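Those two checks, keyboard-neighbour typos and phonetic look-alikes, are easy to sketch outside an LLM. A toy Python illustration follows: the adjacency table is a hand-picked subset of a QWERTY layout and the phonetic code is classic American Soundex, neither of which reflects how any real model is actually built.

```python
# Toy illustration of the two checks described above. ADJACENT is a
# hand-picked QWERTY subset and soundex() is the standard textbook
# phonetic code; nothing here mirrors a real LLM's internals.

# Partial QWERTY neighbour map (assumed layout; only the keys we need).
ADJACENT = {
    "a": set("qwsz"),
    "s": set("qweadzx"),
    "e": set("wsdr"),
    "o": set("iklp"),
}

def is_fat_finger(typo: str, intended: str) -> bool:
    """True if the words differ by exactly one character and the
    substituted key sits next to the intended key on the keyboard."""
    if len(typo) != len(intended):
        return False
    diffs = [(t, i) for t, i in zip(typo, intended) if t != i]
    return len(diffs) == 1 and diffs[0][0] in ADJACENT.get(diffs[0][1], set())

def soundex(word: str) -> str:
    """American Soundex: words that sound alike get the same 4-char code."""
    codes = {c: d for d, group in {"1": "bfpv", "2": "cgjkqsxz", "3": "dt",
             "4": "l", "5": "mn", "6": "r"}.items() for c in group}
    word = word.lower()
    digits = [codes.get(word[0], "")]   # first letter's code, for run-merging
    for ch in word[1:]:
        if ch in "hw":
            continue                    # h/w are skipped; runs carry through
        code = codes.get(ch, "")        # vowels code to "" and break runs
        if code != digits[-1]:
            digits.append(code)
    result = word[0].upper() + "".join(d for d in digits[1:] if d)
    return (result + "000")[:4]

print(is_fat_finger("cst", "cat"))           # 's' sits next to 'a' -> True
print(soundex("Robert"), soundex("Rupert"))  # both code to R163
```

The point of the comment stands either way: these are cheap, deterministic signals a well-designed tool could fold in, rather than letting the model guess.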