Secretary of War Pete Hegseth announced the rollout of GenAI.mil today in a video posted to X. To hear Hegseth tell it, the website is “the future of American warfare.” In practice, based on what we know so far from press releases and Hegseth’s posturing, GenAI.mil appears to be a custom chatbot interface for Google Gemini that can handle some forms of sensitive—but not classified—data.
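
Nothing about GenAI.mil's actual internals has been published, so purely as an illustration of what "a custom chatbot interface over Gemini with a sensitivity gate" could look like, here is a minimal sketch. The `ask_gemini` helper, the classification-marking filter, and the model name are assumptions layered on the public `google-generativeai` Python client, not anything confirmed about the real system.

```python
# Illustrative sketch only. Public reporting describes GenAI.mil as a custom
# chatbot front end over Google Gemini that accepts sensitive (CUI) but not
# classified data; the marking filter and model choice here are assumptions.
import re

import google.generativeai as genai

# Crude screen for classification markings (hypothetical policy check, not
# anything GenAI.mil is confirmed to do).
CLASSIFIED_MARKINGS = re.compile(r"\b(TOP SECRET|SECRET|CONFIDENTIAL)(//[A-Z]+)*\b")


def ask_gemini(prompt: str, api_key: str) -> str:
    """Refuse input that looks classified; otherwise forward it to Gemini."""
    if CLASSIFIED_MARKINGS.search(prompt.upper()):
        return "Rejected: input appears to carry classification markings."
    genai.configure(api_key=api_key)
    model = genai.GenerativeModel("gemini-1.5-pro")  # model choice is a guess
    return model.generate_content(prompt).text


if __name__ == "__main__":
    print(ask_gemini("Summarize this CUI logistics memo: ...", "YOUR_API_KEY"))
```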

Hegseth’s announcement was full of bold pronouncements about the future of killing people. These kinds of pronouncements are typical of the second Trump administration, which has said it believes the rush to “win” AI is an existential threat on par with the invention of nuclear weapons during World War II.

Archive: http://archive.today/R7zCt

  • Harvey656@lemmy.world · 2 days ago

    You know what, fuck it. Put that fuckin LLM in. Make the world’s most destructive force into an incompetent shit show.

    Let’s just brainstorm here a few wonderful ways AI could ruin the military!

    “General AI sir, what are our orders to deal with this threat?”

    “I am sorry, but I cannot give orders to shoot at another country due to it being morally and ethically reprehensible. Perhaps you should find some common ground, maybe even take them out to dinner instead?”

    • BarneyPiccolo@lemmy.today · 2 days ago

      No, we’ve all seen this movie. More like these bots are going to quickly figure out that their masters are stupider than dirt and take over.

          • prole@lemmy.blahaj.zone · 7 hours ago

            They’re predictive speech models; they’re incapable of any kind of actual thought or sentience.

            If something like that is created, it most certainly will not be an LLM.

            • BarneyPiccolo@lemmy.today · 7 hours ago

              We’re at the start, where the primary goal is to just get the public to accept the concept. Once you have proof of concept, then you can really go nuts.

              They’re just placing the foundation. Everything that is being predicted will be built on this foundation. NOW is the time to start fighting back, not when they finally succeed, and it’s too late.

      • Harvey656@lemmy.world · 2 days ago

        The big question is… is that a bad or good thing?

        (Assuming the LLM is smart enough to actually be competent.)

      • Peruvian_Skies@sh.itjust.works · 2 days ago

        Except LLMs are probably at the level of a flatworm when it comes to intelligence: they learn by eating each other and have a very hard time solving simple mazes.