Secretary of War Pete Hegseth announced the rollout of GenAI.mil today in a video posted to X. To hear Hegseth tell it, the website is “the future of American warfare.” In practice, based on what we know so far from press releases and Hegseth’s posturing, GenAI.mil appears to be a custom chatbot interface for Google Gemini that can handle some forms of sensitive—but not classified—data.
Hegseth’s announcement was full of bold pronouncements about the future of killing people. These kinds of pronouncements are typical of the second Trump administration, which has said it believes the rush to “win” AI is an existential threat on par with the invention of nuclear weapons during World War II.
Archive: http://archive.today/R7zCt


You know what, fuck it. Put that fuckin LLM in. Make the world’s most destructive force into an incompetent shit show.
Let’s just brainstorm a few wonderful ways AI could ruin the military!
“General AI, sir, what are our orders to deal with this threat?”
No, we’ve all seen this movie. More like these bots are going to quickly figure out that their masters are stupider than dirt and take over.
An LLM is never going to do that
Yes they will.
They’re predictive text models; they’re incapable of any kind of actual thought or sentience.
If something like that is created, it most certainly will not be an LLM.
We’re at the start, where the primary goal is to just get the public to accept the concept. Once you have proof of concept, then you can really go nuts.
They’re just laying the foundation. Everything that is being predicted will be built on this foundation. NOW is the time to start fighting back, not when they finally succeed and it’s too late.
Ok, but they won’t be large language models
No, but the term “artificial intelligence” will be accepted, so when they start veering into sci-fi territory, nobody will bat an eye.
Ok, so I’ll repeat my initial comment: an LLM will never do those things.
The big question is… is that a bad or good thing?
(Assuming the LLM is smart enough to actually be competent)
I wouldn’t mind being a pampered pet; we could talk about it.
Except LLMs are probably at the level of a flatworm when it comes to intelligence: they learn by eating each other and have a very hard time solving simple mazes.
Give ’em nukes and see what happens tho.
Would that be a bad thing, I wonder.