Boffins at the Department of Energy’s Sandia National Labs are working to develop cheap and power-efficient LEDs to replace lasers. One day, they let a trio of AI assistants loose in their lab.
Five hours later, the bots had churned through more than 300 tests and uncovered a novel approach for steering LED light that is four times better than methods the researchers developed using their own wetware.
The work, detailed in a paper published in the journal Nature Communications, underscores how AI agents are changing the way scientists work.
“We are one of the leading examples of how a self-driving lab could be set up to aid and augment human knowledge,” Sandia researcher Prasad Iyer said in a recent blog post.
The experiment builds on a 2023 paper in which Iyer and his team demonstrated a method for steering LED light that has applications in everything from autonomous vehicles to holographic projectors. The trick was finding the right combination of parameters to steer the light in the desired manner, a process researchers expected to take years.
A lot of that comes down to how science operates: results are validated before they're trusted, and the models are trained on a tightly targeted data set rather than everything on the internet.
Here’s the interesting part.
“We didn’t do any LLMs. There is significant interest in that. There are lots of people trying those ideas out, but I think they’re still in the exploratory phase,” Desai told El Reg.
As it turned out, the researchers didn’t need them. “We used a simpler model called a variational autoencoder (VAE). This model was established in 2013. It’s one of the early generative models,” Desai said.
By sticking with domain-specific models based on more mature architectures, Sandia also avoided hallucinations – the errors that arise when AI makes stuff up – which have become one of the biggest headaches associated with deploying generative AI.
“Hallucinations were not that big a concern here because we build a generative model that is tailored for this very specific task,” Desai explained.
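For the curious, the core of a VAE is small enough to sketch in a few lines. The snippet below is a toy forward pass in plain NumPy, purely illustrative: the layer sizes, single-layer encoder/decoder, and input dimensions are all made up for the example, and nothing here reflects the specifics of Sandia's model.

```python
# Toy variational autoencoder (VAE) forward pass in plain NumPy.
# All dimensions and the single-dense-layer architecture are
# assumptions for illustration, not Sandia's actual setup.
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Random weights and zero biases for one dense layer."""
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Toy dimensions: 16 input features compressed to a 4-dim latent space.
D_IN, D_LATENT = 16, 4
W_mu, b_mu = init_layer(D_IN, D_LATENT)    # encoder -> latent mean
W_lv, b_lv = init_layer(D_IN, D_LATENT)    # encoder -> latent log-variance
W_dec, b_dec = init_layer(D_LATENT, D_IN)  # decoder -> reconstruction

def encode(x):
    """Map inputs to the parameters of a Gaussian over the latent code."""
    return x @ W_mu + b_mu, x @ W_lv + b_lv

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, the 'reparameterization trick'."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample back to input space."""
    return z @ W_dec + b_dec

x = rng.standard_normal((8, D_IN))  # batch of 8 toy inputs
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)
print(z.shape, x_hat.shape)  # (8, 4) (8, 16)
```

In training (not shown), the loss combines reconstruction error with a KL-divergence term that pulls the latent distribution toward a standard Gaussian; sampling new latent codes from that Gaussian is what makes the model generative.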
Y’all remember when they used machine learning to make lightweight aircraft designs?

“AI Agent” is, today, synonymous with LLM. The headline is likely to communicate incorrect information.
Yeah, that title is just awful. When they have nothing interesting to write about, they use the title to intentionally mislead the readers. There should be laws against that.
I feel like calling this task-specific, tailored machine learning algorithm an “AI agent” is misleading. But hey maybe some AI-pilled lost soul will click on it and be surprised it’s not just shoving an LLM where it has no right to be shoved and learn about this other type of AI, a VAE.
Additionally, automating rapid iteration and investigation isn’t necessarily “smart” - it just lets one try permutations more quickly, with the parameters adjusting automatically. Handy, useful, but not this “magic” that tech bro billionaires keep fawning over.
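To make that point concrete: a "self-driving lab" loop is basically propose, measure, keep the best. Here's a toy sketch - the objective function is invented and just stands in for a physical measurement, and the random-search strategy is only the simplest possible stand-in for whatever the lab actually used.

```python
# Toy "self-driving" optimization loop: propose parameters, measure,
# keep the best. The measure() function is a made-up stand-in for a
# real lab measurement (e.g. beam-steering efficiency).
import random

random.seed(42)

def measure(params):
    """Pretend measurement: best possible score 0, at params (0.3, -0.7)."""
    x, y = params
    return -((x - 0.3) ** 2 + (y + 0.7) ** 2)

best_params, best_score = None, float("-inf")
for trial in range(300):  # roughly the number of automated tests in the story
    candidate = (random.uniform(-1, 1), random.uniform(-1, 1))
    score = measure(candidate)
    if score > best_score:  # keep the best parameters seen so far
        best_params, best_score = candidate, score

print(best_params, best_score)
```

No intelligence required - just tireless iteration. Smarter loops swap the random proposals for a model (Bayesian optimization, or a generative model like the VAE above) that suggests where to look next.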
VAEs are used in image generation too at the end of generation to convert latent images to pixel space.
Yeah, it’s more like a sparkling expert system.