Boffins at the Department of Energy’s Sandia National Labs are working to develop cheap, power-efficient LEDs to replace lasers. One day, they let a trio of AI assistants loose in their lab.

Five hours later, the bots had churned through more than 300 tests and uncovered a novel approach for steering LED light that is four times better than methods the researchers developed using their own wetware.

The work, detailed in a paper published in the journal Nature Communications, underscores how AI agents are changing the way scientists work.

“We are one of the leading examples of how a self-driving lab could be set up to aid and augment human knowledge,” Sandia researcher Prasad Iyer said in a recent blog post.

The experiment builds on a 2023 paper in which Iyer and his team demonstrated a method for steering LED light that has applications in everything from autonomous vehicles to holographic projectors. The trick was finding the right combination of parameters to steer the light in the desired manner, a process researchers expected to take years.

  • Jul (they/she)@piefed.blahaj.zone · 1 day ago

    A lot of it is simply that science validates its results and trains on a very targeted data set, not everything on the internet.

  • Hamartiogonic@sopuli.xyz · 2 days ago

    Here’s the interesting part.

    “We didn’t do any LLMs. There is significant interest in that. There are lots of people trying those ideas out, but I think they’re still in the exploratory phase,” Desai told El Reg.

    As it turned out, the researchers didn’t need them. “We used a simpler model called a variational auto encoder (VAE). This model was established in 2013. It’s one of the early generative models,” Desai said.

    By sticking with domain-specific models based on more mature architectures, Sandia also avoided hallucinations – the errors that arise when AI makes stuff up – which have become one of the biggest headaches associated with deploying generative AI.

    “Hallucinations were not that big a concern here because we build a generative model that is tailored for this very specific task,” Desai explained.
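The VAE mentioned above can be sketched in a few lines: an encoder maps each input to a latent mean and log-variance, a sample is drawn via the reparameterization trick, and a decoder maps it back. This toy forward pass uses random linear weights purely for illustration; it is not the Sandia model, and none of the dimensions or weights come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear "encoder": map inputs to a latent mean and log-variance
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, where eps ~ N(0, I).
    # In a trained VAE this keeps sampling differentiable w.r.t. mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Linear "decoder": map latent samples back to data space
    return z @ W_dec

x = rng.standard_normal((4, 8))          # toy batch: 4 samples, 8 features
W_mu = rng.standard_normal((8, 2))       # random, untrained weights
W_logvar = rng.standard_normal((8, 2))
W_dec = rng.standard_normal((2, 8))

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)      # latent samples, shape (4, 2)
x_hat = decode(z, W_dec)                 # reconstructions, shape (4, 8)
```

Training would add a reconstruction loss plus a KL-divergence term pulling the latent distribution toward a standard normal; once trained, sampling z directly and decoding generates new candidates, which is the "generative" part.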

  • spicy pancake@lemmy.zip · 2 days ago

    I feel like calling this task-specific, tailored machine learning algorithm an “AI agent” is misleading. But hey, maybe some AI-pilled lost soul will click on it, be surprised it’s not just shoving an LLM where it has no right to be shoved, and learn about this other type of AI, a VAE.

    • skuzz@discuss.tchncs.de · 2 days ago

      Additionally, automating rapid iteration and investigation isn’t necessarily “smart” - it just lets one try permutations more quickly, with the parameters adjusting automatically. Handy and useful, but not the “magic” that tech bro billionaires keep fawning over.
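The automated-permutation point above can be sketched as a plain grid sweep. The parameter names and the scoring function here are hypothetical stand-ins (nothing from the Sandia setup, where the "score" would be a physical measurement of the device):

```python
import itertools

def score(voltage, angle, pulse_width):
    # Hypothetical objective standing in for a lab measurement;
    # built so the best grid point is (3, 45, 10).
    return -((voltage - 3) ** 2 + (angle - 45) ** 2 + (pulse_width - 10) ** 2)

# Candidate values for each parameter (illustrative grids)
voltages = [1, 2, 3, 4]
angles = [30, 45, 60]
pulse_widths = [5, 10, 20]

# Exhaustively try every permutation and keep the highest-scoring one
best = max(itertools.product(voltages, angles, pulse_widths),
           key=lambda params: score(*params))
print(best)  # -> (3, 45, 10)
```

Self-driving labs typically replace the exhaustive loop with a model (such as the VAE above, or Bayesian optimization) that proposes which permutation to try next, which is where the speedup over brute force comes from.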