On the kernel security list we’ve seen a huge bump in reports. We were at 2–3 per week maybe two years ago, then reached probably 10 a week over the last year, with the increase being essentially all AI slop, and now since the beginning of the year we’re at 5–10 per day depending on the day (Fridays and Tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.

One thing I’m predicting is that, at the least, this will change the approach to security fixes: [ … ] software that used to follow the “release-then-go-back-to-cave” model will have to change and start dealing with maintenance for real, or just stop being pitched to the world as the ultimate-tool-for-this-and-that, because every piece of software becomes a target.

[ … ]

Overall I think we’re going to see much higher-quality software, ironically around the same level as before 2000, when the net became usable by everyone to download fixes. When software had to be pressed onto CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays, since updates are so easy to distribute. But before this happens, we have to go through a huge mess that might last for a few years to come! Interesting times…

  • actionjbone@sh.itjust.works

    That’s the thing, this isn’t AI slop.

    This is using the tools for their intended purpose, rather than trying to use them to replace human-written code.

    • AmbitiousProcess (they/them)@piefed.social

      Exactly. AI slop is just that. Slop.

      If it’s just an AI doing something useful, we don’t call it slop, we just call it AI.

When Google’s AlphaFold predicted the structures of over 200 million proteins, and its creators won a Nobel Prize for it, I don’t think anyone would call all the research using it to develop cures for diseases slop.

      • NewOldGuard@lemmy.ml

It’s the disadvantage of using a marketing term like “AI” to refer to literally any type of software that uses machine learning. We know the strengths and weaknesses of ML; it’s the current trend of pushing it as “intelligence” and a cure-all to replace workers that gives it a bad rap. Then the slop-machine chatbots get treated with the same attitude as actually useful tools, and both get a reputation they don’t deserve.

    • Rioting Pacifist@lemmy.world

Maybe it’s not slop, but this can lead to lazy developers who don’t grok the code they write.

Linus was right to be sceptical about unit tests in the kernel; writing tests without understanding the problem is common in my paid job. The AI-enabled equivalent, writing code without truly understanding it, is going to be much worse, and it’s a separate issue from the pure slop AI generates at the moment.