On the kernel security list we’ve seen a huge bump in reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year, with the increase being almost entirely AI slop, and now since the beginning of the year we’re around 5-10 per day depending on the day (Fridays and Tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.
Something I’m predicting is that at least it will change the approach to security fixes: [ … ] software that used to follow the “release-then-go-back-to-cave” model will have to change to start dealing with maintenance for real, or to just stop being proposed to the world as the ultimate-tool-for-this-and-that because every piece of software becomes a target.
[ … ]
Overall I think we’re going to see a much higher quality of software, ironically around the same level as before 2000, when the net became usable by everyone to download fixes. When software had to be pressed to CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays since updates are easy to distribute. But before this happens, we have to experience a huge mess that might last for a few years to come! Interesting times…
kinda scary when ai slop becomes successful ai analysis
That’s the thing, this isn’t AI slop.
This is using the tools for their intended purpose, rather than trying to use them to replace human-written code.
Exactly. AI slop is just that. Slop.
If it’s just an AI doing something useful, we don’t call it slop, we just call it AI.
When Google’s AlphaFold predicted the folding of over 200 million protein structures, and won a Nobel Prize for it, I don’t think anyone would call all the research using it to develop cures for diseases slop.
It’s the disadvantage of using a marketing term like “AI” to refer to literally any type of software using machine learning. We know the strengths and weaknesses of ML; it’s the current trend of pushing it as “intelligence” and a cure-all to replace workers that gives it a bad rap. Then the slop-machine chatbots get treated with the same attitude as actually useful tools, and both get a reputation they don’t deserve.
My parents have started calling CGI made by humans AI
Maybe it’s not slop, but this can lead to lazy developers who don’t grok the code they write.
Linus was right to be sceptical about unit tests in the kernel; writing tests without understanding the problem is common in my paid job. The AI-enabled equivalent of writing code without truly understanding it is going to be much worse, and is a separate issue from the pure slop AI generates at the moment.
I have been thinking for a while that AI tools, as useless as they generally are, could for once become helpful in checking freshly developed code. Even if the actual code is smart, most bugs are in reality pretty dumb.
welcome to 24 months ago… it’s a good time to pay attention, because things are now functional.
Overall I think we’re going to see a much higher quality of software, ironically around the same level as before 2000, when the net became usable by everyone to download fixes. When software had to be pressed to CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays since updates are easy to distribute.
Finally someone else said that.
I don’t know how long this pace will last. I suspect that bugs are being reported faster than they are written, so we could in fact be working through a long backlog (and I hope so).
It is going to take some time, then stabilize, then decline. I guess maybe a year or so, until all major flaws and bugs are discovered and addressed. Maybe the Rust code would help with this. After that it would either go back to normal, which is most likely, or developers will get up to speed using the right tools.
AI could help accelerate writing fixes for reported bugs, as it does for discovering them.
AI is also likely to write bugs faster than they are reported.
Maybe the Rust code would help with this.
Why? Are these bugs in modules that are memory management related?
You could think that this development puts open source projects at a disadvantage.
But this does not seem to be the case: AI tools can also be used to automatically disassemble and even decompile closed-source machine code, leaving it open to the same kind of analysis.
By the way, in the medium term, generalizing this development from the kernel to distro packages in general, this could be a good argument to prefer a rolling-release distro like Arch, openSUSE Tumbleweed, or Guix over “stable” distros like Debian or Ubuntu.
Debian has real advantages (it has one of the fastest response times to security vulnerabilities), but rolling-release distros have the advantage not only that they can in theory update fast, but also that dependent packages only need to be compatible with the latest version of each library to ensure stability.
On the other hand, it could lead to Debian becoming so heavily tested and patched that it becomes its own thing akin to one of the BSDs.