I’m desperate for a community-driven review system for open source. We’re drowning in vibe-coded slop, and I honestly don’t have the time or a good slop detector to audit every tool I download. I know I should be checking under the hood, but the sheer volume of low-quality projects makes it impossible to keep up.
This is what good distros do (well, some of them). I don’t think low-touch repos like the AUR, Homebrew, or PPAs would catch this, but I doubt huntarr will ever make it to Debian.
Ofc the trend of running unvetted upstream containers undermines this.
Sounds like the solution would be a public code-sharing platform that specifically bans AI-generated code. Then, at least, we’d be moving in the right direction. Do any alternatives to GitHub have such a rule?
It doesn’t need to be perfect or catch every offender, and no magic AI-detection sauce is required. If it just flagged slop, human or otherwise, and obviously AI-written code, with a reporting mechanism for user-driven monitoring, that would be a good start.
But should we worry about it becoming a source for AI companies to scrape? How would we deter that?
I tend to look at a project’s issue tracker; that gives me a feel for how the author(s) deal with feedback. Some projects have hundreds of open tickets with barely any interaction, yet code updated “2 days ago”.
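That heuristic can be roughed out in a few lines of code. A minimal sketch, assuming you’ve already fetched the issue counts and last-commit date however you like; the thresholds (14 days, 50 issues, one comment per five tickets) are my own arbitrary picks, not anyone’s established metric:

```python
from datetime import datetime, timedelta, timezone

def looks_unresponsive(open_issues, issue_comments, last_commit,
                       now=None, min_issues=50):
    """Flag repos with many open issues, little discussion, yet recent commits.

    open_issues    -- number of currently open tickets
    issue_comments -- total comments across those tickets
    last_commit    -- timezone-aware datetime of the most recent code change
    """
    now = now or datetime.now(timezone.utc)
    # Code still churning, so the project isn't simply abandoned...
    recently_active = (now - last_commit) < timedelta(days=14)
    # ...but feedback is ignored: fewer than one comment per five open tickets.
    ignored = open_issues >= min_issues and issue_comments < open_issues / 5
    return recently_active and ignored

# A repo with 300 open tickets, 10 comments, pushed 2 days ago is suspect.
recent = datetime.now(timezone.utc) - timedelta(days=2)
print(looks_unresponsive(300, 10, recent))  # True
```

Nothing rigorous, obviously, just the “hundreds of tickets, zero replies, fresh commits” smell test as a function.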
Being here and reading about who’s using what will help weed out the major outliers.
All open source needs more eyeballs, which is still the advantage over closed source.
I saw a project yesterday where the two main contributors were some guy and ‘Claude’. So, y’know, that one at least was an easy tell 😂
You’re here, that’s a good start…
There are projects turning issues into discussions, too.
Sometimes it’s really easy: open a bunch of code files and see if they’re littered with comments. If they are, it’s likely sloppified.
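Even that check is scriptable. A rough sketch; the 40% density threshold and the “more than half the files” cutoff are made-up numbers you’d want to tune, and it only counts full-line comments, not trailing ones:

```python
from pathlib import Path

def comment_density(path, marker="#"):
    """Fraction of non-blank lines that are pure comment lines."""
    lines = [l.strip() for l in Path(path).read_text(errors="ignore").splitlines()]
    lines = [l for l in lines if l]  # drop blank lines
    if not lines:
        return 0.0
    comments = sum(1 for l in lines if l.startswith(marker))
    return comments / len(lines)

def looks_sloppified(paths, threshold=0.4):
    """Flag a project if most sampled files are littered with comments."""
    dense = [p for p in paths if comment_density(p) > threshold]
    return len(dense) > len(paths) / 2
```

Run it over a handful of source files from a repo and see how many come back comment-heavy. Crude, but so is the smell it’s checking for.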