The Internet faces an existential crisis: automated programs accounted for 49.6% of web traffic in 2023, meaning nearly half of all traffic is now non-human, with AI-generated content and bots threatening to overwhelm authentic human interaction[1]. The trend has been accelerated by AI models scraping content at scale[1:1].
The problems are stark:
- Search engines flooded with AI-generated content optimized for algorithms rather than humans
- Social media platforms filled with AI “slop” and automated responses
- Genuine human content being drowned out by machine-generated noise
- Erosion of trusted information sources and shared truth
However, concrete solutions exist:
- Technical Defenses:
  - Open-source spam filtering tools like mosparo for protecting website forms
  - AI scraper blocking through systems like Anubis
  - Content authenticity verification via the CAI SDK[1:2]
- Community Building:
  - Supporting decentralized social networks (Mastodon, Lemmy)
  - Using open-source forum platforms that emphasize human moderation
  - Participating in curated communities with active fact-checking[1:3]
- Individual Actions:
  - Using privacy-focused browsers and search engines
  - Supporting trusted news sources and independent creators
  - Being conscious of data sharing and digital footprint[1:4]
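The scraper-blocking idea under Technical Defenses can be illustrated with a minimal sketch. This is not how Anubis works internally (Anubis issues proof-of-work challenges that cheap mass scrapers fail to complete); it is a simpler, weaker first line of defense: matching the request's User-Agent header against crawler tokens that the major AI scrapers publish themselves. The helper function below is hypothetical.

```python
# Minimal sketch (not Anubis itself): flag requests whose User-Agent
# contains a known AI crawler token. These tokens are published by the
# crawlers' operators (OpenAI's GPTBot, Common Crawl's CCBot,
# Anthropic's ClaudeBot, ByteDance's Bytespider).
KNOWN_AI_SCRAPERS = {"gptbot", "ccbot", "claudebot", "bytespider"}

def is_ai_scraper(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known AI crawler token."""
    ua = user_agent.lower()
    return any(token in ua for token in KNOWN_AI_SCRAPERS)
```

The obvious limitation is that a User-Agent header is trivially spoofable, which is exactly why systems like Anubis move to proof-of-work challenges instead of trusting what the client claims to be.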
“While exposure to AI-generated misinformation does make people more worried about the quality of information available online, it can also increase the value they attach to outlets with reputations for credibility,” notes a 2025 study by Campante[1:5].


Why would we want to stop it?