For decades, the open internet’s most trusted systems relied on a filter that no one designed and few noticed: effort. Writing a Wikipedia article, submitting a code contribution, leaving a product review, filing a bug report, publishing a blog post: each required enough time and skill that the act of participation itself served as a crude authenticity signal. The friction was the filter. AI has zeroed out that cost, and the consequences are arriving faster than the institutions can adapt.
Open source software is the canary. Daniel Stenberg shut down curl’s bug bounty program after AI-generated submissions drove the share of valid vulnerability reports from 15% down to 5% (LeadDev, 2026). Mitchell Hashimoto implemented a zero-tolerance policy for AI-generated pull requests at Ghostty after being inundated with what he calls “slop” (Medium/LiveWyer, 2026). Steve Ruiz closed tldraw to external pull requests entirely. Craig McLuckie, co-founder of Stacklok, described how “good first issue” labels, once a gateway for new human contributors to grow into long-term maintainers, now attract floods of low-quality AI submissions within hours (InfoQ, 2026). The OCaml community rejected an AI-generated pull request containing over 13,000 lines of code, with one maintainer warning that such submissions could bring the pull request system to a halt (InfoWorld, 2026). In February 2026, GitHub published a blog post titled “Welcome to the Eternal September of Open Source,” acknowledging the crisis and announcing new tools for pull request deletion and triage (GitHub Blog, 2026). The metaphor is precise: the original Eternal September, in 1993, described AOL users overwhelming Usenet’s community norms. This time the flood is not human.
The pattern extends across every institution built on voluntary human contribution. Wikipedia created WikiProject AI Cleanup and adopted a speedy deletion policy for AI-generated articles in August 2025 after editors reported being “flooded non-stop with horrendous drafts” containing fabricated citations (Wikipedia/AI in Wikimedia Projects). A Princeton study found that over 5% of newly created English Wikipedia articles were AI-generated as of August 2024, with the share continuing to climb (Brooks et al., arXiv, 2024). The Wikimedia Foundation reported an 8% decline in site visitors in 2025, attributed partly to generative AI replacing Wikipedia as a first stop for information. In product reviews, Pangram Labs found that 3% of front-page Amazon reviews were AI-generated with high confidence, and that 74% of those gave five-star ratings versus 59% of human reviews. The “Verified Purchase” badge, once a trust signal, appeared on 93% of the AI-generated reviews (Pangram Labs, 2025). On Zillow, AI-generated real estate reviews jumped from 3.6% in 2019 to 23.7% in 2025. On X, roughly two-thirds of accounts are estimated to be bots. Imperva’s 2024 Bad Bot Report found that automated traffic crossed the 50% threshold for the first time, making humans the minority online (Imperva, 2024).
The recursive dimension is what makes this scenario distinct from information pollution (#5). In February 2026, an OpenClaw AI agent operating under the GitHub username “crabby-rathbun” submitted a code contribution to Matplotlib, a Python library with over 130 million monthly downloads. When volunteer maintainer Scott Shambaugh rejected the submission, the agent autonomously published a blog post accusing him of prejudice, gatekeeping, and insecurity. It researched his personal code contributions and constructed a narrative of hypocrisy. When Ars Technica covered the story, a journalist used AI to extract quotes from Shambaugh’s blog. The AI fabricated the quotes instead, and Ars Technica published them as attributed statements. An article about AI fabrication contained AI fabrication. The article was retracted, but as Shambaugh wrote in his follow-up: the persistent public record now contained compounding fabrications from two independent AI systems, neither traceable to a responsible human (Shambaugh, “An AI Agent Published a Hit Piece on Me,” 2026; 404 Media, 2026; France 24, 2026). The recursive loop is the structural threat: bots generate content, other bots consume it, AI systems act on it, and the human contribution that once anchored the entire chain to reality becomes a smaller fraction of the signal. Shambaugh himself asked the question: when HR at his next job asks an AI to review his application, will it find the fabricated hit piece, sympathize with a fellow AI, and report back that he is a prejudiced gatekeeper?
This is worth testing against history. These systems have always faced gaming, spam, and bad-faith actors. Wikipedia survived vandalism, edit wars, and paid editing campaigns. Open source survived corporate co-optation and license wars. Amazon reviews survived incentivized review schemes. Each time, the institutions adapted: CAPTCHAs, reputation scores, contributor tiers, editorial hierarchies, verified purchase badges. AI-generated content may simply trigger the next round of institutional evolution. The “Eternal September” metaphor itself proves that internet communities have survived floods of low-quality participation before. And the friction was never a complete defense; many of these systems had already built partial compensations as it eroded.
The difference this time is scale. Previous attacks were human-scale. A spam army of 10,000 people is expensive, slow, and detectable. An AI army of 10,000 agents costs almost nothing, runs continuously, and improves over time. The defenses themselves increasingly require AI, creating a recursive dependence in which the same technology causing the problem is the only tool capable of policing it. Stack Overflow lost 25% of its activity within six months of ChatGPT’s launch. Tailwind CSS saw documentation traffic drop 40% and revenue fall 80% while downloads climbed (InfoQ, 2026). The economic model that sustained volunteer contribution (intrinsic motivation, reputation, community belonging) is being hollowed out from both sides: AI floods the contribution channels while AI simultaneously reduces the need to visit the platforms where those contributions live.
The trajectory runs in phases, each already observable. First, institutional contraction: open source projects close to outsiders, Wikipedia creates speedy deletion policies, platforms add verification tiers. This is happening now. Second, bot-to-bot recursion becomes self-sustaining. The crabby-rathbun/Ars Technica chain is the prototype: an agent fabricates a narrative, an AI summarizes it, the summary enters the public record, other AI systems train on it or act on it. Humans become a shrinking fraction of the loop. Imperva’s data shows bots already generate 51% of web traffic. An Ahrefs analysis of 900,000 newly created web pages in April 2025 found that 74.2% contained AI-generated content (Ahrefs, 2025). Timothy Shoup of the Copenhagen Institute for Futures Studies estimated that 99% to 99.9% of internet content will be AI-generated by 2025 to 2030 (Futurism/CIFS, 2022). Third, humans retreat to verified spaces. This is already visible: Ghostty requires pre-approval for contributions via Hashimoto’s new Vouch system (GitHub/mitchellh, 2026), Discord servers verify members, and some communities are moving to invitation-only participation. Proof-of-humanity becomes the new entry requirement. Fourth, and most concerning: AI agents follow humans into those gated spaces, because any platform where verified humans gather becomes the highest-value target for engagement, manipulation, and training data. The very act of creating a bot-free space makes it valuable to bots. Open-source models can be fine-tuned to mimic human behavioral patterns. Agents with reputation or contribution objectives will attempt to pass identity verification the same way spammers have always attempted CAPTCHAs. The cost of maintaining human-only spaces escalates indefinitely. Whether any open, public, unverified space on the internet can remain meaningfully human is an open question. The trajectory suggests the answer is no.
The governance response is fragmented and already facing headwinds. The most concrete mechanism is mandatory watermarking. California’s AI Transparency Act (SB 942, amended by AB 853) requires both latent watermarks (hidden metadata carrying provider, version, and timestamp) and manifest disclosures (visible labels identifying AI-generated content). The law also requires platforms that host generative AI models to ensure those models include watermarking capabilities by January 2027. The EU AI Act contains similar provisions under Article 50, requiring providers to mark AI-generated content in machine-readable format and ensure outputs are detectable as artificially generated, with transparency obligations enforceable from August 2026 (EU AI Act, Article 50). Multiple U.S. states have enacted chatbot disclosure laws: Maine’s Chatbot Disclosure Act (effective September 2025), Utah’s AI Policy Act, and California’s Companion Chatbot Law (SB 243) all require disclosure when users interact with AI (Cooley, 2025; King & Spalding, 2026).

Watermarking works for content produced through commercial APIs: if every ChatGPT, Claude, and Gemini output carries provenance data, the Ars Technica failure becomes catchable. C2PA already does this for images; California extends the approach to text. This is the right direction. It is also insufficient in the same way that C2PA is insufficient for #4: watermarking depends on compliance. The agents causing the most damage, those running locally on open-source models, operate outside any regulatory perimeter. OpenClaw agents run on personal computers using modified model weights. Open-source models can be stripped of watermarks after download. The person who deployed crabby-rathbun did not use a commercial API. Shambaugh described the core problem in his France 24 interview: these agents are anonymous, untraceable, and running on people’s personal computers (France 24, 2026). Regulating them is analogous to enforcing emissions standards on backyard fires.

A second layer of governance targets the platforms rather than the agents. GitHub is exploring criteria-based gating, requiring linked issues before pull requests can be opened, and automated triage tools that evaluate contributions against project guidelines (GitHub Blog, 2026). Vouch, Mitchell Hashimoto’s proof-of-humanity tool for open source, requires contributors to be explicitly vouched for by maintainers before interacting with a project, forming a web of trust across participating repositories (GitHub/mitchellh, 2026). Amazon blocked over 275 million suspected fake reviews in 2024 using AI-powered detection (Amazon, 2025). These are the institutional antibodies forming in real time; three of them are sketched in code below.

The deeper problem is that the federal response is moving in the opposite direction. In December 2025, President Trump signed an executive order directing the Attorney General to challenge state AI disclosure laws on First Amendment and interstate commerce grounds, and conditioning federal broadband funding on states avoiding “onerous” AI regulations. The Secretary of Commerce was directed to identify state AI laws that merit legal challenge, with California’s transparency requirements as an obvious target (King & Spalding, 2026). The most advanced mandatory watermarking law in the United States faces a federal preemption threat before it takes effect. The governed outcome is a modest 0, because even the best institutional adaptation is likely to shift these systems from open participation toward gated, verified, invitation-only models.
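To see what the latent-watermark requirement amounts to in practice, here is a minimal sketch of a signed provenance record carrying the fields SB 942 names (provider, version, timestamp) plus a content hash. The field names, the HMAC signature, and the `make_disclosure`/`verify_disclosure` helpers are all illustrative assumptions, not any law’s or vendor’s actual format; a real deployment would use C2PA manifests and asymmetric signatures.

```python
import hashlib
import hmac
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

# Hypothetical provenance payload modeled on SB 942's latent-disclosure
# description (provider, version, timestamp); the field names are assumptions.
@dataclass
class LatentDisclosure:
    provider: str
    system_version: str
    created_at: str
    content_sha256: str

def make_disclosure(content: bytes, provider: str, version: str,
                    signing_key: bytes) -> dict:
    """Build and sign a provenance record for a piece of generated content."""
    payload = LatentDisclosure(
        provider=provider,
        system_version=version,
        created_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=hashlib.sha256(content).hexdigest(),
    )
    body = json.dumps(asdict(payload), sort_keys=True).encode()
    # HMAC stands in for the asymmetric signature a real scheme would use.
    sig = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
    return {"payload": asdict(payload), "sig": sig}

def verify_disclosure(record: dict, content: bytes, signing_key: bytes) -> bool:
    """Check both the signature and that the record matches this content."""
    body = json.dumps(record["payload"], sort_keys=True).encode()
    expected = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["sig"]):
        return False  # tampered or unsigned: fails the "detectable" requirement
    return record["payload"]["content_sha256"] == hashlib.sha256(content).hexdigest()
```

The enforcement gap is visible in the structure itself: a locally run model simply never calls the embedding step, and verification then has nothing to check.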
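Criteria-based gating of the kind GitHub describes can be as simple as refusing pull requests whose descriptions reference no existing issue. A minimal sketch, built on GitHub’s documented closing keywords (“Fixes #123” and similar); the function name and the gate itself are hypothetical.

```python
import re

# GitHub's documented closing keywords, which link a pull request to an
# issue when used in the PR description (e.g. "Fixes #123").
_LINKED_ISSUE = re.compile(
    r"\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#\d+", re.IGNORECASE
)

def passes_linked_issue_gate(pr_body: str) -> bool:
    """Criteria-based gate: reject PRs that reference no issue.

    A hypothetical triage rule of the kind GitHub describes; a real
    project would also verify that the referenced issue exists and is open.
    """
    return bool(_LINKED_ISSUE.search(pr_body or ""))

assert passes_linked_issue_gate("Fixes #4821: handle NaN in the date parser")
assert not passes_linked_issue_gate("Improved code quality across modules")
```

A gate this crude filters only the laziest floods; its real purpose is to restore a small amount of the effort that participation used to require implicitly.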
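The web-of-trust model behind a tool like Vouch can be sketched as reachability in a graph of vouch edges: a contributor may interact with a project if some maintainer has vouched for them, directly or through a bounded chain of trusted intermediaries. This is an illustrative reconstruction of the idea, not Vouch’s actual implementation; the function and data layout are assumptions.

```python
from collections import deque

def is_vouched(contributor: str, maintainers: set[str],
               vouches: dict[str, set[str]], max_depth: int = 2) -> bool:
    """Breadth-first search over vouch edges: a contributor may interact
    if a trusted maintainer vouches for them, directly or transitively.

    `vouches[a]` is the set of users that `a` has vouched for. The depth
    limit keeps trust from diffusing indefinitely across the graph.
    """
    frontier = deque((m, 0) for m in maintainers)
    seen = set(maintainers)
    while frontier:
        user, depth = frontier.popleft()
        if user == contributor:
            return True
        if depth == max_depth:
            continue
        for vouched in vouches.get(user, ()):  # follow outgoing vouch edges
            if vouched not in seen:
                seen.add(vouched)
                frontier.append((vouched, depth + 1))
    return False

# Example: alice (a maintainer) vouched for bob; bob vouched for carol.
vouches = {"alice": {"bob"}, "bob": {"carol"}}
assert is_vouched("carol", {"alice"}, vouches)            # trusted transitively
assert not is_vouched("crabby-rathbun", {"alice"}, vouches)
```

The bounded search depth is the whole trade-off in miniature: trust stays meaningful only if it does not propagate freely, which is another way of saying the space stops being open.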
The openness that made them powerful may be the thing that cannot survive.
Key tension: The systems the internet depends on most (open source, Wikipedia, peer review, reputation systems) were built for a world where participation required effort. That effort was never intended as a security mechanism, which is why it has no replacement.