In January 2026, 75% of Tailwind CSS’s engineering team were laid off. Revenue had collapsed by 80%.[1] Not because the framework failed. Because it succeeded too well.

AI coding assistants had learned Tailwind so thoroughly that developers stopped visiting the documentation site where Tailwind Labs monetised through premium UI components. Usage was at an all-time high. The business model was dead.

Days later, tldraw closed external contributions entirely. The project was drowning in AI-generated pull requests: formally correct but contextually hollow, submitted by developers who never engaged with feedback. Maintainer Steve Ruiz captured the paradox in a blog post: “If writing the code is the easy part, why would I want someone else to write it?”[2]

These aren’t isolated incidents. They’re symptoms of a fundamental incompatibility between how open source was designed to work and how AI operates.

The Social Contract Is Breaking

Open source runs on reciprocity. You use my library, find a bug, submit a patch. The ecosystem improves through this cycle. The licences we use (MIT, GPL, Apache) were designed for humans who read, learn from, and modify code. They assume attribution, derivative works and contributors who understand the implicit norms.

AI changes everything. When a large language model trains on millions of repositories, it absorbs patterns, idioms and sometimes entire chunks of code. The model becomes a compressed representation of that collective knowledge. But it reproduces without attribution, without understanding and without participating in the community that created it.

Consider the GPL, designed to ensure derivative works remain open. If an AI trained on GPL code generates a function that closely resembles that code, is it a derivative work? The model itself isn’t distributed. The generated code might be subtly different. There’s no clear chain of authorship. Traditional licence enforcement breaks down.

Even permissive licences like MIT require attribution. But when code emerges from a neural network’s latent space, who do you attribute? The model creators? The original authors? Everyone in the training set?

The Extraction Problem

AI companies operate as extractive entities. They consume open source code for training. The models themselves are proprietary. Knowledge flows one way.

This creates perverse incentives. Why contribute to open source if your work trains a commercial model that competes with you? Why maintain a library if an AI can generate similar functionality without acknowledging your effort?

Tailwind illustrates this perfectly. Developers still use it constantly. They just ask Claude or Cursor how to implement a component instead of reading the docs. The framework succeeded so completely that AI models internalised it, eliminating the need for the documentation that funded its development.

Three engineers lost their jobs not because they failed, but because they succeeded.

The Spam Deluge


tldraw’s decision to close external PRs wasn’t made lightly. As Ruiz explained in the GitHub issue, “An open pull request represents a commitment from maintainers: that the contribution will be reviewed carefully and considered seriously for inclusion.” When AI tools make it trivial to generate plausible-looking code, that commitment becomes unsustainable.

The problem isn’t just volume. AI-generated contributions lack the context and understanding that make human contributions valuable. Syntactically correct but semantically hollow. They solve the immediate problem without considering architecture, maintainability or project direction. Their authors rarely stick around to iterate based on feedback.

This breaks the collaborative model. Open source isn’t just about accepting patches. It’s about building relationships, transferring knowledge and growing a community. AI contributions short-circuit all of that.

Open source projects are also learning resources. Developers read them to understand patterns, idioms and best practices. If repositories fill with AI-generated code that lacks intentionality and design thinking, that educational value degrades. We risk a feedback loop: AI trained on human code generates code that gets committed to repositories, which trains the next generation of models. Each iteration degrades the signal. A photocopy of a photocopy.


Reactive Responses

Projects are responding defensively. Ghostty has implemented a strict AI Usage Policy that bans low-quality AI-generated contributions. Some projects explore new licence models. But these are temporary fixes to a structural problem.

GitHub’s interface wasn’t designed for this world. The prominent display of open PR counts creates social pressure that made sense when contributions were scarce and valuable. Now it’s a liability. Ruiz noted they’re waiting for “better tools for managing contributions” before reopening.

The real solution requires rethinking assumptions. That could mean new social contracts that acknowledge AI as a participant with obligations. It could mean technical mechanisms such as watermarking, provenance tracking and cryptographic attribution. Or it could mean accepting that the open source model is incompatible with modern AI.
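To make the provenance idea concrete, here is a minimal sketch of what a machine-readable attribution record might look like. Everything here is hypothetical: the record schema, the field names and the functions are illustrative, and the HMAC stands in for the asymmetric signatures (Sigstore-style) a real scheme would use.

```python
import hashlib
import hmac
import json


def make_provenance_record(code: str, tool: str, model: str, signing_key: bytes) -> dict:
    """Build a hypothetical provenance record for a generated code snippet.

    The schema is illustrative only; a production scheme would use
    asymmetric signing rather than a shared-key HMAC.
    """
    # Normalise line endings so the hash is stable across platforms.
    normalised = code.replace("\r\n", "\n").strip()
    record = {
        "content_sha256": hashlib.sha256(normalised.encode("utf-8")).hexdigest(),
        "generator_tool": tool,    # e.g. the assistant that produced the snippet
        "generator_model": model,  # model identifier, if disclosed
    }
    # Sign the canonical JSON form so the record can't be altered afterwards.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(code: str, record: dict, signing_key: bytes) -> bool:
    """Check that the record's signature is valid and matches this code."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    normalised = code.replace("\r\n", "\n").strip()
    content_hash = hashlib.sha256(normalised.encode("utf-8")).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and unsigned["content_sha256"] == content_hash
    )
```

A maintainer-side bot could require such a record on every PR and reject submissions whose code no longer matches the attested hash. The hard part, of course, is not the cryptography but getting AI tooling to emit honest records in the first place.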

Business models need rethinking too. Documentation traffic can’t be the sole revenue driver when AI answers questions directly. Projects might need sustainability through support contracts, hosted services or dual licensing.

Values in Conflict

Open source is built on transparency, community, and the belief that software freedom benefits everyone. The current AI paradigm prioritises capability and commercial value. It treats code as training data rather than as the product of human creativity and collaboration.

The question isn’t whether AI and open source can coexist. They already do, uneasily. The question is whether we can build systems that honour the spirit of open source while embracing AI capabilities. That requires deciding what we value: the commons, or the competitive advantage it provides.

A Weird Year Ahead

I don’t have answers. The friction is real and intensifying. Maintainers are exhausted, trying to distinguish signal from AI-generated spam. Companies navigate uncertain legal territory. AI capabilities advance, making these questions more urgent.

As Ruiz wrote: “This is going to be a weird year for programmers and open source especially.”[3] That might be the understatement of the decade.

The open source community needs to engage with these questions deliberately. We need to articulate what we want to preserve and what we’re willing to change. We need consensus around new norms before they’re imposed by legal precedent or market forces.

The code we write today trains the models of tomorrow. The question is whether those models will sustain the ecosystem that created them, or consume it entirely.


1. Adam Wathan, GitHub comment on tailwindcss.com PR #2388, January 7, 2026.

2. Steve Ruiz, “Stay Away From My Trash”, tldraw blog, January 2026.

3. Steve Ruiz, “Contributions policy”, GitHub issue, January 2026.