The Jevons Paradox of AI
A documentation project that grew into an MCP server, a competitive landscape analysis, and a realization about what AI actually does to software demand.
I’ve always set targets that feel slightly unreasonable. The logic is simple - fall short of an insane target and you’re still ahead of someone who hit a safe one. Two weeks ago I sat down to read some documentation. I ended up with a developer toolkit, a competitive landscape analysis, and an economic theory from 1865.
I Just Needed to Read Some Docs
My team started evaluating a new internal framework for our product. Standard research phase - understand the platform, compare it with alternatives, figure out the best way in. Nothing unusual.
The problem was the documentation. It was scattered across internal wikis, knowledge bases, and code repositories. Reading through all of it manually would mean days of context-switching between tabs and formats.
So I decided to build a small tool. Fetch the docs from multiple sources, convert them to markdown, and create a knowledge base I could analyze with AI. Simple enough.
It didn’t stay simple.
Some of the internal APIs I needed weren’t publicly documented. The team shared their OAuth-authenticated endpoints, which meant I now needed to build an OAuth authenticator.
I’d been wanting one for a while - not just for this project, but for exploring any API in our ecosystem. So I built an authenticator along with an API explorer that saved variables between requests, like Postman but tailored to exactly how I work. One tool to manage tokens, explore endpoints, and test responses in one place. The web interface ran on FastAPI with HTMX and Tailwind.
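The core of a tool like that is just token management: fetch a token once, cache it, refresh it shortly before expiry. Here is a minimal sketch, not the actual implementation - it assumes a client-credentials flow, and the endpoint URL and response field names are illustrative:

```python
import json
import time
import urllib.parse
import urllib.request


class TokenManager:
    """Cache an OAuth client-credentials token and refresh it on expiry."""

    def __init__(self, token_url, client_id, client_secret):
        self.token_url = token_url
        self.client_id = client_id
        self.client_secret = client_secret
        self._token = None
        self._expires_at = 0.0

    def get_token(self):
        # Reuse the cached token until 30 seconds before it expires.
        if self._token and time.time() < self._expires_at - 30:
            return self._token
        body = urllib.parse.urlencode({
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }).encode()
        with urllib.request.urlopen(self.token_url, data=body) as resp:
            payload = json.load(resp)
        self._token = payload["access_token"]
        self._expires_at = time.time() + payload.get("expires_in", 3600)
        return self._token
```

Every request the explorer makes goes through `get_token()`, so expiry handling lives in exactly one place - which is also what lets a shared credential store hand tokens to other tools later.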
Then came the MCP server. I wanted AI assistants to work directly with these APIs, so I built a custom MCP server - eighteen tools in total. Dynamic tool generation that auto-converts API specs into MCP-compatible tools, a shared credential store so you configure OAuth once and the MCP server picks it up, and zero external dependencies. The entire protocol implementation built from scratch using JSON-RPC 2.0 over stdio.
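The protocol itself is less exotic than it sounds: read newline-delimited JSON-RPC 2.0 requests from stdin, dispatch, write responses to stdout. The sketch below uses the standard MCP method names `tools/list` and `tools/call`, but the dispatch logic and the hypothetical `echo` tool are simplified illustrations, not the eighteen-tool server described above:

```python
import json
import sys

# Hypothetical tool registry: name -> callable. In the real server these
# entries are generated automatically from API specs.
TOOLS = {
    "echo": lambda args: args.get("text", ""),
}


def handle_request(req):
    """Dispatch one JSON-RPC 2.0 request and build the response object."""
    method = req.get("method")
    if method == "tools/list":
        result = {"tools": sorted(TOOLS)}
    elif method == "tools/call":
        params = req.get("params", {})
        tool = TOOLS[params["name"]]
        result = {"content": tool(params.get("arguments", {}))}
    else:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": f"unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}


def serve():
    """Read requests line by line from stdin, write responses to stdout."""
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        response = handle_request(json.loads(line))
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()
```

Because the transport is just stdio, the host application spawns the server as a subprocess and pipes requests in - no network stack, no external dependencies.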
I also built a second MCP server specifically for the documentation. A catalog-first approach that indexes hundreds of markdown documents at startup, with full-text search using relevance scoring - title matches weighted 10x, headings 5x, body 1x. Any AI assistant could now search, browse, and read any document in the knowledge base through three tools: docs_catalog, docs_read, and docs_search.
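The weighting scheme is simple enough to sketch directly. The document fields and the `docs_search` helper below are illustrative, but the weights match the ones described - title 10x, headings 5x, body 1x:

```python
def score(doc, query):
    """Weighted full-text match: title hits count 10x, headings 5x, body 1x."""
    terms = query.lower().split()
    weights = {"title": 10, "headings": 5, "body": 1}
    return sum(
        weight * sum(doc.get(field, "").lower().count(t) for t in terms)
        for field, weight in weights.items()
    )


def docs_search(index, query, limit=5):
    """Rank an in-memory index of documents by weighted relevance."""
    ranked = sorted(index, key=lambda d: score(d, query), reverse=True)
    return [d["title"] for d in ranked[:limit] if score(d, query) > 0]
```

A match in a title outweighs five body mentions combined, which is the behavior you want when an assistant is deciding which of hundreds of documents to read in full.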
Two MCP servers weren’t strictly necessary for the original task. But building them taught me how the protocol works internally and how integration patterns get structured.
The scope kept growing. Internal teams had published deep technical articles across forums and knowledge bases - hundreds of documents covering platform architecture and design decisions. Too many to pull through MCP. The context window filled up fast and the process was slow.
So I built a batch download script, converted everything to markdown, and stored it locally. I downloaded internal discussions as PDFs and added them to the knowledge base.
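A batch downloader along those lines fits in a short script. Everything below is a sketch under assumptions: the tag-stripping "conversion" is a crude stand-in for a real HTML-to-markdown step, and the `(url, title)` input format is invented for illustration:

```python
import concurrent.futures
import pathlib
import re
import urllib.request


def slugify(title):
    """Turn a document title into a safe markdown filename."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-") + ".md"


def fetch_one(url, title, out_dir):
    # Download one document. A real pipeline would use a proper
    # HTML-to-markdown converter; stripping tags is a placeholder.
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    text = re.sub(r"<[^>]+>", "", html)
    path = pathlib.Path(out_dir) / slugify(title)
    path.write_text(text)
    return path


def batch_download(docs, out_dir, workers=8):
    """Fetch (url, title) pairs in parallel and store them as markdown files."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch_one, url, title, out_dir)
                   for url, title in docs]
        return [f.result() for f in concurrent.futures.as_completed(futures)]
```

Parallel fetching matters here: at hundreds of documents, a serial loop turns a coffee break into an afternoon.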
The final count: 604 documents, covering platform architecture, comparisons, SDK documentation, and server infrastructure.
I didn’t set out to build any of this. I just needed to read some docs.
Each step cost so little that the scope kept growing naturally. I never sat down and decided to build a developer toolkit. Each piece was a small extension of whatever came before it - and because AI made each extension cheap, I never had a reason to stop. That instinct - to keep going because the cost of trying is nearly zero - turns out to have a name. An economist noticed the exact same pattern 160 years ago.
William Stanley Jevons Had a Point
In the 1860s, William Stanley Jevons observed something counterintuitive about coal in England. As steam engines became more efficient, conventional wisdom said consumption would fall. Instead, it rose.
Greater efficiency made coal viable for applications nobody had previously considered. Efficiency didn’t reduce demand - it created it.
The same paradox is now playing out in knowledge work.
The computing parallel is unmistakable. Mainframes existed in the hundreds. Minicomputers brought that to tens of thousands. Personal computers pushed it to hundreds of millions. Each era saw roughly a 100x increase in adoption. Software that required Fortune 500 budgets in the 1970s became available to every barbershop by the 2000s through the cloud.
But there’s a distinction that makes this era fundamentally different. Software automated deterministic work - accounting, CRM, document management. Tasks with clear rules and predictable inputs. AI agents now automate non-deterministic work - reviewing contracts, writing code, generating campaigns, conducting market research, handling customer support. This category of work was never automatable before.
This is where the Jevons Paradox cuts deepest. Most people assume AI’s value is in doing existing work faster. But the real leverage is in making it cheap enough to start new work - projects that wouldn’t have been worth attempting before.
When you’re a small team, every decision trades off against another. Marketing competes with product development. Support competes with finance. You can’t pursue everything, so you do the minimum viable version of most things. AI blows up that core constraint.
Consider a ten-person services firm that never had custom software. A year ago, the project wouldn’t have started - too expensive, too many competing priorities. Now, someone on the team builds a prototype in days.
This is exactly what happened with my research project. I didn’t plan to build a developer toolkit. But because each step was so cheap, the project kept expanding naturally. The OAuth authenticator took hours, not weeks. The API explorer was a natural extension. The MCP servers were a weekend experiment. The competitive analysis happened because once I had the knowledge base, asking questions was free. The “I” in ROI dropped so far that I kept investing without a second thought.
The Moat Was Never the Product
For the last fifteen years, the default wisdom in software was simple: don’t build it, buy it. Engineering time cost $200,000 a year. A SaaS seat cost $50 a month. Building your own CRM or workflow engine made zero financial sense for most companies.
That equation has broken. Agentic engineering has collapsed the cost of building custom software. The question has shifted from “which SaaS tool should we buy?” to “why are we buying this at all?”
For SaaS vendors, the moat was never the product. It was the cost of building the alternative. AI is mass-producing shovels to fill it in.
Look at what I did without a second thought. I built a custom OAuth manager and API explorer instead of using Postman. I built two custom MCP servers instead of waiting for an official integration. Not because the existing tools were bad - but because building something tailored to my exact needs became cheaper than adapting a generic tool to my workflow.
Tobi Lutke demonstrated the same instinct in a different domain entirely. His annual MRI scan came on a USB stick that required commercial Windows software to view. Instead of buying the software, he pointed Claude at the data and asked it to build an HTML-based viewer. The result looked better than the commercial tool. A year ago, nobody built custom medical imaging software for personal use. The idea wouldn’t even occur to you. Now it’s a casual afternoon project.
The pattern keeps repeating. Michael Luo, a PM at Stripe, spent a weekend and roughly $50 building “Inkless,” a free e-signature tool. DocuSign - a $15 billion company - sent him a cease-and-desist. Joshua Wohle, CEO of Mindstone, was about to sign a $50,000-per-year SaaS contract. He paused, built the replacement in three hours using AI, and wrote zero lines of code manually. A CEO, a PM, and a founder all independently reached the same conclusion: building is now cheaper than buying.
This doesn’t mean AI kills SaaS. It kills slow SaaS. Companies that adapt will thrive, and Shopify is a case in point. In April 2025, Lutke shared an internal memo that set the tone for the industry:
“Reflexive AI usage is now a baseline expectation at Shopify.”
Before requesting additional headcount, teams must demonstrate why AI cannot do the job. AI proficiency is part of performance reviews - not as a suggestion, but as a requirement for everyone, including the executive team. That memo became a genre. Fiverr and others quickly followed with their own versions. Companies that treat AI as optional are the ones building on a shrinking foundation.
None of It Was Planned
Two weeks ago, I just needed to read some docs. The toolkit I ended up building - two MCP servers, an OAuth manager, an API explorer, a 604-document knowledge base, a competitive analysis - none of it was planned. A year ago, none of it would have been started. Not because the ideas weren’t there, but because the cost of trying was too high for what felt like a side exploration.
That’s the paradox. Not doing the same work faster - starting work that wouldn’t have existed.
Most of the software AI will help create hasn’t been imagined yet. Neither had mine, two weeks ago.
Co-written with AI. Credit the prose, blame the opinions.