Samisen AI

Are We Entering the Gillette Model of Software Development?

8 min read

Introduction: The Razor and the Blade

Gillette perfected one of the most enduring business models in history: sell the razor cheap, charge forever for the blades. The product that looks like the solution is actually the beginning of a dependency. You don't own the outcome. You rent it, indefinitely, at whatever price the vendor decides.

We've been building software for nearly two decades—starting in C++ in 2007, moving through Java, C#, and eventually into the React and Node.js ecosystem. And as we watch the AI revolution reshape our industry, we can't shake a nagging feeling: we're entering a Gillette era for software. The razor is the AI-powered development experience. The blade? Maintenance. Compute costs. Vendor lock-in. And an ever-growing bill passed quietly down to the customer.


The Efficiency Illusion

When we started our careers, efficiency was a discipline. Writing C++ meant every allocation mattered. You thought in bytes. You thought in cycles. There was no room for waste because waste had immediate, visible consequences.

Then came the abstraction layers. Java promised portability and developer productivity. C# brought the richness of the .NET ecosystem. Eventually we arrived at JavaScript frameworks—React, Angular, Vue—and suddenly we were shipping hundreds of kilobytes of JavaScript to render a button. Nobody complained too loudly because the developer experience was great, and compute had gotten cheap enough to absorb the bloat.

Each transition in the technology stack followed the same arc: we traded efficiency for speed of development, and the market rewarded us for it. Fast shipping beats lean code, every time.

Now AI has entered the picture, and the arc is repeating—but at a scale that makes every prior transition look modest.


The Code Nobody Fully Owns

Here's what the data is starting to tell us.

GitClear's analysis of 211 million changed lines of code from 2020 to 2024 found an 8-fold increase in the frequency of duplicated code blocks during 2024, with code duplication roughly ten times more prevalent than two years earlier. That's not a productivity story. That's an entropy story.

According to the 2025 Stack Overflow Developer Survey, 84% of professional developers are already using or planning to use AI tools. Both Google and Microsoft disclosed in 2025 that AI now writes over 20% of their new code. The volume is staggering. But volume and quality are not the same thing.

A report from Ox Security, analyzing 300 open-source projects, found that AI-generated code is "highly functional but systematically lacking in architectural judgment." Think about what that means: the code works today. It passes the tests. It ships. But it wasn't designed to evolve, to be debugged by a developer who didn't write it, or to survive a system-level change six months from now.

Contrary to the perceived productivity benefits, Harness's State of Software Delivery 2025 report found that a majority of developers now spend more time debugging AI-generated code than they did before adopting it. We built the razor. Now we're buying blades.


The Maintainability Question We've Stopped Asking

Maintainability has always been one of the foundational pillars of software engineering. We teach it in computer science programs. We evaluate code against it in code reviews. We build entire careers around it.

But the industry is failing it on two fronts right now—and not talking about it enough.

Developer maintainability. AI-generated code proliferates faster than it can be understood. Nobody fully owns it. When something breaks six months from now, and it will, who understands it well enough to fix it? Study after study has found that AI-generated code carries code smells, maintainability issues, and security vulnerabilities; when those issues are accepted into production repositories, they accumulate as technical debt. This isn't technical debt we knowingly took on. It's debt we didn't even know we were borrowing.

Customer maintainability. This is the dimension nobody talks about. If the cost of running software keeps climbing because of the AI infrastructure underneath it, the system is ultimately unsustainable for the people it's supposed to serve. And the numbers here are alarming.

SaaS inflation is now running at nearly five times the standard market inflation rate. Businesses spend an average of $7,900 per employee annually on SaaS tools, a 27% increase over the last two years. And 60% of vendors deliberately mask rising prices by bundling AI features into existing tiers. The playbook is familiar: add AI, raise prices, make opting out impossible.

Gartner forecasts enterprise software spend rising at least 40% by 2027, with generative AI as the primary accelerant. Vendors lure customers in with generous pilot credits, yet scaling to production routinely reveals costs underestimated by 500–1,000%.

We adopted AI partly to reduce complexity. We may be creating an entirely new, far more expensive layer of it.


The Gillette Parallel

Here's where the Gillette analogy sharpens into something uncomfortable.

Gillette's genius wasn't just the pricing model—it was the dependency model. Once you've bought the handle, switching costs are high. You stay in the ecosystem. You pay for the blades.

In software, we are building that same dependency architecture at every level of the stack.

Developers depend on AI coding assistants they don't fully understand. Codebases depend on AI-generated modules that are opaque to the engineers who inherit them. Products depend on AI infrastructure from vendors whose pricing can shift mid-contract. 78% of IT leaders report unexpected SaaS charges driven by consumption-based or AI pricing models.

And the customer at the end of the chain? AI economics are fundamentally different from SaaS—every AI query incurs real compute costs, compressing gross margins from the 80–90% typical of SaaS to 50–60% for AI products. That margin compression has to go somewhere. It goes into your subscription invoice.
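To make the margin arithmetic concrete, here is a back-of-envelope sketch. Every number below is a hypothetical assumption chosen to land in the ranges the article cites, not data from any real product:

```python
# Back-of-envelope sketch of AI margin compression.
# All figures are illustrative assumptions, not real pricing data.

def gross_margin(revenue_per_user: float, cost_per_user: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue_per_user - cost_per_user) / revenue_per_user

# Classic SaaS: hosting cost is small and roughly fixed per user.
saas_margin = gross_margin(revenue_per_user=30.0, cost_per_user=4.0)

# AI product: every query incurs real inference cost on top of hosting.
queries_per_user = 500   # monthly queries per user (assumed)
cost_per_query = 0.025   # dollars of compute per query (assumed)
ai_margin = gross_margin(
    revenue_per_user=30.0,
    cost_per_user=4.0 + queries_per_user * cost_per_query,
)

print(f"SaaS gross margin: {saas_margin:.0%}")  # ~87%
print(f"AI gross margin:   {ai_margin:.0%}")    # ~45%
```

The structural point survives any particular choice of numbers: a marginal cost that scales with usage eats into a margin that a fixed-cost SaaS product never had to surrender, and that gap has to be recovered somewhere, usually in the price.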

The razor is the productivity promise. The blade is everything that comes after.


Is the Customer Actually Better Off?

This is the question worth coming back to. For all the investment, all the compute, all the tooling—is the end customer actually in a better position?

Adobe adds AI features. Adobe raises prices. Microsoft adds Copilot at $30 per user per month, on top of an existing Microsoft 365 license, so the real cost per seat is significantly higher than the sticker price. The capability expands. The bill expands faster.

Forrester predicts that by 2026, 75% of technology decision-makers will face moderate to severe technical debt. Not legacy debt from years of neglect. New debt, freshly generated by the tools we adopted to move faster.

Speed and capability are real. But when we ask whether customers are better off, we have to ask better off compared to what, and at what cost. A faster car that requires a specialized mechanic and premium fuel isn't obviously better than a reliable one you can maintain yourself.


Nobody Is Choosing This

To be honest: we're not optimizing for frugality in our own work right now either. We're building fast. We're using the AI tools available because speed matters in the market we're competing in. Every founder, every engineer, every team is making the same rational choice locally that adds up to a collectively irrational outcome globally.

That's exactly what makes this hard to solve. Nobody is choosing to create this problem. We're all just responding to incentives.

But it raises a question: should we, as engineers and builders, start treating computational frugality the way we once treated code quality? Not as a constraint, but as a discipline? Not as a tradeoff, but as a professional value?


What a Different Path Might Look Like

We're not arguing we abandon AI-assisted development. The productivity gains are real, and the competitive pressure is real. But there are questions worth starting to ask:

Are we measuring the total cost of ownership of the code we ship—including the compute costs, the maintenance burden, and the customer cost of running it? Are we auditing AI-generated code for architectural judgment, not just functional correctness? Are we designing software systems where the AI layer is transparent and replaceable, not a silent dependency buried in the stack?
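One way to make the total-cost-of-ownership question concrete is to fold build time, maintenance burden, and compute into a single number per feature. A minimal sketch, where every figure is a placeholder assumption and the point is the shape of the calculation, not the values:

```python
# Minimal total-cost-of-ownership sketch for a shipped feature.
# Every number here is a placeholder assumption.

from dataclasses import dataclass

@dataclass
class FeatureCost:
    build_hours: float                   # initial development time
    maintenance_hours_per_month: float   # debugging, refactoring, upgrades
    compute_dollars_per_month: float     # inference/hosting cost to run it
    hourly_rate: float = 100.0           # loaded engineering cost (assumed)

    def tco(self, months: int) -> float:
        """Total cost of ownership over a time horizon, in dollars."""
        build = self.build_hours * self.hourly_rate
        upkeep = months * self.maintenance_hours_per_month * self.hourly_rate
        compute = months * self.compute_dollars_per_month
        return build + upkeep + compute

# Hand-written feature: slower to build, cheaper to own.
hand_written = FeatureCost(build_hours=80, maintenance_hours_per_month=2,
                           compute_dollars_per_month=20)

# AI-assisted feature: fast to ship, but more debugging and more compute.
ai_assisted = FeatureCost(build_hours=20, maintenance_hours_per_month=8,
                          compute_dollars_per_month=300)

for months in (1, 12, 24):
    print(months, hand_written.tco(months), ai_assisted.tco(months))
```

Under these assumed numbers, the AI-assisted feature is far cheaper at month one and roughly twice as expensive by month 24. The crossover point is where the blade starts costing more than the razor saved.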

Some teams are already moving toward a "vibe, then verify" culture—granting developers the freedom to experiment boldly with AI while maintaining a rigorous accountability framework to verify the results. That's a start. But verification needs to extend beyond code quality into cost architecture.

The best engineers we've worked with shared a common instinct: they could do more with less. That instinct didn't come from the language they used—it came from how they thought about problems. It was a discipline, not a limitation.


Conclusion: The Blade Is Coming

Every Gillette customer knew, eventually, what the business model was. The razor was never the product. The razor was the commitment.

We are in the early innings of a software era where building has never been easier, cheaper, or faster. That's genuinely exciting. But maintenance—of the code, of the architecture, of the customer's ability to afford to run the system—is going to arrive as a reckoning.

The question isn't whether the AI-driven development wave will produce a maintainability crisis. The data suggests it already is. The question is whether we, as an industry, will start valuing the discipline of frugality before the bill comes due—or whether we'll keep buying blades until we can't afford to shave.


What do you think—are we sleepwalking into a Gillette model for software? We'd love to hear how engineers and CTOs are thinking about this tension between speed and sustainability.

Written by the Samisen AI Team · Book a call if this was useful