Future-proofing code is a cost, not a saving
Have you heard yourself saying “We need to add this code so we can easily change it tomorrow”?
This vague notion of “future-proofing” can come to seem self-evident, something you should do because it’s inherently good. After all, who would question the idea that code we write today should work tomorrow?
So let’s question it.
You’re terrible at predicting the future 🔗
Good software gets written in short bursts. This makes it easy to switch direction when you accidentally wander down the wrong path. This is built on the assumption that we’re usually wrong, which is why we need to make adjustments early and often.
But when developers think they’ve already figured out the correct path, what the future holds, they spend months future-proofing the code, thinking of every possible scenario in which it could be used.
Then the magnum opus pull request arrives. Someone points out a fatal flaw in the assumptions. And now we’re in the awkward position of having to either start from scratch again or push through (with force) to recoup all that wasted time and effort. Or perhaps just make some minor adjustments and call it a day in order to save face.
It’s not because developers are stupid. They’re just terrible at predicting the future. Which we all are. Otherwise we’d all be billionaires on the stock market.
Meanwhile, here are things that will likely happen and that none of us have a clue about right now:
- New tools and frameworks that replace the current ones
- Better hardware that suddenly solves (or creates!) major issues
- New procedures, routines, or “best practices” in the industry
- Evolving security standards as new threats emerge
- Codebase rot and technical debt
- Unpredictable users with new requirements
- Scalability issues from growth
- New laws and regulations
- Vulnerabilities in external dependencies
- Backwards compatibility nightmares
- Changes in schemas, contracts, APIs, or databases
- Changes in integrations
You know what’s not on this list?
The exact future scenario you’re coding for right now.

You may reap the benefits, but you will pay 🔗
It’s very important to understand that future-proofing is actually a cost that you incur on yourself right now.
If you’re adding a second payment provider that you will only use for emergencies, you’re spending time writing code that (hopefully!) will never be used. That is a cost, and that cost is weighed against the expected utility of having the code in place during a crisis. Think of it like insurance.
In other words, you may reap the benefits later, but you will pay the costs today.
This is why it’s so important to understand future-proofing as a cost: it forces you to confront the trade-offs.
As Wil Shipley puts it in Be Inflexible!:
The fundamental nature of coding is that our task, as programmers, is to recognize that every decision we make is a trade-off. To be a master programmer is to understand the nature of these trade-offs, and be conscious of them in everything we write.
[…] In coding, you have many dimensions in which you can rate code:
- Brevity of code
- Featurefulness
- Speed of execution
- Time spent coding
- Robustness
- Flexibility
Now, remember, these dimensions are all in opposition to one another. You can spend three days writing a routine which is really beautiful AND fast, so you’ve gotten two of your dimensions up, but you’ve spent THREE DAYS, so the “time spent coding” dimension is WAY down.
Time spent future-proofing things people don’t need isn’t just wasted if the future doesn’t unfold as predicted; it also steals time from work that could have been done instead. It’s like insuring a cat you will never buy. That is a serious opportunity cost.
Don’t mindlessly assume that adding thousands of lines of code is future-proofing when one line would suffice.
And don’t believe that you somehow don’t have any costs anymore because AI (large language models) wrote the code for you.
You now have thousands of extra lines of code to maintain, regardless of who wrote them, and they could contain bugs and security vulnerabilities. You’re not managing risk this way—you’re increasing it by adding bloat and complexity.
To add insult to injury, adding thousands of lines of code in hopes someone will need it someday makes the codebase harder to read and maintain today.
The fact that AI can write code instantly can nevertheless be a benefit, since you can postpone more decisions. However, only time will tell whether developers will actually restrain themselves from using AI to “future-proof” their code.
Figure out what the future-proofing you want to do actually costs (like having a second payment provider). Then figure out the cost (and risk) of not having that code in place. Then decide if it’s worth it.
You probably don’t know what to abstract 🔗
A popular future-proofing strategy involves adding abstractions and generalizations. We build generalized versions of current use cases to handle future ones. We send emails today, but maybe we should support any kind of message, like push notifications?
So we investigate all communication protocols. We browse documentation for every platform. We build a grand unified messaging system.
This is a bad idea.
We think we’re keeping options open, but we’re actually fumbling around in the dark with no clear goal. Without a goal, we don’t know what to abstract or generalize. We just add as many options as possible and fool ourselves into thinking we’re prepared for whatever eventuality we may stumble across.
We say “this is good code if we need this in the future.” But at this point we’re just gamblers trying to justify our addiction. Because we can actually justify any code, however ridiculous, by slapping “if we need this in the future” on the end of a sentence.
In We are good at abstractions. We are bad at abstractions., Kirill writes:
Nothing wrecks a codebase faster than an abstraction born before its time. Engineers love to “future-proof” things, extrapolating from one requirement to build a towering generalization fit for every imaginable use case. This leads to bloated codebases full of indirection and awkward interfaces. It’s how you end up with a FactoryFactory or a 300-line class that exists solely to format dates. As the saying goes, duplication is cheaper than the wrong abstraction—yet engineers keep trying to solve problems they don’t have, as if code complexity were some kind of investment portfolio.
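The email-to-messaging trap above can be sketched in a few lines. Below is a hypothetical over-general design (all class and function names are illustrative, not from any real library) next to the one function current requirements actually call for:

```python
from abc import ABC, abstractmethod

# The "grand unified messaging system": indirection for channels nobody asked for.
class MessageChannel(ABC):
    @abstractmethod
    def send(self, recipient: str, body: str) -> str: ...

class EmailChannel(MessageChannel):
    def send(self, recipient: str, body: str) -> str:
        return f"email to {recipient}: {body}"

class ChannelFactory:
    """A registry held open for push/SMS channels that may never exist."""
    _registry = {"email": EmailChannel}

    @classmethod
    def create(cls, kind: str) -> MessageChannel:
        return cls._registry[kind]()

# What today's requirement actually is:
def send_email(recipient: str, body: str) -> str:
    return f"email to {recipient}: {body}"
```

Both paths send exactly the same email today; the first one just charges you an abstract class, a factory, and a registry for the privilege.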
Keeping the options open (the right way) 🔗
Future-proofing isn’t always bad. The trick is doing it with purpose.
Aaron Stannard calls this approach optionality. We make it easy to replace parts of the application when we need to, for example when problems emerge.
Take third-party services like payment providers or email services. If these fail, they could break your entire business. Managing that risk by being able to quickly swap to another service? That’s smart engineering.
Stick an interface on these services so you can switch implementations when needed.
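A minimal sketch of that seam, assuming a provider boils down to a single charge operation (all class and method names here are hypothetical):

```python
from typing import Protocol

class PaymentProvider(Protocol):
    """The seam: business code depends on this, not on a vendor SDK."""
    def charge(self, amount_cents: int, token: str) -> str: ...

class StripeProvider:
    def charge(self, amount_cents: int, token: str) -> str:
        # A real implementation would call the vendor SDK here.
        return f"stripe:{token}:{amount_cents}"

class BackupProvider:
    def charge(self, amount_cents: int, token: str) -> str:
        return f"backup:{token}:{amount_cents}"

def checkout(provider: PaymentProvider, cart_total: int, token: str) -> str:
    # Swapping providers during an outage becomes a config change, not a rewrite.
    return provider.charge(cart_total, token)
```

Note that the interface exists because of a concrete risk (a provider outage breaking the business), not because interfaces are inherently virtuous.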
This works because it’s tied to actual goals. Those goals determine what relevant future-proofing looks like. Keeping your application resilient so it achieves its purpose despite problems—that’s what good developers do.
Bad developers, on the other hand, stick interfaces on nearly everything and call it future-proofing. Sure, you can replace things more easily, but why would you?
If your answer is “because you can” or “if you want,” then you probably don’t know what your goals actually are. You’re gambling and adding code for the sake of having it.
Remove things you don’t use or need. Or better yet, don’t add them in the first place.
To quote Allan Kelly in The Philosophy of Extensible Software:
[…] “less is more” is the starting point. Extensible software development is no license to add bells and whistles to your code in the hope that someone may use them. Quite the opposite, extendable software should be free of bells and whistles, it should be minimal while allowing itself to be extended.
How should you actually future-proof? 🔗
If we should not add abstractions or generalizations, but still keep the application extensible, does this mean we should never think ahead about the design or never generalize it? Not quite.
Even though we should not add more code than necessary, we should think carefully about the interfaces and make them general, while keeping the implementations concrete.
This may sound paradoxical, but it can actually mean writing less code in the end. In his chapter on general-purpose modules in A Philosophy of Software Design, John Ousterhout explains:
What particularly surprised me is that general-purpose interfaces are simpler and deeper than special-purpose ones, and they result in less code in the implementation.
[…]
In my experience, the sweet spot is to implement new modules in a somewhat general-purpose fashion. The phrase “somewhat general-purpose” means that the module’s functionality should reflect your current needs, but its interface should not. Instead, the interface should be general enough to support multiple cases.
— Ousterhout (2021, p. 40)
The key difference: you’re not building elaborate systems for imaginary future needs. You’re designing interfaces for current needs that happen to be flexible enough to accommodate reasonable variations. They become extensible with no extra code. Less is more, the open–closed principle, call it what you like: the ideas are all similar.
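Ousterhout’s book illustrates this with a GUI text editor; here is a condensed, hypothetical sketch of the same idea. The interface is general (insert or delete anywhere), while the implementation only does what today’s features need:

```python
class TextBuffer:
    """General interface, concrete implementation.

    Special-purpose methods like backspace() or delete_selection() would
    each need their own interface method; here they all fall out of one
    general delete() at the call site instead.
    """
    def __init__(self, text: str = "") -> None:
        self.text = text

    def insert(self, pos: int, s: str) -> None:
        self.text = self.text[:pos] + s + self.text[pos:]

    def delete(self, start: int, end: int) -> None:
        self.text = self.text[:start] + self.text[end:]

# Today's feature is a thin call site, not a new interface method:
def backspace(buf: TextBuffer, cursor: int) -> int:
    buf.delete(cursor - 1, cursor)
    return cursor - 1
```

When a “delete selection” feature arrives later, it becomes another two-line call site; the interface doesn’t grow at all.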
You can watch Ousterhout’s YouTube video for the quick version of the book.
Value fast feedback over too much thinking 🔗
If you have a problem, you can solve it in two ways. You can think about it deeply or you can try something.
The point here is not that these two ways are mutually exclusive, but rather that people might differ in the amount of time they spend on each (or where they prefer to start).
For example, a developer might think very deeply about the wrong thing, and then implement the wrong solution perfectly. That is a tremendous waste of effort, even though the execution may be impeccable.
We want to strike a balance between thinking deeply (about the right thing) and acting fast (in order to inform our thinking). The way we do this is through short feedback loops.
We think about it intensively, then try something quickly just to learn. It will likely fail, and that is the point. The way in which we fail then informs our following thoughts. We know something more now than we did before, and we know something more than those who just reason from first principles and try to deduce what will happen in reality.
We try, we fail, and then we try again with this new knowledge.
This is not the antithesis of thinking; it is directed thinking. In other words, the constant feedback from reality is what helps revise our assumptions and guide our thinking to better solutions, much like Bayesian reasoning, rather than etching our assumptions in stone beforehand.
Therefore, if you find it difficult to think about software extensibility in the abstract, there are some very practical tricks you can try from Be Inflexible!:
- Don’t add code to a class unless you’re actually calling that code.
- Don’t make an abstract class (superclass) or interface until you have at least two classes that depend on it. Then find the commonalities.
- Don’t make a class extensible until it’s been used in at least two places.
- Don’t move a class to a shared library unless it’s actually used by at least two programs.
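The second rule can be illustrated with a hypothetical sketch (class names are mine, purely for illustration): write the two concrete classes first, then extract only the interface they actually share.

```python
import json
from typing import Protocol

# Step 1: write the concrete classes you need today; no shared base yet.
class CsvExporter:
    def export(self, rows: list[dict]) -> str:
        header = ",".join(rows[0].keys())
        lines = [",".join(str(v) for v in row.values()) for row in rows]
        return "\n".join([header, *lines])

class JsonExporter:
    def export(self, rows: list[dict]) -> str:
        return json.dumps(rows)

# Step 2: only now that two classes exist do the commonalities justify an
# abstraction, and the shared surface turns out to be just export().
class Exporter(Protocol):
    def export(self, rows: list[dict]) -> str: ...
```

Had the abstraction come first, it would likely have guessed at methods (headers? streaming? encodings?) that neither concrete class ended up needing.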
Notice how empirical these rules are? They value experience over pure reason. They rely on actual feedback from the outside world, not you sitting alone imagining what the outside world will become.
Feedback from the outside world is one of the best indicators for what your code should do. Don’t neglect it.
Let the design evolve over time 🔗
If we should value fast feedback, write less code, and make the design extensible, then it follows that the design should be able to evolve over time. It has to. We assume we’re wrong most of the time and have to correct the course after receiving feedback.
Andrew Lock has an interesting example. He built a library for developers that started to receive more and more requests for new features, but he didn’t have the time.
He writes:
Eventually, the feature requests and PRs got on top of me. I got stuck trying to pick which features were the most “worthy” to be implemented, and I basically got burned-out with the project. I wasn’t actively using it in any projects personally, and the weight of the various feature requests led me to leave it neglected.
What would you do in this case?
You could work yourself to death by adding more and more features. That is one way of extending the software.
Or, you could make it possible for the users of the library to add their own customizations on top of the library. That is another way of extending it, and it will ultimately require less code than the first way, and will likely result in a very minimal and well-designed core. But that core will be much more difficult to build, which is a trade-off.
Andrew Lock continues:
Eventually, I decided something had to be done, so I came up with a proposal that I hoped would give the best of both worlds: a simple getting-started experience, but complete customisation if you wanted it.
[…]
In March of 2023 I created an issue in the project, promulgating a design for the future direction of the library. The idea was to support only a small subset of “fixed” templates, but allow users to provide their own templates if they need it.
Indeed, less is more.
This is a good case of an evolving design. The design starts out relatively small. More bells and whistles are added, but only up to a point. Then a rewrite is needed because the circumstances have changed. The circumstances were not anticipated. Sound familiar?
Summary 🔗
Developers aren’t very good at knowing what will happen in the future, yet they love to future-proof code anyway.
Future-proofing comes with considerable risk of adding expensive bloat that does more harm than good in the long run. “Future-proofed” code can itself become the very problem it was intended to solve.
You should future-proof when:
- You have a specific goal in mind
- You’re managing real risk (like being able to switch payment providers if one fails)
- You can maintain the extra code—keeping it updated and documented
- You acknowledge the upfront cost you’re paying today
Avoid future-proofing when:
- You think future-proofing is the goal
- You’re trying to anticipate all possible use cases
- You’re adding complex code with no intention of maintaining or documenting it
- You hope someone will discover your code later and thank you
- You think more code means more future savings
References 🔗
- Kelly, A. (2002). The Philosophy of Extensible Software. Overload, 10(50).
- Kirill (2025). We are good at abstractions. We are bad at abstractions. Coffee-driven software engineering.
- Ousterhout, J. (2021). A Philosophy of Software Design. Yaknyam Press, Palo Alto.
- Shipley, W. (2007). Be Inflexible!. Call Me Fishmeal.
- Stannard, A. (2021). High Optionality Programming. Petabridge Blog.
