Performance Wiggle Room

image credit: xkcd

Premature optimization is the root of all evil: the famous Knuth-ism that we all know and… well, that we all know. It’s hard to go a day of reading programming blogs without someone referencing this and leaving their particular footnote on it. I suppose today I am that someone.

So what’s my footnote? Well, everyone loves to pick apart either “premature” or “optimization,” and I’m no different. So today, I’m focusing on the term “premature.” What counts as premature, anyway? For many people in the performance world, this quote is frustrating, because a lax interpretation of it can justify any amount of laziness about optimization: “It’s slower than it could be, but I’ll optimize it when it becomes a real problem.” A prudent engineer would never say this about correctness. With correctness we are expected to obsess over corner cases that shouldn’t normally be hit, because as we all know they will eventually be hit, and the program had better work correctly or bad things will happen.

Bad things can mean a lot of things. If you’re writing code for a self-driving car, it means people could die. If you’re writing software that handles people’s money, it means they could lose a lot of money. Correctness bugs can cause these kinds of bad things; it’s certainly harder for poor performance to do so. But the more banal, everyday kind of bad thing is just that the software doesn’t work the way that the user expects, and it frustrates them. It makes the user say f*** this, leave your program, and never come back. (Unless your software is named Photoshop and has a practical monopoly.) In this regard, correctness and performance are the same. Your program might be correct or performant enough today, but small problems propagate outward as new code consumes yours. And eventually something will, directly or indirectly, use your code in a way you didn’t intend, in a way for which your implementation is no longer “good enough.”

imaginary reader says:

“But correctness bugs are hard to find down the road. If the code becomes too slow, I’ll just profile it and find the problem.”

Sure, if you write all of the software by yourself, and profile every new piece of it by yourself, maybe that’s valid reasoning. But that’s rarely the case. We write software that other people consume. Should they test the performance of their code? Yes. Do they? Sometimes. Should you have performance tests to catch regressions? Yes. Do they cover everything? Certainly not, and they certainly don’t cover new end uses, which don’t have a baseline to measure against.
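
To make that concrete, here’s a minimal sketch of the kind of regression test I mean, in Python. Everything in it is made up for illustration – process_batch stands in for whatever component you actually ship, and the 50 ms budget is an arbitrary line in the sand, not a real requirement.

```python
import time

def process_batch(items):
    # Stand-in for the component under test; in a real suite this would be
    # whatever code you actually ship.
    return [item * 2 for item in items]

def time_once(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def test_process_batch_stays_under_budget():
    items = list(range(100_000))

    # Take the best of a few runs to reduce scheduler noise.
    best = min(time_once(lambda: process_batch(items)) for _ in range(5))

    # 50 ms is a hand-picked budget for this example.
    assert best < 0.050, f"process_batch took {best * 1000:.1f} ms"

if __name__ == "__main__":
    test_process_batch_stays_under_budget()
    print("performance budget met")
```

A test like this only catches regressions against a budget somebody already wrote down; it does nothing for the new end uses nobody thought to baseline.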

But let’s say they do notice that performance isn’t quite up to snuff. And they do profile it. All they will see is that the component you wrote takes up… some arbitrary chunk of time? When allocating memory shows up in your profile, do you think, “I’m going to go make the allocator faster”? No, you assume that it’s already been optimized. That’s probably what this other engineer is going to think about your code. They don’t know that you have a whole list of optimizations that you’re planning to make “when it becomes a problem.” So performance suffers just a little bit, or they change their design to something more complex to accommodate your slow code. And then the next system comes along, and uses their code.

imaginary reader says:

“But programs can be fully correct – they can’t be expected to be fully performant! If that were the case then we would have to write every program in assembly!”

Well, for one, when’s the last time you wrote a program that did anything other than crash if it couldn’t allocate memory? Have you ever written a program that tries to check for bits being flipped by cosmic rays? We don’t write fully correct programs. Some things are unreasonable, and the same goes for performance. But it does bring up a valid point – how much performance is enough?

The common answer to this, and the one I am objecting to, is that you’ve optimized it enough when it stops being a problem. If it’s no longer the heaviest code in a profile, maybe, or if the latency of a request is below some target value, or some other line in the sand. If you are not building a system that anything else will ever depend on, then this is fine. But as we mentioned above, other code will use what you built. Other code which does not exist at the time that you wrote yours. Other code which wants to use yours in ways that you didn’t exactly plan for.

In engineering more broadly, there is a concept called the “safety factor” of a system. This is a measure of how much stronger something is than it needs to be to support its intended load – not just strong enough, but stronger. A rope bridge has to be able to support more weight than it’s ever intended to hold in practice, because people will misuse it, or it will degrade. We don’t want the ropes to snap just because the bridge is a bit old or because teenagers were swinging it back and forth.

The same ought to be true of performance – we should design systems to perform better than they need to for their intended use case, because other engineers are going to misuse them, they’re going to get old, and little additions here and there will creep in and make them a little less performant than they were at the start.

So when is optimization premature? If you’re building a new system, and you’re trying to determine whether you’ve optimized it enough, try going for a “performance factor” of two. I.e., if it is “fast enough” (as fast as an existing system, say) at 6ms, try to shoot for 3ms. This is a rule of thumb, though. The important thing is to shoot for some kind of buffer between what is acceptable and what you actually deliver. Because the actual performance is only going to suffer over time.
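
As a toy illustration of that rule of thumb (all the numbers here are hypothetical), the “performance factor” just divides whatever you currently consider acceptable:

```python
# Hypothetical numbers: "acceptable" is whatever counts as fast enough today.
ACCEPTABLE_LATENCY_MS = 6.0   # the line you must not cross
PERFORMANCE_FACTOR = 2.0      # the wiggle room, analogous to a safety factor

# The target you actually optimize toward.
TARGET_LATENCY_MS = ACCEPTABLE_LATENCY_MS / PERFORMANCE_FACTOR  # 3.0 ms

def has_wiggle_room(measured_ms: float) -> bool:
    """True if the measured latency leaves the desired buffer below the line."""
    return measured_ms <= TARGET_LATENCY_MS
```

The exact factor matters less than the fact that there is a buffer at all.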

Don’t get me wrong: premature optimization is still possible and even likely with this measure. Oftentimes we prematurely micro-optimize, wasting our time on the performance of cold, relatively inconsequential bits of code when we could be spending that time optimizing hotter, more significant pieces. But if you’re working on a whole system, one that is bordering on being slow enough to impact a user, and you’re trying to determine when to stop optimizing, just remember to give the performance some wiggle room.
