Tuesday, March 8, 2016

Premature Optimization


This topic has bugged me for most of my career in software development.  In nearly every project I've been involved with, there's someone on the team who wants to solve all the anticipated performance "problems" well before any such problem has actually been encountered.  The trip into the weeds of analysis-paralysis usually starts with some wild, worst-case supposition, one that often stems from a fundamental misunderstanding of real-world system demands (actual load) or a flawed mental calculation of real-world resource capacity (e.g., processing or network speed).



In nearly every circumstance I can remember, it would have been WAY easier to just write the single-threaded, memory-hogging, un-cached, poorly organized klunker of a piece of software, and THEN profile it to see where it really suffered under real-world load.  Attempting to anticipate which optimizable areas will present the real problems is a sure-fire way to waste 80% of the available development time solving 20% of the contribution to poor performance.  That old 80/20 rule can be turned in your favor so easily: wait until you're 80% done with the functional bits, then spend 20% as much effort fixing the things that contribute 80% of the "slow."
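As a minimal sketch of that measure-first approach (the function names and data here are hypothetical, purely for illustration), here is what "write the klunker, THEN profile it" can look like with Python's standard-library cProfile:

```python
import cProfile
import io
import pstats


def slow_lookup(items, targets):
    """Deliberately naive O(n*m) membership test -- the first-draft klunker."""
    return [x for x in targets if x in items]


def profile_report():
    """Run the naive code under cProfile and return a hotspot report."""
    items = list(range(10_000))
    targets = list(range(0, 20_000, 2))

    profiler = cProfile.Profile()
    profiler.enable()
    slow_lookup(items, targets)
    profiler.disable()

    # Sort by cumulative time so the biggest contributors come first.
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
    return buf.getvalue()


if __name__ == "__main__":
    # The report names the real hot spots.  Only after seeing them is it
    # worth swapping the list for a set, adding a cache, and so on.
    print(profile_report())
```

The point is that the report, not a hunch, decides which 20% of the code gets the optimization effort.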


If you didn't get the point by now, I'd be wasting my time trying to convince you.  Just try suppressing the urge to make that next program/module you write run in 0.1ms until you have enough information to understand that the rest of the system won't notice any difference until your stuff takes more than 15 seconds to complete.
