Context

This topic has bugged me for most of my career in software development. In nearly every project I've been involved with, there's always someone on the team who wants to solve all the anticipated performance "problems" well before any such problem has been encountered. The trip off into the weeds of analysis-paralysis usually starts with some wild, worst-case-scenario supposition that often derives from a fundamental misunderstanding of real-world system demands (actual load) or a fundamentally flawed mental calculation regarding real-world resource capacity (e.g., processing or network speed).
Manifesto

Bottom line: DON'T PREMATURELY OPTIMIZE!!!
In nearly every circumstance I can remember, it would have been WAY easier to just write the single-threaded, memory-hogging, un-cached, poorly organized klunker of a piece of software, and THEN profile it to see where it really suffered under a real-world load. Attempting to anticipate which optimizable areas will present the real problems is a sure-fire way to waste 80% of the available development time solving 20% of the contributors to poor performance. That old 80/20 rule can be turned in your favor so easily: wait until you're 80% done with the functional bits, then spend 20% as much effort fixing the things that contribute 80% of the "slow."
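The "write the klunker first, then profile" workflow above can be sketched concretely. This is a minimal illustration using Python's standard-library profiler; the `build_report` function is a hypothetical stand-in for whatever naive first-pass code you'd actually write, and the point is that the profiler, not a guess, tells you where the time goes.

```python
import cProfile
import pstats
import io

# A deliberately naive first pass (hypothetical example): build a report
# by repeated string concatenation. Quadratic-ish, memory-hogging -- and
# perfectly fine to ship to the profiler before "fixing" anything.
def build_report(rows):
    report = ""
    for row in rows:
        report += f"{row}\n"  # don't optimize this until the profile says so
    return report

# Profile the klunker under a realistic-ish load.
profiler = cProfile.Profile()
profiler.enable()
build_report(range(10_000))
profiler.disable()

# Read the results: the hot spots, ranked, with no speculation required.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)  # top 5 entries by cumulative time
print(buf.getvalue())
```

Only after a report like this shows `build_report` dominating the runtime would you bother with, say, `"\n".join(...)`; if the profile shows the time is actually spent elsewhere, you just saved yourself the whole optimization.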