There are a few general strategies for improving the performance (speed) of a software application.
1. Not Working. You can’t beat this in terms of speed. Always try to avoid doing (or generating) work in the first place. Unfortunately, most of the time we must do something to deliver a service, but we should always keep this in mind when tackling any performance problem. This is one area in which computers could learn from some of the “best” of us.
2. Working Harder. Getting more out of the hardware and resources available to the software, possibly by upgrading, improving, or aligning the hardware components with the nature of the workload. At present we have reached some physical limits here.
3. Working Smarter. Improving the efficiency of the software in terms of the algorithmic work that needs to be done, and possibly its ordering (coalescing) and operation (collocation). Sometimes applying this means doing more work, but of a different nature, to reduce the cost of current or imminent, more expensive work (e.g. sorting => searching). The first strategy is this taken to an extreme.
4. Working in Parallel. Getting more work done in a much shorter time window by adding computing capacity, along with consumers of that capacity which process (map & reduce, split & stitch) smaller parts of the overall execution concurrently (simultaneously). A lot of the current focus is here, in both distributed and non-distributed forms.
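The first strategy, avoiding work altogether, is often as simple as remembering results that have already been computed. A minimal Python sketch, assuming a hypothetical `shipping_cost` lookup that is expensive to compute:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def shipping_cost(region: str) -> float:
    # Hypothetical expensive computation: with lru_cache, it runs
    # once per distinct region and every later call is served from
    # the cache -- the fastest work is the work never done.
    return {"eu": 4.5, "us": 6.0}.get(region, 9.99)

shipping_cost("eu")          # computed on the first call
shipping_cost("eu")          # answered from the cache, no work done
print(shipping_cost.cache_info())
```

The names and values here are illustrative only; the point is that memoization turns repeated work into no work at all.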
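The sorting => searching example from strategy 3 can be sketched in a few lines of Python: pay an up-front O(n log n) sorting cost so that each subsequent lookup is O(log n) rather than an O(n) linear scan. The `contains` helper and the sample data are illustrative, not from the original text:

```python
import bisect

names = ["carol", "alice", "bob", "dave", "erin"]

# Do more work now (sort) to make the imminent, repeated work cheaper.
names.sort()

def contains(sorted_list, item):
    # Binary search via bisect: O(log n) per lookup on sorted input.
    i = bisect.bisect_left(sorted_list, item)
    return i < len(sorted_list) and sorted_list[i] == item

print(contains(names, "bob"))      # True
print(contains(names, "mallory"))  # False
```

The extra work only pays off when there are enough lookups to amortize the sort, which is exactly the trade-off the strategy describes.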
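The split & stitch pattern of strategy 4 can be illustrated with Python's standard `concurrent.futures` module: split the input into smaller parts, map each part to a worker process, then reduce (stitch) the partial results. The function names and the sum-of-squares workload are illustrative assumptions:

```python
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    # "map" step: each worker handles one smaller part of the overall job.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # "split": carve the input into roughly equal chunks.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(chunk_sum, chunks)
    # "stitch"/"reduce": combine the partial results.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(1000)))
```

Note that the speedup is bounded by the sequential split and stitch phases and by any shared bottlenecks (such as IO), which is exactly the limit the next paragraph discusses.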
Even with the best programming models (message/actor based), languages, and runtimes (e.g. the Java virtual machine) in the world, it is still pretty hard to achieve the speedups required (or anticipated) while fully utilizing the ever-growing computing capacity. This is due to the sequential nature of our thought process (not the internal workings of the mind) in such endeavors, and to the obvious physical limits (bottlenecks) that lie elsewhere (IO) in the processing pipeline.
So what can be done differently to move beyond such limits? Predicting and Curling: