Cache size, multi-core effect on application performance


  1. In a follow-up to his "multi-core may be bad for Java" analysis, Billy Newport has posted about how multi-core CPUs could reduce the effectiveness of on-chip caching. The premise of the argument is that the amount of cache on multi-core chips hasn't increased proportionally with the number of cores, so each core has less cache to work with. With less cache there will be far more cache misses, which will lead to reduced overall performance.
    That game is finished: CPU designers are now spending their transistors on cores rather than cache, so you will likely not see cache memory grow in proportion to the number of cores. Cache is a more expensive use of those transistors, and manufacturers appear to be using performance per watt as their metric. The result is less cache to go around.
    Once again Billy argues that longer execution paths are bad for multi-threaded processing, while current software is written with the expectation that cache memory is plentiful. With lots of cache, inlining methods makes sense; however, the longer path length consumes more instruction cache and can prematurely evict other code. In a strange twist, this logic leads to the conclusion that chip manufacturers may do better in some benchmarks in the short term by matching the core count with more sockets (i.e., more chips per box) rather than more cores per chip.
    Software that needs to run fast on multi-core hardware will therefore be optimized very differently than software is today. A short path length is a must: a short path means fewer clock cycles to execute and more effective use of instruction cache memory.
    The prediction is that JIT technology will become less effective in a multi-core, multi-threaded environment, which may lead to JITs being turned off completely. One point that does resonate in this posting is that, with 32- and 64-core CPUs in the pipeline, it seems unlikely that our applications will be able to take advantage of all the cores. It takes a lot of effort to engineer software to exploit that many threads, which means it may be more economical to let the excess cores stand idle. Billy's final thought: multi-core is here to stay, and how we write applications will need to change because of it.
  2. As a sound bite, "no JIT" sounds great :) I meant it in jest; the JIT will still be there, but it may be optimising in a different way than today, given the new circumstances. That's one advantage of a JIT over a static compiler: a JIT can sense where it's running and optimise for that environment. Billy
  3. c-code vs bytecode

    In that sense, Java bytecode could be compared to C code :) Both are platform-independent and can be optimized for the environment they are running in.
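The cache-miss argument in the first item is easy to reproduce in miniature. The sketch below sums the same array twice: once in sequential order, and once in a shuffled order that defeats the cache, so on a large array the second pass typically runs several times slower even though it does identical work. Class and variable names here are illustrative, not from Billy's post, and the sizes and timings are machine-dependent.

```java
import java.util.Random;

public class CacheOrderDemo {
    // Sum data[] visiting elements in the given index order.
    static long sum(int[] data, int[] order) {
        long total = 0;
        for (int i : order) total += data[i];
        return total;
    }

    public static void main(String[] args) {
        final int n = 1 << 24; // 16M ints (~64 MB), far larger than on-chip cache
        int[] data = new int[n];
        int[] sequential = new int[n];
        for (int i = 0; i < n; i++) {
            data[i] = i & 0xFF;
            sequential[i] = i;
        }
        // Fisher-Yates shuffle of the index array: same elements, cache-hostile order.
        int[] shuffled = sequential.clone();
        Random rnd = new Random(42);
        for (int i = n - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int tmp = shuffled[i]; shuffled[i] = shuffled[j]; shuffled[j] = tmp;
        }
        long t0 = System.nanoTime();
        long a = sum(data, sequential);
        long t1 = System.nanoTime();
        long b = sum(data, shuffled);
        long t2 = System.nanoTime();
        System.out.println("sequential: sum=" + a + ", " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("shuffled:   sum=" + b + ", " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

For experimenting with the "no JIT" scenario mentioned above, HotSpot's `-Xint` flag runs the JVM in interpreter-only mode, which makes the cost of losing JIT compilation directly measurable.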
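On the point about engineering software to exploit many cores, a common first step is sizing a thread pool to `Runtime.getRuntime().availableProcessors()` and splitting the work across it. The following is a minimal sketch of that pattern, assuming the work divides cleanly into independent chunks; the class and method names are illustrative, not from the post.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoreScaling {
    // Sum the integers 0..n-1 in parallel, one chunk per worker thread.
    static long parallelSum(long n, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            long chunk = (n + workers - 1) / workers;
            @SuppressWarnings("unchecked")
            Future<Long>[] parts = new Future[workers];
            for (int w = 0; w < workers; w++) {
                final long lo = w * chunk;
                final long hi = Math.min(n, lo + chunk);
                parts[w] = pool.submit(() -> {
                    long s = 0;
                    for (long i = lo; i < hi; i++) s += i;
                    return s;
                });
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Size the pool to the cores actually present, however many that is.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("cores=" + cores + " sum=" + parallelSum(1_000_000L, cores));
    }
}
```

Whether the excess cores on a 32- or 64-core box actually pay for themselves with this kind of decomposition is exactly the economic question the post raises: the scaffolding is cheap, but finding enough independent work to feed it usually is not.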