“Very few businesses realize how much valuable information is locked in their application code. By instrumenting every line of code in your business, you have a window into what’s going on in your business in real time. The possibilities for how you can use that data are endless.” – CTO, APM Vendor

I’ve pulled the above statement from an interview, published just a few days ago, with the CTO of a heavily marketed APM company. What surprised me most about this statement was not that it came from a company that today relies mainly on call stack sampling to collect measurement data, but that after more than 10 years in the industry this CTO would consider it actually feasible.

Though our technology offers the lowest measurement overhead on the market (and we have benchmarks to back that claim up), instrumenting every line of code is a recipe for disaster in production unless this “every line” set is adaptively shrunk based on an assessment of actual cost, value, risk (coverage) and context. Even instrumenting code without actually performing any measurement will kill the throughput and response times of an application.

If you are not convinced, add -XX:+ExtendedDTraceProbes to the command line of your Java application or service. This is effectively like instrumenting every line of your code. In our own benchmarking of this option we observed a more than 5x drop in throughput for Apache Cassandra 2.0.x, and that was without any D script active and collecting data. The only time this would not create an observable performance impact is when a huge database latency cost dwarfs the cost of any code execution within the request processing pipeline.
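For anyone who wants to try this for themselves, the flag is added to a normal HotSpot launch line; the jar name below is only a placeholder, and for Cassandra the option would typically be appended to JVM_OPTS in conf/cassandra-env.sh rather than passed directly:

    # baseline run of your Java service
    java -jar your-service.jar

    # same service with the extended DTrace probes (method entry/exit, allocation,
    # monitor events) generated by the JVM; no D script needs to be attached
    # for the extra cost to show up
    java -XX:+ExtendedDTraceProbes -jar your-service.jar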

Instrumenting every line of Java code would only ever be feasible if you could adaptively shrink the instrumentation set at runtime based on a cost-benefit analysis or similar evaluation strategy. 
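To make the idea of adaptive shrinking concrete, here is a minimal sketch of such a probe. This is not our product’s API; the class name, thresholds and cost model are all hypothetical. The probe measures the code it wraps and disables itself once that code turns out to be too cheap to justify the cost of measuring it:

    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.LongAdder;

    // Illustrative sketch only: a probe that removes itself from the effective
    // instrumentation set when the measured work is cheaper than the measurement.
    final class AdaptiveProbe {
        // assumed cutoff: below this average cost the probe no longer pays for itself
        private static final long MIN_WORTHWHILE_NANOS = 1_000;
        private static final int  SAMPLE_WINDOW        = 10_000;

        private final AtomicBoolean enabled    = new AtomicBoolean(true);
        private final LongAdder     totalNanos = new LongAdder();
        private final LongAdder     count      = new LongAdder();

        long begin() {
            return enabled.get() ? System.nanoTime() : 0L;
        }

        void end(long startNanos) {
            if (startNanos == 0L) return;              // probe already disabled
            totalNanos.add(System.nanoTime() - startNanos);
            count.increment();
            if (count.sum() >= SAMPLE_WINDOW) {
                long avg = totalNanos.sum() / count.sum();
                if (avg < MIN_WORTHWHILE_NANOS) {
                    enabled.set(false);                // shrink the instrumentation set
                }
            }
        }
    }

A call site would look like this:

    AdaptiveProbe probe = new AdaptiveProbe();
    long start = probe.begin();
    doWork();                                          // the instrumented code path
    probe.end(start);

A real implementation would go much further, re-enabling probes when context changes and weighing value and risk rather than cost alone, but even this crude cutoff shows how the “every line” set can be shrunk at runtime instead of being measured unconditionally.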

http://www.jinspired.com/site/instrumenting-every-line-of-code-without-slowing-down-every-request