We are often called in to help companies solve application performance (management) problems that are in fact capacity (management) problems, or, to be more specific, resource management problems. Solving them generally entails profiling, protecting, policing, prioritizing, and predicting resource consumption requests (or reservations). In such cases the resolution of a performance bottleneck goes hand in hand with the management of application-level resource capacity, because removing one resource bottleneck commonly introduces a much greater bottleneck further downstream once the flood gates have been opened upstream. Unfortunately, knowing that such constraints exist within an application (or system) is the relatively easy part. The hard part is deciding how to control the (work) flow that elicits the related resource consumption and execution behavior.
Note: A performance bottleneck is used here to refer to a point in the execution at which throughput decreases and/or latency increases.
Somewhat counter-intuitively, we generally need to introduce delays (choke/throttle/shaping points) or buffers in order to meet overall performance objectives under some service level agreement (SLA). But again, the task of setting parameters for such controlled delays is not straightforward or optimal, at least initially. In this article I will show how our activity based resource metering technology helps alleviate some of the trial and error (and possible waste) that goes into the introduction and configuration of such control points with the Quality of Service (QoS) for Apps technology in JXInsight/OpenCore.
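To make the idea of a choke/throttle point concrete, here is a minimal sketch of one common form: a semaphore that caps the number of requests allowed to proceed concurrently, delaying the rest upstream so a downstream resource is not flooded. This is purely illustrative; the class and method names are my own and are not part of the JXInsight/OpenCore QoS for Apps API.

```java
import java.util.concurrent.Semaphore;

// Illustrative throttle (choke point): at most maxConcurrent callers may be
// "inside" the protected resource at once; excess callers are delayed at the
// entry point rather than flooding the downstream resource.
public class Throttle {
    private final Semaphore permits;

    public Throttle(int maxConcurrent) {
        // Fair mode hands out permits in arrival order, smoothing the flow.
        this.permits = new Semaphore(maxConcurrent, true);
    }

    // Callers block here whenever the configured capacity is saturated.
    public void enter() throws InterruptedException {
        permits.acquire();
    }

    // Releasing a permit wakes the next delayed caller, if any.
    public void exit() {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }
}
```

A caller would wrap the constrained work in `enter()`/`exit()` (typically in a try/finally). The hard part, as noted above, is choosing the capacity value: too high and the downstream bottleneck reappears, too low and upstream latency suffers, which is exactly where metered measurement helps.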