Article: Understanding the User Experience

Discussions

  1. Article: Understanding the User Experience (10 messages)

    The most critical event in a project's life is when users get their first glimpse of it in production. In this article, Kirk Pepperdine explores how we can understand the user experience so that poor performance doesn't spoil that first impression or, even worse, cause the application to be failed by its users.
    When constructed with care, a benchmark can be the difference between identifying the poorly performing aspects of an application yourself and having your users find them for you. The key phrase in that statement is "constructed with care." Let's eliminate the ambiguity in this phrase by defining some key components and their roles in the construction and execution of a successful benchmark. These components include the application, data, external data sources, the underlying hardware platform, a test harness and, last but not least, a set of performance requirements.
    Read Understanding the User Experience.
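    As a rough illustration of the test harness component mentioned above, here is a minimal sketch in Java (the class name, iteration counts and the 200 ms requirement are illustrative assumptions, not taken from the article): it drives a workload repeatedly, records response times, and compares a percentile against a stated performance requirement.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Minimal benchmark harness sketch: drive a workload repeatedly, record
    // response times, and compare a percentile against a stated requirement.
    public class SimpleBenchmarkHarness {

        // Hypothetical performance requirement: 95th percentile under 200 ms.
        private static final long REQUIRED_P95_MILLIS = 200;
        private static final int WARMUP_ITERATIONS = 100;
        private static final int MEASURED_ITERATIONS = 1000;

        public static void main(String[] args) {
            Runnable workload = SimpleBenchmarkHarness::simulatedTransaction;

            // Warm up so JIT compilation does not distort the measured runs.
            for (int i = 0; i < WARMUP_ITERATIONS; i++) {
                workload.run();
            }

            List<Long> samplesMillis = new ArrayList<>();
            for (int i = 0; i < MEASURED_ITERATIONS; i++) {
                long start = System.nanoTime();
                workload.run();
                samplesMillis.add((System.nanoTime() - start) / 1_000_000);
            }

            Collections.sort(samplesMillis);
            long p95 = samplesMillis.get((int) (samplesMillis.size() * 0.95) - 1);
            System.out.println("95th percentile: " + p95 + " ms (requirement: "
                    + REQUIRED_P95_MILLIS + " ms) -> "
                    + (p95 <= REQUIRED_P95_MILLIS ? "PASS" : "FAIL"));
        }

        // Stand-in for a real transaction against the application under test.
        private static void simulatedTransaction() {
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }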
  2. Hi, I think before starting any performance benchmarking it is important to understand the context. Here is my list of activities, taken from various standards and books on proactive software performance engineering.

    1. Identify sensitivity of the application design to key business factors (forecast, analyze)
    Understand the relationship between business factors (number of users, service calls or incidents per day, peak hour traffic) and the computing applications. Business factors form the context for the other SPE activities; identifying them early in the development process is important as it establishes when and how often the application will be used and how its intended usage is related to business factors.
    Activities:
    - Identify Volumes: Users, Number of Transactions, Number of Records, Resource Consumption
    - Identify Events: Expected cyclic fluctuations, Rate of Growth

    2. Specify business priorities and performance objectives
    Set quantifiable, measurable performance objectives, then design (deployment topology) with those objectives in mind.
    Activities:
    - Work with users to establish specific performance goals that will drive design, monitoring and tuning efforts.
    - Understand the mix of update transactions, decision support queries, reporting and other types of application processing that the server handles.
    - Assess whether current performance levels are acceptable.
    - Determine priorities. Reduce response times? Increase transaction throughput? Time to deliver? Cost?
    - What trade-offs can be made for different transaction types and users?
    - Determine budget constraints for hardware improvements.

    3. Evaluate application, database and system design alternatives
    Create an architecture that permits growth, uses shared resources efficiently and meets application response times. The design activity must address three performance goals: meet performance objectives, conserve resources, and accommodate growth.
    Activities:
    - Databases: larger and more complex
    - Users: more users online
    - Transaction rates: higher and prolonged peak business periods
    - Data analysis: more complex data analysis
    - Networks: linking between LANs and WANs

    4. Summarize application performance profiles
    Collect information about the performance characteristics of an application. Application (user group / scenario) performance profiles characterize the workload at three distinct levels of detail: workload, application area and transaction type.
    Activities:
    - Identify workloads: aggregate all resource consumption into a single business activity
    - Identify application areas: partition the resource consumption at the application level (console, integration, web)
    - Identify transaction types: characterize the workload as individual business transactions

    5. Predict performance on target platform
    Create queuing models for application simulation. Perform full scale benchmarks.
    Activities:
    - Collect Published Vendor Measurements
    - Catalog Rules of Thumb (Product Specific)
    - Performance Modeling
    - Simulation
    - Hardware & Software Component Benchmarking

    6. Monitor ongoing software performance (measure, observe)
    Collect performance metrics across the important tiers and technology layers.
    Activities:
    - Level 1: Continuous, low cost exception monitoring (a small sketch of this idea follows this post)
    - Level 2: Regular, targeted performance tracking
    - Level 3: Occasional, focused performance audit
    - Build an application, resource and component catalog detailing which resources are used by which applications, which resources are used by which software components, and which software components are used by which applications.
    - Instrument systems to enable measurement and analysis of workload scenarios, resource requirements and performance goal achievement.

    7. Analyze observed performance data (review)
    Observe relationships among performance variables, detect trends, and identify performance problems and their likely causes.
    Activities:
    - Baseline: Record performance statistics under normal operation
    - Trend Analysis: Detect relationships among measured variables and trends over time
    - Identify Important Measurements: The absence of an obvious pattern during testing or production monitoring under increasing loads indicates that variables important to performance are not being monitored.

    8. Confirm performance expectations (verify, validate, corroborate)
    Perform frequent formal monitoring (load/stress) tests and informal monitoring (User Experience) tests.
    Activities:
    - Create load testing suites and representative lab environments
    - Create database population scripts to replicate the production environment
    - Execute Automated Load and Stress Tests
    - Involve users in User Experience Testing

    9. Tune application or system performance (optimize)
    Improve aspects of system performance such as database physical design (partitioning, clustering, and indexing), Java process tuning (clustering, load balancing, and memory heap sizing), and client usage (OLAP versus OLTP).
    Activities:
    - Workload: Minimize the total processing load
    - Efficiency: Maximize the ratio of useful work to overhead
    - Locality: Group related components based on usage
    - Sharing: Share resources without creating bottlenecks
    - Parallelism: Use parallelism when the gains outweigh the overhead
    - Trade-off: Reduce delays by substituting faster resources

    10. Manage ongoing system capacity (plan capacity)
    Plan to deliver sufficient computing capacity to maintain acceptable performance as workloads grow and change. For both SPE and capacity planning the goal is to maintain acceptable performance levels in the face of change. For optimal system performance, the data storage, network communication, and processing demands imposed by the business workload must be matched with the capacities of the hardware and software resources that make up the system. The previous activities will help in identifying peak projected workloads.

    William Louth
    JXInsight Product Architect
    CTO, JInspired
    "JEE tuning, testing, tracing and monitoring with JXInsight"
    http://www.jinspired.com
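    For readers who want to see what the lowest-cost level of step 6 could look like in code, here is a minimal sketch (the class name and 500 ms threshold are made-up assumptions; a product such as JXInsight gathers this kind of data automatically rather than requiring hand-rolled wrappers): business operations are wrapped, cheap counters are kept, and output is produced only when a call exceeds the threshold.

    import java.util.concurrent.Callable;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch of "Level 1: continuous, low cost exception monitoring": wrap
    // operations, keep cheap counters, and report only exceptional calls.
    public class ExceptionMonitor {

        // Hypothetical threshold: anything slower than 500 ms is worth logging.
        private static final long THRESHOLD_MILLIS = 500;

        private final AtomicLong totalCalls = new AtomicLong();
        private final AtomicLong slowCalls = new AtomicLong();

        public <T> T monitor(String operation, Callable<T> call) throws Exception {
            long start = System.nanoTime();
            try {
                return call.call();
            } finally {
                long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
                totalCalls.incrementAndGet();
                if (elapsedMillis > THRESHOLD_MILLIS) {
                    slowCalls.incrementAndGet();
                    // Low cost: only exceptional calls produce any output.
                    System.err.println(operation + " took " + elapsedMillis
                            + " ms (threshold " + THRESHOLD_MILLIS + " ms)");
                }
            }
        }

        public String summary() {
            return slowCalls.get() + " of " + totalCalls.get()
                    + " calls exceeded the threshold";
        }
    }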
  3. How very consultant-y of you. Did you paste this from a PowerPoint presentation? Are we readers supposed to be impressed by what amounts to an 8th-grade outline of a paper you might like to write? You wascal you!
  4. I listed the activities not to impress others with my ability to recite an already documented software performance engineering methodology, but to promote awareness of the important steps. Honestly, if readers are not already impressed by JXInsight, then what good would it do me to paste a "PowerPoint presentation" (which this is not)? This is a list I have given to many companies around the world (oops, impressing you again) when asked how to apply JXInsight within their application lifecycle. By the way, JInspired is a commercial software product development company and not a consultancy. If you visited the website you would see that I publish many articles on performance engineering, some general and others explaining why JXInsight has been designed the way it has.
    Other Performance Articles: http://www.jinspired.com/products/jxinsight/insights.html
    Other Links: http://www.perfeng.com/ (SPE !!!!)
    Good Introductory Book: http://www.amazon.com/gp/product/0471162698/
    I cannot believe I am even replying to what I suspect is an alias.
    William Louth
    JXInsight Product Architect
    CTO, JInspired
    "JEE tuning, testing, tracing and monitoring with JXInsight"
    http://www.jinspired.com
  5. I cannot believe I am even replying to what I suspect is an alias.
    Neither can I ;-). But nice follow-up, William. You know, there is just so much to talk about on the subject that it is difficult to know what to put in and what should be left out. As it was, I had to cut several topics and several thousand words from this article to fit it into this forum. As for links, you left out two obvious ones 8^): Jack's excellent book on performance tuning and, of course, the Java Performance Tuning website. Kind regards, Kirk Pepperdine http://www.cretesoft.com
  6. Hi Kirk, Sorry for not listing the number one Java performance tuning site. My intention was to focus on the actual methodology, though I did list my own articles (force of habit). By the way, when conducting User Experience benchmarks I typically perform a single-user test of all the major use cases, collecting the underlying application execution flow across tiers, the baseline resource consumption metrics, as well as various counters related to remote procedure calls, resource transactions, SQL operations, component invocations, and so on. I find that by doing this first a performance engineer obtains a better understanding of the actual workflow and performance behavior. Working with application architects, developers, and database administrators over a behavioral trace model tends to resolve many issues that would very likely arise in any real world benchmark exercise. It is best to get these out of the way before proceeding, so that teams can start to narrow down the actual bottlenecks within the system under concurrent load. For example, if a simple form-based use case today is making up to 30-50 roundtrips from client to server to database, then you have already identified an issue that needs to be discussed both in terms of performance and transaction semantics (does the business use case allow transaction chopping?). Here is an article I wrote on how JXInsight allows testers to play out a use case and create various tags during the processing phases that span across tiers: http://www.jinspired.com/products/jxinsight/usertesting.html By the way, I love the location of the training courses. Regards, William Louth JXInsight Product Architect CTO, JInspired "JEE tracing, tuning, testing and monitoring with JXInsight" http://www.jinspired.com
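    To make the roundtrip counting above concrete, here is a minimal sketch of tallying remote calls per use case during a single-user test (the class, method and tag names are illustrative assumptions; a product such as JXInsight collects these counters automatically).

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    // Sketch: count client-to-server roundtrips (or SQL operations) per use case
    // during a single-user test so that chatty use cases stand out early.
    public class RoundtripCounter {

        private final Map<String, LongAdder> countsByUseCase = new ConcurrentHashMap<>();

        // Call this from instrumentation points: remote calls, SQL statements, etc.
        public void record(String useCaseTag) {
            countsByUseCase.computeIfAbsent(useCaseTag, k -> new LongAdder()).increment();
        }

        public void report() {
            countsByUseCase.forEach((useCase, count) ->
                    System.out.println(useCase + ": " + count.sum() + " roundtrips"));
        }

        public static void main(String[] args) {
            RoundtripCounter counter = new RoundtripCounter();
            // Simulate a chatty form-based use case issuing many server roundtrips.
            for (int i = 0; i < 42; i++) {
                counter.record("saveCustomerForm");
            }
            counter.report(); // a count in the 30-50 range flags a design discussion
        }
    }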
  7. Hi Kirk,

    Sorry for not listing the number one Java performance tuning site.
    No need to apologize ;-)
    By the way, when conducting User Experience benchmarks I typically perform a single-user test of all the major use cases, collecting the underlying application execution flow across tiers
    Any methodical approach is better than none at all. The one that I currently subscribe to is to go right for the jugular. Load the sucker up and watch which bits fly off!!! 8^)
    By the way, I love the location of the training courses.
    Thanks, the pictures and descriptions just don't do the place justice. And the atmosphere makes for a much better training experience. Cheers, Kirk
  8. How very consultant-y of you. Did you paste this from a PowerPoint presentation? Are we readers supposed to be impressed by what amounts to an 8th-grade outline of a paper you might like to write?

    You wascal you!
    At least William is making an effort to provide a positive contribution to the subject. And he's posting under his real name.
    PJ Murray, CodeFutures Software
    Data Access Objects and Service Data Objects
  9. I really do appreciate the thorough methodology you put in place to introduce performance testing. This is what we should be doing in an ideal world. The big question is how we can convince project managers and sponsors to spend the required time and effort to achieve this. Especially concerning point 5:
    5. Predict performance on target platform
    Create queuing models for application simulation. Perform full scale benchmarks.

    Activities:
    - Collect Published Vendor Measurements
    - Catalog Rules of Thumb (Product Specific)
    - Performance Modeling
    - Simulation
    - Hardware & Software Component Benchmarking
    I am curious how often you have seen this kind of performance prediction model used seriously in a real project.
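    For readers unfamiliar with the queuing models mentioned in point 5, the simplest possible illustration is the textbook M/M/1 formula R = S / (1 - U), where U is the arrival rate multiplied by the service time. The sketch below (with made-up numbers, and far simpler than any model a real project would rely on) shows how a measured single-user service time turns into a predicted response time as load grows.

    // Tiny illustration of the queuing-model idea: predict response time with the
    // textbook M/M/1 formula R = S / (1 - U), where U = arrivalRate * serviceTime.
    // The numbers are made up; real projects use far richer models.
    public class MM1Prediction {

        public static void main(String[] args) {
            double serviceTimeSeconds = 0.050;          // measured single-user service time
            double[] arrivalRates = {1, 5, 10, 15, 19}; // requests per second to predict for

            for (double lambda : arrivalRates) {
                double utilization = lambda * serviceTimeSeconds;
                if (utilization >= 1.0) {
                    System.out.println(lambda + " req/s: saturated (utilization >= 100%)");
                    continue;
                }
                double responseTime = serviceTimeSeconds / (1.0 - utilization);
                System.out.printf("%5.1f req/s -> utilization %4.0f%%, response %6.1f ms%n",
                        lambda, utilization * 100, responseTime * 1000);
            }
        }
    }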
  10. It is indeed a very interesting subject, but I would really like to see the user experience topic treat users as humans with concerns other than just the performance and robustness of the system.
  11. This is really an excellent article. I follow you on most of the points; at least, that is what we should normally be doing to ensure performance. The reality is unfortunately a bit different. For instance, having a realistic test environment with data volumes similar to the production environment is usually absolutely impossible to get from project managers, no matter how aware of performance issues they may be. In my experience, performance experts are usually politely asked to extrapolate the future performance of the production system from a very scaled-down test environment. Sometimes people have budgeted the resources for one benchmark, but it is of course run during development, because top management wants to be reassured that the new technology they bought is indeed able to scale up as promised. When you tell them that it is not so simple, that no computer system obeys simple proportionality rules, that you have to build very complex queuing models to represent the performance realistically, and that it will take a lot of time to produce the estimates, you are asked to prove your added value by delivering a good estimate at low cost and in little time, and of course to commit to its validity: this is your expert job, after all. I have written a few more comments on my blog if you're interested.