
Some Factors Driving UI Performance Analysis

Sometimes it’s hard for performance engineers to decide when it’s time to study a transaction in isolation versus running a full-scale design of experiments. I wanted to put down some thoughts about what drives a UI performance analysis project. By a UI performance analysis project I mean a profiling project in which we study end-to-end response time characteristics across our certified browsers. We have been calling this work E2E, short for end-to-end.

We start by measuring end-to-end round-trip response time for each of our browsers. Next, we profile each browser to understand the latency contributed by the browser platform itself. Then we profile the application tier, and finally we study query execution at the database tier. The end goal is to take a transaction and decompose its latency at each layer of the end-to-end pipeline.
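To make the decomposition idea concrete, here is a minimal Python sketch. The tier names and timings are hypothetical illustrations, not measurements from our application; the point is simply how per-tier latencies roll up into the end-to-end number.

```python
def decompose_latency(tiers):
    """Given per-tier latencies in milliseconds, return each tier's
    percentage share of the end-to-end response time."""
    total = sum(tiers.values())
    return {name: round(ms / total * 100, 1) for name, ms in tiers.items()}

# Hypothetical timings for one transaction (not real measurements):
sample = {"browser": 420.0, "network": 80.0, "app_tier": 150.0, "database": 350.0}
print(decompose_latency(sample))  # here the browser tier dominates at 42.0%
```

A table like this per transaction, per browser, is what the E2E work ultimately produces.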

I’ve put down some initial thoughts on the factors SPEs should use when recommending a UI performance analysis project. I’ll briefly summarize them below.

Criteria for End-to-End Analysis

1. New Design Pattern: At the heart of SPE, we as performance engineers are expected to identify good design patterns and call out anti-patterns. By design patterns I mean not only API patterns, but also new approaches to workflow and client-side interaction in the UI. Whenever a new interface design pattern is introduced, we should study its behavioral characteristics across multiple browsers.

2. Variable Workload: Any time we allow the user to interact with a flexible or variable workload of data from a single page request, we should without question study how the size of that workload affects page responsiveness.
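A workload study can start very simply: time the same operation at several workload sizes and look at how latency grows. In the sketch below, `render` is a hypothetical stand-in for a real page-build step, used only to show the shape of the experiment.

```python
import time

def render(rows):
    """Hypothetical stand-in for building a page over a variable workload."""
    return "".join(f"<tr><td>{i}</td></tr>" for i in range(rows))

# Time the same operation at increasing workload sizes:
for rows in (10, 100, 1000):
    start = time.perf_counter()
    render(rows)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{rows:>5} rows: {elapsed_ms:.3f} ms")
```

Whether latency grows linearly or worse with the workload is exactly the question this criterion asks us to answer.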

3. Rich User Interface: Simply put, the richer the interface, the greater the need to study its performance behavior.

4. Predictable Model of Concurrency: My argument here is that use cases should first be studied under non-concurrent conditions in order to understand the service time of a single request. Once that baseline is understood, we can build a much clearer model of behavior under concurrent load.
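As an illustration of why the single-request baseline matters: if we assume arrivals are reasonably approximated by a simple open queueing model (an M/M/1 queue is my assumption here, not something the methodology mandates), then the service time measured in isolation is all we need to predict response time under load.

```python
def mm1_response_time(service_time, arrival_rate):
    """Mean response time for an M/M/1 queue.

    service_time: mean service time of one request in isolation (seconds)
    arrival_rate: request arrival rate (requests per second)
    """
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        raise ValueError("system saturated: utilization >= 1")
    return service_time / (1.0 - utilization)

# A 0.2 s single-request service time at 2.5 req/s (50% utilization):
print(mm1_response_time(0.2, 2.5))  # prints 0.4
```

Note how a request that takes 0.2 s in isolation already takes 0.4 s at 50% utilization; without the non-concurrent baseline, we could not separate service time from queueing delay.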

5. Core State or Action: I am a firm believer that when we introduce a use case that is likely to change a user’s session behavior, it should be studied. Likewise, if we essentially force users to perform a particular operation or traverse a particular page, that path should be studied.

6. Use Case Affecting User Adoption: This is a fairly broad criterion. What I am getting at is that when a use case is likely to increase adoption of the product or tool, it is a worthy candidate for study. For example, a few weeks ago 1-800-Flowers became the first retailer to set up a commerce site on Facebook. Their ultimate goal is to drive sales for their company. The underlying goal for a company like Facebook in enabling such applications is to keep the platform sticky, so more users adopt it and remain loyal to it.

7. Resource- and/or Interface-Intensive Transaction Hypothesized: What I mean by this is that as SPEs, we hypothesize whether a transaction will be resource- and/or interface-intensive as part of our modeling efforts. If we have any suspicion that a transaction will affect the system execution model, it should be an immediate candidate for analysis.

8. Transactions Affecting Cognition: We need to call out transactions that shape how users perceive the interface they are working with. Users have response time expectations; when those expectations are not met, they become impatient or abandon the task altogether. Ultimately, poor responsiveness decreases adoption.

Five Articles Worth Reading About Client-Side Performance

The days of focusing solely on server performance appear to be numbered. We need to build up our skills in the area of client-side processing. While I’ve made a number of posts on the subject over the past 18 months, new posts may become more of a daily or weekly pattern. There are five articles I would like the team to read. They are quick reads: four of the five should take less than 10 minutes, while the PowerPoint presentation from Yahoo may take a little longer. Abstracts are below.

A Study of Ajax Performance Issues

The first article comes from the blog Direct from Web2.0. It covers six points, primarily about the competing browsers in early 2008. Nothing is captured about Google Chrome, as that browser was not yet available at the time. It’s definitely a good primer to read:

  • Array Is Slow on All Browsers
  • HTML DOM Operation Performance in General
  • Calculating Computed Box Model and Computed Style
  • FireFox Specific Performance Issues
  • IE Specific Performance Issues
  • Safari Specific Performance Issues

Optimizing Page Load Time

The second article is about optimizing page load time in web applications. The author, Aaron Hopkins, covers much of the material the YSlow team has written about over the past two years. He offers a very comprehensive list of tips, plus about a half-dozen links to comparative information on client-side performance. It is definitely worth reading and following the links.

HTTP Caching and Cache Busting Presentation from Yahoo

The third article is the most comprehensive of the group. It is actually a presentation given by Michael Radwin of Yahoo at the Apache Conference back in 2005. For those of you who want to learn more about HTTP caching, this one takes the prize.
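A core trick in this territory is cache busting: serve static assets with a far-future expiry, and embed a content fingerprint in the URL so that a change to the file produces a new URL that bypasses every cache. The sketch below shows the idea; the path and the `v` query-parameter name are my own choices, not anything prescribed by the presentation.

```python
import hashlib

def busted_url(path, content):
    """Embed a short content hash in the asset URL. The asset can then be
    cached 'forever': when the file changes, the URL changes, and caches
    treat it as a brand-new resource."""
    digest = hashlib.md5(content).hexdigest()[:8]
    return f"{path}?v={digest}"

# Same content -> same URL (cache hit); changed content -> new URL.
print(busted_url("/js/app.js", b"alert('v1');"))
print(busted_url("/js/app.js", b"alert('v2');"))
```

Many build pipelines put the hash in the filename itself rather than a query string, since some caches ignore query parameters; the principle is the same.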

Caching Tutorial

This is a somewhat obscure article from Mark Nottingham. What I like about this post is how he simplifies the topic of web caching: he doesn’t make the sophisticated reader feel bored or the newcomer feel stupid. He simply presents the material in clear, easy-to-understand terms.

Circumventing Browser Connection Limits for Fun and Profit

The fifth article comes from Ryan Breen at Gomez, the author of Ajax Performance. Breen’s point is that not all browsers behave the same: most limit how many requests they will load in parallel, which still causes latency when interacting with a client-rich page. These limits can be worked around so the browser performs more operations in parallel, but doing so requires configuration changes. The author makes a great case for why making these changes can really speed up performance.
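One common server-side way to work around per-hostname connection limits is to serve static assets from several hostnames, often called domain sharding. The sketch below uses hypothetical shard hostnames; the key detail is that the mapping is deterministic, so a given asset always resolves to the same host and stays cacheable across page loads.

```python
import hashlib

# Hypothetical shard hostnames; a real deployment would use its own.
SHARDS = ["static1.example.com", "static2.example.com"]

def shard_host(asset_path):
    """Deterministically map an asset path to one shard hostname, so the
    same asset is always fetched from the same host (keeping caches warm)
    while assets overall spread across hosts and download in parallel."""
    h = int(hashlib.md5(asset_path.encode("utf-8")).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

for path in ("/img/logo.png", "/css/site.css", "/js/app.js"):
    print(path, "->", shard_host(path))
```

A random assignment would also spread the load, but it would break caching, since the same asset could come from a different host on every page view.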