SPT Notes About Transactional Analysis

Most of the information in this post relates to questions I have been asking myself. Anyone is free to respond if they have ideas. The post is an amalgam of random thoughts, initial process notes, and some reporting ideas.

I would say that we have some problems that need addressing. The problems I am thinking about relate to our inability to thoroughly perform transactional analysis during the test cycle. There are a number of reasons for it; more than reasons, we have excuses. I'm tired of excuses, and I think it's time to just do it. The questions that come to mind: who will perform the analysis, and when will it be done? How early in the cycle can something be called out? Who responds to the initial escalation?

I think the key word I just used is escalation. Yesterday I gave a presentation to the Leadership Group within Product Development. One of the slides was about the differing skill sets on the Performance Engineering team. Each of our engineers has different skills and interests. Because the skill sets differ so much, not everyone on the team can perform every function that a cross-functional team member might ask of a performance engineer. As it relates to a performance forensics problem, which we would define as a problem to be solved from the result of a test simulation, we need a chain of escalation so problems can be effectively raised, solved, and prevented from recurring.

Request #1: The Need for a Robust Forensics Escalation Process

We need to figure out an effective chain of command in which first-level and second-level forensic analysis can happen directly within our AppLabs team. This means we must build the hierarchy of response, as well as invest time from a training perspective. There are certain forensics skills that every member of the team has to have; then there is a set of secondary skills that a second tier is expected to acquire. We as a team will have to decide what those skills are. Beyond the second tier, we must have a third tier of responsiveness, and who plays that role is still to be determined. It's simple to say Cerbibo, but I don't think it's quite that easy. The Cerbibo team has some very advanced skills that might not be effectively used performing tier-three support. We are going to have to step back and decide whether tier three is our North American Engineers or Cerbibo. What I can say is that I would expect a fourth and possibly a fifth tier of escalation. The fourth tier would most likely be whichever team was not selected for tier three (North American Engineers or Cerbibo). Tier five would be the more senior members of the Performance Engineering team. Tier six and higher would be either Steve or an Architect within the Development team. We need to sort out this escalation chain over the course of 9.0 testing. Now is the best time to roll out tiers one and two to AppLabs.

Galileo Report Enhancement #1

I will definitely open a Galileo ticket before the day ends. We need to develop a report with detailed metadata calling out transactions in question. This might sound fairly obvious, and we might even have a report that does some of this already, but I would call this a slightly more intelligent report than the existing reports we have today. The new report would be criteria/rule based, flagging a transaction when any of the following is met:

  • Transaction Response Time is greater than X seconds
  • Mode Percentage is lower than X percent
  • Standard Deviation of a Transaction is greater than X seconds
  • Skew Factor is offset

We obviously need to provide parameters for the inputs I specified above. I don't think it's as easy as saying a transaction is flagged when it is greater than 10 seconds or its mode percentage is less than 25%. Then again, maybe it is that easy; otherwise, how are we going to decide whether a problem needs investigation?
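As a rough sketch of the either/or rule idea, a small helper could flag a transaction whenever any single criterion trips. The thresholds and field names below are placeholders of my own invention, not decided parameters:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- the real parameters are still to be decided.
MAX_RESPONSE_SECONDS = 10.0
MIN_MODE_PERCENTAGE = 25.0
MAX_STDDEV_SECONDS = 5.0
MAX_ABS_SKEW = 1.0

@dataclass
class TransactionStats:
    name: str
    response_time: float    # seconds, this sample
    mode_percentage: float  # % of all samples equal to the mode
    std_dev: float          # seconds, across all samples
    skew: float             # skewness across all samples

def flag_reasons(t: TransactionStats) -> list[str]:
    """Return the rule names this transaction trips (empty list = healthy)."""
    reasons = []
    if t.response_time > MAX_RESPONSE_SECONDS:
        reasons.append("response_time")
    if t.mode_percentage < MIN_MODE_PERCENTAGE:
        reasons.append("mode_percentage")
    if t.std_dev > MAX_STDDEV_SECONDS:
        reasons.append("std_dev")
    if abs(t.skew) > MAX_ABS_SKEW:
        reasons.append("skew")
    return reasons
```

The report would then only list transactions whose reason list is non-empty, with the reasons shown as the "why flagged" column.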

The report itself will need metadata about the transactions. Specifically, it will need the following:

  • Transaction Name (Grouped by a Count)
    • If more than one sample of the same transaction meets the criteria, we will need to drill down into either a child report page or a tree.
  • Dimensional Information
  • Response Time
  • Transactional Mean (all samples of this transaction)
  • Transactional Standard Deviation (all samples of this transaction)
  • Transactional Mode (all samples of this transaction)
  • Transactional Mode Percentage (all samples of this transaction)
  • Transactional Skew (all samples of this transaction)
  • CPID
  • Time Stamp
  • Server (if applicable)

I could envision this report looking a few different ways, and I know there are a few things I am interested in seeing. First, I would love to see a scatterplot of response times over time, though I don't want to rely solely on a scatterplot; I would love to see a table/chart as well. Easy export of the data to Excel would also be of interest. Not sure if this is possible, but can we generate image files from our scatterplot?
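The per-transaction columns above (mean, standard deviation, mode, mode percentage, skew) could be computed from the raw samples along these lines. This is only a sketch: the 0.1-second bucketing before taking the mode, and the use of Pearson's second skewness coefficient, are my assumptions, not how Galileo necessarily does it:

```python
import statistics
from collections import Counter

def transaction_aggregates(samples: list[float]) -> dict:
    """Aggregate stats for all samples of one transaction (times in seconds).

    Response times are bucketed to 0.1s before taking the mode, since raw
    floats rarely repeat exactly -- the bucket width is an assumption.
    """
    mean = statistics.fmean(samples)
    std_dev = statistics.stdev(samples) if len(samples) > 1 else 0.0
    buckets = Counter(round(t, 1) for t in samples)
    mode_value, mode_count = buckets.most_common(1)[0]
    mode_pct = 100.0 * mode_count / len(samples)
    # Pearson's second skewness coefficient: 3 * (mean - median) / std_dev.
    median = statistics.median(samples)
    skew = 3.0 * (mean - median) / std_dev if std_dev else 0.0
    return {"mean": mean, "std_dev": std_dev, "mode": mode_value,
            "mode_pct": mode_pct, "skew": skew}
```

A mode percentage of, say, 75% with one long outlier is exactly the shape of problem the report should surface.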

To go along with this report, we need to train the team on how to effectively use the Galileo data to dig into the logs and collect enough information to account for a systemic issue. We then need to go to our other forensic data to make a correlation. If we conclude that the issue was not a resource/interface issue, we might need to take another sample as that user with different forms of instrumentation enabled.

Galileo Report Enhancement #2

This second report enhancement is about transactional performance comparison from dimension to dimension, platform to platform, and test to test. Let's say we have a transaction; we will call it T1. T1 took 20s during the most recent test. It turns out that sample came from a Solaris test in the XL dimension, while all other samples of T1 were less than 1 second. How do we explain this accurately and work the problem out?

The problem isn’t incredibly simple. I kind of lead the listener hanging. I don’t same how many samples of the transaction we have taken, nor do I say whether any of those were in the XL dimension or a larger dimension. Assume the data can provide clarity to this problem. If that assumption could be made, would I would love to see if something like the following:

  • Color code each dimension
  • Provide different shapes by platform
  • Insert the line for the mode value
    • When comparing multiple tests, we could color code the line of mode

This would be an individual-transaction report only, so it would be linked from other reports in the system that present transaction summary details. Below is a crude attempt at visualizing this chart.
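The chart rules in the list above could be sketched as a small helper that styles each scatter point and computes the mode line. The dimension names, platform names, and color/shape choices here are purely hypothetical stand-ins for the real test matrix:

```python
from collections import Counter

# Hypothetical dimension and platform names -- stand-ins, not the real matrix.
DIMENSION_COLORS = {"S": "blue", "M": "green", "L": "orange", "XL": "red"}
PLATFORM_MARKERS = {"Solaris": "square", "Linux": "circle", "Windows": "triangle"}

def plot_points(samples):
    """Map each sample (name, dimension, platform, seconds) to a styled point."""
    points = []
    for name, dimension, platform, seconds in samples:
        points.append({
            "transaction": name,
            "seconds": seconds,
            "color": DIMENSION_COLORS.get(dimension, "gray"),
            "marker": PLATFORM_MARKERS.get(platform, "cross"),
        })
    return points

def mode_line(samples):
    """Value for the horizontal mode line (times rounded to 0.1s, an assumption)."""
    counts = Counter(round(s[3], 1) for s in samples)
    return counts.most_common(1)[0][0]
```

With the T1 example above, the lone 20s Solaris/XL point would render as a red square far above a mode line sitting under 1 second, which is exactly the visual pop we want.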

Galileo Report Enhancement #3

One thing that would be great is the ability to dynamically alter a report. Let's say you are looking at a report that displays all transactions greater than 10 seconds for a particular test. It would be awesome to apply a filter that says: show me every transaction that is greater than, or less than, X seconds. Basically, the ability to customize a report on the fly would be the ultimate goal.


