Different Views of Testing

One of the breakout topics we had with Cary Millsap was our approach to performance testing. In a whirlwind two hours, Patrick and I managed to cover everything from our data model to our user-abandonment approach to our distribution model. Cary then put together the simple visual below showing how his team used to performance test at Oracle.

The visual is fairly easy to understand. There are essentially three views of the world. The first is the pessimistic view, in which the testing variables are skewed toward the negative case, such as a densely populated data model and/or an arbitrary workload substantially higher than realistic. The realistic view is an approach to testing workload variations (data and users) in a more realistic or known fashion, based on collected and studied data. For us this might be a challenging task, given that we have close to 4,000 unique installations across our various products. The final view is the optimistic one, a testing approach in which the workload variations (data and users) are more likely to produce a favorable result.
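To make the distinction concrete, here's a rough sketch of the three views expressed as workload profiles. The parameter names and magnitudes are mine, purely for illustration, and don't reflect our actual test configuration:

```python
# Illustrative sketch: the three testing views as workload profiles.
# All names and numbers here are hypothetical examples.
WORKLOAD_PROFILES = {
    "pessimistic": {
        # Densely populated data model, load well above anything realistic
        "data_rows": 50_000_000,
        "concurrent_users": 2_000,
        "think_time_seconds": 1,
    },
    "realistic": {
        # Parameters derived from sampling the actual install base
        "data_rows": 5_000_000,
        "concurrent_users": 300,
        "think_time_seconds": 10,
    },
    "optimistic": {
        # Light data model and load, likely to produce a favorable result
        "data_rows": 100_000,
        "concurrent_users": 25,
        "think_time_seconds": 20,
    },
}
```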

The key thing Millsap was calling out is that when you put these three testing approaches head to head and compare them against simple criteria such as pass/fail, you can in essence minimize your testing activities. In Millsap's view, there is far more value in a pessimistic test that yields a pass, or an optimistic test that yields a failure, mainly because those combinations of results are unexpected. We expect a pessimistic test to fail, just as we expect an optimistic test to pass. What happens when they do not go as planned?
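One way I think about Millsap's point is as a simple decision table over (view, result) pairs. This framing is my own, not something from Cary's material:

```python
# Expected outcomes carry little new information; the unexpected ones
# (pessimistic pass, optimistic fail) are where the value is.
EXPECTED = {
    ("pessimistic", "fail"),
    ("optimistic", "pass"),
}

def information_value(view, result):
    """Classify a test outcome by how surprising, and thus valuable, it is.
    Realistic tests fall through to "high" since their results always
    tell us something about the known workload."""
    return "low" if (view, result) in EXPECTED else "high"

assert information_value("pessimistic", "pass") == "high"  # unexpected: dig in
assert information_value("optimistic", "fail") == "high"   # unexpected: dig in
assert information_value("pessimistic", "fail") == "low"   # expected
assert information_value("optimistic", "pass") == "low"    # expected
```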

I’m simply drafting my thoughts on Cary’s message to us. One inference is that our Optimistic/Pass testing could be minimized during the release. That doesn’t mean we remove it outright; it simply means that maybe we don’t run it every cycle if it yields a consistent Optimistic/Pass. The second message is that we need to do more pessimistic testing in order to collect more samples of data and make a better decision about cutting, adding, or sustaining pessimistic tests. Right now we do very few pessimistic tests. Our dimensional tests skew away from LG, XL, EX, and M3 other than in Apdex. That might mean we need to amend our data model for scaling, as well as our scenarios, to embrace larger dimensions.
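As a thought experiment, here's roughly what "don't run it every cycle if it consistently passes" could look like as scheduling logic. The thresholds are invented for illustration:

```python
def should_run(view, recent_results, cycle,
               streak_threshold=10, revisit_every=5):
    """Skip an optimistic test once it has a long unbroken pass streak,
    but still re-run it periodically rather than removing it outright."""
    if view != "optimistic":
        return True  # keep pessimistic and realistic tests in every cycle
    # Count the unbroken run of passes at the end of the history
    streak = 0
    for result in reversed(recent_results):
        if result != "pass":
            break
        streak += 1
    if streak < streak_threshold:
        return True
    return cycle % revisit_every == 0  # occasional re-check, not removal
```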

If we can make a better effort with realistic tests, that too would be helpful. I wrote a blog post early last week about sampling our install base more effectively. We need to be more progressive in our sampling of scenario interactions, as well as in our data-model analysis (volumetric sampling). I find this somewhat humorous, as these are the same thoughts I brought to the team five years ago when we started PE. We simply can't stray from our roots.
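For the volumetric-sampling side, the idea could be sketched like this: pull row counts from each sampled installation and bucket them into size dimensions, so the realistic profiles are weighted by what the install base actually looks like. The size boundaries and input shape here are hypothetical:

```python
from collections import Counter

def classify_dimension(total_rows):
    # Hypothetical size boundaries, for illustration only
    if total_rows < 100_000:
        return "small"
    if total_rows < 1_000_000:
        return "medium"
    if total_rows < 10_000_000:
        return "large"
    return "extra-large"

def sample_install_base(installs):
    """Tally how many sampled installations land in each size bucket.

    `installs` is assumed to be an iterable of dicts with a 'total_rows'
    key gathered from volumetric sampling of each installation.
    """
    return Counter(classify_dimension(i["total_rows"]) for i in installs)
```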
