Hotsos Symposium Day Two Continued

I really like Cary Millsap. It’s not that he’s some wicked smart guy or brilliant with databases; in fact, I would say Cary has above-normal intelligence and is simply a knowledgeable database practitioner. What Cary brings to the table is reason and common sense. I definitely feel more in line with Cary than with, let’s say, a Dan Tow or a Jonathan Lewis. I was lucky to sit in on Cary’s second presentation of the conference, called Lessons Learned Version 2009.03 (see attachments). It was 140 slides of common sense…a lot of preaching about software performance from a design perspective. He talked about the Messed Up App, which makes me laugh every time I see it or hear about it. At the same time it makes me realize that it’s quite possible to miss such a simple performance design pattern when SPE is not fully applied. He also spent a lot of time talking about Doug Burns’ blog post Time Matters: Throughput vs. Response Time. His key point has to do with the tradeoff between response time and throughput. I myself tend to focus on RT more than TP. Burns’ and Millsap’s argument is that there really has to be a happy medium, and I suppose we need to find that happy medium for Bb.
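To make the tradeoff concrete for myself (this is my own back-of-the-envelope math, not something from Cary’s slides), I sketched the classic single-server queueing approximation: as you push throughput closer to capacity, response time climbs steeply. The 50 ms service time here is a made-up number just for illustration.

```python
# Rough M/M/1 sketch of the response time vs. throughput tradeoff.
# Assumed service time of 50 ms per request (hypothetical).

service_time = 0.050  # seconds of work per request


def response_time(throughput_per_sec):
    """Approximate response time R = S / (1 - utilization) for an M/M/1 queue."""
    utilization = throughput_per_sec * service_time
    if utilization >= 1.0:
        raise ValueError("offered throughput exceeds capacity")
    return service_time / (1.0 - utilization)


for tps in (5, 10, 15, 18, 19):
    print(f"{tps:>2} req/s -> {response_time(tps) * 1000:6.1f} ms response time")
```

Chasing the last bit of throughput costs you response time, and vice versa, which is exactly the happy medium Burns and Millsap are talking about.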

When Millsap mentioned Burns’ blog, it got me thinking about a few things. The first has to do with our SPE efforts in Sprint 1. I think it would be a good idea to have Pengfei and Mesfin design an SPE/UML diagram for a critical use case as a means of mentoring the team. Then, come Sprint 2, the team would be responsible for working with Pengfei and Mesfin on their own diagrams. Come Sprint 3, they should be able to design their own diagrams, hopefully with little or no help.

I was thinking that, as part of our DOE, we would then try to understand E2E response time from the client through the web/app tier to the DB layer and visually overlay those response times on the diagram. This would be a great way to show design anti-patterns in a quantifiable way.
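As a strawman (the tier names and numbers below are hypothetical, just to show the shape of it), the overlay could start from nothing fancier than per-tier timings for a use case, with the dominant tier flagged:

```python
# Hypothetical per-tier timings (seconds) for one critical use case.
# In practice these would come from client, web/app, and DB instrumentation.
tiers = {"client": 0.12, "web/app": 0.48, "db": 1.35}

e2e = sum(tiers.values())
print(f"E2E response time: {e2e:.2f}s")
for name, seconds in sorted(tiers.items(), key=lambda kv: kv[1], reverse=True):
    share = seconds / e2e
    flag = "  <-- look here first" if share > 0.5 else ""
    print(f"  {name:<8} {seconds:5.2f}s  ({share:5.1%}){flag}")
```

Overlaying those percentages on the SPE/UML diagram would make a design anti-pattern like the Messed Up App jump right off the page.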

One last point is that we really need to start looking more at OS-level profiling. In our Microsoft project a few weeks back, we learned a lot about our performance issues with the Grade Center by using the OS-level profiling tools on the Microsoft platform. Knowing that we are seeing unusual latency on Solaris, I think we need to become more familiar with DTrace, and the same goes for profiling on Red Hat. We simply need to gain those skill sets in order to get to the heart of some of our performance issues in the lab.
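To start building that muscle, even a canned DTrace one-liner fired from a small Python harness would tell us something about where those Solaris boxes spend their time. This is just a sketch (the ten-second window is my own choice), and it needs root on the target host:

```python
# Sketch: run a standard DTrace one-liner that counts system calls per process
# for ten seconds, then print whatever it reports. Requires root on Solaris.
import subprocess

DSCRIPT = "syscall:::entry { @[execname] = count(); } tick-10s { exit(0); }"

result = subprocess.run(
    ["dtrace", "-n", DSCRIPT],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("dtrace failed:", result.stderr)
```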

Cary’s presentation was doomed from the start to run over; he prepared 140 slides. Sadly, he didn’t finish the one section I wanted to see, which was about designing a test. The attached notes from the presentation ask the question: what are you trying to prove, success or failure? He offers his advice along these lines (a rough sketch of how I read it follows the list):

  • To prove success, use pessimistic assumptions
  • To prove failure, use optimistic assumptions
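My reading of that advice, in code form (the workload numbers are invented, and this is my interpretation rather than anything from the slides): only a result that survives assumptions stacked against it proves anything, so size the test accordingly.

```python
# Hypothetical workload sizing under the two mindsets.
# Baseline numbers are made up for illustration.
baseline_users = 2000          # expected concurrent users
baseline_think_time = 30.0     # seconds between requests per user


def arrival_rate(users, think_time):
    """Requests per second offered by `users` with the given think time."""
    return users / think_time


# Proving success: assume more users and less think time than expected.
pessimistic = arrival_rate(baseline_users * 1.5, baseline_think_time * 0.5)

# Proving failure: assume fewer users and more think time than expected.
optimistic = arrival_rate(baseline_users * 0.75, baseline_think_time * 1.5)

print(f"Load to prove success: {pessimistic:.0f} req/s (pessimistic assumptions)")
print(f"Load to prove failure: {optimistic:.0f} req/s (optimistic assumptions)")
```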