Monthly Archives: March 2011

Top 10 Users…Why This is Important

I’ve been meaning to write this blog for quite a while. Procrastination has definitely held me back…that and the fact that what I want to do is no simple task. When you want a complex task to be done, you better have your thoughts organized.

Over the years, I’ve considered so many different ways to forensically study what people do in the system. I’ve looked at logs. I’ve looked at our ACTIVITY_ACCUMULATOR table. I’ve looked at aggregates of data as well. I’ve brought in tools like Coradiant Dynatrace and Quest User Performance Management. None of these tools has ever met my real needs. The reason is that I haven’t been able to articulate what I am really in search of.

I think I’ve had a few eureka moments as of late with what I’m interested in seeing. I know that I want to see what is being done in our product, and when. I know that I want to understand the sequence of events and the orientation of where events happen in the system. I want to understand the probability of something happening. I want to see the frequency of something happening. In the case of frequency, I also want to understand the volume behind it. I think all of this data is relevant because it will give us more insight into predicting patterns of usage of a system.

A lot of this comes from conversations I’ve had recently about Assessment performance. A lot of customers have been complaining about high-stakes assessments in which they have hundreds of students taking tests all within a lab. They have been complaining about both memory issues (makes sense) and I/O issues (inserts/updates on QTI_RESULT_DATA), which also makes sense. In the case of I/O, they didn’t really call the issues out themselves. Rather, after some discussion, I called out that there were likely I/O issues based on the behavior of an assessment. One of the things I’ve been suggesting to customers is to query the QTI_RESULT_DATA table to get a resultset of rows inserted versus modified, then put it in a scatter plot (from an isolated period of time) to see the volumes of inserts versus updates and the timeslices in which these events were occurring. From that data, they can then go into their I/O sub-system, graph their IOPS for those same periods of time, and overlay the two charts…

SQL> desc QTI_RESULT_DATA;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 QTI_ASI_DATA_PK1                          NOT NULL NUMBER(38)
 PK1                                       NOT NULL NUMBER(38)
 POSITION                                           NUMBER(38)
 ASI_PLIRID                                         VARCHAR2(255)
 ASI_TITLE                                          NVARCHAR2(255)
 DATA                                               BLOB
 BBMD_RESULTTYPE                                    NUMBER(38)
 PARENT_PK1                                         NUMBER(38)
 BBMD_DATE_ADDED                                    DATE
 BBMD_DATE_MODIFIED                                 DATE
 BBMD_GRADE                                         NVARCHAR2(32)
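
The kind of query I have in mind looks roughly like the sketch below. It assumes we can treat a row whose BBMD_DATE_MODIFIED has moved past BBMD_DATE_ADDED as an update and everything else as a first-time insert, and it buckets the activity into one-minute slices over an arbitrary two-hour window (the dates are just placeholders):

-- Sketch only: bucket assessment result activity into one-minute slices,
-- counting first-time inserts separately from subsequent updates.
SELECT TRUNC(NVL(bbmd_date_modified, bbmd_date_added), 'MI') AS time_slice,
       SUM(CASE WHEN bbmd_date_modified IS NULL
                  OR bbmd_date_modified = bbmd_date_added
                THEN 1 ELSE 0 END)                           AS inserts,
       SUM(CASE WHEN bbmd_date_modified > bbmd_date_added
                THEN 1 ELSE 0 END)                           AS updates
  FROM qti_result_data
 WHERE NVL(bbmd_date_modified, bbmd_date_added)
       BETWEEN TO_DATE('2011-03-01 09:00', 'YYYY-MM-DD HH24:MI')
           AND TO_DATE('2011-03-01 11:00', 'YYYY-MM-DD HH24:MI')
 GROUP BY TRUNC(NVL(bbmd_date_modified, bbmd_date_added), 'MI')
 ORDER BY time_slice;

Plot the inserts and updates columns from that resultset as the scatter plot, then overlay IOPS from the storage side for the same window.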

 

Back to My Point

So all of this talk about using scatter plots to isolate when certain events happen en masse got me thinking about why I wasn’t getting what I really wanted (aka my rambling above). What I really wanted was to create an identity for a user. I didn’t care about their name, just their role. I would call them “Insanely Ambitious Student” or “Constantly Connected Teacher”. It really doesn’t matter. What matters is that you can start building profiles about these users. Before you can build the profile, you have to have a starting point.

My starting point is to look at every entity in the system. I would like to be able to directly or indirectly trace back a row of data to a user. It’s not as simple as you might think. First off, not every table has a foreign key relationship to USERS. Some tables tie back to COURSE_USERS instead, which is not a problem per se, but it means this isn’t a straight-up look at tables with USER_PK1 foreign keys.

As a starting point, I would like to do a gap analysis to determine which entities can be directly tied back to a user. From there, we need to know whether each row can be presented with a time/date value. In some cases, the entity can even distinguish the initial INSERT from an UPDATE. We really need to understand this system-wide, which means, yes, we could and would touch the monster ACTIVITY_ACCUMULATOR table.
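
A first pass at that gap analysis could come straight from the Oracle data dictionary. The sketch below is only illustrative (it assumes we are connected as the Blackboard schema owner and that the referenced table is literally named USERS): it lists every table with a foreign key back to USERS, along with any DATE columns that could serve as the time/date value for a row.

-- Sketch only: tables with a foreign key referencing USERS, plus any DATE
-- columns on those tables that could anchor each row in time.
SELECT fk.table_name,
       fkc.column_name AS fk_column,
       col.column_name AS date_column
  FROM user_constraints  fk
  JOIN user_constraints  pk  ON pk.constraint_name  = fk.r_constraint_name
  JOIN user_cons_columns fkc ON fkc.constraint_name = fk.constraint_name
  LEFT JOIN user_tab_columns col ON col.table_name = fk.table_name
                                AND col.data_type  = 'DATE'
 WHERE fk.constraint_type = 'R'
   AND pk.table_name      = 'USERS'
 ORDER BY fk.table_name, fkc.column_name, col.column_name;

Anything that does not fall out of this list becomes a candidate for the indirect trace, for example through COURSE_USERS.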

We could even begin with a single entity. I would even settle for an entity that stores USER_PK1 in it. It has to be a table that can present a many-to-one reference of rows to a user. A good example might be MSG_MAIN, since it meets all of the criteria.

We could easily look at time series data by user, as well as aggregate statistics. Both are relevant, but obviously time series is a little more visual. I think you need aggregate statistics, or at a minimum binned data (binned by time series per user), like aggregate counts by user over each week, as a key data point.

SQL> desc MSG_MAIN;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PK1                                       NOT NULL NUMBER(38)
 DTCREATED                                 NOT NULL DATE
 DTMODIFIED                                         DATE
 POSTED_DATE                                        DATE
 LAST_EDIT_DATE                                     DATE
 LIFECYCLE                                 NOT NULL VARCHAR2(64)
 TEXT_FORMAT_TYPE                                   CHAR(1)
 POST_AS_ANNON_IND                         NOT NULL CHAR(1)
 CARTRG_FLAG                               NOT NULL CHAR(1)
 THREAD_LOCKED                             NOT NULL CHAR(1)
 HIT_COUNT                                          NUMBER(38)
 SUBJECT                                            NVARCHAR2(300)
 POSTED_NAME                                        NVARCHAR2(255)
 LINKREFID                                          VARCHAR2(255)
 MSG_TEXT                                           NCLOB
 BODY_LENGTH                                        NUMBER(38)
 USERS_PK1                                          NUMBER(38)
 FORUMMAIN_PK1                             NOT NULL NUMBER(38)
 MSGMAIN_PK1                                        NUMBER(38)
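
As a sketch of the binned aggregate I’m describing, weekly post counts per user can come straight out of MSG_MAIN, assuming we use DTCREATED as the event timestamp and USERS_PK1 (which is nullable) as the tie back to the user:

-- Sketch only: weekly discussion-post counts per user, binned by ISO week.
SELECT users_pk1,
       TRUNC(dtcreated, 'IW') AS week_starting,
       COUNT(*)               AS posts
  FROM msg_main
 WHERE users_pk1 IS NOT NULL
 GROUP BY users_pk1, TRUNC(dtcreated, 'IW')
 ORDER BY users_pk1, week_starting;

Swap the TRUNC(dtcreated, 'IW') bucket for a day or an hour and the same shape of query feeds the time series view as well.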

Static Code Analysis of LoadRunner C Code

One of the things I would like our new Performance Test lead engineer to work on this year is improved C coding for our LoadRunner library. There’s very little we do in terms of proactive code management. We obviously have a lot of functions, and many of them need to be deprecated. At the same time, all we do are light code reviews, and even in those we struggle to provide very basic guidance. What I would like to do is implement static code analysis for our C code, similar to the way we are doing it with our Sonar project.

Here’s the good news…it appears that we can actually integrate directly with Sonar. I’m not sure which tool powers the rules engine they use; it looks like a custom rule set. We could also look at the following tools to see if they integrate with Sonar.

 

What Was the Best Advice You Ever Got?

What was the best advice you ever received? Who was that sage person who dropped a little nugget of life on your ears? That’s the question on my mind this morning. Sadly, I can’t recall who gave it to me, but I certainly remember the message. I think I was 15 years old and finishing up my first year of high school. I was trying to get through biology, which isn’t really all that tough a class. But it was a challenge to me for the first time in my life, or better yet, it was the first challenge where I internally struggled over whether to quit and do poorly or take the class by the reins and do well.

The message was pretty straightforward…It was work hard now and have fun later, or have fun now and pay later. The point was I needed to put in the work now in order to see any rewards in life later.

So now that I’ve shared my message with you, what was the best advice you received?

 

Moving from E2E to EoE

I wrote my first reference to E2E back at the Hotsos Symposium in 2009. Quoting that blog…

“I was thinking we would then as part of our DOE, try to understand E2E response time from the Client to the Web/App to the DB layer and visually overlay response times with the diagram. This would be a great way to show design anti-patterns in a quantifiable way.”

The whole point of using the term E2E was to explain end-to-end response time and resource breakdown for the purpose of identifying software anti-patterns. I had the right idea, but it was the wrong term.

What I really meant to say was EoE (EoE = Execution of Experiment). Patrick recently helped me realize that E2E just didn’t make sense because it implied everything was end to end, meaning a UI experiment was required. Not everything requires a UI experiment. Some experiments are run through non-UI mechanisms, so the term could cause confusion.

So the solution going forward is to change E2E to EoE. The execution of experiment is intended to be a full analysis from end to end, but the dependency on a UI simulation is really driven by the test type and approach.

That Got Me Thinking About Our DoE

There are a few things missing from our DoEs right now. First, our DoEs do not necessarily have goals. By goals I’m really talking about non-functional requirements for performance and scalability. I think we need to address this gap from Sprint 7 going forward so that we have a goal to strive toward and then attempt to go beyond. Otherwise, how do we really know when the DoE is complete?

The other thing that’s missing is our testing approach. We should really justify why we are going to use a particular test tool. I was talking with Patrick the other day and mentioned that we have all of these different ways to run a test. We could run a Selenium test. We could execute a batch script. We could run a JUnitPerf test. We could run a SQL script. We could even use LoadRunner. I’m sure the team is laughing at that one because I’ve been against using LoadRunner for DoE tests. I now want to make it such that any tool (so long as it can be automated from Galileo) can be used for EoE, provided we justify the test approach in the DoE document.

 

Welcome 2011…I’m Back

So it’s been a few weeks since my last blog. I was lucky enough to take the last week and a half of 2010 off for some much-needed R&R. Going into Monday, I was pretty ready to start the new year off with a bang, but lucky me, I had a high fever (~101) which was not so good for coming into the office. Turns out I had a little virus, and some fluids (i.e., Gatorade and water) did the trick.

  

As I said, I’m back and ready for business. I’m going to be putting together some informative blogs over the next few days about some of the changes I would like to introduce to the team in 2011. So keep an eye out for those blogs. I will leave you with some pictures of my favorite Christmas present, which my sister-in-law gave me.

 

The Mind of a Performance Hacker

I had an unusual eureka moment in my car this evening. I get them now and then. Well, actually, according to Confluence, I’ve had 5 other “eureka” blogs since 2007. Apparently I had none this year. I had three in 2007, which must have been a good year for the team, and one each in 2008 and 2009. So I was definitely due…

Within the last few months I have had the opportunity to reframe my technology perspective with the addition of Stephanie Tan to our team. Stephanie runs our Security practice. As I attempt my hardest to provide her with modest leadership and direction, I’ve found myself engaged like a madman trying to minimize my learning curve in software security. I’ve re-read some old books that I had on the shelf. I’ve subscribed to a few periodicals. I’ve found myself at times scouring Google for hours upon hours, researching new terms and concepts so that I have more context when I talk with Stephanie and others about security.

While I’ve been doing this, it finally hit me that security engineering (the safe kind we want to practice) is just a clandestine form of performance engineering. Now, before you challenge me or turn the page, hear me out. If we say the platform of a security engineer is to engineer or construct solutions with the intent of penetrating, breaking or dismantling a software system, then I think it’s safe to say that a performance engineer has an almost identical focus. The performance engineer should be intent on breaking the system to affect responsiveness and/or scalability. In my mind they are one and the same…both are trying to break the system. We are just trying to break the system from a fairly positive perspective. We want to determine when users will abandon, when processes become unbearable, and when the system shuts down.

 

Is that what we are doing today? “Not really”…says my inner voice. I don’t think we ask the question or questions about breaking a use case, component or system. I think deep in the back of our minds we want to ask this question. Our intention during our SPE exercises is to ask questions that should or could lead us to that ultimate question or question set about breaking a system. Unfortunately, we don’t ask it. The reason I know we don’t is that we don’t build experiments with the intention of breaking the use case, component or system. Lately we have been building experiments that tell us how fast or how slow something is. That something is usually a microscopic view into the use case, component or system, and it typically interacts with “realistic” data attributes under “realistic” or normal behavioral characteristics.

 

Should we stop doing that? Well, not really. We need to take a step back and lead from the alternative. I say alternative because today our comfort zone is asking questions about “normal”, “realistic” and “common” attributes and characteristics tied to what we are building. It’s OK to ask those questions, as they are essential to giving us better context on our product. The alternative questions have to be skewed toward “performance hacking”, meaning how we can break the user experience from a responsiveness and scalability perspective. We don’t care about making a motherboard fry or burning a CPU to the ground (figuratively speaking). We do care about use cases that can single-handedly bring a system to its knees. We care about processes that aren’t predictable and run for undisclosed periods of time. We need answers about how we can literally make a page unresponsive or make a process die midstream. We need to see if we can force out-of-memory exceptions, deadlock a row or table, or even cause a thread to lock. We should look at a scheduled task meant to run every hour and try to figure out if we can make each iteration run for 65 minutes. Then what?
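
To make one of those questions concrete, here is the classic two-session sketch for deliberately deadlocking rows; the table and PK1 values are just placeholders. Each session updates the same two rows in the opposite order without committing, and the database kills one of them:

-- Sketch only: force a row deadlock between two sessions (hypothetical PK1s).
-- Session A:
UPDATE qti_result_data SET bbmd_grade = '90' WHERE pk1 = 1001;
-- Session B:
UPDATE qti_result_data SET bbmd_grade = '85' WHERE pk1 = 1002;
-- Session A (now blocks, waiting on the row Session B holds):
UPDATE qti_result_data SET bbmd_grade = '90' WHERE pk1 = 1002;
-- Session B (closes the cycle; Oracle raises ORA-00060 in one of the sessions):
UPDATE qti_result_data SET bbmd_grade = '85' WHERE pk1 = 1001;

The interesting performance question is not the ORA-00060 itself, but what the rest of the assessment workload does while those rows sit blocked.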

 

We need to put our performance hacker hats on and figure out our performance and scalability vulnerabilities…

Building a Case for Some Fast Path Principle Implementations

For a while now I’ve had this idea about providing new ways to improve time on task. This is more of a usability concept than anything, but from the perspective of performance, I’m specifically talking about providing a mechanism to shorten the critical path and reduce the time and effort a user spends performing a task in the system. So the idea is nothing spectacular, but it happens to be something that just doesn’t exist in the system. It’s something I would like for PerfEng to model and prototype so that it could conceivably be brought into the product in a future release.

What’s My Big Idea

Over the years, the thing I’ve learned about Blackboard more than anything is that while the system is intended to improve the learning and livelihood of students, it’s really a system designed from the perspective of teachers and instructional designers. What we seem to do really well is provide a solid canvas for structuring and organizing content. In fact, there are several fast paths for authoring or manipulating content, such as:

  • Edit Mode
  • Context Menus for Constructing Content and Other Artifacts

Where I see an opportunity is to provide fast path capabilities for teachers to assess content interactions, content contributions, and even user activity by students and/or class participants. Imagine I’m a teacher stepping through my course. I decide to go into one of my discussion forums and threads to monitor participation, or quite possibly reply to a thread. I’m presented with the screen below, with a list of participants in the class. Most are students, but maybe my TA or even my own posts are listed as well.

 

Now I’m given a mechanism to perform other operations associated with the user, because I have a drop-down context menu next to their name, or I’m able to roll over their name and a hovering window is presented (either is possible). The menu/window gives me fast path views directly associated with the user. For example, I roll over Amina Brook’s name and a context menu gives me the ability to do some of the following operations:

  • Drill into Amina’s Personalized View of the Grade Center
  • Send Amina a message
  • Review unread messages from Amina
  • See a 360 view of Amina’s activity
  • Look at Amina’s Course Map
  • Drill into Amina’s Discussion Posts for the Course
  • See other tools (maybe from B2s) that present data from Amina

The possibilities are endless. The idea centers around giving teachers the ability to quickly access the content that has the most relevant purpose at that moment. I may be working on Amina’s end-of-year comments (another interesting feature) to send to her or, if she’s in K-12, to her parents. I’m trying desperately to get a more holistic view of Amina’s performance and contributions to the course. In this case, I need a fast way to aggregate data, as well as drill into areas of the application to inspect participation.