You have received your marching orders from your boss to move forward with a benchmark of the next release of Blackboard. At first glance you might consider this a fairly easy task; you might even have done it before with another application, or with a previous release of Blackboard. But unless it's your day-to-day job, tackling this problem may be one of the greatest challenges you have faced.
No matter how you look at it, some money is going to be spent. You will incur costs in gaining skills, paying for consulting and tools, or simply taking time away from other priorities and projects. Time costs money, and time is what it will take to complete this task. Accomplishing a project of this kind takes more than grit and determination; it takes skills and capabilities. Do you or your team have the right stuff to get the job done?
Then, of course, your boss has expectations about how this is going to go. They want a flawless, seamless exercise; the notion of any problem, small or large, isn't part of the picture. Problems are the worst-case scenario, because in their mind a problem means the software vendor or the project's developers were wrong. It may also mean your boss put their reputation on the line to adopt the latest and greatest feature sets before the community was ready for them, or worse, before the product was ready.
It's important to understand why you are asking, or being asked, to go through a performance and scalability testing exercise. There has to be a set of transparent drivers for undertaking such an expensive project. Are the goals offensive or defensive in nature? Are we looking for greater accuracy in sizing the deployment? A testing exercise won't necessarily provide precision or accuracy about the deployment. The outcome of a testing project should be lessons learned and a plan, not a guarantee.
Personally, I use testing in my lab to tell me what I can't do, not necessarily what I can do. That doesn't mean testing can't increase my confidence in what I can do, but the results that come out of my lab certainly do not give me a guarantee I'm ready to broadcast to my fellow system administrators. Start by asking these elementary questions:
- Why are you going through this exercise?
- What do you expect to get out of it?
- Who will be working this effort?
- When will it be accomplished?
- How much will it cost?
The road ahead will be long and tiring. Putting together a benchmark takes a lot of work in preparation, execution, and analysis. The best place to start is by figuring out what you can and can't do to accomplish this project. Once you have a clear idea of what you can't do, and at least a cloudy perspective on what you can, you are ready to plug the gaps.
Beyond Project Expectations
There's a lot more to a project like this than the actual preparation, execution, and analysis work. Identifying measurable goals is a difficult and challenging exercise. Good attributes of a performance goal identify criteria around page responsiveness (performance) and workload/data conditions under exponentially increasing load (concurrency/parallelism).
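To make the idea of a measurable goal concrete, a goal like this can be expressed as machine-checkable data rather than prose. The sketch below is purely illustrative; the threshold values, load steps, and function names are assumptions, not Blackboard figures.

```python
# A hypothetical, machine-checkable performance goal: a page response-time
# ceiling evaluated at each step of an exponentially increasing load ramp.
# All names and numbers here are illustrative assumptions.

goal = {
    "max_p95_response_ms": 2000,          # 95th-percentile page response time
    "load_steps": [100, 200, 400, 800],   # concurrent users, doubling each step
}

def meets_goal(p95_by_load, goal):
    """True only if every load step stayed under the response-time ceiling."""
    return all(
        p95_by_load.get(step, float("inf")) <= goal["max_p95_response_ms"]
        for step in goal["load_steps"]
    )

# Example measured 95th-percentile response times (ms) per load step:
results = {100: 850, 200: 1100, 400: 1600, 800: 2400}
print(meets_goal(results, goal))  # False: the 800-user step exceeded 2000 ms
```

A goal written this way can be evaluated automatically after every test run, which removes any argument about whether the run "passed."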
Be prepared for more than measurable goals: you will also need to define acceptance criteria, both positive and negative. When I test, I accept a very small percentage of error in my results: less than 1% for business transactions and less than 0.01% for HTTP 400s. We never accept any HTTP 500s. Ideally I want no errors at all, but if that were the bar, I would be testing indefinitely.
You can approach this a couple of ways. You could evaluate based on business transactions: if you tested 1,000 samples of different transactions, you would not accept any test result that yielded more than 10 failures. Another approach is to evaluate HTTP 400s and 500s. As mentioned above, we never accept any HTTP 500s, since they imply something is not working correctly in the application. Say the same 1,000-transaction test produced 20,000 HTTP 200s and 30,000 HTTP 300s. If I accept HTTP 400s at a rate of 0.01% of all responses, then in this case I would not accept more than 5 HTTP 400s.
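The error-budget arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the thresholds described in the text (under 1% failed transactions, under 0.01% HTTP 400s, zero HTTP 500s); the function name and result structure are my own, not from any testing tool.

```python
# Hypothetical error-budget calculation for a test run, using the
# tolerances from the text: <1% failed business transactions,
# <0.01% HTTP 400s across all responses, and zero HTTP 500s.

def error_budget(business_transactions, http_responses,
                 txn_tolerance=0.01, http_400_tolerance=0.0001):
    """Return the maximum acceptable failure counts for a test run."""
    return {
        "max_failed_transactions": int(business_transactions * txn_tolerance),
        "max_http_400s": int(http_responses * http_400_tolerance),
        "max_http_500s": 0,  # any 500 means the application is broken
    }

# The worked example from the text: 1,000 business transactions,
# 20,000 HTTP 200s + 30,000 HTTP 300s = 50,000 total responses.
budget = error_budget(1000, 20000 + 30000)
print(budget)
# {'max_failed_transactions': 10, 'max_http_400s': 5, 'max_http_500s': 0}
```

Encoding the budget this way makes the acceptance criteria explicit up front, before anyone is tempted to rationalize a failed run after the fact.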
What to Avoid
A poor way to define performance and scalability goals is to state system and resource utilization requirements, such as "the system shall consume 80% CPU utilization" or "the JVM will reach 4GB of memory." Even worse is to use ambiguous words, such as "pages have to be fast" or "the system can't throttle." Be as specific and descriptive as possible when defining your goals.