A few months back at the offsite, I pulled Patrick and Chris aside to discuss the notion of diverging from LoadRunner. At the time I had a lot of reasons, but the main ones were tied to frustration with the LoadRunner/PC product, as well as the cost of maintaining a product that we rarely update. The more I think about it, the more I realize there are so many other aspects of LoadRunner that frustrate me.
First off, the language is C, and while C is a simple language, it’s not ideal for managing reusable libraries and utilities with the ease an object-oriented language offers. I can’t actually recall a true IDE designed for C. I use Visual SlickEdit, which is great for reading individual C files, Perl or SQL scripts, but in general SlickEdit isn’t an IDE. Second, debugging LoadRunner is a complete pain in the @$$. Granted, many of our LoadRunner debugging issues are self-inflicted by our approach to codeline management. You would think the LoadRunner community would be more interested in our approach to codeline management than in their own rickety ways. Third, the wlrun and Analysis engines are really unreliable. We have had more issues with those utilities than with anything else.
Truth is…I’m not necessarily blaming LoadRunner for everything. It’s an OK tool. It gets the job done. We have invested a lot of money in it to date. So why get rid of it?
This is where I get really lost in my thoughts. So please be patient with me in this blog. I hope to have my thoughts organized by the end.
What we do with LoadRunner and performance testing in general is different from any other organization in the world. I can say without a doubt that we have one of the most advanced testing approaches and frameworks known to the software world. You could put our framework in a room with the 10 best software companies and none of them would come close to matching our capabilities. We not only have an advanced scripting framework, customized to a tee…we also have a robust, integrated data generation framework suitable as a load testing tool in its own right. Our distribution modeling capabilities (the Servlet) set us apart from any load test or benchmark. The fact that we have reverse engineered the LoadRunner schema in order to extract and transform performance metrics is another differentiator (no one else is doing this). Then you factor in our Galileo statistical capabilities, which enable so much data analysis. I’m not even counting the Fusion framework that manages and conducts all of the work. Seriously…nobody in their right mind is doing what we are doing.
But it’s not enough
I have a lot of thoughts about what we are not doing and things we should consider doing better…as well as things that would continue to set us apart even further from the competition.
Open Up the Network…We are Way too Closed
So this isn’t really a new idea. There are a couple of ways to handle this notion of being closed. I use the term closed primarily in that we build, maintain and run the performance test automation independently. Developers can’t plug into our network. They can submit a request for a load test, but a) they can’t contribute to the code base, b) they don’t have access to run a test, and c) they don’t have the environments to run the test.
I think we need to be able to give our customers (Engineering) more independent capabilities. They should have the ability to leverage what we have already built with little or no effort. They should have the ability to contribute new pieces to our framework in an open and constructive manner. We need to be more flexible, even if Engineering isn’t asking us to be flexible. The more flexible and open we are, the more willing Engineering will be to leverage or contribute to our framework.
Too Much Junk in the Trunk
The root of all evil in our framework is our ClickPaths and Servlet tool. The ClickPaths were intended to be disposable. They are hardly disposable. In fact, they are a giant mess, with little rhyme or reason to how they are managed and maintained. I don’t even have a handle on how they are defined anymore. That’s more an artifact of my reduced day-to-day involvement.
The Servlet might be a bigger mess than the ClickPaths. It’s antiquated, unnecessarily confusing, very unreliable and a maintenance nightmare. While it gives us control over percentages, it doesn’t give us the true distribution we want. We need more control in our tests. We need greater configuration control with better checks and balances. Heck, we need this to be 100% automated.
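To make the "true distribution" idea concrete, here is a minimal sketch of weighted path selection in Java. Everything here (the `PathMixer` name, the paths, the weights) is a hypothetical illustration of the concept, not the Servlet’s actual design:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Minimal sketch of weighted ClickPath selection: each path gets a
// relative weight, and the chooser draws proportionally. A seeded RNG
// makes a test run reproducible.
public class PathMixer {
    private final Map<String, Double> weights = new LinkedHashMap<>();
    private double total = 0.0;
    private final Random rng;

    public PathMixer(long seed) { this.rng = new Random(seed); }

    public void add(String path, double weight) {
        weights.put(path, weight);
        total += weight;
    }

    // Draw one path; the probability of each is weight / total.
    public String next() {
        double r = rng.nextDouble() * total;
        double acc = 0.0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            acc += e.getValue();
            if (r < acc) return e.getKey();
        }
        throw new IllegalStateException("no paths registered");
    }

    public static void main(String[] args) {
        PathMixer mix = new PathMixer(42L);
        mix.add("login", 10);
        mix.add("viewCourse", 60);
        mix.add("submitAssignment", 30);
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (int i = 0; i < 10_000; i++)
            counts.merge(mix.next(), 1, Integer::sum);
        System.out.println(counts); // roughly a 10/60/30 split
    }
}
```

The point is only that the distribution is declared in one place, checked against a total, and reproducible from a seed; whatever replaces the Servlet would need at least that.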
We Need a True IDE…Because We Need a Code Library…Not a Script Library
We tried to make our C code as close to a library of reusable components as we could. In the end it’s a gigantic mess. We have redundant functions. We have major standards violations. We have no static analysis of our code. We don’t have any code metrics. We have no development tool set (IDE) that makes development easier to build and manage.
Our problem isn’t necessarily that we don’t have an IDE. Rather, I think our problem is that we are using a scripting language that’s very atomic in nature. I personally think our testing language should outweigh the tool decision. We should decide on the programming language, our coding standards, utility classes, debugging capabilities, extensibility, etc… One of the inputs into the programming language decision should be the IDEs available for coding. Another input should be the test engines that support the language we are most comfortable with. If there are none, that opens the door to either eliminating the language in favor of another, or considering building our own test engine.
We Need to Support Multiple Testing Types (HTTP, Browser, API, SQL, Web Services)
I’ve been pushing hard for us to be able to support multiple test types. When I say that, I mean from the same test ID. It doesn’t need to be the same test tool. For example, I think it’s important that when running an HTTP test, you have the ability to sample a full-page load via a browser. There should be hooks in our automation that present the browser transaction in the same view as our HTTP transactions, but obviously with filtering capabilities. The same could be said for running an API request, a SQL transaction or even a Web Services request.
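As a sketch of what "multiple test types under one test ID" could look like, here is a hypothetical probe abstraction in Java. The `Probe` interface, the names, and the stubbed timings are all assumptions for illustration, not anything in our framework today:

```java
import java.util.List;

// Sketch: one test ID drives several probe types, and every sample
// lands in a common shape so HTTP and browser results can sit in the
// same view and be filtered by type.
public class MultiProbe {
    interface Probe {
        String type();                 // "HTTP", "BROWSER", "SQL", ...
        long runMillis(String target); // issue one request, return elapsed ms
    }

    record Sample(String testId, String type, String target, long millis) {}

    static List<Sample> run(String testId, String target, List<Probe> probes) {
        return probes.stream()
            .map(p -> new Sample(testId, p.type(), target, p.runMillis(target)))
            .toList();
    }

    public static void main(String[] args) {
        Probe http = new Probe() {
            public String type() { return "HTTP"; }
            public long runMillis(String t) { return 42; }  // stubbed timing
        };
        Probe browser = new Probe() {
            public String type() { return "BROWSER"; }
            public long runMillis(String t) { return 850; } // stubbed full-page load
        };
        run("test-1234", "/webapps/login", List.of(http, browser))
            .forEach(System.out::println);
    }
}
```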
I’m a believer that to be able to do something like this, you need more granular control of your test process: control over both the request that performs the test and the process that extracts the results.
We Need More Conductor Control
So I’m referring to the conductor as an extension of Galileo/Fusion. I see Galileo as the input engine, Fusion as the coordinator engine/framework, and some third or fourth component as the worker bee. In our world today, you could probably say that LoadRunner is that worker bee. What I’m imagining is an abstraction of the worker bee in which many worker bees can work in parallel with granular assignments. We could have load tests over HTTP mixed with browser load tests. The test results could ultimately be presented in the Galileo test details with some differentiation. We might even create new modules…
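A minimal sketch of that worker-bee abstraction, assuming a simple conductor that fans assignments out to workers in parallel (all of the names here are hypothetical; in today’s world LoadRunner would just be one `Worker` implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: the conductor hands each worker bee a granular assignment
// and collects the results in parallel.
public class Conductor {
    interface Worker {
        String execute(String assignment) throws Exception;
    }

    static List<String> conduct(List<Worker> bees, String assignment)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(bees.size());
        try {
            List<Future<String>> futures = bees.stream()
                .map(w -> pool.submit(() -> w.execute(assignment)))
                .toList();
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) results.add(f.get());
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Worker httpBee = a -> "HTTP load vs " + a;
        Worker browserBee = a -> "browser samples vs " + a;
        System.out.println(conduct(List.of(httpBee, browserBee), "release 9.1"));
    }
}
```

The useful property is the abstraction itself: because the conductor only knows the `Worker` interface, an HTTP bee and a browser bee can run in the same test and land in the same result set.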
Build a browser plugin and server agent for HTTP
I’ve spent all of this morning getting to this last part. I’m in the camp of “building a tool” over leveraging an open source project. We built all of this customization within Galileo and Fusion to begin with, plus the Servlet, datagen, etc…
First off, I’m imagining a tool that makes the coding of a load test a lot simpler. I think it would be cool to create a plugin for Firefox (Firebug) and/or Chrome that captures HTTP requests and inputs for automation. The plugin would have the ability to search a code repository to see if any previous code existed to perform the function/request.
For example, imagine you open up your browser and log into a Blackboard release. The plugin would detect which version you are running, and hence identify the code branch you are on. It would also evaluate the URI parameters (for HTTP). If it detected that the code already existed in the library, it would present a dialog or tab informing you that the code was available and could be reused. You could even have another tab that allowed you to re-run the request (like a play tab). The whole premise of this plugin is that it would replace VuGen, but give substantially more features and control. If new code was needed, it would help organize and identify new packages, classes and methods. Then it would allow for the creation of variables and declarations for the purpose of parameterization.
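A rough sketch of that reuse check, assuming a simple index keyed by branch and normalized URI. The index layout, the branch strings, and the method names are all made up for illustration:

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the plugin's reuse check: a captured (branch, URI) pair is
// looked up in an index of existing library code. On a hit the plugin
// offers the existing method; on a miss it would scaffold new code.
public class ReuseCheck {
    // branch -> (normalized URI -> library method that already covers it)
    private final Map<String, Map<String, String>> index;

    ReuseCheck(Map<String, Map<String, String>> index) { this.index = index; }

    static String normalize(String uri) {
        int q = uri.indexOf('?');
        return q >= 0 ? uri.substring(0, q) : uri; // drop query parameters
    }

    Optional<String> existingCodeFor(String branch, String uri) {
        return Optional.ofNullable(index.getOrDefault(branch, Map.of())
                                        .get(normalize(uri)));
    }

    public static void main(String[] args) {
        ReuseCheck repo = new ReuseCheck(Map.of(
            "9.1", Map.of("/webapps/login", "auth.Login.doLogin")));
        // Hit: reuse the existing method, parameters and all.
        System.out.println(repo.existingCodeFor("9.1", "/webapps/login?user=x"));
        // → Optional[auth.Login.doLogin]
        // Miss: scaffold a new package/class/method.
        System.out.println(repo.existingCodeFor("9.1", "/webapps/gradebook"));
        // → Optional.empty
    }
}
```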
The browser plugin wouldn’t just talk to our source code repository. I would recommend that we also build an agent within the JVM that allows the plugin to instrument the execution of code. I’m still trying to figure out all of the reasons for doing this. One reason off the bat is the ability to handle code coverage and mapping. I’m also thinking we may be able to handle the server-side requests more appropriately by pairing the server-side data with the client/HTTP request. Then of course there are other benefits, such as having awareness of what code can be used for verification (at an API level or even directly in the DB). As I said…this idea is not baked. It’s just a thought…
If you build all of this to facilitate recording, then you most likely will need to build something that can act as the harness…I haven’t thought that one out yet. I’m looking for ideas.