Well, maybe it’s Jeopardy, and rather than the question…it’s the answer. I’ve been thinking about this question for quite some time now. It comes up in the context of how we should simulate/execute our EoE during a sprint. There really are three core ways we simulate: JUnitPerf, Selenium, or HTTP via LoadRunner. For quite some time I pushed JUnitPerf as the primary way to test server-side code, because JUnitPerf is one of the best testing tools for an agile development environment.
JUnitPerf doesn’t complete the story, though. For one thing, JUnitPerf does not provide browser characteristics. Second, JUnitPerf gives you only part of the picture of an API execution: it doesn’t provide the layers of statistics that are innate to our MVC stack (Struts/Servlets), so we really only see the lower-level method execution. Third, in our own laboratory we don’t have the same maturity with JUnitPerf that we do with, say, HTTP (LoadRunner) or even Selenium.
HTTP simulation isn’t always our best course of action either. For reasons similar to JUnitPerf’s, we don’t see browser characteristics, since HTTP simulation is one layer removed from the browser. Second, our agile development process doesn’t always leave us enough runway, time-wise, to test over HTTP. Third, not all scenarios run over HTTP; they could be command-line oriented or service-oriented.
There is one key point about HTTP simulation that we can’t deny: it is by far our greatest, most reliable, and most stable testing capability in our performance lab today.
Years ago a childhood friend of mine by the name of Andy Glover wrote this article about JUnitPerf. In the article he calls out that JUnitPerf is a valid testing tool, but not necessarily the most accurate one. I agree with Andy wholeheartedly. I’ve always seen JUnitPerf as an agile SPE tool used to facilitate feedback, but not as the quintessential tool for performance feedback. SPEs should leverage JUnitPerf during the development lifecycle for cases when a critical scenario cannot yet be simulated in its final shape or form. If a scenario is HTTP-driven, or even command-line or services-oriented, but is not entirely ready for showtime, JUnitPerf should be leveraged as a stopgap for giving feedback.
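To make the stopgap role concrete: JUnitPerf’s actual API wraps an existing JUnit test in a decorator (for example, new TimedTest(test, maxElapsedMillis)). The following is a dependency-free sketch of the same idea, judging a check against a wall-clock budget. The class name, the stand-in task, and the budget are all illustrative, not JUnitPerf itself:

```java
// Minimal, dependency-free sketch of the idea behind JUnitPerf's TimedTest:
// run an existing check and report whether it stayed inside a wall-clock budget.
public class TimedCheck {

    /** Returns true iff the task finishes within maxElapsedMillis. */
    public static boolean withinBudget(Runnable task, long maxElapsedMillis) {
        long start = System.nanoTime();
        task.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= maxElapsedMillis;
    }

    public static void main(String[] args) {
        // Stand-in for a server-side API call we want rough feedback on.
        Runnable apiCall = () -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        };
        System.out.println(withinBudget(apiCall, 1000) ? "PASS" : "FAIL");
    }
}
```

The point, as Andy’s quote below stresses, is that the pass/fail budget gives rough, directional feedback during a sprint, not a precise performance number.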
“While JUnitPerf is a performance testing framework, the numbers you place around it should be considered rough estimates…”
We might have become victims of our own performance goals. I’ve seen a lot of Selenium tests over the past 18 months, and in many cases I’ve been promoting heavier use of Selenium to accomplish our testing goals. But why? Selenium is not the be-all, end-all of simulation tools. It has a purpose, and we can’t lose sight of what that purpose is: Selenium drives the browser in the same manner a user would. Therefore, you use Selenium as your test driver when you are predominantly concerned with what is happening or executing inside the browser, such as rich client-side interactions.
Justifying the use of Selenium simply because the scenario is executed via the front end isn’t enough. We really have to break down the implementation from the perspective of which layer is executing code. If all that’s happening in the browser is rendering, that doesn’t necessarily warrant a Selenium script; in a case like this, the code is predominantly executing server-side, whether in the container or in the database.
What happens when the browser is responsible for handling a voluminous amount of data from a DOM perspective? For example, maybe we aren’t executing code in the browser, but we return a 4 MB data set to render. Isn’t that a problem? Absolutely, and it might require extra work to accomplish. It may make sense to simulate using two tools, with primary emphasis placed on the server-side code and secondary emphasis placed on browser rendering.
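The reason two tools make sense is that the two costs live in different layers and deserve separate clocks. Here is a rough stdlib-only sketch (not Selenium or LoadRunner; the class, row counts, and XML shape are all illustrative) that times producing a large payload separately from building a DOM out of it:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class PayloadVsDomSketch {

    /** Builds a large XML payload of repeated <row/> elements (the "server side" cost). */
    public static String buildPayload(int rows) {
        StringBuilder sb = new StringBuilder("<rows>");
        for (int i = 0; i < rows; i++) sb.append("<row id=\"").append(i).append("\"/>");
        return sb.append("</rows>").toString();
    }

    /** Parses the payload into a DOM and returns the row count (the "browser side" cost). */
    public static int parseToDom(String xml) {
        try {
            DocumentBuilder b = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            return b.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
                    .getElementsByTagName("row").getLength();
        } catch (Exception e) {
            return -1; // parse failure
        }
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        String payload = buildPayload(50_000);   // produce the data set
        long t1 = System.nanoTime();
        int rows = parseToDom(payload);          // build the DOM from it
        long t2 = System.nanoTime();
        System.out.printf("produce=%dms parse=%dms rows=%d%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, rows);
    }
}
```

In practice the first clock belongs to the HTTP/server-side tool and the second to the browser-driving tool; conflating them in one measurement hides which layer is actually slow.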
In our world, we use LoadRunner for HTTP simulation. So for the context of this section, it makes sense to use LoadRunner first when the ultimate delivery method for the scenario is over HTTP; second, when the state of the code is ready to be executed over HTTP; and third, when there are attributes of the simulation that are best tested using our HTTP testing infrastructure, such as timed tests, conditional abandonment, concurrency, workload throttling, etc.
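Of those attributes, concurrency is the easiest to illustrate without LoadRunner itself. As a stdlib-only sketch (the class name, user count, and budget are hypothetical, and the Runnable stands in for one HTTP transaction), a concurrent, budget-checked simulation boils down to fanning the same transaction out across simulated users and counting who met the budget:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrentLoadSketch {

    /** Runs the task once per simulated user, concurrently, and returns
        how many users completed within the per-user time budget. */
    public static int run(Runnable task, int users, long budgetMillis) {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicInteger withinBudget = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(users);
        for (int i = 0; i < users; i++) {
            pool.execute(() -> {
                long start = System.nanoTime();
                task.run();
                long elapsed = (System.nanoTime() - start) / 1_000_000;
                if (elapsed <= budgetMillis) withinBudget.incrementAndGet();
                done.countDown();
            });
        }
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return withinBudget.get();
    }

    public static void main(String[] args) {
        Runnable fakeRequest = () -> { /* stand-in for one HTTP transaction */ };
        System.out.println(run(fakeRequest, 10, 500) + " of 10 users met the budget");
    }
}
```

A real LoadRunner scenario layers ramp-up schedules, think time, and abandonment rules on top of this core loop, which is exactly why those attributes are best left to the HTTP testing infrastructure rather than rebuilt by hand.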
We might find that HTTP simulation ends up being the home for 99 out of 100 tests we do. That’s OK in the long run, as long as HTTP simulation can provide server-side feedback in a timely manner that doesn’t jeopardize the browser experience. The key is timely feedback on our application. As an agile SPE shop, we have to be able to provide instant feedback to our teams so that the transition, or even the next sprint, can be planned appropriately.