
User story driven performance testing

Quite often, performance testing is only done after the system has been developed and is in its acceptance phase. This can lead to unpleasant surprises: you want feedback on performance issues as early as possible.

We can tackle part of this problem by doing performance testing every iteration and by making it part of the daily or continuous build. To make the results meaningful, the test environment should resemble the production environment as closely as possible. I know this is very hard to realize, especially on big projects at big companies. But even without a representative environment, running performance tests every iteration gives you insight into performance trends during development. If a piece of functionality keeps getting slower, that can indicate a performance problem. Having these trends lets you investigate such issues early on, while fixing them is still cheap.

JUnitPerf is a library of decorators for measuring the performance of functionality covered by existing JUnit tests. You can write a TimedTest that checks whether a JUnit test finishes within a certain time, or a LoadTest that measures elapsed time under concurrent use. Because the library is based on the Decorator pattern, you can easily combine load and timed tests into more complicated ones.
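To make the decorator idea concrete, here is a self-contained sketch of how a timed test and a load test compose. The class names (`Check`, `TimedCheck`, `LoadCheck`) are my own illustrative stand-ins, not JUnitPerf's API; JUnitPerf itself wraps `junit.framework.Test` instances in the same way, roughly `new TimedTest(new LoadTest(test, 10), 3000)`.

```java
// Sketch of JUnitPerf's decorator idea: a timed decorator and a load decorator
// wrapping any piece of test logic. Names here are illustrative, not JUnitPerf's API.
public class Main {

    /** Stand-in for a JUnit test: any runnable piece of test logic. */
    interface Check {
        void run() throws Exception;
    }

    /** Decorator that fails if the wrapped check exceeds a time limit. */
    static class TimedCheck implements Check {
        private final Check inner;
        private final long maxMillis;

        TimedCheck(Check inner, long maxMillis) {
            this.inner = inner;
            this.maxMillis = maxMillis;
        }

        @Override
        public void run() throws Exception {
            long start = System.nanoTime();
            inner.run();
            long elapsed = (System.nanoTime() - start) / 1_000_000;
            if (elapsed > maxMillis) {
                throw new AssertionError("took " + elapsed + " ms, limit was " + maxMillis + " ms");
            }
        }
    }

    /** Decorator that runs the wrapped check concurrently, once per simulated user. */
    static class LoadCheck implements Check {
        private final Check inner;
        private final int users;

        LoadCheck(Check inner, int users) {
            this.inner = inner;
            this.users = users;
        }

        @Override
        public void run() throws Exception {
            Thread[] threads = new Thread[users];
            for (int i = 0; i < users; i++) {
                threads[i] = new Thread(() -> {
                    try {
                        inner.run();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join(); // wait for all simulated users to finish
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Check scenario = () -> Thread.sleep(50); // stand-in for real functionality
        // Decorators compose: 10 concurrent users must all finish within 3 seconds.
        new TimedCheck(new LoadCheck(scenario, 10), 3000).run();
        System.out.println("passed");
    }
}
```

Because both decorators implement the same interface as the thing they wrap, any nesting order works: you can time a load test, load-test a timed test, or stack several limits.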

Using JUnitPerf can give you trends based on your existing JUnit tests. But a JUnit test is not ideally suited to performance testing: a JUnit test fixture can contain a variety of tests, including expected failures. What would really be interesting is measuring the performance of functionality that is valuable to a system user. You would like to use your user story scenarios not only for acceptance testing of functionality but also for performance measurement.

The idea is to use a BDD framework like easyb to write your acceptance test scenarios for an iteration, as I explained in Easy Requirements by Example. The scenarios are complete pieces of user-valued functionality, which makes them perfect candidates for performance testing.

It would be great if you could write something like this in easyb to measure the performance of individual steps:

performancetest {
  given "scenario X", {
    x = new Scenario("X.groovy")
  }
  when "load is generated by 10 persons", {
    x.load 10
  }
  then "measure individual step performance", {
    x.measureSteps
    x.start
  }
}

or the following for a timed test under a certain load:

performancetest {
  given "scenario X", {
    x = new Scenario("X.groovy")
  }
  when "load is generated by 10 persons", {
    x.load 10
  }
  then "response time should be less than 3 seconds", {
    x.start
    x.responseTime.lessThan 3
  }
}

or just for a timed test:

performancetest {
  given "scenario X", {
    x = new Scenario("X.groovy")
  }
  when "load is generated by 1 person", {
    x.load 1
  }
  then "measure end to end performance", {
    x.measureEndToEnd
    x.start
  }
}

I am currently working on implementing this in easyb. I'll write about it in the coming weeks.

  1. April 26, 2008 at 9:06 am

     Great idea! I wrote a book about JUnit (http://www.junit-buch.de) in German, including JUnitPerf. I think it is exactly the right way to un-technify the current possibilities of configuration and specification! Keep me informed about your stats, please!

     Best

     Klaus

