
I love test design and I think it is an essential part of good testing. I enjoy analysing complex data and functions and often create graphical models or tables of the design. But when it comes to actual test cases I start to hesitate.

What is the reason for me to create test cases? Well, the answer must be that I or someone else can use them later as support for executing tests. Executing tests is often interpreted as following detailed instructions and comparing predicted results with actual results. So what is the reality of test cases? Let's hear it! In most cases I do NOT need any test cases at all in order to do some really good testing!

All of the tests I run at the moment are done without any test cases at all. I am doing a menu tour of the whole application, taking one part at a time and making notes as I go. I am using the user guide as inspiration and am also updating that document with what I find; every page has my comments, questions and bugs. I am setting up the basic functionality of the system and at the same time performing some software attacks. The most common are the input constraint attack, trying to make all error messages appear, and OOPS - making mistakes and trying to correct them. All of these techniques help me find lots of problems, some already known but forgotten. Read How to Break Software by James Whittaker, or get some information from James Bach on how to perform these kinds of general attacks.
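To give a flavour of the input constraint attack, here is a minimal sketch. It is an illustration, not the system I am testing: `validate_survey_title` is a hypothetical validator standing in for the application, and the attack values are the classic probes - boundaries, overflows, control characters, wrong types.

```python
# A minimal sketch of an input constraint attack, assuming a hypothetical
# validator standing in for the application under test.

def validate_survey_title(title):
    """Hypothetical rule: a title is 1-50 printable characters."""
    if not isinstance(title, str):
        return False
    if not 1 <= len(title) <= 50:
        return False
    return all(ch.isprintable() for ch in title)

# Classic constraint-attack values: boundaries, overflows, control characters.
attack_values = [
    "",            # empty input
    "a",           # minimum length
    "a" * 50,      # exact upper boundary
    "a" * 51,      # just past the boundary
    "a" * 100000,  # gross overflow
    "\x00\x07",    # control characters
    None,          # wrong type entirely
]

for value in attack_values:
    accepted = validate_survey_title(value)
    print(f"{repr(value)[:30]:32} -> {'accepted' if accepted else 'rejected'}")
```

Against a real application the interesting cases are the ones where the system accepts a value it should reject, or crashes instead of showing a proper error message.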

The next step for me will be to test the whole system flow. I have created a simple general process: creating a survey, answering it, analysing the data and then presenting the results. That is basically what the system does - it helps users collect data and get statistics from it. My plan is to create a number of test charters - high-level test cases with test ideas - and collect data in Excel, from where I can analyse the results.

We are at the moment evaluating an administrative system for test cases, logs and bug reports, but so far I have only been using the bug reporting part. I find most test case administration systems to be really crappy, not supporting any reasonably good way of working. The reason they are built the way they are is that that was the easiest way to build them! They are all built like a file explorer, with folders and subfolders and unnatural stepwise descriptions of test cases. Boring and inefficient.

SAY NO when you think it is a bad idea to work a certain way or with certain systems. We testers need to lead the way. Who else is going to create a better way of working for us? And you WILL meet resistance; changing hurts. But before you give up, think about whether you want to spend the rest of your career working the way you do today. The choice should not be that hard when you look at the long-term picture.

Let's get better!

I recently read a very interesting thesis written by two students at BTH in Sweden. The title is Predicting Fault Inflow in Highly Iterative Software Development Processes. They apply different predictive models to real projects and see how good each is at approximating the number of faults that appear over time. You have probably heard enthusiastic measurement people trying to convince you that they have the best answer. I won't go into details regarding the actual measurements but will go directly to the conclusions. In brief, they found that the S-curve was the worst of the models compared. Their measurements showed that more complex models did not necessarily give more accurate results, and that a simple linear model was a valid alternative.

So IF you need to measure, use a simple linear model - it will be good enough. And in my experience, bug measurements can be one factor for measuring progress, but they are seldom very exact and never give you the full picture.
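A simple linear model of this kind takes only a few lines. The sketch below uses made-up weekly cumulative fault counts (the thesis used real project data); it fits an ordinary least-squares line and extrapolates it - the kind of "good enough" model the students found competitive.

```python
# A minimal sketch: fit a simple linear model to cumulative fault counts
# and extrapolate fault inflow. The data points are invented for illustration.

weeks = [1, 2, 3, 4, 5, 6]
cumulative_faults = [12, 25, 34, 48, 61, 70]  # hypothetical counts

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(cumulative_faults) / n

# Ordinary least-squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(weeks, cumulative_faults))
         / sum((x - mean_x) ** 2 for x in weeks))
intercept = mean_y - slope * mean_x

def predict(week):
    """Predicted cumulative faults at a given week."""
    return intercept + slope * week

print(f"slope = {slope:.1f} faults/week")
print(f"forecast for week 8: {predict(8):.0f} faults")
```

If the residuals from a fit like this are small, the extra machinery of an S-curve or other complex model buys you very little.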

Remember: it does not matter how exact you are when the measurements are wrong to start with!

I have decided not to translate the older blog posts that I have written over the last three years. It would be too much work; sorry about that, non-Swedish testers. I hope to write some new interesting stuff instead. I am also hoping for some insightful comments on what I write. The comments in Swedish have been limited to something like five a year. I am not sure how to interpret that fact.

I keep on reading new books and writing the occasional book review. I try to use my new knowledge in testing and in teaching, and I will report the things I find interesting and useful.

The books I am reading at the moment are:

  • What Did You Say? - The Art of Giving Feedback, by my favourite author Jerry Weinberg. I almost feel like a collector, since I got the last six books from Dorset. He really is a fountain of wisdom.
  • Behind Closed Doors, about project management, written by Johanna Rothman and Esther Derby.
  • The PSL workbook material from the class I took last month.
  • User Stories Applied by Cohn. I like the idea of skipping use cases but am a bit disappointed in the ideas about testing that I have read so far.

Since many of my colleagues are non-Swedish speakers, I have decided to write in English from now on. Last week I went to Tallinn to give a class in test design. I was invited by KnowIT, one of my business partners in Sweden as well. They arranged a class together with Webmedia, the largest IT company in Estonia. In spite of business being somewhat slow, there were nine people in the class.


All of them were young men with good experience and an eagerness to learn, so the class was really fun to teach. I think Estonia is a good choice for outsourcing: they speak good English, and Tallinn is only 45 minutes from Stockholm by plane. If you are interested in outsourcing to Estonia, contact Kaspar.Loog at knowit.ee.

Kaspar and Per took me to a nice Russian restaurant in the old town, where we had blini and vodka for starters and a stew for the main course. The stew was made in a clay pot with dough on top as a lid. It looked like a giant mushroom and was very tasty.

A colleague of mine, David Barnholdt, who is boiling with creativity, created an exercise the other day. The goal was to explain the effects of having too much detail in requirements. I follow along the same lines and think the same applies to test cases. For SOME tests there may be a reason to have a lot of detail in the test specification, but for MANY other tests detail can be counterproductive. The main reason is that lots of detail chokes the creativity of the tester and makes the testers executing the tests inefficient. Following a detailed script will often result in worse performance than having a more open-ended description. This is exactly what James Bach, Michael Bolton and many others have tried to explain for many years.

I found David's exercise very interesting and decided to copy it. This means that we can also compare results.

I divided the class into two groups (of four people each) and told them that in this exercise they would make a drawing, following a requirement I would hand out. They would only have one minute to complete the drawing. I provided them with a number of pencils in red, green and blue and one big piece of paper each. Then I handed out one paper with the requirements to each group and started a visible timer counting down 60 seconds. (I searched for a timer on the web and showed it via the projector.)

One of the groups got the following requirement:

Draw a beautiful summer meadow with blue and red flowers in green grass, some cows and birds under a shining sun.

The other group got the following requirement:

Draw a beautiful summer meadow with

  • 10 blue flowers with 5 petals each
  • 5 blue flowers with 6 petals each
  • 13 red flowers with 6 petals each
  • 2 cows with 3 black spots
  • 1 cow with 5 black spots
  • 2 cows with 4 black spots
  • 2 birds to reside in the upper left corner
  • 3 birds in the middle
  • 1 sun to the right with 5 sun beams

Here are the results from the exercise.


So the leftmost drawing, made by the group given the detailed requirement, had a lot of details but lacked the coherence of the right drawing, made by the group given the more open-ended requirement. I feel the exact same thing happens when testers get too detailed test cases: they follow them in detail but miss the big picture. We had a discussion in class about how detailed requirements and test cases should be, and I think all participants will think twice before creating highly detailed scripts in the future. We agreed, of course, that there MAY be situations where we want a lot of detail, and that is OK as long as we realise the consequences. So thanks again, David, for creating this great exercise!

Well, it is Saturday night, so I am going to be a bit social...

I am developing a course for purchasers. The subject is ordering and receiving IT systems.

I have studied the PENG model, which deals with the economics of the investment. The business specialists sit together for three or four half-days, led by a PENG coach, and work out which effects/benefits they want and what the cost and gain will be. Both hard, direct factors and soft factors such as reduced stress are considered. An interesting reflection is that it often turns out that the 20 percent most important benefits account for 80 percent of the value. The Pareto principle again. It is worth noting that PENG prefers not to call it an IT project but a business project, in which IT often plays an important role.

The next step is effect management (effektstyrning), where we identify target groups and effects in a more detailed way. I asked a specialist in effect management about PENG and got the following answer:

PENG is a method for making an investment assessment. It gives no (or almost no) support for steering the project through analysis, implementation or maintenance. Effect management aims to _steer_ towards expected effects; PENG aims at making the investment decision. We have developed PENG so that it can be used when defining effect goals.

So this seems to be a natural next step. Only after that do we go in and write more detailed requirements. I have just bought the book User Stories Applied by Mike Cohn. User stories are used by the agile crowd to describe requirements and plan development. The book looks very promising. I have long felt that use cases do not work particularly well, since they are too detailed, contain solutions rather than requirements, and so on. User stories are used together with a conversation between customer and supplier. It is very important not to miss the communication part - otherwise you are lost! Writing detailed use cases and then hoping that the developers deliver exactly what I want... that does not work!

The nice thing is that user stories and effect maps can then be used as a basis for testing, and PENG for following up on the investment benefit. Sounds good, doesn't it?

It would be nice to get some feedback if anyone feels like contributing.