Last week I talked about “Spicing up your retrospectives” at ALE2012. I gave a similar talk in June at the ACE! Conference in Krakow. After the talk in Krakow I had a chat with Bob Marshall, and he pointed out that fun isn’t the (only) answer to making retrospectives better. He also pointed me to a blog post where he wrote about these issues. The following article is mainly based on his ideas.
For me, there are two main retrospective challenges:
- Create a motivating environment and keep people engaged
- Work on the identified items
At first sight, these two challenges seem to be caused by:
- Repetition –> boring retros
- Recurring problems –> no effect
- Tasks are not visible
- Tasks are too big
But these are only the things you see on the surface. If you dig deeper, you’ll find the real root causes.
IMHO, the main root cause is a missing purpose. Any retrospective without a purpose is a complete waste of time (the same applies to any other meeting). It doesn’t make sense to change your retrospectives regularly and introduce new ideas as long as there is no purpose behind them. But how can you inject purpose into your retrospectives? The answer is: by using hypotheses. To do so, I adapted the original retrospective flow by Diana Larsen and Esther Derby in the following way:
The first two steps remain unchanged. But instead of directly generating insight, you first check the hypotheses from the last retrospective. This is really powerful, as it lets you verify whether the tasks from your last retrospective had the effect you expected (your hypothesis). In most cases you’ll find out that your hypotheses were wrong. Instead of simply checking whether you worked on all of the tasks you identified last time, you additionally check whether they were helpful and had a positive effect. If your hypotheses were wrong, this gives you the opportunity to examine why they didn’t have the expected outcome. Now you can enter the step “Generate Insight” and investigate what went wrong. This approach helps you iterate on your tasks until you are able to fulfil your hypotheses.
You might also find out that a hypothesis was complete nonsense. That’s fine too. Another change to the standard flow is the adaptation of the step “Decide What To Do”. You have to add a hypothesis to every task you identify; otherwise you won’t be able to check whether the task helped. Make sure that your hypothesis is testable, as described in the scientific method. A hypothesis that isn’t testable doesn’t make sense.
The closing step is the same as in the normal retrospective flow.
I was asked at ALE2012 to give some examples:
- Task: Collocate the team with the PO
–> Hypothesis: The response time to questions to the PO will drop.
- Task: Introduce a Definition of Ready (DoR)
–> Hypothesis: User Stories will be better prepared and we will be able to keep the time-box of the Sprint Planning.
- Task: Stand-up in front of the task board
–> Hypothesis: More focused stand-up (keep the time-box).
Keep in mind that these are only examples to give you an idea of how this could work. I know that it is difficult to measure “more focused stand-up”, but I’m sure you’ll find a way.
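To make “testable” a bit more concrete, here is a minimal sketch of how a task and its hypothesis could be captured together. All names here (`RetroTask`, the fields, the numbers) are my own invention for illustration, not part of any retrospective tool: the idea is simply that each task carries a metric, a baseline, and a predicted target, so the next retrospective can check the hypothesis against a fresh measurement.

```python
from dataclasses import dataclass

@dataclass
class RetroTask:
    """A retrospective action item paired with a testable hypothesis."""
    action: str       # what the team decided to do
    metric: str       # what is measured to test the hypothesis
    baseline: float   # value measured before the change
    target: float     # value the hypothesis predicts after the change

    def hypothesis_holds(self, measured: float) -> bool:
        """The hypothesis holds if the new measurement reached the target.

        This sketch assumes 'improvement' means the metric decreases
        (e.g. response time in hours); invert the comparison otherwise.
        """
        return measured <= self.target

# Hypothetical numbers for the first example above
task = RetroTask(
    action="Collocate the team with the PO",
    metric="avg. response time to questions to the PO (hours)",
    baseline=24.0,
    target=4.0,
)

# At the next retrospective, measure again and check the hypothesis
print(task.hypothesis_holds(measured=2.5))   # True: hypothesis confirmed
print(task.hypothesis_holds(measured=30.0))  # False: generate insight instead
```

The point is not the code itself but the discipline it encodes: writing down a number you expect to change forces the hypothesis to be falsifiable, which is exactly what the “check hypotheses” step needs.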
In the next blog post I’ll write about some ideas on how to shape such retrospectives by using metaphors.
Last but not least: Here are the slides of my talk at ALE2012: