Just make it work…
I first introduced the concept of a Fatlog a couple of months ago.
Basically, a fatlog is a Scrum backlog, or any Agile user story, that is too big and can be sliced vertically into thinner slices of the system's functionality.
I talked about the kinds of symptoms that might indicate a backlog is indeed a fatlog, but today I want to home in on a special kind of fatlog that really only exists in maintenance mode.
I like to call this kind of fatlog the “just make it work” fatlog. It is kind of hard to describe what this fatlog looks like, but perhaps easier to describe one of the ways it comes to be.
Let's say you have a major feature in your application, one you built over several backlogs, perhaps over several sprints. Some users start reporting a few bugs for that feature: “this doesn't work,” “it doesn't work when I do this,” and so on. The business doesn't really understand what the users are saying, or doesn't want to take the time to fully understand what is broken; they just know that some parts of the feature are broken. So what do they do? They create a user story that describes all of the requirements of the entire feature and make it one single backlog.
What you end up with is a large backlog that really should be 10 or more backlogs, but it gets excused as one backlog because “most of the stuff is working anyway.” The idea is not so far-fetched: if most of it is working, the team should be able to identify which things are not working according to the requirements and fix just those things, and since the whole thing needs to work anyway, they should make sure the complete feature works.
What is wrong with this?
It may not seem like a big deal: just estimate the backlog based on the parts you know are broken plus the testing for the entire feature. There are a few problems with this line of thinking.
- What if one or more of the things you thought were working are not working? What if they are horribly broken?
- How will you know how to test the entire feature without the complete, detailed requirements? If you think you're just going to pull up the tests from back when you first implemented the feature, you had better check with your business first. Chances are their idea of what a correct feature looks like has changed.
- How do you know where to start working? You will have to assess what is broken first, which means running all the regression tests to see what fails. The problem is that if your original tests could have caught these failures, you would have already known about them and fixed them the first time around. You will have to write new tests to expose the bugs you don't yet know about.
- If the business cannot tell you what is broken, how will they know when it is fixed? Your success criteria will be wide open to interpretation. That is a recipe for failure.
Have you ever played one of those games where you try to spot the differences between two almost identical pictures? In some of them the differences can be very hard to find. It is a good thing they tell you how many differences you need to spot. A “just make it work” fatlog is exactly that game, except nobody tells you how many things are wrong. You have to keep guessing and hope you got them all.
Here is the key point: user story sizing should be based not on the size of the work, but on the scope of the work. The difference may not seem like much, but it is huge.
Consider the difference between these two statements:
- Find the one needle in all of the haystacks on the entire farm.
- Find 50 needles in the one haystack by the side of the barn.
The first statement is not much work (just finding one needle), but the scope is huge. The second statement is much more work (finding 50 needles), but the scope is much smaller.
Which statement do you think is likely to have more success?
One key thing each Scrum or Agile team should ask about each maintenance story they are going to take into an iteration is: can you tell me what is broken? If the business cannot tell you what is broken, but only lists a set of requirements of how something should work, you are going to be searching for a needle in an unknown number of haystacks. Instead, the backlog should be changed to either reflect exactly what is known to be broken, or be broken up into many smaller stories with smaller scopes.