On the one hand, saying that a story is "done" too soon will leave your project with hidden debt. If you're still writing code for a story after you've said that it's done, then it's not done! And if you're still writing code in the next sprint for something that was theoretically done in the previous sprint, then a few bad things are happening:
- Your velocity for the previous sprint will look too high.
- Your velocity for the current sprint will look too low.
- You may fall out of sync with the test team and product management about what's really done and what still needs to be tested.
- You lose the benefits of time-boxed iterations.
But what about things like Bidi enablement, visual design clean-up, or extensive logging? I would argue that much of this code hygiene work can and should be done toward the end of the release. Get the new function and risky changes out there so they have time to mature. Then save a sprint or so at the end for clean-up work.
I believe it's reasonable to create a story like "Add Bidi support to the following areas: ...". It's testable, and it's new function. Plus, you get yourself into a Bidi-enablement mode, and you can make a single pass through the code making the same changes everywhere. This is good because it decreases the amount of context-switching your brain has to do.
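To make that concrete, here's a minimal sketch of the kind of single-pass change I mean, assuming a Swing UI (the class and method names are mine; `ComponentOrientation` is the real AWT API):

```java
import java.awt.ComponentOrientation;
import java.util.Locale;
import javax.swing.JFrame;

public class BidiEnabler {
    // Apply the locale's text direction (right-to-left for Arabic or
    // Hebrew) to a window and every component inside it in one pass.
    public static void applyOrientation(JFrame frame, Locale locale) {
        ComponentOrientation orientation =
                ComponentOrientation.getOrientation(locale);
        frame.applyComponentOrientation(orientation);
        frame.validate(); // re-lay out the components in the new direction
    }
}
```

Because the change is the same everywhere, a story like this is easy to estimate and easy to verify: run the UI under an RTL locale and walk through the listed areas.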
On the other hand, I believe that logging/tracing, JUnit testing, and factoring out text for translation should be done before a story is "done". Logging and tracing make it easier to debug the code from the beginning, so you don't waste time hunting for where the bugs are. JUnit testing finds bugs early, when they're quicker and cheaper to fix. And factoring out messages is easier to do while you're in the code; you'll inevitably miss some translatable text if you try to do it all later.
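For example, here's roughly what I mean by factoring out messages, plus a JUnit test that catches a missing key early. (The bundle name, key, and file layout are just illustrative.)

```java
// Messages.java -- every piece of translatable text goes through here
import java.util.ResourceBundle;

public class Messages {
    // Backed by messages.properties (and messages_fr.properties, etc.)
    // on the classpath.
    private static final ResourceBundle BUNDLE =
            ResourceBundle.getBundle("messages");

    public static String get(String key) {
        return BUNDLE.getString(key);
    }
}
```

```java
// MessagesTest.java -- a JUnit 4 test that fails fast on a missing key
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class MessagesTest {
    @Test
    public void saveErrorMessageIsDefined() {
        // getString() throws MissingResourceException for an unknown key,
        // so this fails long before the translators ever see the product.
        assertFalse(Messages.get("save.error").trim().isEmpty());
    }
}
```

Writing `Messages.get("save.error")` at the moment you'd otherwise hard-code the string is cheap; sweeping the code base for stray literals later is not.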
Performance testing is a tricky one. You can't leave it until the end, because you may find that you need significant algorithm changes to fix the performance, and then everything has to be re-tested; and if you run out of time for performance testing altogether, you could ship new function that simply takes too long to run. If a new feature warrants performance testing, I recommend doing it one sprint after the new code goes in.
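If you want something lighter-weight than a full performance test pass, a coarse budget check like this sketch can run in that following sprint. (The 2-second budget and the placeholder method are assumptions; substitute whatever your feature actually does and whatever budget your requirements call for.)

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class PerformanceBudgetTest {
    // Placeholder for the real feature under test.
    private void runNewFeature() {
        // ... call the code that went in last sprint ...
    }

    // A coarse regression check, not a real benchmark: it only flags
    // a gross slowdown, but it flags it a sprint after the code went in,
    // while there's still time to change the algorithm.
    @Test
    public void newFeatureStaysWithinBudget() {
        long start = System.nanoTime();
        runNewFeature();
        long elapsedMillis = (System.nanoTime() - start) / 1000000;
        assertTrue("feature took " + elapsedMillis + " ms",
                elapsedMillis < 2000);
    }
}
```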
In my current project, we say a story is "done done" when all of the function test scenarios have been executed and there are no open severity 1 (the function doesn't work and there's no workaround), severity 2 (the function has bugs you can work around), or must-fix (as determined by the testers) defects. It may still have severity 3 (small problem) or severity 4 (annoyance) defects. The story also has to be demoed at the end of the sprint.
I know someone who works on a project where there is a long list of criteria to be met before a story is "done done done" as they call it. In addition to code hygiene, it also has to go through system test, translation, accessibility testing, and so on. As a result, no story is marked as "done" until the product is about to go out the door. This makes their story points meaningless. It's overkill!
On the other hand, it's not appropriate to say that a story is "done" just because you've completed unit testing on it. If the testing is not completed by the last day of the sprint, the story needs to be moved to the next sprint as debt. I've seen people try to fudge their way out of this... "well, this story only has one open defect, so we should get credit for it". We need to be at peace with sprint-to-sprint debt. The good news is that the stories that are almost done should be closed quickly at the beginning of the next sprint, so your velocity for the next sprint will be higher. Over time your story point velocity will average out to an accurate number: if a 5-point story slips out of a 25-point sprint, you report 20 this sprint and 30 the next, and the average is still 25.
I'd love to hear what some other teams are using as their "done done" criteria.