W. Edwards Deming in Tokyo (Photo credit: Wikipedia)
W. Edwards Deming was a man instrumental in helping Japan get back on her industrial feet after World War II. It’s hard to imagine the United States being able to tell someone else how to do things efficiently and without waste, but apparently we used to be in a position to do this without anyone laughing, and this guy did it.
Deming was a prodigious thinker, writer, and analyst, although once you worked through the core of what he had to say, a lot of it fell into categories like “Common Sense” and “No Seriously Why Aren’t You Guys Doing This Are You Touched In The Head Or Something.” One of these ideas was the Plan – Do – Check – Act flow, which is often shortened to PDCA because… well, actually, those are all one-syllable words, so I guess I have no idea why it gets shortened to PDCA.
Plan: What are you trying to accomplish and what do you think will get you there?
Do: Execute the Plan.
Check: Did the results of the Do correspond with the expectations in the Plan?
Act: Make adjustments if necessary (most often to the Plan).
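Just for fun, the cycle above can be sketched as code. This is a toy of my own invention, not anything Deming wrote down, and every name in it is made up:

```python
# A toy sketch of the PDCA loop -- entirely hypothetical, since Deming
# described a management cycle, not an algorithm.

def pdca(plan, do, check, act, max_cycles=10):
    """Loop Plan -> Do -> Check -> Act until Check is satisfied."""
    for _ in range(max_cycles):
        expected = plan()            # Plan: what should happen?
        actual = do()                # Do: execute
        if check(expected, actual):  # Check: did it match expectations?
            return actual
        act(expected, actual)        # Act: adjust before the next Plan
    return None

# Example: tune a dial toward a target, halving the gap each cycle.
state = {"dial": 0}
result = pdca(
    plan=lambda: 100,                 # the target reading
    do=lambda: state["dial"],         # the actual reading
    check=lambda want, got: abs(want - got) <= 1,
    act=lambda want, got: state.update(dial=state["dial"] + (want - got) // 2),
)
```

The point of the sketch is that Act feeds back into the next Plan; the loop terminates when Check is satisfied, not when Do finishes.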
As simple as the basic idea is, it’s astounding how many business processes, activities, and decisions leave out one of these components, or at least fly right through them as if it’s not important to do them carefully and well. Because I have no life to speak of, I got to thinking about the various ways this cycle occurs in software development.
PDCA at the project level
When we begin a project, we ideally have some strategic goal this project is supposed to accomplish. Maybe it’s to increase market share. Maybe it’s to cut costs by automation. Maybe it’s to give more control to our customers. This goal and this project have ideally been prioritized, and now its time has come.
In the Plan stage, we want to define what the project should look like (e.g. a feature list) and how we think we’ll get there (e.g. prioritized sequence, high level design and infrastructure, etc.). It’s important to note that we are planning at the Whole Project level. It’s just waste to start doing things like getting all the details about all the features at this point.
I’ll expand on the Do stage in a moment, but this is basically building the application and getting it all into production.
In the Check stage, we make sure our intended features are in there and working. We also want to measure how this project is doing in terms of supporting the strategic goal that brought it to us.
In the Act stage, we either move on to the next project priority, or we enhance and rework this one to get closer to our intended feature list and strategic goal.
PDCA at the feature level
Now, we’re down to cranking out the features identified at the project level.
In the Plan stage, we want to take the top priority feature (or possibly features, if you have enough developers to roll off onto a second feature) and find out what it means for this feature to be complete. In other words, the business, testers, and developers work together to flesh out the Acceptance Criteria in an attempt to answer the question: How will we know when we’ve got this feature ready to deliver? This is when we pin down business rules, requirements, and so on. It’s also a good time to create any design or documentation artifacts that will actually help you deliver.
In the Do stage, we write automated tests around our acceptance criteria (I guess you could argue this goes in “Plan”; I won’t fight about it), then produce the stuff that will make those tests pass. Web pages, controllers, databases. We create according to the Plan, no more and no less.
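To make the acceptance-test idea concrete, here’s a sketch. The criterion, the `OrderService` class, and the numbers are all hypothetical stand-ins for whatever your real system and test framework look like:

```python
# Hypothetical acceptance criterion: "An order over $100 gets a 10%
# discount." OrderService is a made-up stand-in for the real system;
# in practice these checks would live in your acceptance-test framework.

class OrderService:
    def total(self, amounts):
        subtotal = sum(amounts)
        if subtotal > 100:
            return round(subtotal * 0.9, 2)  # apply the 10% discount
        return subtotal

def test_orders_over_100_get_ten_percent_off():
    assert OrderService().total([60, 60]) == 108  # $120 less 10%

def test_orders_at_or_under_100_pay_full_price():
    assert OrderService().total([40, 60]) == 100

test_orders_over_100_get_ten_percent_off()
test_orders_at_or_under_100_pay_full_price()
```

Notice the tests read like the acceptance criterion itself. When the business, testers, and developers can all look at a test name and nod, the Plan and the Check are speaking the same language.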
In the Check stage, we’re going to see if our feature makes our acceptance tests pass. This is also when we want QA and the user to give it the once-over, although hopefully they’ve been checking out our progress before now.
In the Act stage, we either make adjustments to accommodate the feedback we got from Check, or we get this feature out the door and start on the next priority.
NOTE: Feedback should be ubiquitous and, as such, it’s hard to find a nice, neat place to put it in diagrams like these. It’s great to have user and QA feedback as you go, for instance. It’s good to have a peer confirm your direction before you get halfway into something that won’t work. It’s good to have regular meetings where the team reflects on their own performance and things to improve. Those things don’t really fit in at a specific project, feature, or unit level, but they should definitely be liberally sprinkled throughout.
PDCA at the code unit level
When we code a feature, we have to write units of code to deliver that feature. Controller methods. Domain object properties. Service APIs.
At the unit level, the Plan stage involves making sure that we’re writing code for the specific purpose of satisfying an acceptance test. If this isn’t the case, then we screwed up. Either we didn’t capture a necessary requirement, or we’re writing something nobody asked for. We also want to write the automated unit test that our code will support. This helps us both design the code and write less of it.
The Do is basic – write the code that makes your unit tests pass.
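At this scale, Plan-then-Do is just test-first development. A made-up example, where the “unit” is a hypothetical slug-builder for URLs:

```python
import re

# Plan: the unit test comes first and states what the code must do.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  PDCA  in  practice ") == "pdca-in-practice"

# Do: write just enough code to make the test pass -- no more, no less.
def slugify(title):
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # Check: the unit test passes, so this unit is done
```

The Plan here is the test; the Do is the four lines that satisfy it. Anything the test doesn’t demand, we don’t write.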
The Check is mostly confirming that the unit tests pass. It’s also a good idea to have some level of peer feedback, whether you do this by way of an official review or whether it just happens naturally from pairing and swarming. Having someone look at your code after every single unit might be annoying, but you want some mechanism to get feedback.
The Act: based on whether or not your unit tests pass (and the feedback you get from other developers), you may need to rework your code. If the tests are passing and your colleagues agree your code isn’t draining their will to live, then check it in. I recommend checking in at the “my unit of code is making its test(s) pass” level, which probably means several times a day. Then, start your next unit of code, which also should be supporting an acceptance criterion.
I haven’t really fleshed all this out or even thought critically about it, but there are some interesting ramifications of looking at things this way. For example, some people feel automated tests are waste, but when we look at them through Deming’s eyes, we see that they’re actually a mechanism for reducing waste (which in this case means code that does not work or that adds no value to the customer). We find also that testing is done early and often, which addresses what is often pointed out as the highest risk in a waterfall approach. We also see that, in order to limit waste, planning needs to be level appropriate. A comprehensive UML diagram is completely inappropriate before you start work on a feature, for example.
More on this as it develops.