Jul 15, 2013
 
Picture of an AT&T TDD 2700 (Photo credit: Wikipedia)

I’ve been in some discussions sparked by the biggest brain in Kansas City regarding TDD/BDD/ATDD, specifically around the issue of how much is too much.

These discussions have come at an interesting time.  In the first place, I’ve been growing more frustrated with how brittle UI-based ATDD tests can become without some serious engineering.  In the second place, having gotten used to writing the kind of code that TDD has driven me to write, I find that I experience a law of diminishing returns on most code these days.  Do I really need to test-drive out yet another repository now that I know what a decoupled, reusable, and testable one looks like?

These thoughts came to a head when Troy shared this presentation by Ian Cooper entitled “TDD: Where Did it All Go Wrong?”

In this presentation, Ian argues that we’ve drifted away from TDD as it was originally intended.  Some of the talk is spent on the usual “don’t test modules, test behavior” correction you make whenever you deal with this issue, but some of it was thought-provoking in the following ways:

  • Unit tests are tests that can be run in isolation, not a unit of code being tested in isolation
  • Code to make a test pass can be horrific; you refactor out the duplication later
  • Internals should not be unit tested, just the APIs

I wish I were warmer to these ideas, because they would cut down on a lot of the unit tests I write, but I can’t quite give myself over to them.

Repositories notwithstanding, the “internals” are where a lot of things go wrong.  The “internals” also tend to house business logic.  The “internals” are the components most likely to be reused.  I feel like getting an API-level test to pass, then refactoring the code down to internals without any further test-driven guidance is risky.
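To make the contrast concrete, here’s a rough sketch of the approach Ian describes, with hypothetical names throughout and NUnit assumed. The one test pins down behavior through the public API (OrderService), while DiscountCalculator is an “internal” that the refactoring step extracts later and that never gets a test of its own. That extraction step is exactly the part that makes me nervous.

```csharp
using NUnit.Framework;

public class Order
{
    public decimal Subtotal { get; set; }
    public bool IsPreferredCustomer { get; set; }
}

// An "internal": extracted during refactoring and exercised only through OrderService.
internal class DiscountCalculator
{
    public decimal DiscountFor(Order order) =>
        order.IsPreferredCustomer ? order.Subtotal * 0.10m : 0m;
}

public class OrderService
{
    private readonly DiscountCalculator _discounts = new DiscountCalculator();

    public decimal PriceOrder(Order order) => order.Subtotal - _discounts.DiscountFor(order);
}

// The only unit test: it describes the behavior visible at the API,
// not the existence of DiscountCalculator.
[TestFixture]
public class When_Pricing_An_Order_For_A_Preferred_Customer
{
    [Test]
    public void Then_A_Ten_Percent_Discount_Is_Applied()
    {
        var service = new OrderService();
        var order = new Order { Subtotal = 100m, IsPreferredCustomer = true };

        Assert.AreEqual(90m, service.PriceOrder(order));
    }
}
```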

In fairness, though, I haven’t tried that approach.  Generally, I do BDD at the layer I happen to be writing.  “What does a controller need to get done to support Behavior X?”  Maybe if I went back to roots, what Ian is advocating, I would find the risk to be less than I envision.  So, next side project, I’m going to try it that way and see what kind of hot mess I end up with below the API.

Nov 05, 2012
 
EI-DHH (Boeing 737-8AS), short final to RWY 30, València-Manises (LEVC). (Photo credit: Wikipedia)

Here’s the context. Some guy with a smidge of pull in the Ruby on Rails community made a post about testing when coding. I was actually surprised to see a somewhat immature perspective on testing and what makes it valuable. His conclusions make perfect sense if the primary value of testing is bug detection and prevention, so I’m not saying he’s stupid or anything (he’s not). However, I was surprised to see that testing, for him, pretty much has bug detection and prevention as its primary value. That tends to be the ground floor understanding of unit testing that gets people in the door, but nobody stays there.

There have been a great number of responses to the article in comments, other blog posts, and so on, so there’s no real need for me to rehash what they’ve said, or what I’ve said on this issue in the past.

Instead, I thought I’d interact with the view in more of a Q&A format, using questions and objections that people have actually shared with me when discussing this issue.

Q: How do you justify the extra time writing tests takes? Don’t your clients pay you to write code, not write tests?

A: My clients do not pay me to write code. My clients pay me to deliver valuable, high-quality software, and it is the most moral thing in the world to do what I can to make sure what I deliver is what they want and working properly. Can you imagine Boeing saying something like, “We get paid to build planes, not create mathematically sound blueprints, run stress tests, and do safety inspections?” Nobody wants a plane that can’t fly or accommodate passengers or will likely kill them, and you are pretty much to blame if that’s the plane you sell them.

Tests make sure that we are delivering what the client is asking for, and that we are writing only the code that turns those requests into reality, as opposed to unnecessary layers of abstraction, features we don’t need, features that deeply misunderstand the client’s expectations, and so on. I don’t think of the time spent writing tests as taking away from coding; I think of it as all the time I’m not wasting writing code that’s useless or will need rework.

Q: Do you think you should have 100% test coverage?

A: In the sense that all the code I write is in response to a test that captures a specification, yes.

Q: So you’d write a test for the getters and setters of every property?

A: See, that’s question begging. I don’t write code then write tests to “cover” that code. I write tests, then I write the code I need to pass those tests. If I write a property with a getter and setter, it’s because I needed that property to make an existing test pass. So, no, I wouldn’t write a unit test around a getter and setter of a property, because that doesn’t make any sense. Where did that property come from to begin with? If not from a test, then why is that property there?
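To illustrate with a hypothetical example (NUnit assumed, names made up): the Total property below only exists because a behavior-level test demanded it, so a separate getter/setter test would have nothing left to say.

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public class Invoice
{
    public List<decimal> LineAmounts { get; } = new List<decimal>();

    // This property exists only because the test below needed it to pass;
    // it never gets a getter/setter test of its own.
    public decimal Total => LineAmounts.Sum();
}

[TestFixture]
public class When_Totaling_An_Invoice
{
    [Test]
    public void Then_The_Total_Is_The_Sum_Of_The_Line_Amounts()
    {
        var invoice = new Invoice();
        invoice.LineAmounts.Add(10m);
        invoice.LineAmounts.Add(15m);

        Assert.AreEqual(25m, invoice.Total);
    }
}
```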

Q: But what about writing tests around code that already exists?

A: Very different ballgame, because at that point, you’re writing tests primarily to catch and prevent defects, and then DHH’s points have a lot more value to them. But it also depends. If I’m writing tests for the purpose of rewriting that code or decoupling it or what have you, then my tests serve more of a design purpose, and we begin to shift back to writing tests around the expected behaviors first, then refactoring the code to those tests. But, yeah, if it’s just a matter of some class sitting out there and someone wants regression tests around it, then I’d probably triage my tests along lines similar to what DHH recommends. But that isn’t an ideal situation and shouldn’t be typical, either.

Q: Isn’t using Cucumber for tests the result of a drug-fueled vision of a magical land where non-programmers write your tests for you?

A: That’s what DHH said, but once again, it reflects a reasonably immature view of development and testing. I do find that, in the Ruby community, a lot of the development seems to be done by one person or a two-to-four-person team trying to build a workable product as fast as possible and get it out the door. In that context, some of the views make more sense. You’re not collaborating with lines of business, federal compliance officers, third party vendors, mainframes, volatile APIs, and all the other things that go into many line of business applications when you’re trying to build Coderizer or Snufflehumpr or whatever and get it out the door that weekend, so the considerations are somewhat different.

Also, if you have a relatively Waterfallish view of software development where the business sits in Silo A and produces an artifact that is carried over to the developers in Silo B to build, then the views make a little more sense as well. If you tried to introduce Cucumber (or Gherkin, in my case) across the process in a phased-gate approach, the only real question left is which Silo will devolve into the Lord of the Flies first.

However, when you are working on decently-sized line of business applications in a more (ugh) agile manner, you find yourself collaborating together with said business people as well as QA people. When you have a business representative, a developer, and a QA person sitting down at a table together to talk about features, requirements, and expectations, Cucumber is an amazing tool to help bridge the language gap. And if Cucumber is still too techy, you can at least come up with acceptance criteria a grammatical step up from Cucumber that can easily become Cucumber scenarios.

Sep 14, 2012
 

Everyone is going to have a slightly different answer for this, but I can tell you what I do.

When we begin work on a story, one of the first things to do is figure out the acceptance criteria. This is generally done with the business, the developers, and the testers. These criteria will drive the development team’s automated acceptance tests as well as the testers’ test scripts.

Since all of us have agreed on what this story needs to “pass,” developers are writing their automated versions of them and coding to make those tests pass. The business and testers are giving their input as we go. When all our tests pass, the story can be put through a formal QA or UAT checkpoint, but hopefully that should flow pretty quickly since we’ve been coding to the same criteria that testers and users are expecting, and we’ve gotten their feedback as we’ve developed.

To sum up: Test planning starts at the very beginning, dictates all the code you write, and puts the bow on it at the end.

Nov 22, 2011
 
A photo of the apocryphally "first" ... (Image via Wikipedia)

I really like this article from James Grenning that talks about the time and effort savings in debugging when you write your unit tests first. In it, he demonstrates that what we commonly think of as “debugging” is a fairly significant chunk of effort attached to each piece of code we write, and he compares the cost of folding that effort into the development process against the cost of coding, “finishing,” and then debugging as part of testing.

I totally agree with all that.

The problem is that this makes the primary value of unit testing “time saved while debugging” and, in the article, is generally contrasted with having no tests at all. Some of the commenters chimed in with, “What about writing tests before you check in?” James maintained (rightly so) that writing tests first is still a savings, but we’re clearly on shakier ground, now. If the value of TDD is finding bugs early, is there much of a difference between writing tests first or writing them last? I would say the difference is relatively small either way.

The primary value in writing tests first is that you’re designing your code. You write the tests around what you expect the code to do in order to complete its job. This is defined, ultimately, by the functionality you’re trying to create (i.e. features).

Then, you write the code to satisfy those tests.

By doing this, you aren’t just going to catch bugs in an automated fashion, you’re:

  1. Making sure you only write the code that you need to support the requested functionality.
  2. Making sure that your code actually does what the functionality requires rather than simply working bug free.

Unit tests should be prescriptive, not descriptive. In other words, unit tests should define what your code is supposed to do, not illustrate after the fact what your code does.

In archery, you set up a target, first, then you shoot at it. You define success, then try to implement that success based on the definition. You don’t shoot an arrow, then paint a target around it. In other words, you don’t define success by what you did, you do the right things based on how you’ve defined success.

If we want to talk about time savings, the sheer savings from the code that you don’t write is usually more than enough to justify the time spent writing the tests, and that’s before you even add in the debugging time savings that James talks about.

If you write unit tests (or worse, generate them – MSTest, I’m looking at you) after you write your code, you are missing out on what I believe is the most significant benefit to TDD.

Nov 18, 2011
 
Software development process (Image by janek030 via Flickr)

This post is mostly a notepad for myself for a series of posts I’d like to do over time. I want to take an application – real or hypothetical – and go through the process of building it out, starting with requirements and going through the first release, or at least the first feature.

What follows is a list of the steps I typically follow in an ideal development process:

  1. Developers meet with users/stakeholders to create list of features, including business reason for each requested feature. Developers relatively size features. Stakeholders prioritize at least the first seven to ten features.
  2. Features go up on a Kanban-style board in priority order.
  3. Devs get any necessary details about first feature from stakeholders. Document where helpful.
  4. Devs break feature into small tasks.
  5. Devs write acceptance tests in plain English that define when the feature will be done. Use acceptance tests to ask stakeholders if they’ve missed or misunderstood anything. Can be done in conjunction with QA testers if you have them.
  6. Build UI around making first acceptance test pass. Ask stakeholder to look at UI and confirm you’re on the right track.
  7. Write unit test(s) around the first bit of code needed to make the UI “work” for the first feature. Tests should define what that layer of code needs to do to support the feature (a rough sketch of steps 7 through 10 follows below this list).
  8. Write code to make test pass.
  9. Write unit test around layer of code to make previous layer of code work.
  10. Write code to make test pass. Repeat this process until all tests pass for your feature.
  11. Get stakeholder to review what’s been done and suggest changes if necessary.
  12. Measure time feature spends in each Kanban column. Use this data to project timeline for other features.
  13. Repeat steps from “get any necessary details” for next prioritized feature.
  14. Release/deploy as often as you possibly can without being stupid.
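Here is a rough sketch of steps 7 through 10 with hypothetical names (NUnit and Moq assumed): the controller-level test defines what that layer has to do to support the feature, and the IUserRepository interface it forces into existence becomes the subject of the next layer’s test.

```csharp
using Moq;
using NUnit.Framework;

public class User
{
    public string Email { get; set; }
}

// Driven out by the controller-level test below; its concrete implementation
// becomes the subject of the next layer's test (step 9).
public interface IUserRepository
{
    void Save(User user);
}

public class SignUpController
{
    private readonly IUserRepository _users;

    public SignUpController(IUserRepository users) => _users = users;

    public void SignUp(string email) => _users.Save(new User { Email = email });
}

[TestFixture]
public class When_A_Visitor_Signs_Up
{
    [Test]
    public void Then_A_User_Is_Saved_With_Their_Email_Address()
    {
        var users = new Mock<IUserRepository>();
        var controller = new SignUpController(users.Object);

        controller.SignUp("kathy@example.com");

        // The mocked repository stands in for the layer below, which gets
        // test-driven next.
        users.Verify(u => u.Save(It.Is<User>(x => x.Email == "kathy@example.com")), Times.Once());
    }
}
```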
Nov 11, 2011
 
Logo of FindBugs (Image via Wikipedia)

In the forest of acronyms one has to navigate in software development, one set that comes up in questions a lot is TDD and BDD, which stand for Test-Driven Development and Behavior-Driven Development, respectively.

If you’ve been in the game for a while, you know that there’s an X Driven Development for just about every helpful concept someone has come up with and, as a result, people might be reluctant to get into a “new” XDD practice, especially since so many people struggle with TDD to begin with.  But here’s the dirty, little secret:

If TDD is done as intended, there is no particular difference between TDD and BDD.

The problem is that, over time, TDD has evolved into a fairly unhelpful practice. BDD is an attempt to return TDD to its roots, and those roots are writing automated tests first around expected system behavior.

Today, TDD means (for most people) writing automated tests around the code you wrote to check for bugs. Or, if you’re a little more progressive, writing automated tests around the code you plan to write to check for bugs. These test classes usually have names based on the class and methods that they test, like ProductRepositoryTests and SaveProductTest().

By contrast, BDD tests are written first around the behavior you expect without initial regard to specific classes or methods you might use to achieve that behavior. They tend to have names like When_Saving_A_Product and Then_The_Product_Should_Be_Added_To_The_Unit_Of_Work().
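Here is a rough sketch of the two styles side by side, using those names (everything else, like the unit of work and the repository internals, is a made-up example with NUnit assumed):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class Product { public string Name { get; set; } }

public interface IUnitOfWork
{
    void RegisterNew(object entity);
}

public class InMemoryUnitOfWork : IUnitOfWork
{
    public List<object> NewEntities { get; } = new List<object>();
    public void RegisterNew(object entity) => NewEntities.Add(entity);
}

public class ProductRepository
{
    private readonly IUnitOfWork _unitOfWork;
    public ProductRepository(IUnitOfWork unitOfWork) => _unitOfWork = unitOfWork;
    public void Save(Product product) => _unitOfWork.RegisterNew(product);
}

// TDD as commonly practiced: named after the class and method,
// written to check that the code I already wrote works.
[TestFixture]
public class ProductRepositoryTests
{
    [Test]
    public void SaveProductTest()
    {
        var unitOfWork = new InMemoryUnitOfWork();
        new ProductRepository(unitOfWork).Save(new Product { Name = "Widget" });
        Assert.AreEqual(1, unitOfWork.NewEntities.Count);
    }
}

// BDD: named after the behavior the feature needs, written before any of
// the classes above existed and used to drive them out.
[TestFixture]
public class When_Saving_A_Product
{
    [Test]
    public void Then_The_Product_Should_Be_Added_To_The_Unit_Of_Work()
    {
        var unitOfWork = new InMemoryUnitOfWork();
        var product = new Product { Name = "Widget" };

        new ProductRepository(unitOfWork).Save(product);

        CollectionAssert.Contains(unitOfWork.NewEntities, product);
    }
}
```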

The main difference between TDD and BDD these days is that TDD just checks out the code you wrote. It doesn’t make sure you wrote only the code necessary, and it doesn’t make sure the code does what the functionality requires. It just checks that the code you wrote works. It’s sort of like handing in a book report and a teacher going, “Let’s see… paragraphs… text… it’s about a book… yep, this is a book report all right.  A+.”

So, my TDD test will check that my ProductRepository code is bug-free, but it doesn’t tell me if I needed a ProductRepository or if having a ProductRepository actually accomplishes any features. I just assume that in my head, sort of like the teacher who assumes that, if all the book report ingredients are there, then it must be accurate.

BDD, on the other hand, defines what I want the system to do, then I have to figure out the minimum amount of code I need to implement that functionality. I’m testing that the system does what it’s supposed to and that I’ve got sufficient code to get that feature done, not simply that a given class or method is bug-free. I set up a test to make sure I’m adding products, not simply that some class I wrote is running bug-free.

In terms of coverage, you generally end up about the same place. If I do BDD correctly, I should have covered all my classes and methods (mostly because I shouldn’t have any classes or methods that the tests themselves didn’t drive out), but I am testing them in the context of doing a particular function – achieving a particular goal. I’m really asking, “Does my code do what it’s supposed to do?” in addition to “Does my code work?”

Sep 29, 2011
 
St. Augustine writing, revising, and re-writin... (Image via Wikipedia)

When this blog was fresh and new, I wrote a post on how unit tests test your requirements (not your code) and another on how unit tests are used as a design tool.  In that second article, one of the things I pointed out was that you’re writing just enough code to satisfy your test.

I want to take a moment to reflect on the connection between these two articles: testing the requirements and only writing enough code to satisfy your tests.

If your tests define your requirements, and your code is only sufficient to pass your tests, then you have succeeded in making sure that every line of code provides business value.  You are not spending time coding things that you don’t need.  Even if you’ll need that extraneous code in the future, code it then, not now.

I’ve talked before about ReSharper’s innocuous ways of telling you that you screwed up, and one of these ways is a Find Usages run that returns no usages.  If you can do that on one of your classes or methods or properties, guess what?  You wasted your time writing that.  You don’t need it, and you never did.

“But we have this database table, so we’re going to need a domain obj….”

No, you don’t.

“But we need to get this data out of the database, so we’ll need a reposi….”

No, you don’t.

You want to know when you need a domain object?  When you need a domain object.  You want to know when you need a repository?  When you need a repository.  Sounds so simple, doesn’t it?  Yet so many very good developers merrily churn out infrastructure code and more that, ultimately, just isn’t necessary to provide business value.  It is a huge time waster, and I would offer that developers writing code they don’t need is a much larger factor in missed deadlines than user requested changes.  Don’t believe me?  Track it and tell me what you find.

TDD is not magic.  It is not a cure-all for all your coding problems.  TDDers can still write bad code, and non-TDDers can still write good code.  Somehow.  I guess.  In theory.

But when you don’t use tests to drive out the code you write, you are in grave danger of wasting a lot of time and money, albeit with the best of intentions.  Use your tests.  You will get done faster.

Sep 21, 2011
 
A mirror reflects Sarge and the Quake III logo... (Image via Wikipedia)

The problem isn’t their work ethic.  Your developers work hard.  They often work overtime that you may not even know about.  They kill themselves to meet their deadlines.  They aren’t playing Quake when you aren’t looking.

The problem isn’t their intelligence.  Your developers are extremely smart people.  They have put themselves in an industry that is constantly changing, and the rate of that change will only increase.  They have to deal with abstractions and concepts that rival math, linguistic analysis, and other varieties of pattern-spotting and computational jobs.

The problem isn’t their industry knowledge.  Your developers are reading magazine articles, blog posts, going to conferences and Code Camps, books, and working on side projects at home to sharpen their skills and stay on top of things.

The problem with your developers is that they know there are better ways out there to do what they do, and they would do them if they had someone to show them how to successfully implement those practices in real projects instead of just handing them theoretical knowledge and examples.  But they don’t have that person.

It will take them literally years to compensate for that – to claw their way up into a higher level of development skill and productivity – all for the want of someone just to show them how these fancy concepts like TDD, Interface-Based Programming, Dependency Injection, etc. actually work together to get a real project done.

That’s their problem.

Sep 01, 2011
 

The older I get, the less dogmatic I become – about anything, really.  I’ve been wrong so many times and watched my beliefs go through so many evolutions in my life that it seems silly to assume that, at any given time, I am the walking embodiment of absolute truth.  That doesn’t mean that I never think I’m right and that nobody is wrong.  In fact, this blog entry will prove this in a moment.  It does mean, however, that with few exceptions, most of my beliefs are always up for revision at any time in the light of better arguments or discoveries.

It’s because of this that I appreciate disruptive thinkers – people who challenge the established dogma.  I appreciate heretics, “liberals,” and rabble-rousers.  Even if I end up rejecting their position, they cause conversations and critical evaluations.  They cause me to really examine the foundations of my beliefs and either shore them up or get rid of them as the situation dictates.

So it was with some degree of enthusiasm that I read “Breaking Away from the Unit Test Group Think” by Cédric Beust – a man who has even written a book on Java testing.  Unfortunately, this article reflects the worst kind of disruptive thinking – the kind based on a thorough ignorance and misapprehension of what it is disrupting.

It wouldn’t be so bad, except Beust is condescending toward TDD, saying that, “It means well.  It really does,” and citing that it might be useful for junior-level developers to get them thinking about testing.  He goes on to say that, since code is often refactored a few times at first, you also have to refactor your tests, so there’s no point in having them right away.  He also points out that writing your tests afterward is just as good and, in fact, illustrates this with an example from his own interview process.  Additionally, he states that unit tests are really more of a luxury item and not nearly as important as functional/integration tests around the UI.

The problem is that all this, if I may, betrays a junior-level developer understanding of TDD. The main point of TDD is not to “make sure that the code you wrote works.” The main point of TDD is to design your code according to what your code needs to accomplish.

No process is ever perfect or done perfectly, but if you find yourself regularly changing your code and in the process causing changes to your test code, then you are Doing It Wrong, because the point of the test is to drive out the code you write, not the other way around.  You don’t shoot an arrow and paint a target around where you hit; you paint a target and try to hit it with the arrow.

By writing tests first around what the code needs to do, then writing the code to satisfy that test, you are ensuring that you are:

  • Meeting all your requirements
  • Only writing enough code to meet your requirements
  • Writing the simplest code you can to meet your requirements

The fact that you have an automated way to prove on a continuous basis that your code is working and future changes don’t break it is just icing on the cake.

Beust’s criticisms and counter-recommendations come from a serious, beginner-level misapprehension of the practice of writing tests.  It wouldn’t upset me at all if I heard it from a junior-level developer.  In fact, this misunderstanding is far and away the most common misunderstanding of unit testing and TDD in general.  But a published author on Java testing practices?  Claiming to challenge the “Unit Test Group Think?”  Come on.

Aug 17, 2011
 
User stories (Image by star5112 via Flickr)

When I was a wee lad, I went through ScrumMaster training.  Being a ScrumMaster is a lot like being a DungeonMaster, except one of those positions involves a lot of storytelling, keeping players on task, and dragon slaying, and the other position is the DungeonMaster.

Anyway, during this time, we learned to write user stories in the following fashion:

As a [role], I want [feature] so [benefit].

An example might be:

As a loan servicer, I want to see which loans are behind on their payments so I can easily know which loans are in trouble.

I want to say right out of the gates that this template for writing user stories isn’t bad at all.  It’s not that the format is bad; it’s that I think there’s a better way.

Although this may sound odd, the potential issue with the above format is that it ties features to benefits.  While that may sound like a good thing, consider that there’s no end to the benefits a person can come up with, or to the features that might support them.  You end up with a list of features that accomplish business needs, sure, but you also end up with a lot of personal preferences and “nice to haves.”

Lately, I’ve been writing my user stories in this format:

In order to [business goal], a [role] should be able to [feature].

An example of this might be:

In order to lower loan default rates, a servicer should be able to see the current status of all the loans they service.

It’s a subtle difference, I grant you, but the difference is that the feature is now tied to a business goal (which is larger than the person occupying the role) as opposed to something nice it will do for the person occupying the role.

There are a number of little benefits this gives us, because we are making sure everything on the list supports a business goal and isn’t just there because someone thought they’d need it or just wanted it.  Every feature can be directly tied back to a business goal and, consequently, business value.

It helps keep our list down to features the business actually needs.  It helps make sure developers are only working on things that add business value.  It allows people to answer the question, “Why are we doing this?” with a better answer than, “Well, that’s what Kathy asked for.”

Another benefit that’s a little more behind the scenes is it forces everyone to think critically about the feature list.  I can hand a list of these features over to a business user, and if a business goal is wrong, they can say, “Wait, that’s not why we do that.”

Understanding the real why provides a lot of value to all parties concerned.  Understanding the real why is a catalyst for both users and developers to come up with solutions that might satisfy the business goal even better than the initial idea.  Understanding the real why promotes mutual understanding and buy-in on the final product.

In the end, either template is good, but I’ve been using the second one, lately, and it’s been working out really nicely in a number of respects.

P.S. Features worded in this way become almost direct ports to SpecFlow tests, just in case it might be helpful to, I don’t know, have automated tests that make sure you fulfilled all the requirements.
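For instance, here’s a rough sketch of what that port might look like. The scenario wording, the Loan and LoanStatusList types, and the step bindings are all hypothetical (SpecFlow and NUnit assumed); the Gherkin would live in a .feature file and is shown here as comments above the C# step bindings.

```csharp
// Feature: See the current status of serviced loans
//   In order to lower loan default rates
//   As a loan servicer
//   I should be able to see the current status of all the loans I service
//
// Scenario: A past-due loan shows as delinquent
//   Given I service a loan that is 60 days past due
//   When I view my loan status list
//   Then that loan should be listed with a status of "Delinquent"

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

public class Loan
{
    public int DaysPastDue { get; set; }
    public string Status => DaysPastDue > 30 ? "Delinquent" : "Current";
}

public class LoanStatusList
{
    private readonly List<Loan> _loans = new List<Loan>();
    public void Add(Loan loan) => _loans.Add(loan);
    public IEnumerable<(Loan Loan, string Status)> View() => _loans.Select(l => (l, l.Status));
}

[Binding]
public class LoanStatusSteps
{
    private readonly LoanStatusList _list = new LoanStatusList();
    private Loan _loan;
    private List<(Loan Loan, string Status)> _viewed;

    [Given(@"I service a loan that is (\d+) days past due")]
    public void GivenIServiceALoanThatIsDaysPastDue(int days)
    {
        _loan = new Loan { DaysPastDue = days };
        _list.Add(_loan);
    }

    [When(@"I view my loan status list")]
    public void WhenIViewMyLoanStatusList() => _viewed = _list.View().ToList();

    [Then(@"that loan should be listed with a status of ""(.*)""")]
    public void ThenThatLoanShouldBeListedWithAStatusOf(string status)
    {
        Assert.IsTrue(_viewed.Any(v => v.Loan == _loan && v.Status == status));
    }
}
```

Each Given/When/Then line in the scenario maps to one binding method, so the feature list and the automated acceptance tests stay in lockstep.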