Mar 08, 2012
 

Not too long ago, I had a Twitter discussion with Dru Sellers in response to an article he posted. It was really the sort of discussion we should have had properly over scotch, but we did the best we could with 140 character constraints.

The article he posted advocated teaching someone to program with a procedural language and ignoring the object oriented stuff, on the grounds that object orientation adds an extra layer of difficulty and produces programmers who over-architect even small problems, which is essentially the view Dru advocated in our discussion. I took the opposing view that this approach produces people who have difficulty solving problems in an object oriented manner, and that it's the late introduction of those concepts that makes them so difficult for people.

I want to say at the outset that I'm totally right. Just because Dru brought up some incredibly good points that made me critique my own position doesn't mean we should go crazy here. In the workplace, it's pretty rare to find programmers who are solid (heh) on Object Oriented Programming, and I do believe this is a result of their education. I also believe they find the paradigm difficult because they were taught to solve programming problems in one paradigm and then had a very different one dumped on them. I would say that most people learn to program in a procedural (or crypto-procedural) manner and still basically code that way even if the language is object oriented.

However, Dru brought up some very good points, and as someone who still considers himself a teacher, I really wanted to weigh them against my own thoughts and see where I might need to change to be more effective. Here are some things that came out of the wash.

The Best Teaching Method is Connected to Why You’re Teaching Something

In our discussion, Dru brought up that procedural programming allowed him to teach a five-year-old to program. He's right. If I were teaching a small child to program and had no cool graphical way to introduce OO concepts, I'd probably go procedural, too.

This is when I realized that we were approaching the question with different goals in mind. Dru was thinking about programming for the masses, and I was thinking about teaching developers. My own bias toward introducing OO concepts comes from the fact that most developers need that paradigm in the workforce. But when I was a kid, I learned programming like this:

10 PRINT "PHIL IS AWESOME! "

20 GOTO 10

It occurred to me that I would actually teach programming paradigms very differently to a child being introduced to the wonders of commanding a computer than to a college student aiming to become a professional software developer. Their needs are different and the end goals are different, so maybe the teaching methodology should be different as well. Maybe there is no one Best Way to Teach X, or at least, you can't meaningfully address that question without first deciding who your audience is and what the end goal of the teaching is.

For those of you who are currently saying, "Well, duh. You teach children differently than adults," I'm not talking about the level of complexity or what have you – I'm talking specifically about procedural versus object oriented programming.

Programming is About Problem Solving

It occurred to me that Dru and I actually agreed way more than we differed, and one of the important areas was that object oriented paradigms are not the ideal solution to every coding problem. Some problems, for instance, are best addressed functionally.

So, once again, in terms of preparing a professional for the field, one of the things I want to make sure I impart is the ability to select the right paradigm for the problem. I do have to pick one to start by nature of the case, and I still believe starting with object oriented principles is the best pick for someone who wants to become a professional developer, but the issue isn’t really about procedural programming versus OO programming; the issue is about whether it’s enough to simply teach people how code does stuff.

The more I talked to Dru, the more I realized this was actually the issue I was concerned about: do we create professionals who know how to write code, or do we create professionals who can solve problems in the best way to meet the needs of the situation?

A carpenter could be a master at using tools and crafting things, but if she always builds two-legged tables, she isn't actually crafting the best solutions.

And Then I Was All…

The next time I teach someone how to program, I think I might start by teaching them how to write tests in English. It’ll be an experiment; I don’t know if that will actually work well or not.

The idea, though, is that they would first learn to define problems and break them down into chunks. We could then use that as a basis to start using code to solve those problems in small chunks. You don’t have to architect a huge OO solution to satisfy one test, but at the same time, you’re operating in a larger context than just a “Hello World” exercise.
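To make the experiment concrete, here is a rough sketch of how a first session might go, with the tests stated in English before any code exists. The tip-calculator scenario and every name in it are invented for illustration, and I'm assuming C# with NUnit purely as an example environment.

The English tests come first:

"When the bill is $50 and the tip rate is 20%, the tip should be $10."
"When the bill is $0, the tip should be $0."

Then we translate the first one into a real test and write just enough code to satisfy it:

using NUnit.Framework;

public class TipCalculator
{
    // Just enough code to satisfy the first English sentence;
    // no grand object oriented architecture required yet.
    public decimal TipFor(decimal bill, decimal rate)
    {
        return bill * rate;
    }
}

[TestFixture]
public class TipCalculatorTests
{
    [Test]
    public void A_fifty_dollar_bill_at_twenty_percent_tips_ten_dollars()
    {
        Assert.AreEqual(10m, new TipCalculator().TipFor(50m, 0.20m));
    }
}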

A lot of my colleagues are also teachers and mentors, and I’d really like to hear what your experiences have been.

Dec 06, 2011
 

Regardless of how agile your team is, you have to have some strategy for dealing with defects.

On one extreme is the Defect Bureaucracy. This typically involves filling out Tester Discovered Defect Form 22A, which includes various pieces of information such as the events that created the defect, where in the application the user was, steps to recreate the defect, error messages, and so on. This then goes into some kind of defect logging and tracking system where a developer can have the defect assigned to them, report on the defect's status, and perhaps even tie it to a code change set. The main thing I like about this system is that it typically forces people to be pretty thorough in their bug reporting, so you don't get emails like, "The reports page is broken."

On the other extreme is the Just Fix It system, whereby someone says to the team, “This thing is broken,” and someone just fixes it. Chris McMahon provides a nice analysis of how Just Fix It can work with teams of varying sizes and circumstances in this blog post.

Predictably, I tend to lean more to the Just Fix It way of dealing with defects. Oftentimes, the amount of effort spent logging, tracking, and reporting on defects outweighs the amount of time it took to implement a fix.

On the other hand, one of the issues of the Just Fix It way of doing things is that it presupposes that fixing that defect is the highest priority thing you could be doing. In other words, if a bug is discovered, a developer is going to stop whatever they’re working on and fix the bug.

For small bug fixes that take just a few minutes, this kind of ad hoc prioritizing is probably no big deal. Likewise, high-impact, show-stopping defects have a priority that’s relatively obvious to everyone.

However, in my experience, most defects fall somewhere in between; they’re not trivial to fix, but they aren’t huge deals, either. Heck, some defects aren’t even clearly defects. Every developer has probably gotten “defects” like “Alert color is too bright” or “Dropdown is confusing.”

I guess what I'm saying is that I'm not sure there's a one-size-fits-all approach to handling defects, especially since defects come in all shapes and sizes. Instituting a heavyweight process for dealing with all defects seems silly when you just need to correct a misspelled word. Running all defects through a Just Fix It mentality might bite you when you're spending five hours getting a heading to align flush with a paragraph instead of finishing your Search feature.

For a mature developer team, I think the answer is probably, “Do what seems right.”  If something takes me ten minutes to fix, I’ll just fix it. If something will take half the day or longer, then I’ll put it on a card and let the business prioritize it along with the other features I have to work on. If something seems to fall in between, I’ll just use my best judgment.

Any way you slice it, defects are a challenge to fit smoothly into a workflow. They tend to arise at unpredictable times with unpredictable levels of severity. Just make sure your process is flexible enough to gracefully handle defects of all sizes.

Dec 01, 2011
 

I read an article a while back (the title eludes me, buried as it is in a half-week tryptophan haze) that extolled the need for developers to anticipate as many of the problems that might arise in their code as they can and to code for those possibilities. A big part of this is coding defensively. In other words, you write your method as if you have no control over the inputs and Molech knows what might be coming in.

I really, really disagree with this.

I want to say at the outset that I’m not talking about coding defensively against things you know are going to be issues. If your database allows for nulls in a field, then the code that works with that field needs to be prepared to handle a null value. If someone has the ability to enter zero and that would cause a divide by zero error, you need to handle it. So, if you know that the normal operations of your system can cause exceptions, then by all means, handle those exceptions gracefully.
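For contrast, here is a small sketch of the kind of defensive code I do endorse, where every guard corresponds to a condition the system can actually produce. The class and its names are hypothetical, and I'm assuming C# just for illustration.

public class InvoiceMath
{
    // The database column allows nulls, so a null here is a real,
    // known possibility and gets handled explicitly.
    public decimal AmountOrZero(decimal? amountFromDb)
    {
        return amountFromDb ?? 0m;
    }

    // Users can legitimately enter zero items, so guard the division
    // rather than letting a DivideByZeroException escape.
    public decimal AveragePerItem(decimal total, int itemCount)
    {
        if (itemCount == 0)
        {
            return 0m;
        }

        return total / itemCount;
    }
}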

However, I do contest the idea that developers should speculate on everything they can think of that could conceivably go wrong and code for it.

First, you do have control over the input of the majority of your methods, because that input comes from other methods that you or another developer on your team wrote, and ideally, you've already taken care of bad input at the point where it entered the system.

“A-HA!” says the fool. “What if another developer on my team is a bad coder and, as a result, his method allows bad input? I need to code my methods to protect against this.”

I cannot fathom the hell it must be to work on a team like this, where you have an inherent mistrust of your co-workers and the code that they write. Even so, if this is genuinely the case, then test coverage, communication, and coding to interfaces will go a long, long way in resolving this issue.

Second, our fears are almost always worse than the reality. People can imagine all kinds of things that could go wrong and have very distorted views of the likelihood of those things happening.

Are you afraid of being bitten by a venomous spider? Do you know how many people have died of spider venom in the last five years? Are you afraid of driving? Do you know how many people have died in auto accidents in the last five years?

Spending time/money to write a ton of defensive code around extremely unlikely or flat out impossible scenarios is just not the wisest expenditure of those resources. I’m a huge believer in writing the code that you need to achieve your functionality, then stopping. If a bug you did not foresee comes up, then you write a test around that bug, fix it, and you’ll never see that bug again.
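Here is a sketch of that "test around the bug" step, with everything invented for illustration (C# and NUnit assumed). Say users with no middle name exposed a formatting bug nobody anticipated: the bug gets a test, the test drives the fix, and the build guards it from then on.

using NUnit.Framework;

public static class NameFormatter
{
    public static string FullName(string first, string middle, string last)
    {
        // The fix: a missing middle name is now a known, handled case.
        return string.IsNullOrEmpty(middle)
            ? first + " " + last
            : first + " " + middle + " " + last;
    }
}

[TestFixture]
public class NameFormatterRegressionTests
{
    [Test]
    public void Missing_middle_name_does_not_produce_double_spaces()
    {
        // This test pins the bug in place; if it ever regresses, the build tells us.
        Assert.AreEqual("Ada Lovelace", NameFormatter.FullName("Ada", null, "Lovelace"));
    }
}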

Granted, we’ve all had the experience where we thought something was “never” going to happen and then, sure enough, it did. However, you’re at least dealing with a known issue at that point and not a hypothetical one.

In a previous life, I was at a Siemens demo of some fire suppression equipment, and the sales rep said that, for the garage modules, cigarette smoke wouldn't set them off. Some wag piped up, "Well, someone could try to start a fire using a bunch of cigarettes, then," and sat back with a smug grin on his face.

That man is not a safety visionary who will save your company money and protect your assets. That man is an idiot.

Yes, some cigarette-laden arsonist could walk into your stone parking garage with crates full of cigarettes and bring the place down before someone noticed him. That could happen. Now. How much do you want to spend ensuring that scenario can’t happen?

Coding is the same way. Time is money. How much do you want to spend recovering gracefully from a null value when there is no foreseeable way a null value could get into your method? How much do you want to spend writing an alternative code branch in your method in case your database or mainframe goes down? How much do you want to spend to protect against a divide by zero error when you can’t see how a zero will ever come up in the calculation?

Personally, I prefer lazy, optimistic coding to proactive, pessimistic coding. Write your tests first. Make your code depend on interfaces instead of concrete implementations. Strongly type as much stuff as possible. Now you’ve protected yourself from a huge majority of common issues by doing things you should have been doing to begin with.
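Here is a minimal sketch of what that looks like in practice: a test written first, a dependency expressed as an interface, and strong typing doing the protective work. All the names are invented, and I'm assuming C# with NUnit purely as an example.

using NUnit.Framework;

public interface IDiscountPolicy
{
    decimal DiscountFor(decimal orderTotal);
}

public class TenPercentOverOneHundred : IDiscountPolicy
{
    public decimal DiscountFor(decimal orderTotal)
    {
        // No speculative null checks or fallback branches: the decimal
        // parameter can't be null, and the callers are our own tested code.
        return orderTotal > 100m ? orderTotal * 0.10m : 0m;
    }
}

[TestFixture]
public class When_calculating_a_discount
{
    [Test]
    public void Then_orders_over_one_hundred_get_ten_percent_off()
    {
        IDiscountPolicy policy = new TenPercentOverOneHundred();

        Assert.AreEqual(20m, policy.DiscountFor(200m));
    }
}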

Nov 22, 2011
 

I really like this article from James Grenning that talks about the time and effort saved in debugging when you write your unit tests first. In it, he demonstrates that what we commonly think of as "debugging" is a fairly significant chunk of effort attached to each piece of code we write, and he compares the cost of absorbing that effort as part of the development process against the cost of coding, "finishing," and then debugging as part of testing.

I totally agree with all that.

The problem is that this makes the primary value of unit testing "time saved while debugging," and, in the article, that value is generally contrasted with having no tests at all. Some of the commenters chimed in with, "What about writing tests right before you check in?" James maintained (rightly so) that writing tests first is still a savings, but we're clearly on shakier ground now. If the value of TDD is finding bugs early, is there much of a difference between writing tests first and writing them last? I would say the difference is relatively small either way.

The primary value in writing tests first is that you’re designing your code. You write the tests around what you expect the code to do in order to complete its job. This is defined, ultimately, by the functionality you’re trying to create (i.e. features).

Then, you write the code to satisfy those tests.

By doing this, you aren’t just going to catch bugs in an automated fashion, you’re:

  1. Making sure you only write the code that you need to support the requested functionality.
  2. Making sure that your code actually does what the functionality requires rather than simply working bug free.

Unit tests should be prescriptive, not descriptive. In other words, unit tests should define what your code is supposed to do, not illustrate after the fact what your code does.

In archery, you set up a target first, then you shoot at it. You define success, then try to achieve that success based on the definition. You don't shoot an arrow and then paint a target around it. In other words, you don't define success by what you did; you do the right things based on how you've defined success.
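In code terms, a prescriptive test might look something like this hedged sketch: written first, it states what the not-yet-written code must do for the feature. The shopping cart and every name in it are invented, and C# with NUnit is assumed only for illustration.

using NUnit.Framework;

[TestFixture]
public class When_adding_an_item_to_the_cart
{
    [Test]
    public void Then_the_cart_total_reflects_the_item_price()
    {
        // The Cart class doesn't exist yet when this test is written;
        // the test is the target we then shoot at.
        var cart = new Cart();
        cart.Add("Widget", 9.99m);

        Assert.AreEqual(9.99m, cart.Total);
    }
}

// Written afterward: just enough code to satisfy the definition of success.
public class Cart
{
    public decimal Total { get; private set; }

    public void Add(string name, decimal price)
    {
        Total += price;
    }
}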

If we want to talk about time savings, the sheer amount saved on the code that you don't write is usually more than enough to justify the time spent writing the tests, and that's before adding in the debugging time savings that James talks about.

If you write unit tests (or worse, generate them – MSTest, I’m looking at you) after you write your code, you are missing out on what I believe is the most significant benefit to TDD.

Nov 18, 2011
 

This post is mostly a notepad for myself for a series of posts I’d like to do over time. I want to take an application – real or hypothetical – and go through the process of building it out, starting with requirements and going through the first release, or at least the first feature.

What follows is a list of the steps I typically follow in an ideal development process:

  1. Developers meet with users/stakeholders to create a list of features, including the business reason for each requested feature. Developers size features relative to one another. Stakeholders prioritize at least the first seven to ten features.
  2. Features go up on a Kanban-style board in priority order.
  3. Devs get any necessary details about first feature from stakeholders. Document where helpful.
  4. Devs break feature into small tasks.
  5. Devs write acceptance tests in plain English that define when the feature will be done. Use the acceptance tests to ask stakeholders whether anything has been missed or misunderstood (see the sketch after this list). This can be done in conjunction with QA testers if you have them.
  6. Build UI around making first acceptance test pass. Ask stakeholder to look at UI and confirm you’re on the right track.
  7. Write unit test(s) around first bit of code to make UI “work” for the first feature. Tests should define what that layer of code needs to do to support feature.
  8. Write code to make test pass.
  9. Write unit test around layer of code to make previous layer of code work.
  10. Write code to make test pass. Repeat this process until all tests pass for your feature.
  11. Get stakeholder to review what’s been done and suggest changes if necessary.
  12. Measure time feature spends in each Kanban column. Use this data to project timeline for other features.
  13. Repeat steps from “get any necessary details” for next prioritized feature.
  14. Release/deploy as often as you possibly can without being stupid.
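To make steps 5 through 8 a little more concrete, here is a hedged sketch for a hypothetical "search products by name" feature. The acceptance test is plain English; the unit test and code come afterward. Every name is invented, and C# with NUnit is assumed purely for illustration.

Step 5, in plain English, shared with stakeholders and QA:

"Given products named Widget, Gadget, and Gizmo, when a user searches for 'Gi', then only Gizmo appears in the results."

Steps 7 and 8, the first unit test and the minimum code to make it pass:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Step 7: this test defines what the first layer of code must do
// to make the search screen work for the feature.
[TestFixture]
public class When_searching_products_by_name
{
    [Test]
    public void Then_only_matching_products_are_returned()
    {
        var search = new ProductSearch(new[] { "Widget", "Gadget", "Gizmo" });

        Assert.AreEqual(new[] { "Gizmo" }, search.Find("Gi"));
    }
}

// Step 8: the minimum code to make that test pass.
public class ProductSearch
{
    private readonly IEnumerable<string> _names;

    public ProductSearch(IEnumerable<string> names)
    {
        _names = names;
    }

    public string[] Find(string term)
    {
        return _names.Where(n => n.StartsWith(term)).ToArray();
    }
}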

Nov 11, 2011
 

In the forest of acronyms one has to navigate in software development, one set that comes up in questions a lot is TDD and BDD, which stand for Test-Driven Development and Behavior-Driven Development, respectively.

If you've been in the game for a while, you know that there's an X Driven Development for just about every helpful concept someone has come up with and, as a result, people might be reluctant to get into a "new" XDD practice, especially since so many people struggle with TDD to begin with.  But here's the dirty little secret:

If TDD is done as intended, there is no particular difference between TDD and BDD.

The problem is that, over time, TDD has evolved into a fairly unhelpful practice. BDD is an attempt to return TDD to its roots, and those roots are writing automated tests first around expected system behavior.

Today, TDD means (for most people) writing automated tests around the code you wrote to check for bugs. Or, if you’re a little more progressive, writing automated tests around the code you plan to write to check for bugs. These test classes usually have names based on the class and methods that they test, like ProductRepositoryTests and SaveProductTest().

By contrast, BDD tests are written first around the behavior you expect without initial regard to specific classes or methods you might use to achieve that behavior. They tend to have names like, When_Saving_A_Product and Then_The_Product_Should_Be_Added_To_The_Unit_Of_Work().
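Side by side, the difference looks something like the sketch below. The stand-in types are invented and C# with NUnit is assumed; the point is the naming and the intent, not the particular classes.

using System.Collections.Generic;
using NUnit.Framework;

// Minimal stand-in types, purely for illustration.
public class Product
{
    public string Name;
    public Product(string name) { Name = name; }
}

public class FakeUnitOfWork
{
    public List<Product> Pending = new List<Product>();
}

public class ProductRepository
{
    private readonly FakeUnitOfWork _unitOfWork;
    public ProductRepository(FakeUnitOfWork unitOfWork) { _unitOfWork = unitOfWork; }
    public void Save(Product product) { _unitOfWork.Pending.Add(product); }
}

// TDD as commonly practiced: the test is named after the class and method it exercises.
[TestFixture]
public class ProductRepositoryTests
{
    [Test]
    public void SaveProductTest()
    {
        var unitOfWork = new FakeUnitOfWork();
        new ProductRepository(unitOfWork).Save(new Product("Widget"));
        Assert.AreEqual(1, unitOfWork.Pending.Count);
    }
}

// BDD style: the same check, but named after the behavior the feature requires.
[TestFixture]
public class When_Saving_A_Product
{
    [Test]
    public void Then_The_Product_Should_Be_Added_To_The_Unit_Of_Work()
    {
        var unitOfWork = new FakeUnitOfWork();
        new ProductRepository(unitOfWork).Save(new Product("Widget"));
        Assert.AreEqual(1, unitOfWork.Pending.Count);
    }
}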

The main difference between TDD and BDD these days is that TDD just checks out the code you wrote. It doesn’t make sure you wrote only the code necessary, and it doesn’t make sure the code does what the functionality requires. It just checks that the code you wrote works. It’s sort of like handing in a book report and a teacher going, “Let’s see… paragraphs… text… it’s about a book… yep, this is a book report all right.  A+.”

So, my TDD test will check that my ProductRepository code is bug-free, but it doesn’t tell me if I needed a ProductRepository or if having a ProductRepository actually accomplishes any features. I just assume that in my head, sort of like the teacher who assumes that, if all the book report ingredients are there, then it must be accurate.

BDD, on the other hand, defines what I want the system to do, then I have to figure out the minimum amount of code I need to implement that functionality. I’m testing that the system does what it’s supposed to and that I’ve got sufficient code to get that feature done, not simply that a given class or method is bug-free. I set up a test to make sure I’m adding products, not simply that some class I wrote is running bug-free.

In terms of coverage, you generally end up about the same place. If I do BDD correctly, I should have covered all my classes and methods (mostly because I shouldn’t have any classes or methods that the tests themselves didn’t drive out), but I am testing them in the context of doing a particular function – achieving a particular goal. I’m really asking, “Does my code do what it’s supposed to do?” in addition to “Does my code work?”

Nov 02, 2011
 

Dear Travis,

While you’re in Ethiopia doing noble things, I’m busy rewriting our codebase into unrecognizability. I was trying to think of a gentle way to break it to you, so I decided to do it in song. This is meant to be sung to the tune of “Africa” by Toto.

I had to stay late and debug, tonight
NHibernate mappings gave me errors I didn’t understand
I’m going to try to be polite
Some of your code no longer looks the way that you’d originally planned

I scanned through old Stack Overflows
Hoping to find some old, forgotten code that solved our session bug
Refactoring’s just how it goes
Hurry, boy, I’m going to check this in

Gonna take a lot to fix this app, I fear
It’s got about a hundred bugs or more, but you’re not here
I changed your code while you’re in Africa
Gonna take some time to change it back the way it was
Oo-ooo

Marketing is crying every night
Wondering why we haven’t launched a beta or finished testing, yet
I know that I must do what’s right
Like give our classes names that make more sense and cutting technical debt
Gutting our repositories inside
Frightened of this thing that they’d become

Gonna take a lot to fix this app, I fear
It’s got about a hundred bugs or more, but you’re not here
I changed your code while you’re in Africa
Gonna take some time to change it back the way it was
Oo-ooo

Gonna take a lot to fix this app, I fear
It’s got about a hundred bugs or more, but you’re not here
I changed your code while you’re in Africa
I changed your code while you’re in Africa (I changed your code)
I changed your code while you’re in Africa (IIIII changed your code)
I changed your code while you’re in Africa (Hey, gonna take some time)
Gonna take some time to change it back the way it was
Oo-ooo

Oct 05, 2011
 

Reading the book Responsive Web Design has got me to thinking.

I’ve been a web developer in some capacity or another since the mid 90s (oh, hey, turns out I’m old).  Over the past decade and a half, philosophy about browser experiences has evolved along with browsers, following a pattern that looks roughly like this:

Triassic: Support all browsers in existence, especially older versions.  Do whatever it takes to provide the same experience everywhere.

Jurassic: Provide the best experience in modern, standards-compliant browsers and degrade gracefully in older browsers.

Cretaceous: Screw older browsers.  Seriously, folks, upgrade.  I’m tired of making table-based layouts just for you.  If you’re still on IE5, my website is the least of your worries.

The point is, as the old-browser dominance picture shifted, so did the philosophy about how much to worry about designing for them.  We want our websites to be usable in IE5, but we aren't concerned about IE5 when we make a website.  We do not spend thousands of dollars trying to preserve a pixel-perfect analog in browsers that are significantly less standards compliant than the browsers of today.

Today, there are all kinds of ways progressive websites prepare an experience for mobile delivery – tablets and phones, primarily.  They create separate URLs for mobile versions of the site.  They use more flexible and responsive layouts.  They use media queries.  By and large, the dominant philosophy is to design the website per normal for the desktop, then do whatever special things you need to do to make it decent on a mobile device, not entirely unlike how we used to think about older browsers.

I wonder if we aren't staring down the barrel of needing to design for modern browsers first, and by that I mean mobile ones.  At what point do we treat desktop browsers like we used to treat IE5?  In other words, have we come to the point where it makes sense to design your website for mobile delivery first, then make it gracefully adjust to desktop browsers?

We may not be ready for the “screw desktops” phase of web development, but we may get to that point quicker than you’d think.

Sep 23, 2011
 

Not everyone feels this way, but when someone asks me what the first step is for a team that wants to become more agile, I start with Continuous Integration (CI).

If you don't know what that is, the short version is that a CI environment gives you an automated way to bring together the code all your developers are working on and compile it on a regular basis.  Typically, this involves pulling the latest version of the code that's been checked in to source control (OK, maybe step one is technically source control), copying it somewhere, and running a compiler on it.  The CI system should then notify you if something didn't compile.

Why do I recommend this as the first step to agility?  I’m glad I imagined you asking me that.

First, it’s a relatively soft push into a more agile shop.  The philosophical impact is very small.  It’s more of a technology change, and this is something most developers can wrap their arms around more easily than changing the way they think about code.

Second, it provides a very demonstrable benefit – you know if someone’s code broke “the build” before it goes anywhere else.  It’s like having an automated error checker looking at your codebase all the time.  Some benefits of agile or lean methodologies can seem a little abstract to developers just getting into it.  CI is something a little more tangible.

Third, it begins to move a team into a more collaborative mindset.  Even if developers are siloed (which isn't good), their code is regularly brought together and compiled, and when the build breaks, the entire team knows.  Although this may sound scary at first, it actually has the effect of team bonding.  One place I worked, we hooked up a little voice synthesis so that, whenever the build broke, an electronic British lady would announce, "So and so broke the build, and Ben is a sexy beast."  Other teams have gotten more creative.

It really does become a time of bonding and it facilitates group discussion and awareness of code.  It begins to make a team out of your… uh… team.

Finally, CI gives you a foundation for other agile practices.  Many teams, as part of their regular build, also run the tests on that code, so you're notified if a test fails.  Many teams also include some sort of deployment as part of their CI process, automatically pushing the code to a test or dev server, which gives you early awareness of the issues you'll face when deploying to production.  If nothing else, CI notifies you if your changes broke the application, which is valuable information.

If you don’t have CI going in your environment, I highly recommend looking into it.  There are several products to help.  ThoughtWorks’ Go looks promising.  CruiseControl.NET is also an old favorite.  There are others.  Heck, you can do CI with a batch file if necessary.

But do give it a go.  Once you see the options that CI opens up for you, you’ll have gotten a good start on the path to becoming more productive.

Sep 21, 2011
 

The problem isn’t their work ethic.  Your developers work hard.  They often work overtime that you may not even know about.  They kill themselves to meet their deadlines.  They aren’t playing Quake when you aren’t looking.

The problem isn't their intelligence.  Your developers are extremely smart people.  They have put themselves in an industry that is constantly changing, and the rate of that change will only increase.  They have to deal with abstractions and concepts that rival those in math, linguistic analysis, and other pattern-spotting and computational fields.

The problem isn't their industry knowledge.  Your developers are reading magazine articles, blog posts, and books, going to conferences and Code Camps, and working on side projects at home to sharpen their skills and stay on top of things.

The problem with your developers is that they know there are better ways out there to do what they do, and they'd use them if they had someone to show them how to successfully implement those practices in real projects instead of just handing them theoretical knowledge and examples.  But they don't have that person.

It will take them literally years to compensate for that – to claw their way up into a higher level of development skill and productivity – all for want of someone to show them how these fancy concepts like TDD, Interface-Based Programming, Dependency Injection, and the rest actually work together to get a real project done.

That’s their problem.