Some Ideas for Managing an Effective Agile Transition

January 14, 2008

Going into an agile transition there are a lot of variables that need to be considered in order to make the transition a successful one. Those variables include:

  1. Timing – Should the transition be made in the middle of a major project, or should it be timed to take place between projects or at the beginning of one?
  2. Transition strategy for multiple teams – There are two major strategies that I am aware of (and I’m sure that there are many more): make the transition all at once, or start with a pilot project and move gradually toward an agile organization. Each has its own costs and benefits. The culture of the organization can also play a role in making this decision (cautious vs. aggressive).
  3. How will distributed teams be brought on board? These days it is very common for companies to have either distributed or outsourced teams working around the world. An agile transition is going to impact them too. How do you manage their part of the transition to agile? Ignore them? Train them up too? And what about the cultural considerations of trying to train someone from a radically different culture in Agile techniques?
  4. How will the training be organized? Do you want to train and certify everyone before starting the first iterations of the project, or is there more value in giving them an orientation, setting them loose, and then returning to run them through the certification course after they’ve stubbed their toes a few times? Again, there are trade-offs to each approach.
  5. What sort of coaching support will you provide for the teams?
  6. What kind of additional materials can you bring to bear in support of the teams making the agile transition?
  7. How will the Agile roles (Scrum Master, Product Owner, Team) be mapped to the existing organizational structure? What will be the impact on the people who are currently in those roles?
  8. What kinds of tool support may be required to help facilitate the transition to agile? Communication support tools for distributed teams? Automation and testing support for operations and QA teams?
  9. What are the drivers for the change to Agile? What is motivating this move? A desire for more rapid releases? Perhaps reduce costs? Improved quality? Once you have identified the drivers, then you need to identify the metrics you are going to use to measure the success of your agile transition. Bug count? Throughput? Lead time?
  10. What additional training can be provided in support of this transition? Training in technical practices such as TDD or design patterns?
  11. Is scaling an issue here? Will the teams need to be trained in managing teams at the enterprise level (e.g., Scrum of Scrums, Meta Scrum)?
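Picking up on the metrics in item 9: once you have chosen a measure like lead time or throughput, it helps to agree on exactly how it will be computed before the transition starts. Here is a minimal sketch in Python, using invented ticket dates purely for illustration (how your team records created/completed timestamps will vary):

```python
from datetime import date

# Hypothetical ticket records: (created, completed) dates.
tickets = [
    (date(2008, 1, 2), date(2008, 1, 9)),
    (date(2008, 1, 3), date(2008, 1, 15)),
    (date(2008, 1, 7), date(2008, 1, 14)),
]

# Lead time: calendar days from request to delivery, per ticket.
lead_times = [(done - created).days for created, done in tickets]
avg_lead_time = sum(lead_times) / len(lead_times)

# Throughput: items completed per week over the observed window.
window_days = (max(d for _, d in tickets) - min(c for c, _ in tickets)).days
throughput_per_week = len(tickets) / (window_days / 7)

print(f"average lead time: {avg_lead_time:.1f} days")
print(f"throughput: {throughput_per_week:.1f} items/week")
```

The point is less the arithmetic than the agreement: if everyone measures "lead time" the same way before and after the transition, you can actually tell whether the move to agile is paying off.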

These are just a few of the questions we have been asking as I begin the transition to agile for this company. For some of these issues I have opinions and recommendations. For others, it is simply a matter of discovering what the motivations and desires of the company are.


Kaizen Hospital

January 2, 2008


A recent article in the New Yorker described an interesting study done in a hospital where they started using checklists for a lot of their activities. According to the article, the study demonstrated a dramatic drop in infections caused by routine procedures – down to almost zero.

This reminds me very much of the one-page processes used by Toyota in the Toyota Production System (TPS). Maybe we should try this out with software too. I’m not the first one to come up with that stunning revelation – a lot of folks who are adopting lean approaches are experimenting with similar ideas.

Check it out. It’s definitely worth a read.


The New Year is off to a Sputtering Start

January 1, 2008

Oh boy, the fireworks fiasco at the Space Needle last night reminded me of one of my recent projects – “the horror”. In case you were one of those folks who slept through the big non-event last night, there was a tiny glitch with the fancy computerized pyrotechnics display on our local phallic symbol, er…tower of civic pride (just what kind of building is that thing?).

Anyway, the show sort of sputtered to a start, lasted for about 30 seconds, and then as the music swelled dramatically…nothing. The fireworks stopped completely. It was hilarious to watch the confusion on the faces of the television hosts for the show. No one had any idea what the problem was and the band just played on. About a minute later the fireworks sputtered to life again – now badly out of synch with the music. They continued for another minute or so – and died again!

At this point I was rolling on the couch at home laughing out loud. At first I thought the situation was funny, but then I realized I was laughing because I was glad it wasn’t me who was responsible for putting on that fireworks display. You know, kind of like when you see someone whack their funny bone. It hurts like hell, but you laugh because you’re glad it isn’t you.

Or maybe I was just losing my grip on reality…

I can only imagine the stress and panic of the technicians as they frantically tried to understand what the problem was and then figure out what to do about it. I’ve had to do product demos in high pressure situations before. And when they go bad (and I do mean bad), I’m sure the feeling is similar.

I heard this morning that the company responsible for putting on the fireworks display had successfully done a complete dry run earlier in the day without a problem. I would have expected that much at the very least. But in a situation as mission critical as theirs, a simple dry run is not enough. They need to have some backup plans, some fail-safes. Something other than just punching the 1500 ignition buttons in a panic.

Here are a few suggestions:

  1. Create a failover system. If it becomes apparent that the data on system ‘A’ is corrupt, then switch to system ‘B’. This could be a very sophisticated technical solution, or somebody could just have a backup laptop with the same software installed and a backup copy of the data. For a $100,000 job, the purchase of a $1,000 backup machine seems well worth the expense.
  2. Create backups. OK, so the data is corrupted. You should be able to simply restore a backup and get up and running again. Where were the backups? This is just a common sense practice. Maybe they did have backups, but somehow I doubt it. Next time buy a USB drive – they’re cheap.
  3. Use a Mac instead.
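The failover idea in item 1 doesn’t have to be sophisticated. Here is a minimal sketch in Python – the cue-file contents and the checksum scheme are invented for illustration, but the principle is just "verify each copy, use the first good one":

```python
import hashlib

def load_show_data(primary: bytes, backup: bytes, expected_sha256: str) -> bytes:
    """Return the first copy of the cue data whose checksum verifies.

    'primary' and 'backup' stand in for the two machines' copies of the
    firing-sequence file; a real system would read them from disk.
    """
    for candidate in (primary, backup):
        if hashlib.sha256(candidate).hexdigest() == expected_sha256:
            return candidate
    raise RuntimeError("both copies corrupt - fall back to manual firing")

# Toy demonstration: the primary copy is corrupted, the backup is good.
good = b"cue 1: launch shells 1-50 at t=0.0s"
checksum = hashlib.sha256(good).hexdigest()
corrupt = good[:-1] + b"X"

data = load_show_data(corrupt, good, checksum)
print(data == good)  # the backup copy is used
```

The checksum is recorded at the dry run, when the show is known to work; at showtime, any copy that still matches it is safe to fire from.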

Don’t get lulled into complacency. It struck me as interesting to note that this is the same company that has been doing the same display for the last 14 years. I imagine the process has become fairly routine for them by now. If they were constantly trying to improve their process and eliminate problems, they probably wouldn’t have these sorts of problems. Instead, they were very likely just doing what worked the last time. At least that seems reasonable – I could see myself getting caught in that trap.

We do it often enough on Agile projects. Teams will get into a rhythm and just do the same thing each sprint without doing anything to really inspect and improve their process. You know what they say – if it ain’t broke, don’t fix it. Of course, if your process, whether it is pyrotechnics or software, looks the same the fifth time as it did the first time, then you aren’t really improving your process (or product) at all.

So as I watched the fireworks show trip and sputter along last night, I looked on with a sense of both humor and dread. Part of me was indulging that, “Wouldn’t it be funny if…” notion. But there was the other side of me that was thinking, “Those poor bastards…”

Happy New Year folks. I hope the rest of the year goes a little smoother…