Thanks for a great 2014

December 31, 2014

“Cheers to a new year and another chance for us to get it right.” -Oprah Winfrey

This blog started back in 2007. Since then, I’ve published 258 posts, ranging from outright rants to humor to reviews and recommendations, all on the topic of Agile software development processes.

Originally the idea was to dedicate the blog exclusively to the tools used on agile projects (hence the title). Obviously I didn’t end up confining myself to just discussing tools. After all, we agilists prefer interactions over processes and tools. So I ended up writing on a variety of agile topics, most of which came to me spontaneously, the ideas arising from my work, my reading, or simply my feelings.

Looking back, some of my posts were insightful, and others were simply boring. Occasionally I have taken what felt like significant risks with the material I’ve posted by saying things that I knew would not be popular or well received. In some ways that has made writing a blog like this very liberating.

Things began slowly in 2014 with just one post in March. After 200 posts, I think it’s fair to say that I was running out of steam. It wasn’t until August, when an old colleague of mine noted that I hadn’t been writing much, that things began to change. The writing engine fired up and I began posting again.

At this point, I started a new blog, onestandardman. The idea was to focus on a long-standing fascination of mine: Self-experimentation. I wanted to run experiments serially and share the results.

So now I was writing two blogs! In fact, this just seemed to grease the writing gears for me. For 35 days, I kept up a blistering pace, posting twice a day, once to each blog. September was a ferociously productive month for me.

The interesting thing was that in the past, I had a hard time keeping my writing quality high when I wrote with such frequency. This time quality was not a problem. The material just kept on coming and all I had to do was write whatever the voices in my head were telling me. Eventually, the writing pace slowed to a more sustainable level of 2-3 times per week.

It has been a wonderful year. I’ve encountered greater challenges this year than ever before. Thanks for being there.

 


Working the Conference Ecosystem – More on the Review Process

April 18, 2011

I have submitted to some conferences for 3 years in a row without any success. It sucks. I figure I haven’t yet cracked the code for what they are looking for. That brings me to the subject of the conference review process. Here are some of the processes I’ve seen (and I’m sure there are more):

  1. The proposal “black hole” – you toss in your submission, you get only one shot, and you get no feedback or questions. Six weeks later you receive an accept or reject email.
  2. The proposal “incubation” process – usually a high feedback process with a supporting submission system that provides lots of peer review and allows for some evolution of a proposal.
  3. The “invitational” process – just invite the presenters you like. No chumps allowed.

The Black Hole

I think the first system, the “black hole”, is probably the most common. The proposals are all batched up and reviewed by a committee of some sort. As an outsider submitting to this system, you have absolutely no insight into the decision-making process. All you know is that after a set period of time you receive an accept or a reject, and that is pretty much the end of the story. Not much learning takes place on either side, because there is no feedback to learn from. The nice thing is that the anxiety is kept to a minimum. I know that sounds crazy, but compared to some other processes, the “black hole” proposal process is relatively painless. No convoluted explanations, no bogus feedback from reviewers who don’t know what they are talking about, no agonizing over a million little revisions. You are in or you are out – period, full stop. Sometimes ignorance really is bliss. There, I actually said it.

Incubation

The second system, the “incubation” proposal process, offers very high feedback. My personal experience has been that this is a bit of a mixed blessing. Proposals that are promising, but perhaps would not otherwise be considered, have a chance to be improved and matured through a rigorous feedback process. I find this possibility very exciting. I like the idea of taking a proposal, perhaps from someone relatively new, and helping them develop it into something really great. I think there is a place for a peer review system that provides new talent with guidance and helps them bring their ideas to a new audience. As potentially pompous as that may sound, I like it! And in an ideal world, this is just how it works.

However, there is a dark side to these “incubation” proposal systems: sometimes the feedback really does more harm than good. In these sorts of submission systems, I think there is a very high bar that the reviewer has to meet. Poor feedback is almost worse than no feedback at all. Oftentimes these systems allow public feedback from the general audience, and I have mixed feelings about that. While I think it’s valuable in some regards, some of my worst, most caustic, useless feedback has come from these sorts of systems – people just venting their garbage. I guess as someone proposing to a conference that uses an “incubation” system, you need to have thick skin. You can’t be too sensitive about your feedback. You need to be able to take the feedback and separate the wheat from the chaff. That’s probably true in any system, but certainly more so in one that allows unregulated public feedback.

Furthermore, as I mentioned before, there is a higher expectation for the quality of the feedback you will receive in a conference like this. As a reviewer, it’s a lot more work – a lot more dialog is necessary to help someone develop a proposal that needs significant changes before it can be approved. As a reviewer, that’s your job: to keep coming back and providing guidance and critique as the submitter makes changes. I’m always a little amazed when a submitter receives feedback and then doesn’t update their proposal. Feedback, even tough feedback, generally means that the reviewer is willing to continue the dialog. So go for it! Make the changes and then ask for more feedback! That’s what a healthy dialog looks like! Keep pushing until the reviewer gives in! After all, if you don’t respond to the feedback, your proposal is very likely dead.

Perhaps the worst case is when there is no dialog at all in the “incubation” systems. It happens. It’s the “Great proposal, but not for this stage” kind of feedback that will drive a submitter stark raving mad. This is a flaw in the reviewer, not the submitter. The reviewers need to work this stuff out and be able to give a coherent message to the submitter. Even worse, there have been times when a proposal gets just a couple of “great idea” comments and is then rejected. Again, this is a failure of the reviewers – reviewers really owe the submitters more than that.

Now I appreciate the fact that reviewers are human too. Therefore, I don’t expect miracles…often. But as in any herd, there is safety in numbers. (Did I really just call a review team a herd?) As long as you can provide multiple reviews, it is much more likely that at least one of you will come up with a cogent, intelligent set of critiques or feedback that resonates with the person who submitted the proposal. I’m not the most experienced reviewer, but I feel best when there are at least five reviews per proposal. Then I feel like a sufficient number of eyeballs have looked at the proposal and there is a reasonable chance that the “wisdom of the crowd” will kick in and enable some useful dialog.

Invitation Only

Finally, there is the invitation-only system. I really don’t have any experience with this, but I know of conferences that are run this way. On the one hand, it offers a certain degree of reliability. As a conference organizer you are interested in keeping the quality high for your attendees, and you aren’t interested in taking many risks. So you stick to those you know and their friends, and this system does seem to work. The flip side is that you aren’t necessarily going to get a lot of new voices and new ideas. Not every conference values that kind of innovation, but I suspect that conferences that do want to be on the cutting edge can’t afford to just invite those they already know. They need to take a few risks.

So Many Conferences, So Little Time…

One other thing that I try to keep in mind when submitting to a given conference is that there are a lot of conferences to choose from. Some are harder to get a submission into than others. A local open space conference is a great place to try out ideas and see if there is traction in the audience for them. The bar to entry is extremely low.

Then there are regional conferences where there is some review, but often they are quite easy to get into. The audience is still reasonably small, and there isn’t the intense competition to be a speaker. These regional conferences offer a great deal and can be a nice middle ground where you can continue to grow and nurture presentation ideas and delivery.

Finally, there are the big national and international conferences that garner a large audience and get lots of attention. There is a lot more competition to get submissions into these conferences. If you are coming up with an idea for the first time at one of these large conferences, you probably shouldn’t be too disappointed if it gets shot down for not being well developed enough. You will be competing against folks who have been developing their material at other venues and have refined things pretty well by this point.

I think anyone submitting proposals needs to keep the overall conference ecosystem in mind. A big national conference may not be the best place to float a new, untested idea for the first time. That’s not to say you can’t do it, but trying it out in a smaller venue first would be well advised.


Risks, Impediments, & Lessons

May 21, 2010

I was talking to a group the other day about impediments. We were debating the temporal relationship between risks and impediments. We arrived at the following conclusion: Risks are potential threats to our projects that lie in the future. Once they manifest themselves in the present, we call them impediments. And once impediments are resolved and we move on, they become lessons. I’m intentionally not referring to them as “lessons learned” because I’m not sure we always learn from the experience.

So risks exist in the future, impediments exist in the present, and lessons exist in the past. I like the relationships and the terminology – it helps me to understand how to deal with these creatures.
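
For teams that track these items on a board or in a tool, the lifecycle is simple enough to model directly. Here is a minimal sketch in Python; the names and the example scenario are purely my own illustration, not part of any established framework:

    from dataclasses import dataclass
    from enum import Enum


    class Stage(Enum):
        RISK = "risk"              # a potential threat, still in the future
        IMPEDIMENT = "impediment"  # the risk has manifested in the present
        LESSON = "lesson"          # resolved and behind us (learned or not)


    @dataclass
    class Concern:
        description: str
        stage: Stage = Stage.RISK

        def manifests(self) -> None:
            """A risk becomes an impediment once it shows up in the present."""
            self.stage = Stage.IMPEDIMENT

        def resolved(self) -> None:
            """A resolved impediment becomes a lesson (not necessarily learned)."""
            self.stage = Stage.LESSON


    concern = Concern("Key vendor may miss the integration date")
    concern.manifests()   # the date slipped: it is now an impediment
    concern.resolved()    # we worked around it: it is now a lesson
    print(concern.stage)  # Stage.LESSON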


Problem Solving 101

December 21, 2009

I have a confession to make: problem solving tools and techniques are probably the weakest part of my product development skill set. I don’t think I’m alone on that score. As I watch other teams work, I see them jumping to solutions without even entertaining the idea of understanding a problem well. It seems like a knee-jerk response for many teams. It’s peculiar that it should be that way. After all, product development could be defined as nothing but a series of problems and challenges that stand between us and the successful delivery of a product. How we solve each problem has a direct impact on the eventual success of our products.

So when I stumbled across “Problem Solving 101: A Simple Book for Smart People”, by Ken Watanabe, I was looking for a guide that might help me remedy my problem solving deficiencies. There were a few things that attracted me to this book:

  1. It’s short – 111 pages
  2. It’s written for Japanese school children – if they can do it, then I just might have a chance too
  3. It has lots of pictures

Each of the problem-solving tools introduced is illustrated by a story. This helped keep my extremely short MTV attention span locked in just long enough to absorb a thing or two. That, and I suppose I relate well to the problems typical of the average 12-year-old. That’s my wife’s theory, anyway.

Moving on, Watanabe outlines the framework for a problem solving process (p.14):

  1. Understand the situation
  2. Identify the root cause of the problem
  3. Develop an effective action plan
  4. Execute and modify, until the problem is solved

Of course, this reminds me of another four-step problem-solving cycle, the Shewhart Cycle (better known as PDCA):

  1. Plan
  2. Do
  3. Check
  4. Act

They’re not quite the same, but they contain many of the same elements. The PDCA cycle is at the heart of lean continuous improvement, or Kaizen. Some argue that PDCA is also built into agile methodologies like Scrum, where in each iteration you Plan (sprint planning), Do (the sprint), Check (the retrospective), and Act (incorporate changes into the next sprint). So you could argue that the bones of a problem-solving framework are built into some of these agile methodologies. However, I would contend that PDCA is not enough. It’s the problem-solving toolbox that Watanabe describes that gives us the real problem-solving power. And the last time I checked, aside from some examples in lean development, you won’t find techniques like these anywhere else in agile development.

Watanabe introduces logic trees as a tool for problem analysis. They are a handy way to break a problem down into smaller pieces (slicing the elephant, so to speak) so that you can better identify its root causes. Then he introduces what he calls a “Problem Solving Design Plan”, which is basically a matrix of the identified problems and the proposed solutions and outcomes for each one. This tool is useful for really starting to think deeply about a problem. Understanding the problem well, creating a hypothesis, and proposing multiple solutions for a given problem are all part of the problem solving design plan.
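
To make the logic tree idea concrete, here is a small, illustrative sketch in Python. The example problem and its candidate causes are invented for the illustration; they are not taken from the book:

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Node:
        """One node in a logic tree: a problem, or one of its candidate causes."""
        label: str
        children: List["Node"] = field(default_factory=list)

        def add(self, label: str) -> "Node":
            child = Node(label)
            self.children.append(child)
            return child

        def show(self, depth: int = 0) -> None:
            print("  " * depth + "- " + self.label)
            for child in self.children:
                child.show(depth + 1)


    # "Slicing the elephant": break a big problem into smaller candidate causes
    root = Node("The release is late")
    scope = root.add("Scope keeps growing mid-sprint")
    scope.add("No one pushes back on new requests")
    quality = root.add("The defect backlog keeps growing")
    quality.add("No automated regression tests")
    root.show()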

Another strategy he describes is gap analysis. I’ve heard of it before, but to be honest, I’d never really seen an example. Not in my PMP training, not in my Scrum training – I don’t even think I was introduced to this technique in college. Nope, I found this one in a book for Japanese grade school kids. Yikes!
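
For anyone else who has never seen one, the basic shape of a gap analysis is simple: state where you are, state where you want to be, and the difference is the gap you need a plan to close. Here is a tiny, made-up illustration in Python (the measures and numbers are invented):

    # A made-up gap analysis: current state vs. target state, and the gap between them.
    current = {"automated test coverage (%)": 45, "deploys per month": 2, "open defects": 120}
    target = {"automated test coverage (%)": 80, "deploys per month": 8, "open defects": 30}

    for measure in current:
        gap = target[measure] - current[measure]
        print(f"{measure}: current={current[measure]}, target={target[measure]}, gap={gap:+}")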

Now, I don’t want to sound like the most clueless guy on the planet (too late!), but if a guy like me can go through primary, secondary, and college education without even the simplest problem solving skills under his belt, isn’t that a little alarming? I can solve a math problem just fine, but if it’s anything more abstract than that, I don’t know that many of us have a structured set of tools that we can use for problem solving. That’s where this book comes in handy.

I was part of a project recently where a solution was proposed to the team without even allowing any discussion of the underlying problem. It was probably one of the most dysfunctional meetings I have ever witnessed (and let me tell you, when it comes to bad meetings I put the “fun” in dysfunctional). The people proposing the solution were so defensive that they couldn’t tolerate the idea that they might have got it wrong. Of course, as things played out, the project was a complete failure. The inability to ask questions and examine the problem undermined the team’s morale and guaranteed that viable alternative solutions were never discussed.

Understanding the problem is part of what developers need to do. It’s what they’re supposed to be good at. Any attempt to stifle this impulse is almost guaranteed to end badly. Rather than do that, a good leader should be able to act as a sherpa for the team and help guide them through the analysis process. That means doing good root cause analysis and being open to a wide range of solutions. If you are one of those leaders, do yourself a favor and read “Problem Solving 101”.


Give the 360 Back to the Team

January 26, 2009

Tired of doing the same old retrospective every sprint? You know how it goes: what went right/what needs improvement/action items. Are you running out of ideas for improvements?

Here’s an idea: at the end of each sprint do a 360 review. Use a survey tool and have every member of the team review each other’s performance in the past sprint. Just a couple of questions would do it. A good tool would summarize the data for each team member. Then, based on that feedback, each individual on the team would have a really good starting point for a discussion about how they can improve their performance in their next sprint.

Imagine doing a 360 every two weeks. Imagine changing the questions every two weeks to meet changing conditions that the team encounters. Imagine having up-to-date feedback on your performance as seen by your peers every sprint.

I offer this notion as a counterpoint to the traditional 360 review run by the corporation or HR. You know the one: you have to review a bunch of people you don’t necessarily know, then sit in an uncomfortable meeting with your manager while they review the personality flaws revealed by your peers. Instead, this 360 is for your use only. You decide what to do with it. Nobody else – not the scrum master, not anyone – is privy to the results.

I’ve done 360 reviews at a couple of different places (it had nothing to do with Agile). In most cases it was a top-down process. Questions were determined by my managers (and their managers), and when the results were tabulated they went to my manager first. Then my manager would review the results with me. Recently, however, I found myself at a company that wanted to do things differently. Privacy was a much bigger concern in this culture. People didn’t want anyone to see their results – not even HR.

At first I found this preoccupation with privacy perplexing (I’m very trusting by nature). However, as we went through the process I realized that the privacy actually seemed to improve the 360 review. It kept the feedback limited to the person who needed it most (the person being reviewed), without exposing it to the judgment of others (managers, HR, etc.). Whether or not you wanted to actually *do* anything about the feedback was purely up to you. It took the pressure off the situation – and I usually find that to be a good thing. If my salary isn’t hanging in the balance, I usually make much better decisions.

The more I thought about it, the more I felt we should be able to do a 360 review as frequently as we want (assuming we can keep it low cost and low effort). Better yet, if the team controls the questions that are asked, then perhaps the team can change them frequently to match the changing nature of the problems it faces. That seems to offer a unique opportunity to provide honest, anonymous feedback to teammates.

An agile purist might maintain that a team should be able to give each other that sort of feedback without the 360. We should be able to critique each other face to face. Perhaps. There are some teams that are very good at that (very few that I’ve worked with). For the rest, the 360 might be worth experimenting with.

What sorts of questions might you ask in this kind of 360? Here are a few ideas for categories of questions, based on the Scrum Values as described by Ken Schwaber (a rough sketch of how these could be turned into a simple survey follows the list):

  1. Commitment
  2. Focus
  3. Openness
  4. Respect
  5. Courage
  6. Technical Skills
  7. Domain Knowledge
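
To make that concrete, here is a rough sketch of what a cheap, team-owned 360 could look like in code: one question per category above, answered on a 1–5 scale, with scores summarized per team member and reviewer names dropped so the results stay anonymous. The question wording, the scale, and the aggregation are all illustrative choices, not a prescription:

    from collections import defaultdict
    from statistics import mean

    # Team-owned questions, one per category; the team can rewrite these every sprint.
    QUESTIONS = {
        "Commitment": "Did this person follow through on what they signed up for?",
        "Focus": "Did this person stay focused on the sprint goal?",
        "Openness": "Was this person open about progress and problems?",
        "Respect": "Did this person treat teammates with respect?",
        "Courage": "Did this person raise hard issues when it mattered?",
        "Technical Skills": "Did this person's technical work meet the team's bar?",
        "Domain Knowledge": "Did this person apply domain knowledge effectively?",
    }

    # Each response: (reviewer, reviewee, category, score from 1 to 5).
    responses = [
        ("alice", "bob", "Focus", 4),
        ("carol", "bob", "Focus", 3),
        ("alice", "bob", "Courage", 5),
        ("bob", "alice", "Openness", 4),
    ]


    def summarize(responses, reviewee):
        """Average score per category for one person; reviewer names are discarded."""
        scores = defaultdict(list)
        for _reviewer, who, category, score in responses:
            if who == reviewee:
                scores[category].append(score)
        return {category: mean(values) for category, values in scores.items()}


    print(summarize(responses, "bob"))  # e.g. {'Focus': 3.5, 'Courage': 5}

Run every sprint, a summary like this gives each person a private, up-to-date view of how their peers see them, which is all the original idea asks for.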

If you are thinking of trying this out, here are a couple of tools that might be useful for implementing a cheap 360 for your team:

Feedback is good. I’m thinking of using this sort of survey for my presentations.