Thanks for a great 2014

December 31, 2014


“Cheers to a new year and another chance for us to get it right.” -Oprah Winfrey

This blog was started back in 2007. Since then, I’ve published 258 posts ranging from outright rants, to humor, to reviews and recommendations, all on the topic of Agile software development processes.

Originally the idea was to dedicate the blog exclusively to the tools used on agile projects (hence the title). Obviously I didn’t end up confining myself to just discussing tools. After all, we agilists prefer interactions over processes and tools. So I ended up writing on a variety of agile topics, most of which came to me spontaneously, the ideas arising from my work, my reading, or simply my feelings.

Looking back, some of my posts were insightful, and others were simply boring. Occasionally I have taken what felt like significant risks with the material I’ve posted by saying things that I knew would not be popular or well received. In some ways that has made writing a blog like this very liberating.

Things began slowly in 2014 with just one post in March. After 200 posts, I think it’s fair to say that I was running out of steam. It wasn’t until August, when an old colleague of mine noted that I hadn’t been writing much, that things began to change. The writing engine fired up and I began posting again.

At this point, I started a new blog, onestandardman. The idea was to focus on a long-standing fascination of mine: Self-experimentation. I wanted to run experiments serially and share the results.

So now I was writing two blogs! In fact, this just seemed to grease the writing gears for me. For 35 days, I kept up a blistering pace of writing, posting twice a day. Once to each blog. September was a ferociously productive month for me.

The interesting thing was that in the past, I had a hard time keeping my writing quality high when I wrote with such frequency. This time quality was not a problem. The material just kept on coming and all I had to do was write whatever the voices in my head were telling me. Eventually, the writing pace slowed to a more sustainable level of 2-3 times per week.

It has been a wonderful year. I’ve encountered greater challenges this year than ever before. Thanks for being there.

 


Pervasive Language

August 14, 2013


I was watching a movie with my brother the other day and the rating warning came up, “Rated PG-13 for Pervasive Language”. I found myself scratching my head and wondering: what the hell is pervasive language? Maybe the people in the movie are going to talk a lot? Perhaps it’s just two guys talking to each other like in “My Dinner with Andre”? I couldn’t help thinking, “This is going to be really boring…”

Actually I’m kind of an expert when it comes to pervasive language. You see, I’ve got kids and they never seem to stop talking. I go to Agile conferences, and brother, do those folks ever talk a lot! You just can’t shut those Agile people up. That’s what we mean by pervasive language, right?

As a sailboat racer I’ve heard an awful lot of pervasive language (of the pirate variety). You hear it out on the race course all the time. It is interesting to see the way the quantity of dialog changes based on the experience of the people on the boat.

New/Green crew – There is an awful lot of talking going on:

“Get the sheet!”
“No, the other sheet!”
“It’s right there in front of you!”
“No, not you, damnit!”

The chatter is virtually non-stop and it can easily escalate to screaming when there is a little stress. The language is certainly pervasive. It is all about how to do the work, with very little heads-up talk about strategy or the weather. Being on a boat with a new crew demands a lot of patience and a huge amount of talking.

Old timers/Experienced crew – There isn’t much talking at all. There is never screaming on these boats. It’s like everyone on the crew is part of a collective Vulcan Mind Meld. Everyone just knows what to do. When something changes, the whole group knows exactly how to react with a minimum of information exchanged. When there is discussion, it is strategic in nature, “The wind looks better over there…” These are the teams that just quietly and efficiently kick your ass on the race course.

It strikes me that there might be similar patterns of communication on software teams.

New teams have a lot of chatter as they go through the usual forming, storming and norming patterns we have all come to know and love. There are lots of questions and debates about the way things are done, who is doing what, how the work is done, etcetera. Cram all these people together in a common room and you have a recipe for some real noise!

Older, more experienced teams are much more likely to be quiet workers. I’ve worked with some teams where you would walk into their room and all you would hear is the ghostly clicking of keyboards. Everybody was deeply engrossed in their work and making progress.

Of course there are exceptions to these rules. There are teams that are just naturally composed of noisy, gregarious people. Others are naturally quiet. But the broad generalization does seem to hold true.

Now there are some times when you will hear the experienced crews/teams become noisy: When they are learning something new. Introduce a new element that they haven’t had to deal with before and listen to the chatter level rise. It might be caused by the introduction of a new team member or perhaps a new tool (like a new asymmetrical spinnaker). Once we are kicked back into learning mode, the need for communication on the team rises dramatically.

The point I’m trying to make here is that the level of communication that teams engage in is heavily dependent on things like experience and context. Don’t make the mistake of judging a team’s communication simply by how pervasive the communication is (it is equally foolhardy to judge a movie for the “pervasive” quality of its language). Experience, culture, context and personality all play important roles in influencing the ways teams communicate with each other. Even beyond all that, there is the question of the quality of the team’s communication.

So what kind of noise is your team making? A roar? A whine? Or are they just humming along?

References:

The New Science of Building Great Teams, Harvard Business Review


Are You a Tweaker?

March 21, 2013


When I used to race sailboats in Puget Sound I got to sail with a lot of different kinds of people. Some people who like to sail are pretty laid back. The Jimmy Buffett types. You know, “The wind will come when the wind decides to come, and in the meantime, how about a margarita?” I really like sailing with folks like that. They are enjoyable and easygoing, and that is important when you are trapped together on a cold, wet boat for 8 hours or longer.

On the other hand, there was another class of folks that I might categorize as the “Tweakers”. Tweakers are the people who are compulsively changing the boat trim in an effort to get more speed out of the boat. They never stop. They are constantly pulling lines and adjusting the rig in an effort to squeeze every last ounce of performance out of the boat. Tweakers are awesome people to have on a boat when you are racing. A boat full of tweakers will almost guarantee you a win. That is if they aren’t fighting with each other over the things they are trying to tweak.

Personally, I preferred a mix of the two types. Tweakers are indispensable to winning a race, and winning is fun. On the other hand, there are times when frankly, the wind just isn’t going to show up. Tweakers tend to go a little bit crazy when this happens. They go into a veritable frenzy of tweaking. All to no avail. Sometimes what they need is one of the more relaxed folks to hand them a beer and point out that there isn’t any wind today.

Other times, the tweakers are the ones to point out that, with just a bit more effort, we could be in the hunt. Even if we are moving so slowly that our progress is imperceptible. Being in the hunt is fun. Sometimes the tweakers need to give the Buffett Heads a nudge in the ribs to put down the beer and get going.

So which type do you have on your team?


Mistakes I’ve Made on Reviews

April 19, 2011

Recently I’ve been fortunate enough to be a reviewer for a couple of conferences: Agile2011 and the Seattle Scrum Gathering. It was an eye-opening experience for someone like me, who has been on the submitting side of the process, to see and participate in how the review decisions actually get made.

As a rookie reviewer, I wanted to share some of my experiences with the review process in the hope that others may learn from it. I do want to emphasize that overall, it was a wonderful experience that I enjoyed and hope to do more of in the future. That said, I also wanted to be unflinching in my reflections on what could be improved in my own performance.

Overall, the review process has been very interesting (You can read more about the process itself here and here). I’ve made the following kinds of mistakes:

  1. I’ve been too self-conscious in my feedback – afraid to take a stand. As a reviewer, it’s my job to have an opinion and not water it down in the interests of appearing “balanced”. Balance will come from the other reviewers, not from me.
  2. I’ve been a little snarky or rude. In general, I do not do this too much, but everyone has a bad day. As someone who goes through the proposal process as a submitter, I know how painful it can be to be flamed by some jerk reviewer having a bad day. As reviewers, we need to keep a lid on that. We have an obligation to be firm in how we express our opinion, but polite.
  3. I’ve been too brief in my feedback. Sometimes just asking for more detail is not enough. When I’m lazy, I can just blow through reviews and simply ask for more detail without expanding on anything else. I’m coming to realize that’s a cop-out. I was provoked by someone once to provide more detail. Their proposal was actually fantastic and I didn’t have a whole lot to say about it other than, “Great job!” Well, rightly enough, they called me on that and asked if I could provide a little more useful feedback. I got a little angry with myself, sat down, and pounded out a critique that was nearly two pages long. I was surprised with myself. It was good. Afterward I had to ask myself, “Why don’t I give everyone that kind of quality feedback?”
  4. Not understanding the audience for the feedback. It’s one thing to communicate with a private audience of reviewers, “Hey, this proposal is total crap. The guy didn’t even bother to give his full name…” It’s an altogether different thing to share that feedback with the submitter, “Dear so & so, thank you for your submission, while there were many great proposals, we couldn’t accept all of them. Yours was rejected due to the brevity of its content.” You don’t want to get stuck re-writing the feedback for hundreds of reviews. Even more reason to write reviews really well the first time.
Fortunately the system is pretty robust and forgiving of error. Given that there are teams of reviewers who provide multiple reviews, it allows room for error (and learning!). I don’t have to provide stellar feedback for every review if there are 5 other reviewers also providing feedback – hopefully one of us is going to provide some valuable input.

For me, the conference submission and review process has been about learning. I’ve learned as someone who submits proposals that sometimes even the best material will go overlooked by a given set of reviewers for a conference. I don’t take it too personally. There is a rich ecosystem of speaking opportunities available to me as a speaker, and if I can’t find an opportunity one place, then it’s very likely that I will find it in another. If I am passionate about the topic, I am usually pretty darn persistent about it too. This persistence has paid off. The more I get to know some of the more prominent speakers in our business, the more I realize and respect that they have been dealing with these sorts of vicissitudes in the process for many years.

As a reviewer, I have been lucky enough to participate in making the sausage that we call a review. It is a messy process with lots of flaws. I feel an obligation to improve my reviewing skills over time in order to provide folks who submit proposals the best possible experience with the proposal review process. But when submitters and reviewers are separated by a screen of sorts, it’s easy for both sides to get frustrated with the process.


Working the Conference Ecosystem – More on the Review Process

April 18, 2011

I have submitted to some conferences for 3 years in a row without any success. It sucks. I figure I haven’t yet cracked the code for what they are looking for. That brings me to the subject of the conference review process. Here are some of the processes I’ve seen (and I’m sure there are more):

  1. The proposal “black hole” – you toss in your submission, you get only one shot and get no feedback or questions. Six weeks later you receive an accept or reject email.
  2. The proposal “incubation” process – usually a high feedback process with a supporting submission system that provides lots of peer review and allows for some evolution of a proposal.
  3. The “invitational” process – just invite the presenters you like. No chumps allowed.

The Black Hole

I think the first system, the “black hole”, is probably the most common. The proposals are all batched up and reviewed by a committee of some sort. As an outsider submitting to this system you have absolutely no insight into the decision making process. You really don’t know how they make their decision. All you know is that after a set period of time you receive an accept or a reject and that is pretty much the end of the story. Not much learning takes place on the part of either party in this system; there really isn’t much opportunity for learning when there is no feedback. The nice thing is that the anxiety is actually kept to a minimum. I know that sounds crazy, but compared to some other processes, the “black hole” proposal process is relatively painless. You are in or you are out. No convoluted explanations, no bogus feedback by reviewers who don’t know what they are talking about, no agonizing over a million little revisions. You are in or you are out – period, full stop. Sometimes ignorance really is bliss. There, I actually said it.

Incubation

The second system, the “incubation” proposal process, is very high feedback. My personal experience has been that this is a bit of a mixed blessing. Proposals that are promising, but perhaps would not otherwise be considered, have a chance to be improved and matured with a rigorous feedback process. I find this possibility very exciting. I like the idea of taking a proposal, perhaps from someone relatively new, and helping them to develop it into something really great. I think there is a place for a peer review system that provides new talent with guidance and helps them to bring their ideas to a new audience. As potentially pompous as that may sound, I like it! And in the ideal world this is just how it works.

However there is a dark side to these “incubation” proposal systems: sometimes the feedback really does more harm than good. In these sorts of submission systems I think there is a very high bar that the reviewer has to meet. Poor feedback is almost worse than no feedback at all. Oftentimes these systems allow public feedback from the general audience. I have mixed feelings about this sort of feedback. While I think it’s valuable in some regards, some of my worst, most caustic, useless feedback has come from these sorts of systems. People who are just venting their garbage. I guess as someone who is proposing to a conference using an “incubation” system, you need to have thick skin. You can’t be too sensitive about your feedback. You need to be able to take the feedback and separate the wheat from the chaff. That’s probably true in any system, but certainly more so in one that allows unregulated public feedback.

Furthermore, as I mentioned before, there is a higher expectation for the quality of the feedback you will receive in a conference like this. As a reviewer, it’s a lot more work – a lot more dialog is necessary in order to help someone develop a proposal that needs significant changes in order to be approved. As a reviewer, that’s your job: to keep coming back and providing guidance and critique as the submitter makes changes. I’m always a little amazed when a submitter receives feedback and then doesn’t update their proposal. Feedback, even tough feedback, generally means that the reviewer is willing to continue the dialog. So go for it! Make the changes and then ask for more feedback! That’s what a healthy dialog looks like! Keep pushing until the reviewer gives in! After all, if you don’t respond to the feedback, your proposal is very likely dead.

Perhaps the worst case is when there is no dialog at all in the “incubation” systems. It happens. It’s the “Great proposal, but not for this stage” kind of feedback that will drive a submitter stark raving mad. This is a flaw in the reviewer, not the submitter. The reviewers need to work this stuff out and be able to give a coherent message to the submitter. Even worse, there have been times when there have been just a couple of “great idea” comments and then your proposal is rejected. Again, this is a failure of the reviewers – reviewers really owe the submitters more than that.

Now I appreciate the fact that reviewers are human too. Therefore, I don’t expect miracles…often. But like in any herd there is safety in numbers. (Did I really just call a review team a herd?) As long as you can provide multiple reviews it is much more likely that at least one of you will come up with a cogent, intelligent set of critiques or feedback that resonates with the person who submitted the proposal. I’m not the most experienced reviewer, but I feel best when there are upwards of 5 reviews per proposal at minimum. Then I feel like a sufficient number of eyeballs have looked at the proposal and that there is a reasonable chance that the “wisdom of the crowd” will kick in and enable some useful dialog.

Invitation Only

Finally, there is the invitation only system. I really don’t have any experience with this, but I know of conferences that are run this way. I think on the one hand it offers a certain degree of reliability. As a conference organizer you are interested in keeping the quality high for your attendees and you aren’t interested in taking many risks. So, you stick to those you know and their friends and this system does seem to work. The flip side is that you aren’t necessarily going to get a lot of new voices and new ideas. Not every conference values innovation like that, but I suspect that for the conferences that do want to be on the cutting edge, you can’t afford to just invite those you already know. You need to take a few risks.

So Many Conferences, So Little Time…

One other thing that I try to keep in mind when submitting to a given conference is that there are a lot of conferences to choose from. Some are harder to get a submission into than others. A local open space conference is a great place to try out ideas and see if there is traction in the audience for them. The bar to entry is extremely low.

Then there are regional conferences where there is some review, but often they are quite easy to get into. The audience is still reasonably small, and there isn’t the intense competition to be a speaker. These regional conferences offer a great deal and can be a nice middle ground where you can continue to grow and nurture presentation ideas and delivery.

Finally, there are the big national and international conferences that garner a large audience and get lots of attention. There is a lot more competition to get submissions into these conferences. If you are coming up with an idea for the first time at one of these large conferences, you probably shouldn’t be too disappointed if it gets shot down for not being well developed enough. You will be competing against folks who have been developing their material at other venues and have refined things pretty well by this point.

I think the person submitting proposals needs to keep some perspective on the overall conference ecosystem in mind when submitting to a conference. A big national conference may not be the best place to float a new, untested idea for the first time. That’s not to say you can’t do it, but perhaps trying it out in a smaller venue would be well advised.


Reviewing Conference Proposals is Like a Job Interview

April 17, 2011

I just got through my first experience as a reviewer for a couple of conferences and I feel like I learned a lot in the process. I made a lot of mistakes, some of which felt silly and others I still feel a bit bad about. You see I have been submitting proposals to conferences myself for a couple of years, and I know how heartbreaking it can be to get your proposal turned down for a conference. Especially when you know your material is really great. So, it was both revealing and initially a little bit scary to be on the other side of the process.

To begin with, often all you have to work with is a submission form from an automated system. This severely constrains the amount of information that you have about a given proposal or the person who submitted it. It reminds me a lot of the job interview process. As the hiring manager, all you get for your initial input is a resume – perhaps the world’s most ineffective information communication tool. Somehow, using only the text on the page, you have to divine the personality of the applicant, their knowledge of the subject domain, and assess the overall merit of their application.

Now, if your experience with the job interview process has been anything like mine, you know all too painfully well that there is almost no way in hell to choose a decent job candidate solely based on their resume. The information is excessively sparse, there is no feedback, and you have no way to validate the assertions that are made. Oh, and you have many more candidates than you have jobs, so contrary to what they might tell you in HR, you are very likely looking to filter people out.

So when you are filtering resumes, desperately trying to find the good candidates, you usually adopt some criteria for assessing the quality of their resume. These criteria are usually things like:

  1. Spelling and Grammar
  2. Clarity of thought and presentation
  3. Attention getting words, thoughts or ideas
  4. Relevant experience
  5. Etc.

Of course, none of these criteria really translate into a guarantee of a superstar future employee, right? In fact, all of those criteria are pretty weak indicators of quality overall when you are looking for the next great programmer. However, initially they are really the only guides you are given to assess whether or not a candidate is worth investigating further.

The same problem applies to reviewing submitted proposals to a typical conference. You don’t have anywhere near enough information, and the criteria that you apply are very likely inadequate to the task of identifying a quality proposal. Therefore, you end up with a set of criteria like this:

  1. Spelling and Grammar
  2. Clarity of thought and Presentation
  3. Attention Getting words…
  4. Relevant experience…

I hope that you can see where I’m going with this by now. It is an imperfect system at best. It can be further aggravated by submission systems that actually conceal information in the interests of fairness. Some conferences will “anonymize” the proposals so that you do not actually know who the submitter is. I think this is done in the interest of creating reviews that focus on the merit of the ideas alone and not the reputation of the presenter. This practice has an unintended consequence of further restricting the information that the reviewer has to work with. Imagine reviewing resumes where you cannot see names or work experience and you start to get a feeling for what working with anonymous proposals might be like.

Details

At its most basic, with some review systems I feel like you are really left with the following: Did the submitter care enough to provide a detailed description of the proposal and how it would be presented? Were they willing to invest the time and effort to provide me with as much information as possible? My experience is that all too often people, even very experienced presenters, will skip over entire sections of the submission form or provide only single-sentence answers. Often, you can very quickly break the pile down into two stacks: people who bothered to fill in all the blanks with some decent detail and those who did not. I think many folks who submit to conferences would be stunned to see just how often people neglect to fill in the details.

As a reviewer, you are left with two stacks: those who did provide detail and those who did not. Which stack would you prefer? Now does that mean that people who left out information in the proposal had poorer presentations? No. It is very likely that there are some great proposals that get overlooked this way – in fact, I’m quite sure this happens all the time. However, let’s face it, getting your submission accepted to a conference is a competition. You need to do everything you can to understand what the reviewers are looking for. First, I can tell you that rich detail sells big. It tells a reviewer that you are willing to do the extra work to sell them on your proposal. Investing in the detail suggests that you may understand your topic and know how to deliver it. Even with rich detail, there is no guarantee that the presenter is any good, but what else do you have to work with? Often not very much.

I have seen especially impressive proposals where people provide links to video of themselves giving the presentation, links to the PowerPoint slides, and more. When someone is able to put additional material like this into a proposal, I find it very impressive. It tells me that they are very passionate about their topic and that they are willing to go out of their way to provide additional detail (that reviewers are starved for) in order to be considered. Very few people bother to do this, so when people actually bother to provide this kind of information, they *really* stand out. It is still not a guarantee you will be accepted, but believe me it puts you closer to the top of the list than the bottom. Just like in job hunting, you want to do anything you can to make yourself stand out.

The Problem with Themes

Even filling in all the blanks and providing significant detail often isn’t enough. I think it guarantees you are in the hunt, but there is more to consider. One of the toughest considerations as a reviewer is theme. Often there is some sort of conference or stage theme that you are responsible for satisfying as a reviewer. All too often I have seen terrific proposals that I was convinced would make compelling and interesting sessions get rejected because they didn’t appear to match the theme of the conference or the stage they were submitted to. For example, if you submit a proposal on “Writing Great User Stories” to a conference that has “Radical New Ideas” as a central theme, you are more than likely going to be rejected. No matter how great the material is, no matter how wonderful a presenter you are. If there is a perceived mismatch between your topic and the conference or stage theme, you are very likely out of the running.

Now I think that by their nature, themes are dreadfully subjective and vague, and this is a bit of a tough nut for the submitter and reviewer to crack. I think conference organizers feel compelled to use themes to help give their attendees some sense of the value they intend to provide. It seems that most conference organizers do not feel compelled to just fill their agendas with any old good presentation that comes along. They also do not want repetition. From what little I’ve seen so far, it’s pretty easy to end up with three different proposals that all seem to boil down to “Another Intro to Scrum”. Even if you have great presenters like Mike Cohn, Jeff Sutherland, and Ken Schwaber – in the end, only one gets picked. In addition, if it is a Lean/Kanban conference, probably none of them gets picked. That is regardless of the quality of their stellar proposals or their godlike presentation skills.

Is that fair? I don’t know. Personally, I hate it when I feel I have material that is valuable, has been well received by my audiences, and supported by a solid proposal – and it is rejected. I am a competent presenter. I want to tell the reviewers what boneheads they are, that they missed a great opportunity! I want them to know they could not see value if it kissed them on the nose! But I don’t. That just does not seem like a very smart approach to me. So, I have a beer with my friends and go on a bit of a rant – fortunately they tolerate me, and I get over it.

Decision Time

Themes aside, sometimes it also just comes down to a matter of taste. As a reviewer, you are confronted with two great proposals and you only have room for one. You have to make a difficult choice where there is no obvious winner. In a case like that, it really will come down to some sort of gut instinct (often wrong) that you end up relying on to make the choice. You can put in place scoring systems and other mechanisms to make the decision appear more objective, but the bottom line is that it is a subjective judgment and you have to make a call. Of course, it does not make you feel any better when you are on the wrong end of the decision. It is hard to understand how your rocking proposal that you poured your heart and soul into could have been rejected.

Those feelings are natural enough and I understand all too well how they come about. The point is that you need to keep the bigger picture in mind. A rejection by a given conference may have little or nothing to do with your skill as a presenter or your mastery of the subject matter. That’s part of what makes this process, like job hunting, so bloody frustrating. We would all like to crack the code and have our genius recognized. However, the process, the information, and the people are imperfect. There are a depressing number of ways that great material can be overlooked.

If I haven’t put you completely off your feed by now, I’d recommend a couple of resources to follow up on if you are interested in improving your chances of getting your proposals accepted:

Mark Levison’s blog:

http://agilepainrelief.com/notesfromatooluser/2011/04/how-we-reviewedagile-2011-coaching-stage.html

Mitch Lacey’s blog:

Part 1: http://mitchlacey.com/Blog/Agile-Conference-A-Stage-Producers-Story-Part-1.html

Part 2: http://mitchlacey.com/Blog/Getting-your-Session-Accepted-to-the-Agile-Alliance-Agile-Conference.html

Part 3: http://mitchlacey.com/Blog/Stage-Producer-101-Building-the-Best-Stage-Possible.html


MBO RIP

May 19, 2010

I’ve worked with MBOs, a.k.a. Management by Objectives, on and off at a variety of places. Your manager (and presumably their manager) defines a set of quantifiable (S.M.A.R.T.) objectives for you to hit in the coming quarter. Then you review them at the end of the quarter. Easy, right?

I must be treating these things wrong. I look at objectives and tend to evaluate them against my current priorities and the current priorities of the company/customer. These priorities tend to change from day to day. So, being an independent-minded type, I tend to look at the list and edit as appropriate: this objective is relevant, this one isn’t, and so on. Then comes the review at the end of the quarter and the inevitable question, “Why didn’t you hit objective X?”

Now, admittedly, I might just suck. Or I might have judged a month(s) old objective to be low priority compared to more immediate needs. Now I suppose there are a variety of ways to handle this:

  1. Update and evaluate objectives with greater frequency
  2. Request/update MBOs over time
  3. Use another mechanism

You see, what I really want is a higher-feedback mechanism that is a bit more relevant to my day-to-day work. Goals from 6 months ago, even 3 months ago, are seldom relevant. In customer terms, if I even wait as long as two weeks, I’ve very likely lost them already.

So what is the alternative? Honestly, to date in my professional life, short of outright neglect, I haven’t seen any decent ones. Maybe it’s one of those “sound of one hand clapping” Japanese koans: What’s the alternative to MBOs?