Different Roots, Same Tree

May 1, 2019

Recently, at conferences, on social media, and even at informal gatherings, I’ve heard statements along the lines of “[X] scaling approach is absolutely not agile for [Y] reason.” I use the word approach to avoid the question of whether we are talking about a framework or a methodology. I really don’t care about that distinction, and much of the subtlety that lies there is beyond me.

Admittedly, there is a long and rich history of critiquing each other’s ideas in the agile community. Some examples include:

  1. XP vs Scrum
  2. Kanban vs Scrum
  3. Lean vs Agile

To my knowledge, none of these debates has ever really reached any sort of meaningful conclusion. In fact, the more I watch (and even sometimes participate in) these debates, the more I feel like they are mostly a reflection of a sort of core philosophy. What I mean is that there seem to be some common starting points or assumptions that characterize how people approach these debates.

Let me give you an example. Let’s take SenseMaking and the Cynefin framework. We can use a tool like Cynefin to help us navigate important decisions based on an assessment of contextual complexity. The beauty of this system is that you can use it anywhere. It doesn’t matter whether you are agile or not. Cynefin simply helps you assess and navigate the simple, complicated, complex, and chaotic domains. Making decisions appropriate to each context is what leads you to healthy outcomes. With Cynefin, you can start with absolutely no framework or required processes at all. In essence, you are building from scratch and evolving only as necessary. Frankly, it’s a beautiful and elegant system. Conceptually, it’s founded on the notion of sensing your environment and making decisions based on what you uncover. It’s a radically empirical process that starts wherever you may be. There is no default starting point for applying Cynefin. You simply use it to help you grow from wherever you are.

The interesting thing is that Cynefin isn’t the only framework that uses this “start where you are” approach. Kanban is also very minimalist in its rules. In fact, Kanban usually starts by simply making the existing process visible. You don’t need to change your process at all; you just make it so that everyone can see it. Starting from there, the Kanban approach recommends that we consider applying WIP (work-in-progress) limits and working to understand the constraints on the flow of work through the system. There are no pre-defined required processes. You don’t have to do standup. You don’t have to hold retrospectives. You basically start Kanban from scratch and add those elements wherever they make the most sense, building your agile process based on the feedback you get from making the work visible. Again, it’s a very elegant and powerful system that’s founded on the notion of visibility (or transparency) and allows you to evolve in whatever way makes sense for your environment.
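To make that concrete, here is a minimal sketch in Python (with invented column names and limits) of the two moves Kanban actually asks for: visualize the current steps, then optionally cap work in progress. It isn’t any particular tool’s API, just an illustration of how small the rule set really is.

    # A minimal, illustrative Kanban board: first make the work visible,
    # then (optionally) add WIP limits. Column names and limits are invented.

    class KanbanBoard:
        def __init__(self, wip_limits):
            # wip_limits maps column name -> max items (None means "no limit yet")
            self.wip_limits = wip_limits
            self.columns = {name: [] for name in wip_limits}

        def add(self, column, item):
            limit = self.wip_limits[column]
            if limit is not None and len(self.columns[column]) >= limit:
                raise ValueError(f"WIP limit reached in '{column}' ({limit}); "
                                 f"finish something before starting '{item}'")
            self.columns[column].append(item)

        def show(self):
            # Step one of Kanban: just make the current process visible.
            for name, items in self.columns.items():
                limit = self.wip_limits[name]
                cap = f"/{limit}" if limit is not None else ""
                print(f"{name:12} [{len(items)}{cap}]: {', '.join(items) or '-'}")

    board = KanbanBoard({"To Do": None, "In Progress": 2, "Review": 1, "Done": None})
    board.add("To Do", "fix login bug")
    board.add("In Progress", "payment API")
    board.add("In Progress", "search index")
    board.show()
    # board.add("In Progress", "reporting")  # would raise: the WIP limit is already reached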

So I see both Cynefin and Kanban as sharing some important conceptual roots (while each is unique). Both methods provide feedback to help us make good decisions in whatever context we may be working. Both also make absolutely no assumptions about what the starting point may be. You could start with a very rigid, waterfall-style process. Alternatively, you could be using Scrum. Neither Cynefin nor Kanban cares about where you start. In fact, what they really care about is not blindly applying process without some sort of feedback. So I think of Cynefin and Kanban as the “build it from scratch” or “consider context first” methods. Actually, I really like to think of these as the Buckaroo Banzai methods, you know, “Wherever you go…there you are.”

Now this also implies that you are really committed to this learning journey, with all of its joys, discoveries, false starts, and dead ends. Building your process from the ground up is not for the tentative or the faint of heart. Why do we have to go through all of this learning pain and discovery when others seem to have found practices that work? Well, the argument, and it is a very valid one, is that you need to discover what works for you in your context. Trying to apply solutions that may have worked well in other places often leads to disappointment. In the Toyota way, Taiichi Ohno warns us of exactly this. If you want to build a world-class process, you can’t rent it. You need to build it and find out what works for you.

But what if we really could rent our process? Wouldn’t that save a lot of time and wasted effort? Let’s face it, this is business, not rocket surgery. We can’t all be so unique that we have to waste time rediscovering the wheel. Let’s take a look at another very large branch on the agile tree: approaches based on starting with a predefined set of practices or processes, like Scrum or XP.

Scrum is based on a very fundamental set of practices that creates the infrastructure (or framework or method) for continuous delivery and improvement of small units of work. Depending on who you ask, XP and Scrum came into being around the same time. As I remember it, XP was the first to really land hard on a required set of practices that defined the process as truly being XP. These twelve practices were non-negotiable. You had to do them, and if you didn’t, well, then you weren’t doing XP. You’re probably familiar with many of these practices. They are foundational practices like pair programming, continuous integration, test driven development, and so on. Part of the reason for requiring these practices was that they supported each other. It’s hard to do continuous integration without some form of test driven development. The two together are kind of a magical combination – they help reinforce each other. Often, what we see happen in the real world is that teams struggle with and perhaps drop practices. When that happens, keeping the other XP practices working gets harder.

Scrum does something similar, but different. Scrum has a default set of non-technical practices that are required. You must have sprint planning, daily stand-ups, and sprint retrospectives. That’s non-negotiable. To do otherwise is to do “Scrum, but…” and to be mocked mercilessly by your peers. Both Scrum and XP could be loosely described as having a default set of “best practices” that are required in order to use the framework to its best advantage. Now, I personally hate the term “best practice,” but that’s exactly what they are doing. We’ve identified the best, minimal set of practices that you must use as a starting point, no matter what your context is. It’s a package deal, and we defer to the wisdom in the package. Unlike Cynefin or Kanban, you have a very well-defined starting point, and you aren’t given the option to do otherwise. Now, both XP and Scrum are based on empirical process control (at least in theory), and they both claim that you can evolve and change the framework as you learn to use it. However, in practice, I’ve rarely seen it actually happen (Spotify being one very notable exception). When you start with a predefined set of practices, it seems harder to evolve to anything else. Well, I guess Darwin never said evolution was easy.

So we have two very different schools of thought about how to think about approaching agile:

  1. Start “where you are” and use a decision-making model or visibility model to evolve to where you need to be (Cynefin, Kanban).
  2. Start with a fixed “starter set” of best practices and then evolve to where you need to be (Scrum, XP).

I think that these two philosophies or approaches explain a lot of the conflict I see in the agile community today. The “start where you are” folks seem to feel very strongly that “starter set” approaches run the risk of being applied in a cookie-cutter fashion and often incorrectly. To them, these approaches are likely to lead to poor outcomes and are therefore to be avoided, or are even wrong-headed.

On the other hand, the folks who take the “starter set” approach are appalled by the waste involved in the “start where you are” engagements. Why in the world would you waste your customer’s precious time and energy on rediscovering the wheel when you already have a very capable set of practices to start with? It’s folly! These practices are tried and tested, and there are very few exceptions. To ask the customer to invent their process on their own is just a high-risk recipe for disaster! Therefore, to do anything other than the “starter set” approach is to be avoided or…well, you get the picture.

I think the argument only gets amplified when we start to include scaling frameworks in the conversation. As I look more and more closely at the scaling frameworks, I start to think that I see their roots in each of these different approaches. For example, SAFe has its roots firmly in the “starter set” camp. SAFe is most definitely a framework of prescribed “best practices” that are intended to be applied universally. There is some allowance made for the size and scale of the organization, but the gist is that everyone does SAFe. On the other hand, there is LeSS, which seems to share its roots much more closely with the “start where you are” approaches used by Cynefin and Kanban. In LeSS there is more emphasis on using tools like systems diagrams and root cause analysis to discover the right means to change the system for scaling. So LeSS feels to me like it leans a bit more toward the “start where you are” approach.

Of course, the adherents of each approach think the others are nuts. I think some of that is due to how each sees the world. They are coming from very different starting points. I’m not sure they’re ever going to agree with each other. Fortunately, I’ve seen both approaches work well for people. And I’ve also seen them both fail miserably. Often it had little to do with the frameworks, and a lot to do with the people. So I guess we count ourselves lucky and try to remain calm when they point quivering fingers at each other and proclaim loudly that the other is “Not Agile”.

Of course they aren’t.

That’s OK.


Letting them build it

February 27, 2019

Agile methods like Scrum and XP are very exciting, especially when you are first introduced to them. There is something very common-sense about the ideas in them that seems to resonate with a lot of people. I know it was that way for me. I’d looked at a lot of different project management methods before settling on XP (thank you, Steve McConnell). A lot of those methods looked interesting, but XP was the first one that just made sense. For a young project manager looking for a new way to do things, it was an easy choice.

Now when you look closely at a method like XP, you learn very quickly that it is actually a collection of practices, many of which have been around for a very long time. The thing that makes XP work is the way that this particular set of practices (or, as I like to think of it, this big agile bag full of cats) works together. For instance, iterations by themselves have been around for a very long time under a different name: time boxes. Pair programming, on the other hand, was a relatively new innovation as far as I know (although not entirely unheard of). And while continuous integration had actually been around in some form or another for a while, it was certainly best articulated and demonstrated by the proponents of XP. On their own, I would argue that each of these ideas had plenty of merit, but the real magic happens when you combine them. Each of these practices, and in XP there were roughly 13 of them, complements and overlaps one or more other practices in the set. So as a whole, you have a system of related ideas that have some redundancy and interconnection. You can see this in Ron Jeffries’ diagram of XP.

Now this gives you a package offering of interrelated ideas that many, including all XP practitioners I’ve ever met, say you need to adopt as a whole. You can’t just pick and choose the bits you like and expect to get great results. Why not? Well, I would go back to the redundancy and interrelated ideas. Let’s suppose for just a minute that you adopted all 13 XP practices, but you found that continuous integration for one reason or another was “too hard” or “not a good cultural fit” or for some other reason wasn’t going to work for your team. What might happen? Well, in all likelihood, in the short term you might not see any immediate effect. In fact, you might find that the team goes a little faster because they aren’t struggling to build continuous integration into their process. But hang on, we’re not done yet. You see, there are practices that depend on continuous integration in order to work, for example test driven development (TDD) and continuous refactoring. TDD relies on CI to give the developers quick feedback on their tests; without CI, that feedback disappears. So developers are going to lose feedback on their tests, which means they aren’t going to get as much value from writing the tests in advance…and therefore they aren’t likely to keep doing TDD. Quality may start to suffer. And if they don’t have CI and TDD, then they don’t have the safety net of tests that they need to do continuous refactoring…so they are going to be less likely to try refactoring because it feels too risky. By removing CI we have undermined quality and the resilience of the system we are developing (because we’re no longer refactoring).
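You can sketch that cascade as a tiny dependency graph: drop one practice and see what else loses its support. The practices and edges below are a simplification I chose for illustration, not a canonical XP dependency map.

    # Illustrative only: which XP practices lean on which others.
    # These edges are a simplification chosen for this example,
    # not a canonical XP dependency map.
    SUPPORTED_BY = {
        "test driven development": {"continuous integration"},
        "continuous refactoring": {"test driven development", "continuous integration"},
        "collective ownership": {"continuous refactoring", "test driven development"},
    }

    def weakened_by_dropping(dropped, supported_by=SUPPORTED_BY):
        """Return every practice that loses support, directly or indirectly."""
        weakened = {dropped}
        changed = True
        while changed:
            changed = False
            for practice, supports in supported_by.items():
                if practice not in weakened and supports & weakened:
                    weakened.add(practice)
                    changed = True
        weakened.discard(dropped)
        return weakened

    print(weakened_by_dropping("continuous integration"))
    # TDD, continuous refactoring, and collective ownership all lose support,
    # even though nothing visibly breaks on the day CI is dropped.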

Removing practices, especially from a pre-packaged method, has some rather insidious consequences. Things don’t immediately fall apart. Instead, there is a gradual erosion of benefits that causes a cascade of related, and seemingly unrelated, problems. You may still be getting some benefit from the remaining XP practices, but the system is now much more fragile and less resilient. You have removed some of the reinforcing mechanisms from the method that helped ensure it was robust. When the team encounters a crisis, some sort of emergency in production where they need rapid turnaround and depend on fast feedback, they aren’t prepared. They are slow to respond, introduce more defects, and are likely to struggle. At which point someone is liable to point out that this process sucks. Congratulations! Of course it does; you made it suck.

This is the reason that adherents of pre-packaged methods tend to sound so religious about the unequivocal adoption of all their practices. You have to adopt all the practices; otherwise you aren’t doing XP, Scrum, Kanban, and so on. I want to pause for a moment, because I don’t think that’s the end of the story.

If we were to stop for a moment and look at development and management practices (agile and otherwise), we might find that many of them are similar enough to be grouped together. Testing and QA practices like TDD, BDD, and others share many similarities. Estimation practices like story points, ideal developer days, and others also share similarities. My point is that for any given meme or idea that we have in XP, or in agile in general, there are multiple supporting practices that may fit. In addition, some practices are sophisticated enough that adoption can be measured by degree rather than in absolutes (we are 30% toward CI rather than all or nothing). So there are multiple options for many of the key elements of popular frameworks, and even within many of those options there is a matter of the degree of adoption. After all, as so many agile advocates often say, it’s a journey, not a destination. Therefore, if I’m 30% of the way along the path, that must be worth something.
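To make “adoption by degree” concrete, here is a throwaway sketch with invented practices and percentages: each practice is treated as partially adopted rather than all-or-nothing, and the result rolls up into one rough “how far along the path” number.

    # Illustrative only: practices and adoption levels are invented.
    # Each practice is adopted by degree (0.0 to 1.0), not all-or-nothing.
    adoption = {
        "continuous integration": 0.30,   # "we are 30% toward CI"
        "test driven development": 0.50,
        "pair programming": 0.10,
        "retrospectives": 0.90,
    }

    overall = sum(adoption.values()) / len(adoption)
    print(f"roughly {overall:.0%} of the way along the path")
    for practice, level in sorted(adoption.items(), key=lambda kv: kv[1]):
        print(f"  {practice:25} {level:.0%}")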

All of this is to say that we can substitute our own practices with some judicious caution. We’re allowed to do that, despite what the more religious might say. In fact, we can mix and match to find the elements that work for us. Now this is really hanging our toes out on the radical edge. Ivar Jacobson has something he calls essential methods. Basically, it is a catalog of development methods that you can combine and recombine to build your own framework. Now, you can still screw up. Remember that the reason frameworks like XP and Scrum have been successful is that their concepts interlock and support each other. The DIY approach is much riskier (practices may or may not support each other), but for some groups that may be the best way to go.

The important thing is to understand why these frameworks work as well as they do. They are composed of a series of practices that support each other, making them robust in the face of a world full of disruption and challenges. You mess with them at your own risk. Or…you build your own. Just know that you need to understand what you are building. If you do it poorly, it very likely won’t work.


Time Machine

February 26, 2019

OK, Mr. Peabody, where are we going today?

Well, Sherman, any time I explain what Scrum or XP is, I start with time boxes. The time box method has been around a really long time. The earliest record I can find in a casual search is their use at DuPont in the 1980s. I suspect that time boxes are much older than that. The time box basically applies a constraint to the system. It creates an arbitrary start and end date, usually fairly close together. You commit to a fixed amount of work, and when the end of the time box is reached you are done, no matter what the completion state of the work. Work that is complete is counted as done within the time box; work that remains unfinished is either dropped from scope or carried into the next time box.
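Here is a minimal sketch of that mechanic, with made-up story names, sizes, and capacity: commit a batch of work, stop when the box is full, and split the result into done versus carried over (or dropped).

    # Illustrative time box: a fixed capacity, a committed batch of work,
    # and a hard stop at the end. Story names and sizes are made up.

    def run_time_box(committed, capacity):
        """Split committed (name, size) pairs into done and leftover work."""
        done, leftover, used = [], [], 0
        for name, size in committed:
            if used + size <= capacity:
                done.append(name)
                used += size
            else:
                # The box ended: this work is dropped or carried to the next box.
                leftover.append(name)
        return done, leftover

    committed = [("login fix", 3), ("payment API", 8), ("search index", 5), ("reporting", 8)]
    done, leftover = run_time_box(committed, capacity=13)
    print("done:", done)               # ['login fix', 'payment API']
    print("carry or drop:", leftover)  # ['search index', 'reporting']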

This technique has some benefits:

  1. Deadlines, even arbitrary 2-week time boxes, help keep everyone focused.
  2. Deadlines force the question of prioritization. Not everything will fit in the box.
  3. Small time boxes create a short heartbeat or pulse that is useful for measures of capacity and throughput.
  4. It forms a useful skeleton for the OODA (observe, orient, decide, act) improvement cycle.

There are also some challenges:

  1. Small time boxes demand that you figure out how to break work down into smaller, but still valuable pieces. Many teams find this hard to do.
  2. Small time boxes mean that, sooner or later, some scope inevitably won’t be delivered. How the business handles this scenario says a lot about how the benefits of time boxes are perceived.
  3. Much of the angst around estimation comes from teams struggling to fit work to their limited capacity in ways they didn’t have to before the time box.
  4. It doesn’t work if you can’t break the iron triangle of scope, schedule, and quality. Scope usually has to be compromised in some form or another in order for time boxes to work (it’s kind of what they are based on).

Like so many other things, a time box is useful in the right context, but not all contexts. I’ve seen a few projects where a time box would not work (hardware constraints, legacy mainframe applications, an organization that wasn’t willing to give up the iron triangle, etc.). All too often we force the time box on the team and tell them that they suck if they can’t overcome the challenges. Sometimes that’s true; other times it isn’t. It’s a judgment call. Beware, and don’t let yourself get caught forcing a round peg into a square hole (I’m looking at you, Scrum).


Painting The Spots

February 16, 2019

If you do a little reading about Scrum, one of the first things you learn is the five basic values of Scrum:

  • Courage
  • Focus
  • Respect
  • Commitment
  • Openness

I’d like to examine one of those values that I watched a team wrestle with recently: commitment. These were really great folks. They were bright, energetic, friendly and passionate about the work they were doing. Within the team they took a lot of pride in their ability to “be agile.” They seemed to be doing a lot of good stuff.

However, I was hearing some disconcerting things from other parts of the organization. Other teams characterized this team as flaky. Managers expressed frustration that they didn’t deliver. I wasn’t sure what the story really was. Was it a cultural thing? Was it petty jealousy at work? I really had no idea.

An opportunity came along to do a little coaching with the team in question, so I was eager to find out more. Here’s what I found:

  • Optimism at the start: The team admitted that they were prone to committing to more work than they could handle in a sprint. During sprint planning, they would realize the balance of the work was unequal and that some team members would be left idle, so they would take on more “overflow” work to make sure that everyone on the team had something to do during the sprint. It’s great that they were aware of this problem. Still, this pattern of behavior was leading the team to consistently overload their sprints with more work than they could achieve. The team told me that their typical velocity was 27-29 points per sprint. When I asked them what they had committed to in the last sprint, the answer was: 44 points. When I pointed out the obvious discrepancy, they admitted that they had overflow work from the previous sprint that they felt they had to get done. So then I asked them if they were going to deliver on all 44 points. And the survey says: No.
    The good news? This injury was self-inflicted. The bad news? It didn’t sound like they were entirely convinced they had a serious problem. A pattern of failing to reliably deliver sprint objectives can lead to a crisis of trust with a team’s stakeholders. The stakeholders start to doubt whether or not you will deliver on your sprint commitments. This can be a corrosive influence on the relationship with the very people who are signing the team’s paychecks. The solution? Stop overcommitting. This means that the team has to face some awkward questions about how to balance work within their ranks. These were issues they had been able to hide from by overloading the sprint with work. I got some grudging buy-in at this point, but I could tell that there was still work to be done.
  • Carry-over matters: Since they were overloading the sprint, they were almost guaranteed to have items that were not completed, and those got carried into the next sprint. I took the time to point out that this sort of issue is a problem, but you can skate by when you are simply going from sprint to sprint. However, when you are trying to work to a release plan with multiple teams and multiple sprints, then carry-over is a total deal breaker. If you are working with other teams and you have a pattern of failing to deliver stories, the other teams are very quickly going to learn that you are not a good partner to work with.
  • Transparency: So I asked about this because I wasn’t sure what the problem was. Apparently they were concerned that they were being asked to track their time and their tasks in a time-tracking tool to a level of detail that was making them uncomfortable. As we talked about it, someone said, “I don’t think they trust us…” I could tell that this person was a bit upset by this perceived lack of trust. Of course, I put on my Mr. Sensitivity hat and replied…Of course they don’t trust you! You don’t deliver committed work on time!

Well, I don’t think I said it exactly like that, but it was some polite variation on that theme. Now people were upset, and finally my message was getting through. The product owner for the team gave me loud and vigorous support at this point. You could tell that we had stumbled on a fundamental assumption that people on the team were realizing was dead wrong. The scrum master articulated the invalid assumption for me: the whole point of having a sprint goal is that you can achieve the goal without having to deliver specific stories. You focus on the goal rather than the stories. That is an interesting, but completely incorrect, interpretation of how commitment works. Apparently much of the team was operating with this model in mind. Once I pointed out that other people were depending on those specific stories being delivered, not some abstract goal, you could feel the resistance immediately start to evaporate.

The other thing that was a little disturbing about this situation was the blind spot the team had when working with other teams. They had explained away their inability to deliver as due to their own superior understanding of what it means to ‘be agile.’ No one else understood how awesome they were because the other teams weren’t as agile as they were. Now there is no doubt that they were doing a lot of things right. Like I mentioned in the beginning, they had a lot of good things going on. However, they had managed to paint over the ugly bits of their process without examining them and addressing them. Their ‘agility’ was their excuse for not delivering commitments. This sort of failure is not unusual – I’ve seen it happen in plenty of other teams. Dealing with these sorts of issues is hard for a team to do. Sometimes it takes an outsider to see them and point them out. So be careful about declaring your own agility. Doing so can sometimes hide some ugly spots.

This is What I Do

I provide innovative agile coaching, training, and facilitation to help organizations transform to deliver breakthrough products and performance. I do this by achieving a deep understanding of the business and by enabling the emergence of self-organizing teams and unleashing individual passion.

To learn more about the services that I offer or to arrange for an initial consultation, please see thomasperryllc.com


Test Driven Transformation

January 28, 2019

The introduction of agile methods has brought a wave of innovation in the business world that some might argue has revolutionized thinking about how organizations should be structured and how people work together. However, as it stands today, much of the promise of agile methods is wrapped up in preconfigured frameworks that offer a one-size-fits-all solution for every business challenge that a company may face. This is despite the fact that the modern organization is a highly complex structure, bordering on chaotic, that is often not best served by the application of frameworks. We see this manifested most commonly today in the failures to scale agile methods within large organizations.

The conversation about failure rates in the world of transformation is similar to prior discussions about the failure rates of projects and programs: both are notoriously vague and poorly defined. Almost all of the surveys that you find (PMI, etc.) use an embarrassing amount of anecdotal evidence to back up their assertions. The very definition of failure is usually so broad as to be completely meaningless. So, with that said, I think it’s important that we are careful with any assertions that transformations are failing or succeeding. In fact, my experience is that when we are talking about transformations within organizations, we are working at such a high level that it is never clear what is entirely successful or failing. After all, in a good transformation there is a lot of failure. You experiment, try things out, and find out that some of them don’t work. I’m not sure I trust anyone who tells me that 100% of their efforts are always successful. That tells me that they aren’t really changing much.

When I speak of frameworks, what exactly do I mean? Well, I’m thinking globally. I’m not just talking about the large scaling frameworks like SAFe and LeSS (that’s easy); I’m also pointing the finger at small-scale, team-level frameworks like Scrum and XP. And it’s not that these frameworks can’t work or can’t be useful. In fact, I’ve seen them applied, and applied well. However, more often than not, they aren’t applied well. I know there is bitter and acrimonious debate on this subject. I’ll leave that battle for others and simply say, “We can do better.”

We need to step back and reassess how we engage with organizations from the very earliest stages of the engagement. It’s no longer sufficient to make prescriptive, framework-oriented recommendations and have any reasonable expectation that those proposals will succeed. In fact, I think we may well find they are often more harmful than helpful. Framework-oriented approaches give the false promise that their solutions will solve every problem, and when they fail, they leave the customer having wasted tremendous time and energy without anything to show for it. To make matters worse, consultants implementing such transformations will simply say that the organization didn’t have the right “mindset,” effectively blaming the customer for the failure of the transformation. This allows the consultant to wash their hands of any responsibility for the failure as they move on to the next engagement with yet another set of pre-packaged proposals.

It’s time to bring an end to such thinking and begin to focus on how we can properly understand the problems in the organization before we even begin to make recommendations. Then, as with any prescription for a complex system, we need to apply trial experiments, not broad frameworks, to address the specific problems that we find. Of course, in order to do this well, we need to have reliable means of assessing the health of the system. We need to treat the system as what it truly is: a complex organic structure that lives and breathes, composed of living elements interacting with each other and participating in flows of ingestion, respiration, and value production for customers. This requires a first-principles approach to understanding organizations. We need to understand exactly what organizational health looks like before we can make any kind of decent assessment of the system. To make any recommendations without that sort of understanding is irresponsible.

So what’s our target? Achieving some hypothetical state of agility is not a meaningful or useful target for a transformation. Agility has no objective meaning that a business person finds useful. Instead it is an end state in search of a meaning. In short, it has none.

Alternatively, there are those who propose that we should start from a place of experimentation. That also is an insufficient starting point for working with organizations. A company is not a consultant’s toy to be experimented with, and no one wants to be the subject of experiments. The experimental approach, while well-meaning, signals rather strongly that you not only don’t understand the problem, but also that you have no idea what the real solution is. This experimental approach should be considered by any business owner of integrity as completely useless.

What organizations need is a clear-eyed and objective assessment of what the problem is. It should be the sort of analysis that allows us to measure our effectiveness against that of our competition and our customer market in some meaningful fashion. Furthermore, based on that data, we should know what the prescription for change should be with a very high degree of confidence. Organizations are not looking for your best guess. They want to have confidence that any change or transformation effort has a reasonably demonstrable outcome.

Another way of putting this is to think of it as test driven transformation. We must have some idea of a reasonable set of tests for assessing the relative health of a system. The results of those tests should give us some clue to the different kinds of problems that may afflict the system. They must be quantifiable, and like a doctor, we must have some notion of what the results of the tests imply. It doesn’t mean that we know for sure what the outcome will be, but it also doesn’t mean that we are taking a random shot in the dark. A good doctor will use multiple diagnostic tests to build a picture of the problems with the patient. Based on the results of those tests, the doctor is able to narrow down the treatment to a subset of commonly recommended approaches. Nothing about this is random experimentation, but rather it is a systematic, data-driven approach to understanding the nature of the problem.
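One way to picture “test driven transformation” in code is as a small panel of hypothetical organizational health checks, each with an explicit threshold. Every metric, threshold, and reading below is invented for illustration; the point is only that the checks are quantifiable and repeatable.

    # Hypothetical diagnostic "tests" for organizational health.
    # Every metric, threshold, and reading below is invented for illustration.

    HEALTH_CHECKS = [
        # (description, metric key, predicate that defines "healthy")
        ("lead time under 30 days", "lead_time_days", lambda v: v <= 30),
        ("deploys at least weekly", "deploys_per_month", lambda v: v >= 4),
        ("escaped defects stay low", "escaped_defects", lambda v: v <= 5),
        ("turnover under 15% a year", "annual_turnover_pct", lambda v: v <= 15),
    ]

    def run_diagnostics(readings):
        """Run each check against measured readings and report pass or fail."""
        return [(name, readings[key], healthy(readings[key]))
                for name, key, healthy in HEALTH_CHECKS]

    # A made-up snapshot of one organization.
    readings = {"lead_time_days": 55, "deploys_per_month": 2,
                "escaped_defects": 3, "annual_turnover_pct": 22}

    for name, value, ok in run_diagnostics(readings):
        print(f"{'PASS' if ok else 'FAIL'}  {name} (measured: {value})")

Re-running the same panel after each change is the “test” part: the prescription is judged by whether the readings move, not by whether a framework got installed.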


The Agile Gymnasium

January 12, 2016


I used to be a weightlifter. All through college, and for much of my adult life I have been in gyms exercising in one form or another. I’ve had some modest success. The experience of joining a gym goes along some standard lines. You’ve probably done it yourself. You show up and they take you around the facility and orient you to the equipment. They may even go so far as to give you some very basic training. You get an introduction to circuit training and then they slap you on the butt and tell you to “go be awesome!” You can record your exercise sessions on this little card over here…

That’s pretty much it.

As you might imagine, the success rate with that sort of system is fairly low. A lot of people never come back (although many continue to pay their monthly dues). Those who do come back typically have no idea what modern exercise programming looks like and simply go through the motions: they ride the stair master, do a few sit-ups, and maybe do some curls. That sort of exercise has some marginal utility – you get some small amount of aerobic benefit, but it’s a far cry from exercising a meaningful percentage of most people’s potential.

Most people stop there, but there are a few who have a more ambitious goal in mind. They may be trying to improve their tennis game with better conditioning. They may be looking to build massive pectoral muscles (like most teenage boys). They may be trying to maintain their conditioning in the off season of their sport, perhaps like cycling in the winter. In other words, the purpose of their exercise is to improve their performance in some sort of real world scenario.

I’d like to pause for a moment here. I was listening to a discussion with some folks who owned their own gym and they had an interesting model. It had three tiers to it:

  1. Gym Work: Work in the gym is not like the real world at all. It is where you go to prepare for the real world. The gym is a safe place to work to the point of failure (that’s important) and to learn.
  2. Expeditions: Expeditions are adventures in the real world that are guided by a coach. So it is real world experience, but with someone there to guide you and help if you fail.
  3. The Real World: This is where it all comes together. Ultimately, this is where the training in the Gym and the experience in the expeditions pays off in terms of improved performance.

As a model for the role of training for high performance, I thought this made a lot of sense. There was one more thing that they added to this: They were capturing data on the entire group’s performance and analyzing it in order to provide better training for individuals in the future!

So when you join the gym, you use a training program similar to what others in the gym are using. Your performance on that program is measured, and metrics are gathered across the entire population training in the gym. Then experimental changes are made to the training program, and their benefit (or lack thereof) is measured across the group. Gradually, the training program improves over time. But the training isn’t just tested in the gym. They also track the performance of their members when they go on expeditions. This measures the effectiveness of their training program in the real world.
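The gym’s feedback loop is easy to sketch: measure the whole group on the current program, change the program, and compare. Everything below (names, metric, numbers) is invented, and a real comparison would need far more statistical care; the sketch only shows the shape of the loop.

    # Invented data: a single performance metric per member, measured before
    # and after an experimental change to the training program.
    from statistics import mean

    baseline_program = {"ana": 102, "ben": 95, "chu": 110, "dee": 88, "eli": 99}
    changed_program = {"ana": 109, "ben": 94, "chu": 118, "dee": 92, "eli": 103}

    def group_change(before, after):
        """Average improvement across the whole training population."""
        return mean(after[name] - before[name] for name in before)

    delta = group_change(baseline_program, changed_program)
    print(f"average change across the group: {delta:+.1f}")
    # Keep the change if the group improves; roll it back if it doesn't.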

OK, enough about this gym. What if we could use the same metaphor for the way we train our development teams? Training would be a weekly thing: something where you go in on a periodic basis to firm up your skills. There might be repetitions (pair programming, mob programming, etc.), there might be coaching (coaching circles, etc.), and there might be someone coordinating the training program and measuring performance across the entire group of trainees.

There could be expeditions from time to time: hackathons where people get to try out what they have learned in the gym out in the real world. You know: build a real project, maybe deliver something over a weekend. Test out your mastery of your skills in the real world – with a coach there if you need it.

Then there is game day – the real world. You take what you have learned and join a team. You get to flex your massive coding and collaboration muscles and help build something challenging – something amazing. What a great model for development! But I’m not done yet…

Let’s take this model, we’ll call it the “gymnasium model”, and apply it to something like Certified Scrum Master training. Right now, there are two days of class time and exercises, and then they slap the CSM on you and send the newly minted CSM out into the world. It’s a hauntingly similar scenario to the average person’s experience at the gym: welcome to Scrum, now “go be awesome!” Maybe you do a few sprints, do a few standups, and off you go. That’s about as agile as most people get. Seriously. You get some marginal benefit, but that’s about it. It could be so much more.

But what if we did things differently? What if, instead of signing up for a two-day class, you were to join an Agile gym? Maybe twice each week you go into the gym to “work out”. A coach would give you a workout, perhaps something like this:

1. Dysfunctional standup
2. 3 reps in the coaching dojo
3. 2 sets of mob programming
4. 2 reps of code katas
5. 1 cool-down with a retrospective

That’s just a sample workout. The Agile Gym is a safe place to try out new skills and to push ourselves. The coach would be responsible for measuring the effectiveness of the workout and modifying it over time, experimenting with new techniques and combinations of methods and evaluating the outcomes. Of course, this is just training in the gym. From time to time we are going to need to test our competence in the real world. The coach would provide some guided expeditions (perhaps twice a month). For example:

1. Participating in a Hackathon
2. Participating in a Startup Weekend
3. Participating in a Maker Faire

These are events in the real world that are important places to evaluate the effectiveness of our training in the gym. If our coding skills have improved, then we should do well at these events and build confidence in our ability to use our newfound skills in the real world. Speaking of the real world, hopefully now we would see the agile behaviors that we have practiced being manifested in useful ways in the actual projects that we are running from day to day. Our collaboration skills should be tight, our planning impeccable, our retrospectives revealing. And if we find any weak areas, then it is back to the gym for more training.

In this model, the gym is always open. You actually practice your skills and see improvement. What an amazing way to learn about agile!

It’s not a bad model really. Actually, it’s a really darn good one. Who wants to start a gym?


Ripping the Planning Out of Agile

October 10, 2014


Recently I was following some Twitter conversation about #NoEstimates. I’m no expert, but it seems to be a conversation about the fundamental value, or lack of value, that planning provides to teams. What its advocates seem to be arguing is that planning represents a lot of wasted effort that would be better spent elsewhere.

Fundamentally I would have to agree. I’ve wasted a tremendous amount of time arguing about story points, burning down hours, and calculating person days – all for what seems like very little benefit.

What I would rather do is spend more time talking about the problem we are trying to solve. I really value a deep understanding of the system and the changes that we intend to make to it. If I have that much, then I’m well situated to deliver fast enough that nobody’s going to give me much grief about not having estimates. That’s my theory anyway. The sooner you can deliver working software, the sooner people will shut up about estimates.

But often we never do talk about the problem at anything other than a very superficial level. We spend most of our time trying to size the effort according to some artificial schema that has nothing to do with the work or any real empirical evidence at all.

So what if there were no plan? What if we took Scrum and did everything but the planning? You show up Monday morning and you have no idea what you are going to work on. The team sits down with the customer and talks about their most pressing need. They work out what they need to build, make important design decisions, and coordinate among themselves. At no point are there any hours, or points, or days. What would happen to the cadence of the sprint if we removed the planning? Basically, we would have our daily standup, and then we would review our accomplishments at the end of the sprint and look for ways to improve.

That sounds pretty good actually. Like anything else, I’m sure it has pros and cons:

Pros: Save time and energy otherwise wasted on estimation, and use that time instead for important problem solving work.

Cons: Stakeholders really like estimates. It’s like crack. They start to shake and twitch if you take their estimates away. Not many will even let you talk about it.

It might be worth a try sometime. It would certainly make an interesting experiment for a sprint or two. What if the sprint were focused entirely on the improvement cycle instead?