Custom Hot Rod

May 2, 2019

One of my favorite cars that I ever owned was a 1967 Ford Falcon. I bought it for $600 when I was in college. It really wasn’t much of a car. It was a two-door coupe built on the same frame as the classic Mustang, but without any of those muscle car good looks. It was the kind of car intended to be a hot rod for my grandmother. It had a straight six-cylinder motor combined with an automatic transmission that was best described as apathetic. It had all the fundamentals you need in a car: an engine, brakes, doors that open and close, and a horn that went “Beep! Beep!”

I remember the first time I stepped back and looked at it after I bought it, thinking, “Well, I’m going to have to change this right now.” Of course, being a college student, I had to do everything on the cheap. So I ran down to the auto parts store and bought a bunch of cans of spray paint. I taped up the windows and the headlights and proceeded to paint the entire car bright canary yellow. Sufferin’ succotash! Was that car ever bright! Unfortunately, so was the car parked right next to it (Oops). Then I bought a genuine race car hood scoop. You know, the kind like Mad Max had on the front of his car? Well, I grabbed a drill and bolted that baby right onto the hood (no, not the motor, the hood). That hood scoop didn’t actually do anything but look cool (and act as storage for my friends’ used beer cans). Then I put some fat old used tires and some moon rims on the wheels and I had a genuine, bona fide race machine.

Now granted, I really didn’t change the motor at all. And those beer cans in the hood scoop rattled a lot whenever I turned sharply. After all, it was still just grandma’s skinny little straight six. But you can’t argue that I didn’t have one of the most distinctive looking cars in SE Portland at the time. There was something empowering about being able to make any old cheap modification, large and small, just for the fun of it. So I just kept at it. Somehow I only managed to get pulled over by the police once – and that was for driving while simultaneously eating a very large bag of M&Ms. Guilty as charged: it was an “M&M DUI” – Driving Under the Influence of M&Ms. Yes indeed, those were wild days.

I still like to customize things. Whether it’s cars, boats, or my house, I just can’t seem to keep things stock. I guess I need to tweak things a bit to make them mine. Perhaps I need to fine-tune things until they fit just right? And so it goes with some of the processes that we use. I don’t think I’ve ever done Scrum the same way twice. And you can rest assured that I’ve never been able to implement a framework without bolting a metaphorical hood scoop on it or otherwise changing it to better fit the needs of the teams.

I don’t really understand how people can refer to any framework as strictly “cookie cutter” or standardized. That just doesn’t really match with my experience. You see, we always have to customize things. No matter how dogmatic we may be, there are different issues and impediments that beg for us to make small changes. And that’s OK; we need to be able to change things a little bit here and there. There are three reasons why I believe the customization of frameworks is important.

First, sometimes when you look closely at those frameworks you will find that there are multiple practices that can be used in the same place. I’m thinking of the myriad different ways that we can facilitate planning meetings, for example. So you have a choice: you can use the stock practice as prescribed in the framework, or you can use a custom variety of your own. It’s kind of like customizing my old Ford Falcon and turning it into a hot rod.

Second, frameworks also have gaps. Again, close inspection of frameworks will reveal gaps in the recommended processes and practices. Not everything is completely spelled out; that’s why they call it a framework to begin with (there are bits that are intentionally left blank). It’s supposed to be a skeleton upon which you hang your organization’s processes. The processes that are already described are what many might call essential, but they are by no means all of the processes that you can have. You can certainly add more and you can certainly innovate in the way that those additional processes are integrated or combined with the framework. If you want to hang a stained glass window in the rear window of my Falcon, be my guest.

Third, frameworks are intended to serve as the foundation or soil within which the seeds of innovation can take root and grow. Most agile frameworks are based on the underlying assumption that the framework is the starting point from which you will evolve. Over time you will either hang more processes off that skeleton or you will change the skeleton itself to better suit your business and technology domain.

It’s only through customizing our frameworks using these tools that we achieve remarkable outcomes. Customization provides alternatives to stock practices that may grow stale over time. Customization can also help us to fill in the gaps in the process that were never anticipated when the framework was created. And finally, customization serves as the seeds of innovation that we plant in our frameworks in the hope of developing exciting new ways of working. We’re here to build hot rods, not clunkers, so it’s time to customize our frameworks.


Different Roots, Same Tree

May 1, 2019

Recently, at conferences, in social media, and even informal gatherings, I’ve heard statements along the lines of “[X] scaling approach is absolutely not agile for [Y] reason.” I use the word approach to avoid the question of whether we are talking about a framework or a methodology. I really don’t care about that distinction and much of the subtlety that lies there is beyond me.

Admittedly, there is a long and rich history of critiquing each other’s ideas in the agile community. Some examples include:

  1. XP vs Scrum
  2. Kanban vs Scrum
  3. Lean vs Agile

To my knowledge, none of these debates has ever really reached any sort of meaningful conclusion. In fact, the more I watch (and even sometimes participate in) these debates, the more I feel like they are mostly a reflection of a sort of core philosophy. What I mean is that there seem to be some common starting points or assumptions that characterize how people approach these debates.

Let me give you an example. Let’s take SenseMaking and the Cynefin framework. We can use a tool like Cynefin to help us navigate important decisions based on the assessment of contextual complexity. The beauty of this system is that you can use it anywhere. It doesn’t matter whether you are agile or not. Cynefin is simply used to help assess and navigate the simple, complicated, complex, and chaotic domains. The decisions you make within each context are what lead you to healthy outcomes. With Cynefin, you can start with absolutely no framework or required processes at all. In essence, you are building from scratch, and evolving only as necessary. Frankly, it’s a beautiful and elegant system. Conceptually, it’s founded on the notion of sensing your environment and making decisions based on what you uncover. It’s a radically empirical process that starts wherever you may be. There is no default starting point for applying Cynefin. You simply use it to help you grow from wherever you are.

The interesting thing is that Cynefin isn’t the only framework that uses this “start wherever you are” approach. Kanban is also very minimalist in its rules. In fact, Kanban usually starts by simply making the existing process visible. You don’t need to change your process at all; just make it so that everyone can see it. Starting from there, the Kanban approach recommends that we consider applying WIP limits and working to understand the constraints of the flow through the system. There are no pre-defined required processes. You don’t have to do standup. You don’t have to hold retrospectives. You basically start Kanban from scratch and add those elements wherever they make the most sense. You build your agile process from scratch based on the feedback you get from making the process visible. Again, it’s a very elegant and powerful system that’s founded on the notion of visibility (or transparency) and allows you to evolve however makes sense for your environment.
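If it helps to see that in something more concrete than prose, here is a minimal sketch (my own illustration in Python, not anything Kanban itself prescribes) of that starting point: mirror whatever stages already exist so everyone can see them, and only later bolt on optional WIP limits. The stage names and limits below are made up.

    from collections import OrderedDict

    class KanbanBoard:
        """Minimal board: visualize the existing stages first, add WIP limits later."""

        def __init__(self, stages, wip_limits=None):
            # Start by mirroring whatever process already exists; no changes required.
            self.columns = OrderedDict((stage, []) for stage in stages)
            # WIP limits are optional; they can be added once the flow is visible.
            self.wip_limits = wip_limits or {}

        def add(self, stage, item):
            limit = self.wip_limits.get(stage)
            if limit is not None and len(self.columns[stage]) >= limit:
                raise RuntimeError(f"WIP limit of {limit} reached in '{stage}'")
            self.columns[stage].append(item)

        def move(self, item, src, dst):
            self.columns[src].remove(item)
            self.add(dst, item)

        def show(self):
            # The whole point: make the current state of the work visible to everyone.
            for stage, items in self.columns.items():
                print(f"{stage:<12} ({len(items)}): {', '.join(items) or '-'}")

    # Illustrative usage: stages copied straight from an existing process.
    board = KanbanBoard(["Requested", "In Progress", "Review", "Done"],
                        wip_limits={"In Progress": 3})
    board.add("Requested", "story-42")
    board.move("story-42", "Requested", "In Progress")
    board.show()

Notice that nothing in the sketch tells you what your process should be; it only makes whatever you already do visible, which is the whole minimalist point.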

So I see both Cynefin and Kanban as sharing some important conceptual roots (while each is unique). Both methods provide us feedback to help make good decisions in whatever context we may be working. Both also make absolutely no assumptions about what the starting point may be. You could start with a very rigid, waterfall-style process. Alternatively, you could be using Scrum. Neither Cynefin nor Kanban cares about where you start. In fact, what they really care about is not blindly applying process without some sort of feedback. So I think of Cynefin and Kanban as the “build it from scratch” or “consider context first” methods. Actually, I really like to think of these as the Buckaroo Banzai methods, you know, “Wherever you go…there you are.”

Now this also implies that you are really committed to this learning journey, with all of its joys, discovery, false starts and dead ends. Building your process from the ground up is not for the tentative or the faint of heart. Why do we have to go through all of this learning pain and discovery, when others have already found practices that seem to work? Well, the argument, and it is a very valid one, is that you need to discover what works for you in your context. Trying to apply solutions that may have worked well in other places often leads to disappointment. In the Toyota way, Taiichi Ohno warns us of exactly this. If you want to build a world-class process, you can’t rent it. You need to build it and find out what works for you.

But what if we really could rent our process? Wouldn’t that save a lot of time and wasted effort? Let’s face it, this is business, not rocket surgery. We can’t all be so unique that we have to waste time rediscovering the wheel. Let’s take a look at another very large branch on the agile tree: approaches based on starting with a predefined set of practices or processes like Scrum, or XP.

Scrum is based on a very fundamental set of practices that creates the infrastructure (or framework or method) for continuous delivery and improvement of small units of work. Depending on who you ask, XP and Scrum came into being around the same time. As I remember it, XP was the first to really land hard on a required set of practices that defined the process as truly being XP. These twelve practices were non-negotiable. You had to do them, and if you didn’t, well, then you weren’t doing XP. You’re probably familiar with many of these practices. They are foundational practices like pair programming, continuous integration, test driven development, and so on. Part of the reason for requiring these practices was that they supported each other. It’s hard to do continuous integration without some form of test driven development. The two together are kind of a magical combination – they help reinforce each other. Often, what we see happen in the real world is that teams struggle with and perhaps drop practices. When that happens, keeping the other XP practices working gets harder.

Scrum does something similar, but different. Scrum has a default set of non-technical practices that are required. You must have sprint planning, daily stand-ups, and sprint retrospectives. That’s non-negotiable. To do otherwise is to do “Scrum, but…” and to be mocked mercilessly by your peers. Both Scrum and XP could be loosely described as having a default set of “best practices” that are required in order to use the framework to its best advantage. Now I personally hate the term “best practice,” but that’s exactly what they are doing. We’ve identified the best, minimal set of practices that you must use as a starting point, no matter what your context is. It’s a package deal and we defer to the wisdom in the package. Unlike Cynefin or Kanban, you have a very well-defined starting point, and you aren’t given the option to do differently. Now, both XP and Scrum are based on empirical process control (at least in theory) and they both claim that you can evolve and change the framework as you learn to use it. However, in practice, I’ve rarely seen it actually happen (Spotify being one very notable exception). When you start with a predefined set of practices, it seems harder to evolve to anything else. Well, I guess Darwin never said evolution was easy.

So we have two very different schools of thought about how to think about approaching agile:

  1. Start “where you are” and use a decision making model or visibility model to evolve to where you need to be (Cynefin, Kanban).
  2. Start with a fixed “starter set” of best practices and then evolve to where you need to be (Scrum, XP).

I think that these two philosophies or approaches explain a lot of the conflict I see in the agile community today. The “start where you are” folks seem to feel very strongly that “starter set” approaches run the risk of being applied in a cookie-cutter fashion and often incorrectly. To them, these approaches are likely to lead to poor outcomes and are therefore to be avoided or even wrong-headed.

On the other hand, the folks who take the “starter set” approach are appalled by the waste involved in the “start where you are” engagements. Why in the world would you waste your customer’s precious time and energy on rediscovering the wheel when you already have a very capable set of practices to start with? It’s folly! These practices are tried and tested and there are very few exceptions. To ask the customer to invent their process on their own is just a high-risk recipe for disaster! Therefore, to do anything other than the “starter set” approach is to be avoided or…well, you get the picture.

I think the argument only gets amplified when we start to include scaling frameworks in the conversation. As I look more and more closely at the scaling frameworks, I start to think that I see their roots in each of these different approaches. For example, SAFe has its roots firmly in the “starter set” camp. SAFe is most definitely a framework of prescribed “best practices” that are intended to be applied universally. There is some allowance made for the size and scale of the organization, but the gist is that everyone does SAFe. On the other hand, there is LeSS, which seems to share its roots much more closely with the “start where you are” approaches used by Cynefin and Kanban. In LeSS there is more emphasis on using tools like systems diagrams and root cause analysis to discover the right means to change the system for scaling. So LeSS feels to me like it leans a bit more toward the “start where you are” approaches.

Of course, the adherents of each approach think the others are nuts. I think some of that is due to how each sees the world. They are coming from very different starting points. I’m not sure they’re ever going to agree with each other. Fortunately, I’ve seen both approaches work well for people. And I’ve also seen them both fail miserably. Often it had little to do with the frameworks, and a lot to do with the people. So I guess we count ourselves lucky and try to remain calm when they point quivering fingers at each other and proclaim loudly that the other is “Not Agile”.

Of course they aren’t.

That’s OK.


Letting them build it

February 27, 2019

Agile methods like Scrum and XP are very exciting, especially when you are first introduced to them. There is something very common-sense about the ideas in them that seems to resonate with a lot of people. I know it was that way for me. I’d looked at a lot of different project management methods before settling on XP (thank you, Steve McConnell). A lot of those methods looked interesting, but XP was the first one that just made sense. For a young project manager looking for a new way to do things, it was an easy choice.

Now when you look closely at a method like XP you learn very quickly that it is actually a collection of practices, many of which have been around for a very long time. The thing that makes XP work is the way that this particular set of practices, or, as I like to think of it, this big agile bag full of cats, works together. For instance, iterations by themselves have been around for a very long time under a different name: time boxes. Pair programming, on the other hand, was a relatively new innovation as far as I know (although not entirely unheard of). And while continuous integration had actually been around in some form or another for a while, it was certainly best articulated and demonstrated by the proponents of XP. On their own I would argue that each of these ideas had plenty of merit, but the real magic happens when you combine them together. Each of these practices, and in XP there were roughly 13 of them, complements and overlaps one or more other practices in the set. So as a whole, you have a system of related ideas that have some redundancy and interconnection. You can see this in Ron Jeffries’ diagram of XP.

Now this gives you a package offering of interrelated ideas that many, including all XP practitioners I’ve ever met, say you need to adopt as a whole. You can’t just pick and choose the bits you like and expect to get great results. Why not? Well, I would go back to the redundancy and interrelated ideas. Let’s suppose for just a minute that you adopted all 13 XP practices, but you found that continuous integration for one reason or another was “too hard” or “not a good cultural fit” or for some other reason wasn’t going to work for your team. What might happen? Well, in all likelihood, in the short term you might not see any immediate effect. In fact, you might find that the team goes a little faster because they aren’t struggling to build continuous integration into their process. But hang on, we’re not done yet. You see there are practices that depend on continuous integration in order to work. For example, test driven development (TDD) and continuous refactoring. TDD relies on CI to give the developers quick feedback on their tests. That can’t happen without CI. So, developers are going to lose feedback on their tests, which means they aren’t going to get as much value from doing the tests in advance…and therefore they aren’t likely to keep doing TDD. Quality may start to suffer. And if they don’t have CI and TDD, then they don’t have the safety net of tests that they need to do continuous refactoring…so they are going to be less likely to try refactoring because it feels too risky.  By removing CI we have undermined quality and the resilience of the system we are developing (because we’re no longer refactoring). 
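Here is a toy way to see that cascade in code, using the dependencies I just described (TDD leaning on CI, refactoring leaning on TDD). The edges are my reading of the argument above, not an official XP diagram.

    # Toy dependency graph: each practice maps to the practices it leans on.
    # The edges reflect the argument above, not an official XP definition.
    SUPPORTS = {
        "continuous_integration": set(),
        "test_driven_development": {"continuous_integration"},
        "continuous_refactoring": {"test_driven_development"},
        "pair_programming": set(),
    }

    def weakened_by_dropping(dropped, supports=SUPPORTS):
        """Return every practice that directly or transitively leaned on the dropped one."""
        weakened = set()
        changed = True
        while changed:
            changed = False
            for practice, deps in supports.items():
                if practice == dropped or practice in weakened:
                    continue
                # A practice is undermined if anything it leans on is gone or already weakened.
                if deps & ({dropped} | weakened):
                    weakened.add(practice)
                    changed = True
        return weakened

    print(weakened_by_dropping("continuous_integration"))
    # {'test_driven_development', 'continuous_refactoring'} (order may vary)

Dropping CI doesn’t delete the other practices; it quietly undercuts everything downstream of it, which is the gradual erosion described below.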

The impact of removing practices, especially in a pre-packaged set of methods, has some rather insidious consequences. Things don’t immediately fall apart. Instead, there is a gradual erosion of benefits that causes a cascade of related and also seemingly unrelated problems. You may still be getting some benefit from the remaining XP practices, but the system is now much more fragile and less resilient. You have removed some of the reinforcing mechanisms from the method that helped ensure it was robust. When the team encounters a crisis, some sort of emergency in production where they need rapid turnaround and depend on fast feedback, they aren’t prepared. They are slow to respond, introduce more defects, and are likely to struggle. At which point someone is liable to point out that this process sucks. Congratulations! Of course it does; you made it suck.

This is the reason that adherents of pre-packaged methods tend to sound so religious about the unequivocal adoption of all their practices. You have to adopt all the practices, otherwise you aren’t doing XP, Scrum, Kanban, and so on. I want to pause for a moment, because I don’t think that’s the end of the story. 

If we were to stop for a moment and look at development and management practices (agile and otherwise), we might find practices with enough similarities that we could group them together. Testing and QA practices like TDD, BDD, and others do share many similarities. Estimation practices like story points, ideal developer days, and others also share similarities. My point is that for any given meme or idea that we have in XP, or in agile in general, there are multiple supporting practices that may fit. In addition, some practices are sophisticated enough that adoption can be measured by degree rather than in absolutes (we are 30% toward CI rather than all or nothing). In short, there are multiple options for many of the key elements of popular frameworks, and even within many of those options there is a matter of the degree of adoption. After all, as so many agile advocates often say, it’s a journey, not a destination. Therefore, if I’m 30% of the way along the path, that must be worth something.

All of this is to say that we can substitute our own practices with some judicious caution. We’re allowed to do that, despite what the more religious might say. In fact, we can mix and match to find the elements that work for us. Now this is really hanging our toes out on the radical edge. Ivar Jacobson has something he calls essential methods. Basically, it is a catalog of development methods that you can combine and recombine to build your own framework. Now, you can still screw up. Remember that the reason that frameworks like XP and Scrum have been successful is that they have concepts that are interlocking and support each other. The DIY approach is much riskier (practices may or may not support each other), but for some groups that may be the best way to go.

The important thing is to understand why these frameworks work as well as they do. They are composed of a series of practices that support each other, making them robust in the face of a world full of disruption and challenges. You mess with them at your own risk. Or…you build your own. Just know that you need to understand what you are building. If you do it poorly, it very likely won’t work.


Time Machine

February 26, 2019

OK, Mr. Peabody, where are we going today?

Well, Sherman, any time I explain what Scrum or XP is, I start with time boxes. The time box method has been around a really long time. The earliest record I can find in a casual search is their use at DuPont in the 1980s. I suspect that time boxes are much older than that. The time box basically applies a constraint to the system. It creates an arbitrary start and end date, usually on the smaller side. You commit to a fixed amount of work, and when the end of the time box is reached you are done, no matter what the completion state of the work. Work that is complete is counted as done within the time box; work that remains unfinished is either dropped from scope or carried into the next time box.
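A minimal sketch of that mechanic, with point values, item names, and the carry-over policy purely for illustration:

    def plan_time_box(backlog, capacity):
        """Fill a time box up to a fixed capacity; everything else stays in the backlog.

        backlog is a list of (name, size) pairs in priority order; capacity is the
        amount of work the team commits to for this box (units are illustrative).
        """
        planned, remaining, used = [], [], 0
        for name, size in backlog:
            if used + size <= capacity:
                planned.append((name, size))
                used += size
            else:
                remaining.append((name, size))
        return planned, remaining

    def close_time_box(planned, finished_names):
        """When the end date arrives you are done, ready or not; unfinished work is carried or dropped."""
        done = [item for item in planned if item[0] in finished_names]
        carry_over = [item for item in planned if item[0] not in finished_names]
        return done, carry_over

    backlog = [("login", 5), ("search", 8), ("reports", 13), ("export", 3)]
    planned, remaining = plan_time_box(backlog, capacity=20)   # plans login, search, export
    done, carry_over = close_time_box(planned, {"login", "export"})
    print("done:", done, "carry over:", carry_over, "never planned:", remaining)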

This technique has some benefits:

  1. Deadlines, even arbitrary 2 week time boxes, help keep everyone focused.
  2. Deadlines force the question of prioritization. Not everything will fit in the box.
  3. Small time boxes create a short heartbeat or pulse that is useful for measures of capacity and throughput.
  4. It forms a useful skeleton for the OODA improvement cycle.

There are also some challenges:

  1. Small time boxes demand that you figure out how to break work down into smaller, but still valuable pieces. Many teams find this hard to do.
  2. Small time boxes mean that, sooner or later, it is almost inevitable that some scope won’t be delivered. How the business manages this scenario says a lot about how the benefits of time boxes are perceived.
  3. Much of the angst of estimation is due primarily to the fact that teams are struggling to fit work to their limited capacity in ways they didn’t have to prior to the time box.
  4. It doesn’t work if you can’t break the iron triangle of scope, schedule, and quality. Scope usually has to be compromised in some form or another in order for time boxes to work (it’s kind of what they are based on).

Like so many other things, a time box is useful in the right context, but not all contexts. I’ve seen a few projects where a time box would not work (hardware constraints, legacy mainframe applications, an organization that wasn’t willing to give up the iron triangle, etc.). All too often we force the time box on the team and tell them that they suck if they can’t overcome the challenges. Sometimes that’s true; other times it isn’t. It’s a judgement call. Beware, and don’t let yourself get caught forcing a round peg into a square hole (I’m looking at you, Scrum).


Painting The Spots

February 16, 2019

If you do a little reading about Scrum, one of the first things you learn is the five basic values of Scrum:

  • Courage
  • Focus
  • Respect
  • Commitment
  • Openness

I’d like to examine one of those values that I watched a team wrestle with recently: commitment. These were really great folks. They were bright, energetic, friendly and passionate about the work they were doing. Within the team they took a lot of pride in their ability to “be agile.” They seemed to be doing a lot of good stuff.

However, I was hearing some disconcerting things from other parts of the organization. Other teams characterized this team as flakey. Managers expressed frustration that they didn’t deliver. I wasn’t sure what the story really was. Was it a cultural thing? Was it petty jealousy at work? I really had no idea.

An opportunity came along to do a little coaching with the team in question, so I was eager to find out more. Here’s what I found:

  • Optimism at the start: So the team said that they were prone to overcommitting to the amount of work they could handle in a sprint. During sprint planning, they would realize the balance of the work was unequal and that there would be team members left idle. So they would take on more “overflow” work to make sure that everyone on the team had something to do during the sprint. It’s great that they were aware of this problem. This pattern of behavior was leading the team to consistently overload their sprints with more work than they could achieve. The team told me that their typical velocity was 27-29 points per sprint. When I asked them what they had committed to in the last sprint, the answer was: 44 points. When I pointed out the obvious discrepancy, they admitted that they had overflow work from the previous sprint that they felt they had to get done. So then I asked them if they were going to deliver on all 44 points. And the survey says: No.
    The good news? This injury was self-inflicted. The bad news? It didn’t sound like they were entirely convinced they had a serious problem. A pattern of failing to reliably deliver sprint objectives can lead to a crisis of trust with a team’s stakeholders. The stakeholders start to doubt whether or not you will deliver on your sprint commitments. This can be a corrosive influence on the relationship with the very people who are signing the team’s paychecks. The solution? Stop overcommitting. This means that the team has to face some awkward issues about how to manage balancing work within their ranks. These are issues they were able to hide from by overloading the team with work. I got some grudging buy-in at this point, but I could tell that there was still work to be done.
  • Carry-over matter: Since they are overloading the sprint, they are almost guaranteed to have items that are not completed, and those get carried into the next sprint. I took the time to point out that this sort of issue is a problem, but you can skate by when you are simply going from sprint to sprint. However, when you are trying to work to a release plan with multiple teams and multiple sprints, then carry-over is a total deal breaker. If you are working with other teams and you have a pattern of failing to deliver stories, the other teams are very quickly going to learn that you are not a good partner to work with.
  • Transparency: So I asked about this because I wasn’t sure what the problem was. Apparently they were concerned that they were being asked to track their time and their tasks in a time tracking tool to a level of detail that was making them uncomfortable. As we talked about it someone said, “I don’t think they trust us…” I could tell that this person was a bit upset by this perceived lack of trust. Of course I put on my Mr. Sensitivity hat and replied…Of course they don’t trust you! You don’t deliver committed work on time!

Well, I don’t think I said it exactly like that, but it was some polite variation on that theme. Now people were upset, and finally my message was getting through. The product owner for the team gave me loud and vigorous support at this point. You could tell that we had stumbled on a fundamental assumption that people on the team were realizing was dead wrong. The scrum master articulated the invalid assumption for me: the whole point of having a sprint goal is that you can achieve the goal without having to deliver specific stories. You focus on the goal rather than the stories. That is an interesting, but completely incorrect, interpretation of how commitment works. Apparently much of the team was operating with this model in mind. Once I pointed out that other people were depending on those specific stories being delivered, not some abstract goal, you could feel the resistance immediately start to evaporate.

The other thing that was a little disturbing about this situation is the blind spot that the team had when working with other teams. They had explained away their inability to deliver as due to their own superior understanding of what it means to ‘be agile.’ No one else understood how awesome they were because the other teams weren’t as agile as they were. Now there is no doubt that they were doing a lot of things right. Like I mentioned in the beginning, they had a lot of good things going on. However, they had managed to paint over the ugly bits of their process without examining them and addressing them. Their ‘agility’ was their excuse for not delivering commitments. This sort of failure is not unusual – I’ve seen it happen in plenty of other teams. Dealing with these sorts of issues is hard for a team to do. Sometimes it takes an outsider to see them and point them out. So be careful about declaring your own agility. Doing so can sometimes hide some ugly spots.



It’s All About Flow

February 14, 2019

OK, please forgive me, but I’m going to geek out for a bit here on some Thermodynamics of Emotion stuff. Furthermore, I’m going to try and draw an analogy between a law of thermodynamics and the business world. So, hold on to your hats, here we go…

In Design in Nature, Bejan states the Constructal Law as:

“For a finite-size flow system to persist in time (to live), its configuration must evolve in such a way that it provides easier access to the currents that flow through it.”

-Bejan, Adrian. Design in Nature

This is to say that for any living system there is a design or landscape that must change over time such that the flow through the system improves. The design can be anything as primitive as the branching of streams, the vascularity of the arteries and veins in your body, or perhaps the process that you use to do work at the office.

In business, process is the design that we use to structure the way work flows through our organizations. As such, the process is not arbitrary, but intentional. If it improves the flow of work, then it’s a useful process; if it degrades the flow of work, then it’s not. By improving the flow of work, we mean that it must configure the landscape or domain such that the work flows more easily (read: with less resistance) through the system. That also implies that the access to that work is improved (it takes less energy to find it).

According to the Constructal Law, processes that allow work to remain hidden interfere with flow. Processes that constrain work so that its flow can’t change or evolve also interfere with flow. Given these assumptions, old-school, plan-driven methods with rigidly defined processes are counter to healthy flow and are less likely to succeed than processes that are dynamic and enable transparency of work in the organization.

In fact, to carry this one step further: what we have been witnessing over the last two to three decades is the evolution of processes in the business world. Rigid, plan-driven processes are dying off, as the Constructal Law would predict, in the face of new dynamic processes like agile. Any process, even a somewhat imperfect one, that improves flow and transparency of work in the system is going to be more successful (more efficient conversion of energy to work) than a more rigid process.

Of course, agile too will one day be replaced by a process that successfully enables better flow. What that next process is remains to be seen.


Team Emotional Flow

February 12, 2019

The morning begins with everyone arriving at the office and gathering in the kitchen. The whole team works together; there are no remote workers. As folks grab coffee and maybe toast a bagel, there is casual banter about the game the night before, the kids’ performance at a school play, and plans for an upcoming barbecue.

When the last member of the team arrives, they all gather round into a circle looking at one another. There are a few mumbled “good mornings” and one member starts off with, “I’m feeling excited, we are going to get to integration test the system for the first time today. I think the plan is to start around 10:00.”

There are a few raised eyebrows and then a question or two as folks sync up. The next person in the circle says, “I’m feeling frustrated this morning. The work on the UI hit a stumbling block last night, and I hate leaving work with an unresolved problem.” Someone else chimes in with, “Me too! Let’s pull the mob together and see if more brains can help us nail this problem this morning.” There are general mumbles of assent from the group and the process continues with the next person, “I’m feeling glad that we’re making progress. I think I know what is causing that problem, so I’m looking forward to sharing a potential solution.”

And so it goes, each after the other. The format is relatively loose: You always start with sharing a feeling, then follow up with any resistance you may be encountering. The emphasis is on keeping the interaction casual and not forcing anything. There is no pre-defined leader. Everyone has agreed that this kind of sharing is important and they support it as needed.

At the end of the meeting, everyone updates their feeling status on a whiteboard. They track their feelings on a daily basis so that they can see trends in their overall team mood. They work together as closely as possible. They use mob programming to do their work together whenever possible. The focus is on sharing their experience together.

One tool they use to keep themselves aware of the emotional flow of the team is frequent use of the “check-in”. The check-in is taken from Jim McCarthy’s Core Protocols. The idea is to declare your emotional state at the beginning of significant meetings and interactions. This helps to make emotion visible to everyone and gives important context to the people you work with. You simply state your current emotion: I feel Mad, Sad, Glad, etc. It lets everyone know where you are at and helps the group to synchronize emotionally. It doesn’t have to be rigid and highly formalized. I think that depends on the character of the team. I personally prefer a casual but disciplined approach (always do it, but let the language be natural and informal rather than highly structured and rigid).
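If you wanted something a little more durable than a whiteboard, a check-in log could be as simple as the sketch below. The emotion list and the daily tally are my own illustration, not part of the Core Protocols themselves.

    from collections import Counter
    from datetime import date

    # Emotions accepted at check-in; treat this list as illustrative.
    EMOTIONS = {"mad", "sad", "glad", "afraid"}

    check_ins = []  # (day, person, emotion)

    def check_in(person, emotion, day=None):
        emotion = emotion.lower()
        if emotion not in EMOTIONS:
            raise ValueError(f"unknown emotion: {emotion}")
        check_ins.append((day or date.today(), person, emotion))

    def mood_for_day(day):
        """A rough daily team-mood tally, standing in for the whiteboard."""
        return Counter(emotion for d, _, emotion in check_ins if d == day)

    check_in("sam", "glad")
    check_in("alex", "mad")
    check_in("jo", "glad")
    print(mood_for_day(date.today()))   # e.g. Counter({'glad': 2, 'mad': 1})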

I offer this as an alternative to the traditional standup. We don’t track work, we track feeling. We focus on achieving emotional flow. We don’t use a rigid system of pre-defined questions that must be answered. We flow.


Building a Scaled Agile Framework for Dummies

February 10, 2019

Scaled Agile Frameworks like SAFe are all the rage these days. You can go out right now and get training, certification, and a shave from a bevy of consultants for a mere two grand per head (not really sure about the shave part). That’s a perfectly legitimate approach. However, here’s a dirty little secret: anyone can do it. Here’s an example of one that I made a few years ago.

I had taken a look at SAFe; there was a lot that I liked, and there were some things that just didn’t seem to fit our context. With those qualifications in mind, I decided I could make my own version. I got out my notepad and my colored sharpies and went to town. I knew that I liked the three-layer model, but I found that a lot of the SAFe Big Picture had too much complexity in it. So you can see that in the first level, I simplified things quite considerably. The second or program level was also quite simple. I mixed in some things, like agile chartering, which I felt would be beneficial and were not found in the SAFe diagram. What about the third (Portfolio) level? Well, at the time I really didn’t have a clear idea how that would look. It was at this level that I was looking to integrate the model with our existing PMO practices – which in hindsight was probably a mistake (hey, make your own model and you make your own mistakes). So then I started to iterate.

Now I was starting to think about how things related between the three layers. Those interactions between the team level, the program level, and the portfolio level seemed to be very important. I was also experimenting with different ways of visualizing the processes on each level (with what I must confess are varying degrees of success). My color repertoire had expanded too.

Finally I started to look at the processes as a series of prescriptive steps that I needed to be able to document and describe to people. You can see that I added numbers and then I took each of those interlocking blocks and documented them. I made poster sized copies and put them on the wall outside my office with a sharpie hanging next to them. The request was simple – please change it to fit your needs. After a few days, I had more feedback and iterated from there.

Building your own scaling model isn’t for everyone. However, it’s not rocket science either. If you have a modest understanding of your own business domain, AND you understand the basics of the agile frameworks, you have everything necessary to build your own scaling framework. I’m sure there will be folks who are appalled by the arrogance of doing something like this, but personally, I think we all should feel free to make our own Big Picture. When we can customize our processes in ways that work best for us, I think we win. We learn along the way and we don’t inherit a bunch of cruft from someone else’s framework.


Test Driven Transformation

January 28, 2019

The introduction of agile methods has brought a wave of innovation in the business world that some might argue has revolutionized thinking about how organizations should be structured and how people work together. However, as it stands today, much of the promise of agile methods is wrapped up in preconfigured frameworks that offer a one-size-fits-all solution for every business challenge that a company may face. This is despite the fact that the modern organization is a highly complex structure, bordering on chaotic, that is often not best served by the application of frameworks. We see this manifested most commonly today in the failures to scale agile methods within large organizations.

The conversation about failure rates in the world of transformation is similar to prior discussions about the failure rates of projects and programs: both are notoriously vague and poorly defined. Almost all of the surveys that you find (PMI, etc.) use an embarrassing amount of anecdotal evidence to back up their assertions. The very definition of failure is usually so broad as to be completely meaningless. So, with that said, I think it’s important that we are careful with any assertions that transformations are failing or succeeding. In fact, my experience is that when we are talking about transformations within organizations, we are working at such a high level that it is never clear what is entirely succeeding or failing. After all, in a good transformation, there is a lot of failure. You experiment, try things out, and find out that they don’t work. I’m not sure I trust anyone who tells me that 100% of their efforts are always successful. That tells me that they aren’t really changing much.

When I speak of frameworks, what exactly do I mean? Well, I’m thinking globally. I’m not just talking about those large scaling frameworks like SAFe and LeSS (that’s easy), I’m also pointing the finger at small scale, team level frameworks like Scrum and XP. And it’s not that these frameworks can’t work or can’t be useful. In fact, I’ve seen them applied and applied well. However, more often than not, they aren’t applied well. I know there is bitter and acrimonious debate on this subject. I’ll leave that battle for others and simply say, “We can do better.”

We need to step back and reassess how we engage with organizations from the very earliest stages of the engagement. It’s no longer sufficient to make prescriptive, framework-oriented recommendations and have any reasonable expectation of those proposals having any kind of success. In fact, I think we may well find they are often more harmful than helpful. Framework-oriented approaches give the false promise that their solutions will solve every problem, and when they fail, they leave the customer having wasted tremendous time and energy without anything to show for it. To make matters worse, consultants implementing such transformations will simply say that the organization didn’t have the right “mindset”, effectively blaming the customer for the failure of the transformation. This allows the consultant to wash their own hands of any responsibility for the failure as they move on to the next engagement with yet another set of pre-packaged proposals.

It’s time that we brought an end to such thinking and began to focus on how we can properly understand the problems in the organization before we even begin to make recommendations. Then, as with any prescription for a complex system, we need to apply trial experiments, not broad frameworks, to address the specific problems that we find. Of course, in order to do this well, we need to have reliable means of assessing the health of the system. We need to treat the system as what it truly is: a complex organic structure that lives and breathes, composed of living elements interacting with each other and participating in flows of ingestion, respiration, and value production for customers. This requires a first-principles approach to understanding organizations. We need to understand exactly what organizational health looks like before we can make any kind of decent assessment of the system. To make any recommendations without that sort of understanding is irresponsible.

So what’s our target? Achieving some hypothetical state of agility is not a meaningful or useful target for a transformation. Agility has no objective meaning that a business person finds useful. Instead it is an end state in search of a meaning. In short, it has none.

Alternatively, there are those who propose that we should start from a place of experimentation. That also is an insufficient starting point for working with organizations. A company is not a consultant’s toy to be experimented with. And no one wants to be the subject of experiments. The experimental approach, while well meaning, signals rather strongly that you not only don’t understand the problem, but also that you have no idea what the real solution is. This experimental approach should be considered by any business owner of integrity as completely useless.

What organizations need is a clear-eyed and objective assessment of what the problem is. It should be the sort of analysis that allows us to measure our effectiveness against that of our competition and our customer market in some meaningful fashion. Furthermore, based on that data, we should know what the prescription for change should be with a very high degree of confidence. Organizations are not looking for your best guess. They want to have confidence that any change or transformation effort has some reasonably provable outcome.

Another way of putting this is to think of it as test driven transformation. We must have some idea of a reasonable set of tests for assessing the relative health of a system. The results of those tests should give us some clue to the different kinds of problems that may afflict the system. They must be quantifiable, and like a doctor, we must have some notion of what the results of the tests imply. It doesn’t mean that we know for sure what the outcome will be, but it also doesn’t mean that we are taking a random shot in the dark. A good doctor will use multiple diagnostic tests to build a picture of the problems with the patient. Based on the results of those tests, the doctor is able to narrow down the treatment to a subset of commonly recommended approaches. Nothing about this is random experimentation, but rather it is a systematic, data-driven approach to understanding the nature of the problem.
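To make “test driven” a little more concrete, here is a hypothetical sketch of what one of those quantifiable diagnostics might look like: a named measurement with explicit healthy bounds, so the result is a reading rather than an opinion. The metrics, values, and thresholds are invented for illustration, not a recommended set.

    from dataclasses import dataclass

    @dataclass
    class HealthCheck:
        """One quantifiable diagnostic: a named measurement with healthy bounds."""
        name: str
        value: float
        low: float
        high: float

        def passed(self) -> bool:
            return self.low <= self.value <= self.high

    # Hypothetical readings for one value stream; names, values, and bounds are illustrative.
    checks = [
        HealthCheck("lead time (days)", value=34.0, low=0.0, high=30.0),
        HealthCheck("backlog / monthly throughput", value=9.0, low=0.0, high=6.0),
        HealthCheck("defect escape rate (%)", value=3.5, low=0.0, high=5.0),
    ]

    for check in checks:
        status = "ok" if check.passed() else "OUT OF RANGE"
        print(f"{check.name:<30} {check.value:>6}  [{check.low}-{check.high}]  {status}")

Run before and after a change, a battery of checks like this is what lets you say a treatment helped, rather than simply asserting that it did.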


Under Pressure

January 26, 2019

All organizations are open flow systems that have inputs, outputs, and boundaries. Within them there is a shape and structure that facilitates the flow of work/ideas across a landscape filled with impediments/resistance. These systems are often called value streams. They take ideas or requests and turn them into downstream value for the customer.

The work/ideas in flow systems or in value streams have two fundamental properties:

  • Pulse – waves or fluctuation in capacity
  • Pressure – accumulation of work due to resistance

Even in the most healthy/alive organizations work is delivered in pulses or waves. These waves or pulses may be large or small. They may be regularly paced or erratic in their timing. They may be fast or slow. In any case, a healthy pulse is tuned to the demands of the environment.  We can use concepts like rate of customer demand or takt time to determine a healthy pulse for an organization. We can use the rhythm of the pace (smooth, even, or spiky) to also assess the health of the organization. Likewise, we can use the accumulation of work to measure the pressure within the system (release, relieve, resolve). Large backlogs create more pressure or resistance in a system than small backlogs. By comparing backlog size to the velocity of a value stream we can express the relative pressure within the system. High pressure and low pressure can then be assessed along with the consequences.
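As a rough sketch of those two measures, under my reading of them: takt time is available time divided by customer demand (the pulse the environment asks for), and relative pressure is backlog size divided by velocity (how many cycles of work are already queued). The numbers are illustrative.

    def takt_time(available_time, customer_demand):
        """Pulse: how often a unit must be delivered to keep up with demand."""
        return available_time / customer_demand

    def relative_pressure(backlog_size, velocity):
        """Pressure: accumulated work relative to the rate at which the value stream clears it."""
        return backlog_size / velocity

    # Illustrative numbers: a 10-day iteration, 20 requests arriving in that window,
    # 240 items sitting in the backlog, and a velocity of 30 items per iteration.
    print(f"takt time: {takt_time(10, 20):.1f} days per request")
    print(f"pressure:  {relative_pressure(240, 30):.1f} iterations of queued work")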

Where we find high pressure, we should expect to find turbulence within the organization. This gives us the simplest possible method for assessing the organizational/business agility, or aliveness, of the system. Based on this kind of assessment, we should be able to make reasonable predictive statements about the health of the organization: the pulse is fast, slow, or irregular; the pressure is too high, or perhaps too low. From there we can make prescriptions/recommendations or run further tests to better understand the problem.

Within organizations, while the flow of work or ideas may play a primary role, like blood, there are other flows to consider such as the flow of funding and other resources that provide important food or energy for the system. These additional flows can be measured and should have their own pulse, peristalsis and pressure. 

Like a doctor, I may initially check your pulse and blood pressure and then, based on what I find, begin to ask questions about your diet or habits. If the system manifests unhealthy attributes, then we can test it using more refined tools like value stream mapping in order to create a more detailed picture of how the work flows through the system or subsystems.

As we map the topology of the organization, we can use different tests to help uncover resistance and turbulence in the system. For example, a dependency map of the teams, along with a relative measure of the connection strength and quality between teams, can help us find hot spots or bottlenecks that create friction and reduce flow.
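Assuming you can put a rough strength score on each dependency (say 1 for light, 3 for heavy), finding the hot spots can be as simple as summing the weighted connections touching each team. Here is a sketch of that idea, with made-up teams and scores.

    from collections import defaultdict

    # (team_a, team_b, strength): strength is an illustrative 1-3 score for how
    # heavy the dependency between the two teams is.
    dependencies = [
        ("payments", "platform", 3),
        ("search", "platform", 2),
        ("mobile", "payments", 1),
        ("mobile", "platform", 3),
    ]

    def hot_spots(deps, threshold=4):
        """Teams whose total weighted dependencies meet the threshold are candidate bottlenecks."""
        load = defaultdict(int)
        for team_a, team_b, strength in deps:
            load[team_a] += strength
            load[team_b] += strength
        return {team: total for team, total in load.items() if total >= threshold}

    print(hot_spots(dependencies))   # e.g. {'payments': 4, 'platform': 8, 'mobile': 4}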

We also need to understand the product families and their relative backlog health and velocity in order to see where the pressure lies in the system.  Where there is high pressure, we need to improve the vascularity or flow in the system.

We can use time/motion studies to assess the impact of the physical environment. Understanding the distance between teams can reveal important information about temporal delays in the system.

Finally, these systems are based on human beings who are motivated and driven by feelings. As such, we can use interviews to assess questions of autonomy, mastery, and purpose in order to understand the emotional/cultural personality of the organization. We can also use subjective assessments/surveys to gather and understand the polarity of feelings across different groups (NPS score, etc.).

You wouldn’t try to prescribe all of these tests at once (although it might be tempting). Instead, I would begin with pulse/pressure in different places (arms, legs, Business Units). Based on what I find, then I might move to more specific tests (like value stream mapping and team dependency assessment).

Alternatively, we could structure an assessment with phases:

  1. Overview – high-level assessment of primary indicators like overall pulse and pressure; indication of key pacemakers.
  2. Workflow for suspect areas (pulse) – value stream mapping, team dependencies, emotional topography.
  3. Product Family Analysis (pressure) – backlog and velocity per product, market health.
  4. Environment Assessment (structure) – team structure, location in time and space study.

We can see the improvement in the flow of a system by the evolution of its design. Large chunks of work called requirements can be broken down into initiatives, features, stories, and tasks.  This breakdown is optimized for the fastest transportation of information across the organization as characterized by large initiatives. In order for that work to move to the smaller cells (or teams) of the organization, it is broken down or vascularized into smaller features and ultimately into stories. These units, in combination with an arbitrary planning cadence, provide the necessary elements for understanding and measuring the pulse of the organization.

Likewise, the structure and attachment of the work muscles or teams has an architecture that encourages flow. It is fractal in nature, starting with teams and then aggregating into teams of teams and perhaps even larger groupings. Like cells in the body, these teams are self-organizing. This provides the most “alive” design that can reconfigure itself according to the stimuli provided by the ecosystem. The design of the teams matches the design of the work flowing through the system, with teams processing stories, teams of teams processing features, and the whole organization processing initiatives.
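One way to picture that matching is as a simple containment hierarchy, with each layer of work owned by the corresponding layer of the organization. The types and names below are my own illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Story:        # processed by a single team
        name: str

    @dataclass
    class Feature:      # processed by a team of teams
        name: str
        stories: List[Story] = field(default_factory=list)

    @dataclass
    class Initiative:   # processed by the whole organization
        name: str
        features: List[Feature] = field(default_factory=list)

    # Illustrative breakdown of one large piece of work.
    initiative = Initiative("self-service onboarding", features=[
        Feature("account signup",
                stories=[Story("email verification"), Story("password reset")]),
        Feature("first-run experience",
                stories=[Story("guided tour")]),
    ])

    for feature in initiative.features:
        for story in feature.stories:
            print(f"{initiative.name} > {feature.name} > {story.name}")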

This also gives us a clue into how funding should be designed. It should match the design of the work, the structure of the teams and the planning cadences in order to provide the energy for the system to flow smoothly.