Letting them build it

February 27, 2019

Agile methods like Scrum and XP are very exciting, especially when you are first introduced to them. There is something very common sense about the ideas in them that seems to resonate with a lot of people. I know it was that way for me. I’d looked at a lot of different project management methods before settling on XP (thank you Steve McConnell). A lot of those methods looked interesting, but XP was the first one that just made sense. For a young project manager looking for a new way to do things, it was an easy choice.

Now when you look closely at a method like XP you learn very quickly that it is actually a collection of practices, many of which have been around for a very long time. The thing that makes XP work is the way that this particular set of practices or, as I like to think of it, this big agile bag full of cats works together. For instance, iterations by themselves have been around for a very long time under a different name: time boxes. Pair programming, on the other hand, was a relatively new innovation as far as I know (although not entirely unheard of). And while continuous integration had actually been around in some form or another for a while, it was certainly best articulated and demonstrated by the proponents of XP. On their own I would argue that each of these ideas had plenty of merit, but the real magic happens when you combine them. Each of these practices, and in XP there are roughly 13 of them, complements and overlaps one or more other practices in the set. So as a whole, you have a system of related ideas with some redundancy and interconnection. You can see this in Ron Jeffries’ diagram of XP.

Now this gives you a package offering of interrelated ideas that many, including all XP practitioners I’ve ever met, say you need to adopt as a whole. You can’t just pick and choose the bits you like and expect to get great results. Why not? Well, I would go back to the redundancy and interrelation of the ideas. Let’s suppose for just a minute that you adopted all 13 XP practices, but you found that continuous integration for one reason or another was “too hard” or “not a good cultural fit” or for some other reason wasn’t going to work for your team. What might happen? Well, in all likelihood, in the short term you might not see any immediate effect. In fact, you might find that the team goes a little faster because they aren’t struggling to build continuous integration into their process. But hang on, we’re not done yet. You see there are practices that depend on continuous integration in order to work, for example test-driven development (TDD) and continuous refactoring. TDD relies on CI to give the developers quick feedback on their tests. So, developers are going to lose feedback on their tests, which means they aren’t going to get as much value from doing the tests in advance…and therefore they aren’t likely to keep doing TDD. Quality may start to suffer. And if they don’t have CI and TDD, then they don’t have the safety net of tests that they need to do continuous refactoring…so they are going to be less likely to try refactoring because it feels too risky. By removing CI we have undermined quality and the resilience of the system we are developing (because we’re no longer refactoring).
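
To make that cascade a little more concrete, here is a minimal sketch in Python of the idea of interlocking practices. The practice names and the dependency edges are my own simplified illustration, not a canonical map of XP; the point is only that removing one practice quietly weakens everything that leans on it.

```python
# A toy model of interlocking practices: each practice lists the practices it
# leans on. The names and edges here are illustrative, not a canonical XP map.
SUPPORTS = {
    "continuous_integration": [],
    "tdd": ["continuous_integration"],      # fast test feedback leans on CI
    "refactoring": ["tdd"],                 # safe refactoring leans on a test safety net
    "small_releases": ["continuous_integration"],
    "pair_programming": [],
}

def undermined_by(removed, supports):
    """Return every practice that directly or transitively depends on `removed`."""
    affected = set()
    changed = True
    while changed:
        changed = False
        for practice, deps in supports.items():
            if practice not in affected and (removed in deps or affected.intersection(deps)):
                affected.add(practice)
                changed = True
    return affected

print(undermined_by("continuous_integration", SUPPORTS))
# Dropping CI also undermines: {'tdd', 'refactoring', 'small_releases'}
```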

Removing practices, especially from a pre-packaged set of methods, has some rather insidious consequences. Things don’t immediately fall apart. Instead there is a gradual erosion of benefits that causes a cascade of related and also seemingly unrelated problems. You may still be getting some benefit from the remaining XP practices, but the system is now much more fragile and less resilient. You have removed some of the reinforcing mechanisms from the method that helped ensure it is robust. When the team encounters a crisis, some sort of emergency in production where they need rapid turnaround and depend on fast feedback, they aren’t prepared. They are slow to respond, introduce more defects, and are likely to struggle. At which point someone is liable to point out that this process sucks. Congratulations! Of course it does, you made it suck.

This is the reason that adherents of pre-packaged methods tend to sound so religious about the unequivocal adoption of all their practices. You have to adopt all the practices, otherwise you aren’t doing XP, Scrum, Kanban, and so on. I want to pause for a moment, because I don’t think that’s the end of the story. 

If we were to stop for a moment and look at development and management practices (agile and otherwise), we might find that some of them are similar enough that we would group them together. Testing and QA practices like TDD, BDD, and others share many similarities. Estimation practices like story points, ideal developer days, and others also share similarities. My point is that for any given meme or idea that we have in XP or in agile in general, there are multiple supporting practices that may fit. In addition, some practices are sophisticated enough that adoption can be measured by degree rather than in absolutes (we are 30% of the way toward CI rather than all or nothing). So there are multiple options for many of the key elements of popular frameworks, and even within many of those options there is a question of the degree of adoption. After all, as so many agile advocates often say, it’s a journey, not a destination. Therefore, if I’m 30% of the way along the path, that must be worth something.

All of this is to say that we can substitute our own practices with some judicious caution. We’re allowed to do that, despite what the more religious might say. In fact, we can mix and match to find the elements that work for us. Now this is really hanging our toes out on the radical edge. Ivar Jacobson has something he calls essential methods: basically, a catalog of development methods that you can combine and recombine to build your own framework. Now, you can still screw up. Remember that the reason frameworks like XP and Scrum have been successful is that they are made of concepts that interlock and support each other. The DIY approach is much riskier (practices may or may not support each other), but for some groups that may be the best way to go.

The important thing is to understand why these frameworks work as well as they do. They are composed of a series of practices that support each other, making them robust in the face of a world full of disruption and challenges. You mess with them at your own risk. Or…you build your own. Just know that you need to understand what you are building. If you do it poorly, it very likely won’t work.


Time Machine

February 26, 2019

OK, Mr. Peabody, where are we going today?

Well Sherman, any time I explain what Scrum or XP is, I start with time boxes. The time box method has been around a really long time. The earliest record I can find in a casual search is their use at DuPont in the 1980s. I suspect that time boxes are much older than that. The time box basically applies a constraint to the system. It creates an arbitrary start and end date, usually on the shorter side. You commit to a fixed amount of work, and when the end of the time box is reached you are done, no matter what the completion state of the work. Work that is complete is counted as done within the time box; work that remains unfinished is either scope that gets dropped or is carried into the next time box.
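
As a rough illustration of those mechanics, here is a minimal sketch in Python of a single time box. The item names, point sizes, and the idea of using points as a stand-in for the fixed capacity of the box are made up for illustration, not drawn from any particular team.

```python
# Toy simulation of a time box: the capacity (end date) is fixed, so whatever
# doesn't fit is either dropped from scope or carried into the next box.
def run_time_box(backlog, capacity_points):
    """Work the backlog in priority order until the box is full."""
    done, carry_over, used = [], [], 0
    for item, points in backlog:
        if used + points <= capacity_points:
            done.append(item)
            used += points
        else:
            carry_over.append((item, points))  # unfinished: drop it or re-plan it
    return done, carry_over

backlog = [("login page", 5), ("report export", 8), ("audit log", 8), ("search filters", 13)]
done, carry_over = run_time_box(backlog, capacity_points=21)
print("done:", done)                 # ['login page', 'report export', 'audit log']
print("carried over:", carry_over)   # [('search filters', 13)]
```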

This technique has some benefits:

  1. Deadlines, even arbitrary 2-week time boxes, help keep everyone focused.
  2. Deadlines force the question of prioritization. Not everything will fit in the box.
  3. Small time boxes create a short heartbeat or pulse that is useful for measures of capacity and throughput.
  4. The time box forms a useful skeleton for the OODA improvement cycle.

There are also some challenges:

  1. Small time boxes demand that you figure out how to break work down into smaller, but still valuable, pieces. Many teams find this hard to do.
  2. Small time boxes mean that, sooner or later, it is almost inevitable that some scope won’t be delivered. How the business manages this scenario says a lot about how the benefits of time boxes are perceived.
  3. Much of the angst around estimation is due primarily to the fact that teams are struggling to fit work to their limited capacity in ways they didn’t have to prior to the time box.
  4. It doesn’t work if you can’t break the iron triangle of scope, schedule, and quality. Scope usually has to be compromised in some form or another in order for time boxes to work (it’s kind of what they are based on).

Like so many other things, a time box is useful in the right context, but not all contexts. I’ve seen a few projects where a time box would not work (hardware constraints, legacy mainframe applications, an organization that wasn’t willing to give up the iron triangle, etc.). All too often we force the time box on the team and tell them that they suck if they can’t overcome the challenges. Sometimes that’s true, other times it isn’t. It’s a judgement call. Beware, and don’t let yourself get caught forcing a square peg into a round hole (I’m looking at you, Scrum).


Painting The Spots

February 16, 2019

If you do a little reading about Scrum, one of the first things you learn is the 5 basic values of Scrum:

  • Courage
  • Focus
  • Respect
  • Commitment
  • Openness

I’d like to examine one of those values that I watched a team wrestle with recently: commitment. These were really great folks. They were bright, energetic, friendly and passionate about the work they were doing. Within the team they took a lot of pride in their ability to “be agile.” They seemed to be doing a lot of good stuff.

However, I was hearing some disconcerting things from other parts of the organization. Other teams characterized this team as flakey. Managers expressed frustration that they didn’t deliver. I wasn’t sure what the story really was. Was it a cultural thing? Was it petty jealousy at work? I really had no idea.

An opportunity came along to do a little coaching with the team in question, so I was eager to find out more. Here’s what I found:

  • Optimism at the start: So the team said that they were prone to overcommitting on the amount of work they could handle in a sprint. During sprint planning, they would realize the balance of the work was unequal and that there would be team members left idle. So they would take on more “overflow” work to make sure that everyone on the team had something to do during the sprint. It’s great that they were aware of this problem. This pattern of behavior was leading the team to consistently overload their sprints with more work than they could achieve. The team told me that their typical velocity was 27-29 points per sprint. When I asked them what they had committed to in the last sprint, the answer was: 44 points. When I pointed out the obvious discrepancy, they admitted that they had overflow work from the previous sprint that they felt they had to get done. So then I asked them if they were going to deliver on all 44 points. And the survey says: No.
    The good news? This injury was self-inflicted. The bad news? It didn’t sound like they were entirely convinced they had a serious problem. A pattern of failing to reliably deliver sprint objectives can lead to a crisis of trust with a team’s stakeholders. The stakeholders start to doubt whether or not you will deliver on your sprint commitments. This can be a corrosive influence on the relationship with the very people who are signing the team’s paychecks. The solution? Stop overcommitting. This means that the team has to face some awkward issues about how to manage balancing work within their ranks. These are issues they were able to hide from by overloading the team with work. I got some grudging buy-in at this point, but I could tell that there was still work to be done.
  • Carry over matter: Since they were overloading the sprint, they were almost guaranteed to have items that were not completed, and those got carried into the next sprint. I took the time to point out that this sort of issue is a problem, but one you can skate by with when you are simply going from sprint to sprint. However, when you are trying to work to a release plan with multiple teams and multiple sprints, then carry over is a total deal breaker. If you are working with other teams and you have a pattern of failing to deliver stories, the other teams are very quickly going to learn that you are not a good partner to work with.
  • Transparency: So I asked about this because I wasn’t sure what the problem was. Apparently they were concerned that they were being asked to track their time and their tasks in a time tracking tool to a level of detail that was making them uncomfortable. As we talked about it, someone said, “I don’t think they trust us…” I could tell that this person was a bit upset by this perceived lack of trust. Of course I put on my Mr. Sensitivity hat and replied…Of course they don’t trust you! You don’t deliver committed work on time!

Well, I don’t think I said it exactly like that, but it was some polite variation on that theme. Now people were upset, and finally my message was getting through. The product owner for the team gave me loud and vigorous support at this point. You could tell that we had stumbled on a fundamental assumption that people on the team were realizing was dead wrong. The scrum master articulated the invalid assumption for me: the whole point of having a sprint goal is that you can achieve the goal without having to deliver specific stories. You focus on the goal rather than the stories. That is an interesting, but completely incorrect, interpretation of how commitment works. Apparently much of the team had been operating with this model in mind. Once I pointed out that other people were depending on those specific stories being delivered, not some abstract goal, you could feel the resistance immediately start to evaporate.

The other thing that was a little disturbing about this situation is the blind spot that the team had when working with other teams. They had explained away their inability to deliver as due to their own superior understanding of what it means to ‘be agile.’ No one else understood how awesome they were because the other teams weren’t as agile as they were. Now there is no doubt that they were doing a lot of things right. Like I mentioned in the beginning, they had a lot of good things going on. However, they had managed to paint over the ugly bits of their process without examining them and addressing them. Their ‘agility’ was their excuse for not delivering commitments. This sort of failure is not unusual – I’ve seen it happen in plenty of other teams. Dealing with these sorts of issues is hard for a team to do. Sometimes it takes an outsider to see them and point them out. So be careful about declaring your own agility. Doing so can sometimes hide some ugly spots.

This is What I Do

I provide innovative agile coaching, training, and facilitation to help organizations transform to deliver breakthrough products and performance. I do this by achieving a deep understanding of the business and by enabling the emergence of self-organizing teams and unleashing individual passion.

To learn more about the services that I offer or to arrange for an initial consultation, please see thomasperryllc.com


It’s All About Flow

February 14, 2019

OK, please forgive me, but I’m going to geek out for a bit here on some Thermodynamics of Emotion stuff. Furthermore, I’m going to try and draw an analogy between a law of thermodynamics and the business world. So, hold on to your hats, here we go…

In Design in Nature, Bejan states the Constructal Law as:

“For a finite-size flow system to persist in time (to live), its configuration must evolve in such a way that it provides easier access to the currents that flow through it.”

-Bejan, Adrian. Design in Nature

This is to say that for any living system there is a design or landscape that must change over time such that the flow through the system improves. The design can be anything from something as primitive as the branching of streams, to the vascular network of arteries and veins in your body, to the process that you use to do work at the office.

In business, process is the design that we use to structure the way work flows through our organizations. As such, the process is not arbitrary, but intentional. If it improves the flow of work, then it’s a useful process; if it degrades the flow of work, then it’s not. By improving the flow of work, we mean that it must configure the landscape or domain such that the work flows more easily (read: with less resistance) through the system. That also implies that access to that work is improved (it takes less energy to find it).

According to the Constructal Law, processes that allow work to remain hidden interfere with flow. Processes that constrain work so that its flow can’t change or evolve also interfere with flow. Given these assumptions, old-school, plan-driven methods with rigidly defined processes run counter to healthy flow and are less likely to succeed than processes that are dynamic and enable transparency of work in the organization.

In fact, to carry this one step further: what we have witnessed over the last two to three decades is the evolution of processes in the business world. Rigid, plan-driven processes are dying off, as the Constructal Law would predict, in the face of new dynamic processes like agile. Any process, even a somewhat imperfect one, that improves flow and transparency of work in the system is going to be more successful (a more efficient conversion of energy to work) than a more rigid process.

Of course, agile too will one day be replaced by a process that successfully enables better flow. What that next process is remains to be seen.


Team Emotional Flow

February 12, 2019

The morning begins with everyone arriving at the office and gathering in the kitchen. The whole team works together in one place; there are no remote workers. As folks grab coffee and maybe toast a bagel, there is casual banter about the game the night before, the kids’ performance at a school play, and plans for an upcoming barbecue.

When the last member of the team arrives, they all gather round into a circle looking at one another. There are a few mumbled “good mornings” and one member starts off with, “I’m feeling excited, we are going to get to integration test the system for the first time today. I think the plan is to start around 10:00.”

There are a few raised eyebrows and then a question or two as folks sync up. The next person in the circle says, “I’m feeling frustrated this morning. The work on the UI hit a stumbling block last night, and I hate leaving work with an unresolved problem.” Someone else chimes in with, “Me too! Let’s pull the mob together and see if more brains can help us nail this problem this morning.” There are general mumbles of assent from the group and the process continues with the next person, “I’m feeling glad that we’re making progress. I think I know what is causing that problem, so I’m looking forward to sharing a potential solution.”

And so it goes, each after the other. The format is relatively loose: You always start with sharing a feeling, then follow up with any resistance you may be encountering. The emphasis is on keeping the interaction casual and not forcing anything. There is no pre-defined leader. Everyone has agreed that this kind of sharing is important and they support it as needed.

At the end of the meeting, everyone updates their feeling status on a whiteboard. They track their feelings on a daily basis so that they can see trends in their overall team mood. They work together as closely as possible. They use mob programming to do their work together whenever possible. The focus is on sharing their experience together.

One tool they use to keep themselves aware of the emotional flow of the team is frequent use of the “check-in”. The check-in is taken from Jim McCarthy’s Core Protocols. The idea is to declare your emotional state at the beginning of significant meetings and interactions. This helps to make emotion visible to everyone and gives needed context for the others you work with. You simply state your current emotion: I feel Mad, Sad, Glad, etc. It lets everyone know where you are at and helps the group to synchronize emotionally. It doesn’t have to be rigid and highly formalized. I think that depends on the character of the team. I personally prefer a casual but disciplined approach (always do it, but let the language be natural and informal rather than highly structured and rigid).

I offer this as an alternative to the traditional standup. We don’t track work, we track feeling. We focus on achieving emotional flow. We don’t use a rigid system of pre-defined questions that must be answered. We flow.


Building a Scaled Agile Framework for Dummies

February 10, 2019

Scaled Agile Frameworks like SAFe are all the rage these days. You can go out now and get training, certification, and a shave from a bevy of consultants for a mere two grand per head (not really sure about the shave part). That’s a perfectly legitimate approach. However, here’s a dirty little secret: anyone can do it. Here’s an example of one that I made a few years ago.

I had taken a look at SAFe, and there was a lot that I liked and some things that just didn’t seem to fit our context. With those qualifications in mind, I decided I could make my own version. I got out my notepad and my colored sharpies and I went to town. I knew that I liked the three-layer model, but I found that a lot of the SAFe Big Picture had too much complexity in it. So you can see that in the first level, I simplified things quite considerably. The second or program level was also quite simple. I mixed in some things like agile chartering which I felt would be beneficial and were not found in the SAFe diagram. What about the third (Portfolio) level? Well, at the time I really didn’t have a clear idea how that would look. It was at this level that I was looking to integrate the model with our existing PMO practices – which in hindsight was probably a mistake (hey, make your own model and you make your own mistakes). So then I started to iterate.

Now I was starting to think about how things related between the three layers. Those interactions between the team level, the program level, and the portfolio level seemed to be very important. I was also experimenting with different ways of visualizing the processes on each level (with what I must confess are varying degrees of success). My color repertoire had expanded too.

Finally I started to look at the processes as a series of prescriptive steps that I needed to be able to document and describe to people. You can see that I added numbers, and then I took each of those interlocking blocks and documented them. I made poster-sized copies and put them on the wall outside my office with a sharpie hanging next to them. The request was simple – please change it to fit your needs. After a few days, I had more feedback and iterated from there.

Building your own scaling model isn’t for everyone. However, it’s not rocket science either. If you have a modest understanding of your own business domain, AND you understand the basics of the agile frameworks, you have everything necessary to build your own scaling framework. I’m sure there will be folks who are appalled by the arrogance of doing something like this, but personally, I think we all should feel free to make our own Big Picture. When we can customize our processes in ways that work best for us, I think we win. We learn along the way and we don’t inherit a bunch of cruft from someone else’s framework.


Test Driven Transformation

January 28, 2019

The introduction of agile methods has brought a wave of innovation in the business world that some might argue has revolutionized thinking about how organizations should be structured and how people work together. However, as it stands today, much of the promise of agile methods is wrapped up in preconfigured frameworks that offer a one-size-fits-all solution for every business challenge that a company may face. This is despite the fact that the modern organization is a highly complex structure, bordering on chaotic, that is often not best served by the application of frameworks. We see this manifested most commonly today in the failures to scale agile methods within large organizations.

The conversation about failure rates in the world of transformation is similar to prior discussions about the failure rates of projects and programs: both are notoriously vague and poorly defined. Almost all of the surveys that you find (PMI, etc.) use an embarrassing amount of anecdotal evidence to back up their assertions. The very definition of failure is usually so broad as to be completely meaningless. So, with that said, I think it’s important that we are careful with any assertions that transformations are failing or succeeding. In fact, my experience is that when we are talking about transformations within organizations, we are working at such a high level that it is never clear what is entirely succeeding or failing. After all, in a good transformation, there is a lot of failure. You experiment, try things out, and find out that they don’t work. I’m not sure I trust anyone who tells me that 100% of their efforts are always successful. That tells me that they aren’t really changing much.

When I speak of frameworks, what exactly do I mean? Well, I’m thinking globally. I’m not just talking about those large scaling frameworks like SAFe and LeSS (that’s easy), I’m also pointing the finger at small scale, team level frameworks like Scrum and XP. And it’s not that these frameworks can’t work or can’t be useful. In fact, I’ve seen them applied and applied well. However, more often than not, they aren’t applied well. I know there is bitter and acrimonious debate on this subject. I’ll leave that battle for others and simply say, “We can do better.”

We need to step back and reassess how we engage with organizations from the very earliest stages of the engagement. It’s no longer sufficient to make prescriptive, framework-oriented recommendations and have any reasonable expectation that those proposals will succeed. In fact, I think we may well find they are often more harmful than helpful. Framework-oriented approaches give the false promise that their solutions will solve every problem, and when they fail, they leave the customer having wasted tremendous time and energy without anything to show for it. To make matters worse, consultants implementing such transformations will simply say that the organization didn’t have the right “mindset”, effectively blaming the customer for the failure of the transformation. This allows the consultant to wash their hands of any responsibility for the failure as they move on to the next engagement with yet another set of pre-packaged proposals.

It’s time that we bring an end to such thinking and begin to focus on how we can properly understand the problems in the organization before we even begin to make recommendations. Then, as with any prescription for a complex system, we need to apply trial experiments, not broad frameworks, to address the specific problems that we find. Of course, in order to do this well, we need to have reliable means of assessing the health of the system. We need to treat the system like what it truly is: a complex organic structure that lives and breathes, composed of living elements interacting with each other and participating in flows of ingestion, respiration, and value production for customers. This requires a first-principles approach to understanding organizations. We need to understand exactly what organizational health looks like before we can make any kind of decent assessment of the system. To make any recommendations without that sort of understanding is irresponsible.

So what’s our target? Achieving some hypothetical state of agility is not a meaningful or useful target for a transformation. Agility has no objective meaning that a business person finds useful. Instead it is an end state in search of a meaning. In short, it has none.

Alternatively, there are those who propose that we should start from a place of experimentation. That also is an insufficient starting point for working with organizations. A company is not a consultant’s toy to be experimented with, and no one wants to be the subject of experiments. The experimental approach, while well-meaning, signals rather strongly that you not only don’t understand the problem, but also that you have no idea what the real solution is. This experimental approach should be considered by any business owner of integrity as completely useless.

What organizations need is a clear-eyed and objective assessment of what the problem is. It should be the sort of analysis that allows us to measure our effectiveness against that of our competition and our customer market in some meaningful fashion. Furthermore, based on that data, we should know what the prescription for change should be with a very high degree of confidence. Organizations are not looking for your best guess. They want to have confidence that any change or transformation effort has a reasonably demonstrable outcome.

Another way of putting this is to think of it as test driven transformation. We must have some idea of a reasonable set of tests for assessing the relative health of a system. The results of those tests should give us some clue to the different kinds of problems that may afflict the system. They must be quantifiable, and like a doctor, we must have some notion of what the results of the tests imply. It doesn’t mean that we know for sure what the outcome will be, but it also doesn’t mean that we are taking a random shot in the dark. A good doctor will use multiple diagnostic tests to build a picture of the problems with the patient. Based on the results of those tests, the doctor is able to narrow down the treatment to a subset of commonly recommended approaches. Nothing about this is random experimentation, but rather it is a systematic, data-driven approach to understanding the nature of the problem.
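
To make the analogy concrete, here is a minimal sketch in Python of what such a diagnostic panel might look like. The metrics, thresholds, and candidate treatments are hypothetical placeholders of my own, not a validated assessment model; the point is only the shape of the approach: quantifiable tests, each with a notion of what a failing result implies.

```python
# Toy diagnostic panel for an organization: each test is a quantifiable
# measurement plus a healthy threshold, and a failing test points at a small
# set of candidate treatments. All values here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class HealthTest:
    name: str
    measured: float        # observed value
    healthy_below: float   # healthy when measured < healthy_below
    treatments: tuple      # candidate responses if the test fails

PANEL = [
    HealthTest("lead time, idea to production (days)", 120, 30,
               ("smaller batches", "limit work in progress")),
    HealthTest("defects escaping to production (per release)", 14, 5,
               ("test automation", "strengthen code review")),
    HealthTest("sprint scope carried over (%)", 35, 10,
               ("right-size commitments", "split stories smaller")),
]

def diagnose(panel):
    """Return the failing tests along with the treatments they suggest."""
    return [t for t in panel if t.measured >= t.healthy_below]

for test in diagnose(PANEL):
    print(f"FAIL {test.name}: {test.measured} -> consider: {', '.join(test.treatments)}")
```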