Agile Open Northwest 2019

January 31, 2019

Today, like every other day, we wake up empty and frightened. Don’t open the door to the study and begin reading. Take down a musical instrument.

-Rumi

Agile Open Northwest is a conference themed around agile software development that is run using an unorthodox style called Open Space (sometimes referred to as an un-conference). Open Space is a facilitated meeting where the topics are introduced by the audience on the first morning of the event. There is no canned, pre-set agenda. There are no keynote addresses. None of the traditional elements of a conference are present. Instead, you build the agenda collaboratively on the spot that morning with the group that shows up. If you have a talk you want to share, go for it. If you have a question you’d like answered, go for it. If you want to play with an idea or try a short workshop, again, go for it. Anything you can imagine that relates to the central theme of the conference is fair game. You don’t have to be an expert. You don’t have to have experience. You just have to have an interest in a topic or question that you would like to share with others. That’s it. No more. No less.

You never know what sort of topics people will present. Every year I’m surprised by something new and interesting that I never anticipated. Some examples from past years include:

  • A marvelous workshop on mind mapping 
  • An introduction to the Thermodynamics of Emotion
  • PowerPoint Karaoke
  • And many more…

Many of the faces are familiar. I think of this group as my Pacific Northwest tribe. I guess that’s why I find that much of my time at the conference these days is simply meeting with folks and catching up on what’s happening. We have a really amazing group of people here in the Northwest who have been innovating and helping others for as long as agile has been around (and longer…). So it’s reinvigorating to be able to join the circle and sit with wonderful people you admire. If you haven’t been to this conference before, then you owe it to yourself to give it a try. If you are a veteran, then I look forward to seeing you there.

If you are looking for more information, check out www.agileopennorthwest.org 

There are still a few seats left!


Individuals and Interactions

January 30, 2019

One of the things that has challenged me the most working with groups of people is the following statement from the Agile Manifesto:

We have come to value individuals and interactions over processes and tools. 

You see, the reality is that when I’m consulting, I almost always seem to end up talking about processes and tools. I talk to teams about scrum (process). I talk to teams about kanban (process). I talk about task boards (tools) and facilitation techniques (tools). I talk about the importance of flow (process) and value stream mapping (tool).  

Basically, I do a terrible job of focusing on individuals and interactions. I lean to the right in this particular equation pretty hard. It turns out that an awful lot of the time I end up doing the very thing that we say we shouldn’t do as agilists.

Don’t get me wrong, I don’t think I’m unique in this flawed behavior. I know for a fact that a lot of consultants have the same problem. We all know that we should be focusing on individuals, but we struggle to find the right way to do that. 

What does an emphasis on individuals and interactions look like? First, it means building relationships with people. Building trust with them and listening to what they say. It means asking questions rather than prescribing solutions. It means inquiring about what interactions are taking place and what the quality of those interactions is like. It means asking people how they feel about a variety of things. How do you feel about your work, your management, your colleagues?

So, the next time you find yourself talking to a group about a framework, remind yourself to take a step back and consider a different approach. Perhaps your time would be better spent asking how they feel about working together. No process or tool will fix that (all vendor claims to the contrary notwithstanding). I need something like a rubber band around my wrist: something I can snap to remind myself to attend to the people and not the tools. I’ve done tools and processes for far too long. It’s become a bad habit for many of us in the industry. I think the time has come to push back and ask whether we are really doing ourselves a disservice with the tool emphasis.

Look, I know it says tools right in the title of my blog. I get it. It’s a hangover from my early days in blogging when I thought, “Hey, wouldn’t it be cool if I had a blog that reviewed agile tools?” Well, I think I did a tool review just about once and then never talked about agile tools again. I guess that’s just how it goes sometimes with blogs. Sometimes they end up being about things that you probably never expected. That’s OK. I think Agile Tools has been something that some people have found helpful, and that’s really all that matters to me.


Discovering Motivations and Needs

January 29, 2019

I have been working on a new way of doing organizational discovery lately. I think of it as discovering motivations and needs. It’s a very different starting point for an engagement than what is conventionally done (OK, it’s new to me). Here’s how it works:

My starting point is to find out how people feel about the place where they work. In this particular case, prior to the engagement, I used publicly available information on Glassdoor.com. I looked up the company and found the employee reviews. These reviews ask for the pros, cons, and any advice the employee may have for the company. I gather all of the text from the pro feedback and aggregate it together in a file, then, without any edits whatsoever, I put that file through Wordle. Now if you are not familiar with Wordle, it’s a tool that creates a visual map of the most common words found in any text you choose to feed it. It eliminates the ‘noise’ of common stop words (the, to, a, he, she, and so on). The words that appear most frequently are given a correspondingly larger font; words that appear less frequently are smaller. The result is a word cloud of the dominant terms in the text.

At a glance, it can be a very good way to identify the most potent and prevalent themes in a text, and by extension, the things people feel most strongly about when talking about the company they work for.

I take the text from the pros, cons, and advice and put it into three corresponding files that I then run through Wordle to generate a sort of heat map of the words that are most prevalent in each. The pros tend to reflect the things that people are most excited and energized by at the company. These may represent appetites that people seek to satisfy within the company. They are the things that get them out of bed in the morning to come to work. They are the things that attract us to our jobs. These attractors could also be called drives or motivations. It’s surprising what different motivations you can find simply by using this method. It offers a curious insight into the things that excite people at different companies.

Similarly, we can run the same exercise with the cons, the things that people don’t like about a given company. Again, the Wordle can be very revealing. Often the words represent things that people want but that are missing from the company they work for. These ‘wants’ or missing things are what I characterize as needs. The aggregation of these needs, as derived from the Wordle, tells you what your employees most want from the company.

We can run a similar exercise with the advice that employees provide, and it seems to map rather closely with the cons that they describe, so I tend (right now) to treat the two as synonymous. Advice is the employee telling us what they want – again, needs.
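Wordle itself is a point-and-click web tool, but the underlying frequency analysis is easy to reproduce in a few lines of code if you prefer to work offline. Here’s a minimal sketch, assuming the pros, cons, and advice text has already been saved into plain text files named pros.txt, cons.txt, and advice.txt (those file names, and the tiny stop-word list, are just placeholders for illustration; Wordle’s real stop-word handling is more extensive):

```python
from collections import Counter
import re

# A tiny illustrative stop-word list; Wordle's actual list is much longer.
STOP_WORDS = {
    "the", "to", "a", "an", "and", "or", "of", "in", "is", "are", "was",
    "it", "he", "she", "they", "for", "with", "on", "at", "that", "this",
}

def top_themes(path, n=25):
    """Return the n most frequent non-stop-words in a text file."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(n)

if __name__ == "__main__":
    # One file per review category, aggregated from the Glassdoor text.
    for label in ("pros", "cons", "advice"):
        print(label.upper())
        for word, count in top_themes(f"{label}.txt"):
            print(f"  {word:<15} {count}")
```

A word cloud simply renders these same counts visually, with font size proportional to frequency; the ranking itself is what carries the signal about motivations and needs.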

If you are getting started on an engagement, this sort of information is very compelling and useful for a couple of reasons:

  • It gives you insight into the emotional state of the organization. I don’t know of any other practice used in assessment or discovery that does this in a systematic fashion. Interviews are the only thing that comes close, and those typically yield anecdotal data that is very hard to quantify.
  • It gives you an idea what people will get excited by and what needs they have that aren’t being met. This is absolutely critical to the success of any change effort! 

This sort of information is very useful, because any change effort that matches these drives or satisfies these needs is much more likely to be successful. We need to match our change to the emotional context of the organization. That is to say, our changes must match the things that motivate the majority of people, or they must help satisfy the needs of the majority of folks in the organization. Otherwise, what is the alternative? I would submit that any change you propose, no matter how powerful or useful, that doesn’t match the motivations or needs of the organization is ultimately doomed to fail.

I’ll say it in one sentence: Figure out what drives people emotionally or your proposed change will very likely fail.


Test Driven Transformation

January 28, 2019

The introduction of agile methods has brought a wave of innovation in the business world that some might argue has revolutionized thinking about how organizations should be structured and how people work together. However, as it stands today, much of the promise of agile methods is wrapped up in preconfigured frameworks that offer a one-size-fits-all solution for every business challenge that a company may face. This is despite the fact that the modern organization is a highly complex structure, bordering on chaotic, that is often not best served by the application of frameworks. We see this manifested most commonly today in the failures to scale agile methods within large organizations.

The conversation about failure rates in the world of transformation is similar to prior discussions about the failure rates of projects and programs: both are notoriously vague and poorly defined. Almost all of the surveys that you find (PMI, etc.) use an embarrassing amount of anecdotal evidence to back up their assertions. The very definition of failure is usually so broad as to be completely meaningless. So, with that said, I think it’s important that we are careful with any assertions that transformations are failing or succeeding. In fact, my experience is that when we are talking about transformations within organizations, we are working at such a high level that it is never entirely clear what is succeeding or failing. After all, in a good transformation there is a lot of failure. You experiment, try things out, and find out that they don’t work. I’m not sure I trust anyone who tells me that 100% of their efforts are always successful. That tells me that they aren’t really changing much.

When I speak of frameworks, what exactly do I mean? Well, I’m thinking globally. I’m not just talking about those large scaling frameworks like SAFe and LeSS (that’s easy), I’m also pointing the finger at small scale, team level frameworks like Scrum and XP. And it’s not that these frameworks can’t work or can’t be useful. In fact, I’ve seen them applied and applied well. However, more often than not, they aren’t applied well. I know there is bitter and acrimonious debate on this subject. I’ll leave that battle for others and simply say, “We can do better.”

We need to step back and reassess how we engage with organizations from the very earliest stages of the engagement. It’s no longer sufficient to make prescriptive, framework-oriented recommendations and have any reasonable expectation of those proposals having any kind of success. In fact, I think we may well find they are often more harmful than helpful. Framework-oriented approaches give the false promise that their solutions will solve every problem, and when they fail, they leave the customer having wasted tremendous time and energy, without anything to show for it. To make matters worse, consultants implementing such transformations will simply say that the organization didn’t have the right “mindset,” effectively blaming the customer for the failure of the transformation. This allows the consultant to wash their own hands of any responsibility for the failure as they move on to the next engagement with yet another set of pre-packaged proposals.

It’s time that we brought an end to such thinking and began to focus on how we can properly understand the problems in the organization before we even begin to make recommendations. Then, as with any prescription for a complex system, we need to apply trial experiments, not broad frameworks, to address the specific problems that we find. Of course, in order to do this well, we need to have reliable means of assessing the health of the system. We need to treat the system as what it truly is: a complex organic structure that lives and breathes, composed of living elements interacting with each other and participating in flows of ingestion, respiration, and value production for customers. This requires a first-principles approach to understanding organizations. We need to understand exactly what organizational health looks like before we can make any kind of decent assessment of the system. To make any recommendations without that sort of understanding is irresponsible.

So what’s our target? Achieving some hypothetical state of agility is not a meaningful or useful target for a transformation. Agility has no objective meaning that a business person finds useful. Instead it is an end state in search of a meaning. In short, it has none.

Alternatively, there are those who propose that we should start from a place of experimentation. That also is an insufficient starting point for working with organizations. A company is not a consultant’s toy to be experimented with, and no one wants to be the subject of experiments. The experimental approach, while well-meaning, signals rather strongly that you not only don’t understand the problem, but also that you have no idea what the real solution is. Any business owner of integrity should consider this approach completely useless.

What organizations need is a clear-eyed and objective assessment of what the problem is. It should be the sort of analysis that allows us to measure our effectiveness against that of our competition and our customer market in some meaningful fashion. Furthermore, based on that data, we should know what the prescription for change should be with a very high degree of confidence. Organizations are not looking for your best guess. They want to have confidence that any change or transformation effort has a reasonably demonstrable likelihood of success.

Another way of putting this is to think of it as test driven transformation. We must have some idea of a reasonable set of tests for assessing the relative health of a system. The results of those tests should give us some clue to the different kinds of problems that may afflict the system. They must be quantifiable, and like a doctor, we must have some notion of what the results of the tests imply. It doesn’t mean that we know for sure what the outcome will be, but it also doesn’t mean that we are taking a random shot in the dark. A good doctor will use multiple diagnostic tests to build a picture of the problems with the patient. Based on the results of those tests, the doctor is able to narrow down the treatment to a subset of commonly recommended approaches. Nothing about this is random experimentation, but rather it is a systematic, data-driven approach to understanding the nature of the problem.


Scaling Self-Organizing Systems

January 27, 2019

I was reading Geoffrey West’s book Scale, on the surprising mathematics of life and civilization, recently and something interesting jumped out at me. Where networks are concerned, whether they are biological or man-made, there appear to be real economies of scale as the network grows. Generally speaking, as scale doubles, proportionally fewer resources are required to sustain the same amount of output. That applies when we are talking about networks. However, when West reviewed corporations he noticed that the same benefits of scaling did not apply. Surprise!

Hold my beer. I’ve got this…

The organizations that West examined were very likely hierarchies, and hierarchies are the worst-performing sort of network. So it is no surprise at all that scaling doesn’t work well for them. As hierarchies get larger, communication and the flow of resources tend to get less efficient. Living systems are self-organizing, cities are self-organizing; hierarchies are definitely not self-organizing.

If West had looked at organizations that were founded on self-organization, like Morning Star, W.L. Gore, or Semco, I suspect he might have found a different result.


Under Pressure

January 26, 2019

All organizations are open flow systems that have inputs, outputs, and boundaries. Within them there is a shape and structure that facilitates the flow of work/ideas across a landscape filled with impediments/resistance. These systems are often called value streams. They take ideas or requests and turn them into downstream value for the customer.

The work/ideas in flow systems or in value streams have two fundamental properties:

  • Pulse – waves or fluctuation in capacity
  • Pressure – accumulation of work due to resistance

Even in the most healthy/alive organizations, work is delivered in pulses or waves. These waves or pulses may be large or small. They may be regularly paced or erratic in their timing. They may be fast or slow. In any case, a healthy pulse is tuned to the demands of the environment. We can use concepts like the rate of customer demand or takt time to determine a healthy pulse for an organization. We can use the rhythm of the pace (smooth, even, or spiky) to assess the health of the organization. Likewise, we can use the accumulation of work to measure the pressure within the system (release, relieve, resolve). Large backlogs create more pressure or resistance in a system than small backlogs. By comparing backlog size to the velocity of a value stream, we can express the relative pressure within the system. High pressure and low pressure can then be assessed along with their consequences.
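The post doesn’t pin these measures down to exact formulas, so here is a minimal sketch of how they might be computed, assuming takt time is available working time divided by customer demand and relative pressure is simply backlog size divided by velocity (roughly, how many iterations of work are already queued up). The team names and numbers are invented purely for illustration:

```python
def takt_time(available_hours: float, units_demanded: float) -> float:
    """Takt time: available working time divided by customer demand."""
    return available_hours / units_demanded

def relative_pressure(backlog_size: float, velocity: float) -> float:
    """Pressure proxy: backlog size divided by delivery rate per iteration.
    The result reads as 'iterations of work already queued up'."""
    return backlog_size / velocity

# Illustrative numbers only.
teams = {
    "payments":  {"backlog": 240, "velocity": 30},  # items vs. items/iteration
    "reporting": {"backlog": 45,  "velocity": 28},
}

for name, t in teams.items():
    p = relative_pressure(t["backlog"], t["velocity"])
    print(f"{name:<10} pressure = {p:.1f} iterations of queued work")
```

In this sketch the payments team is sitting on eight iterations of queued work while reporting has less than two, which is exactly the kind of comparison that points to where turbulence is likely to show up.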

Where we find high pressure, we should expect to find turbulence within an organization. This gives us the simplest possible method for assessing the organizational/business agility, or aliveness, of the system. Based on this kind of assessment, we should be able to make reasonable predictive statements about the health of the organization: the pulse is fast, slow, or irregular; the pressure is too high, or perhaps too low. From there we can make prescriptions/recommendations or run further tests to better understand the problem.

Within organizations, while the flow of work or ideas may play a primary role, like blood, there are other flows to consider such as the flow of funding and other resources that provide important food or energy for the system. These additional flows can be measured and should have their own pulse, peristalsis and pressure. 

Like a doctor, I may initially check your pulse and blood pressure and then, based on what I find, begin to ask questions about your diet or habits. If the system manifests unhealthy attributes, then we can test it using more refined tools like value stream mapping in order to create a more detailed picture of how the work flows through the system or its subsystems.

As we map the topology of the organization, we can use different tests to help uncover resistance and turbulence in the system. For example, a dependency map of the teams, along with a relative measure of the connection strength and quality between teams, can help us find hot spots or bottlenecks that create friction and reduce flow.
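One way to make that concrete is to model the dependency map as a weighted graph, where each edge weight captures how strong the coupling between two teams is, and then rank teams by their total dependency load. This is only a sketch: the team names and weights are made up, and a real assessment would also capture the quality of each connection, not just its strength.

```python
from collections import defaultdict

# Each entry: (team_a, team_b, dependency_strength), where higher strength
# means more hand-offs / tighter coupling between the two teams.
dependencies = [
    ("platform", "checkout", 8),
    ("platform", "search",   6),
    ("platform", "mobile",   7),
    ("checkout", "payments", 4),
    ("search",   "mobile",   2),
]

load = defaultdict(int)
for a, b, strength in dependencies:
    load[a] += strength
    load[b] += strength

# Teams with the most (and strongest) connections are candidate bottlenecks.
for team, total in sorted(load.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{team:<10} total dependency load: {total}")
```

In this invented example the platform team carries far more dependency load than anyone else, which is where you would expect friction, queuing, and reduced flow to concentrate.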

We also need to understand the product families and their relative backlog health and velocity in order to understand where the pressure lies in the system.  Where there is high pressure, we need to improve the vascularity or flow in the system.

We can use time/motion studies to assess the impact of the physical environment. Understanding the distance between teams can reveal important information about temporal delays in the system.

Finally, these systems are based on human beings who are motivated and driven by feelings. As such, we can use interviews to assess questions of autonomy, mastery, and purpose in order to understand the emotional/cultural personality of the organization. We can also use subjective assessments/surveys to gather and understand the polarity of feelings across different groups (NPS score, etc.).

You wouldn’t try to prescribe all of these tests at once (although it might be tempting). Instead, I would begin with pulse/pressure readings in different places (arms, legs, business units). Based on what I find, I might then move to more specific tests (like value stream mapping and a team dependency assessment).

Alternatively, we could structure an assessment with phases:

  1. Overview – high-level assessment of primary indicators like overall pulse and pressure; identification of key pacemakers
  2. Workflow for suspect areas (pulse) – value stream mapping; team dependencies; emotional topography
  3. Product family analysis (pressure) – backlog and velocity per product; market health
  4. Environment assessment (structure) – team structure; location-in-time-and-space study

We can see the improvement in the flow of a system by the evolution of its design. Large chunks of work called requirements can be broken down into initiatives, features, stories, and tasks. This breakdown is optimized for the fastest transportation of information across the organization, which at the broadest level is characterized by large initiatives. In order for that work to move to the smaller cells (or teams) of the organization, it is broken down, or vascularized, into smaller features and ultimately into stories. These units, in combination with an arbitrary planning cadence, provide the necessary elements for understanding and measuring the pulse of the organization.

Likewise, the structure and attachment of the work muscles, or teams, have an architecture that encourages flow. It is fractal in nature, starting with teams, then aggregating to teams of teams and perhaps even larger groupings. Like cells in the body, these teams are self-organizing. This provides the most “alive” design, one that can reconfigure itself according to the stimuli provided by the ecosystem. The design of the teams matches the design of the work flowing through the system, with teams processing stories, teams of teams processing features, and the whole organization processing initiatives.

This also gives us a clue into how funding should be designed. It should match the design of the work, the structure of the teams and the planning cadences in order to provide the energy for the system to flow smoothly.


Your Framework Sucks

January 16, 2019

We can learn the art of fierce compassion – redefining strength, deconstructing isolation and renewing a sense of community, practicing letting go of rigid us-vs.-them thinking – while cultivating power and clarity in response to difficult situations.

-Sharon Salzberg

Recently I’ve seen a lot of negative comments on social media criticizing SAFe and other scaling frameworks. Some of it can be chalked up to the Agile community’s typical aversion to change (ironic, isn’t it…). You hear it whenever somebody says, “That’s not agile.” That’s just another way of saying, “That’s different.” This isn’t anything new. I remember people used to say the same thing about Kanban when it was first introduced. They’ll probably say it about the next new thing that comes along too. Some of it is the usual competitive “My agile-fu is stronger than your agile-fu.” There are a bunch of agile scaling frameworks now, and curiously, none of them has anything good to say about the others. Despite all that, there are some criticisms that I think are pretty legit. I’d like to address a few of those here.

First, the rollout plans for SAFe and other frameworks seem to be pretty static. That could just be me, after all, but I don’t see a lot of variation in the approaches to rolling out frameworks. It’s often top down, and dictated largely by the management teams or key stakeholders in the organization. I’m not arguing that isn’t the right way to do things, but I am arguing it’s not the only way to do it. The agile community at large has been experimenting with how to introduce agile to groups in a fashion that is more bottom up for a long time. This bottom up approach has many advantages. If we can get the people doing the work to have a voice in how they are organized, then we are much more likely to get their buy-in and engagement with the new organization. Those folks also know more about the work, so they are probably better suited to make key decisions about who works with whom. Bear with me here, because this is some pretty radical stuff. There are folks who are experimenting with self-selecting teams that are making impressive progress. Imagine being able to work on whatever team you like. Amazing.

For example, we should be able to introduce team self-selection into SAFe as one of multiple options for creating release trains. There is nothing about self-selecting teams that breaks or somehow violates the 9 fundamental principles of SAFe. In fact, I might argue that self-selecting teams are perfect for SAFe. I truly believe that they are much more likely to be high performing teams than teams that are selected in a top down fashion by managers. There could even be a hybrid model where the management teams define the capacity – the overall size of the release train according to funding allocation – and the teams self-select to match that capacity. It would be a combination of top down and bottom up.

The other area where I see rather dramatic over-control from the top is the emphasis on top-down epic-feature-story elaboration. Often this process is so rigid that teams feel as though their feet have been nailed to the floor. Everything is so tightly defined by the time it comes to the team that the team doesn’t feel like they have any options. All of the key decisions have been made. In a very real sense, if everything has been decided before the team sees it, then the epic-feature-story elaboration process is indistinguishable from waterfall from the team’s perspective. It’s especially bad when the teams are asked to commit to delivering those features and stories for a planning increment. Suddenly you have teams wondering what, if anything, they are contributing to the process. There certainly doesn’t feel like there is much room for learning.

I think there is a hybrid approach here where the teams take the epic-feature-story breakdown as inputs for negotiation and conversation, but they don’t commit to them. To me, epics, features, and stories are a useful language or model that product owners use to describe what they think the customer or marketplace wants. Epics, features, and stories are not actual value. They are a description of what we think value might be. They are an input to the team design process, not an output. This is important and probably bears repeating: epics, features, and user stories are an input to the design process, NOT AN OUTPUT. We want teams to commit to outputs. Specifically, something valuable. Software that does something useful is valuable. So we want them to commit to delivering some software that we can use to do something valuable. So we should stop asking teams to commit to the inputs, and instead ask that they commit to outputs. Commit to value. That would cure a whole lot of dysfunctions that arise from asking teams to commit to delivering inputs.

There is a transformation that needs to take place between a request defined by epics-features-stories and the resulting useful software that is produced. This is where the sausage gets made. The team uses features and stories to try and understand in simple terms what is being requested of them. Then they integrate that model with their own understanding of the domain and the working system that they have before them. Even that is an incomplete picture of the world. To really do well, they have to use all of this incomplete information to test their assumptions against the system and the customer to get some feedback. They find unanticipated problems, and they have to have the freedom to change fundamental assumptions in order to arrive at what is hopefully something very useful to the customer. That’s never a given, there are always lots of unknowns, and we have to allow for that.

These are a couple of examples of how we can experiment and play with how the framework actually gets rolled out. There is lots of room for variation – that’s why they call it a framework to begin with. There’s a roadmap for rolling out SAFe. If you are just starting out, that’s probably the best place to begin. However, I think that as experienced practitioners, we need to be exploring many different ways of rolling out SAFe (or whatever your framework of choice happens to be). Not all customers are alike, especially when it comes to scaling agile. We need to be flexible and creative in the manner in which we implement our frameworks. In and of themselves, frameworks provide a set of overlapping ideas that can help us start to deliver value amid the chaos that is often the norm in so many places. However, we need to implement those frameworks using all the creativity and imagination at our disposal. This is how we can best serve our customers.