Phones suck (a rant)

Every conference call I’m on … I really think it is every one … has issues. We can’t hear you. Can the person whose dog is barking go on mute? We’re seeing a screen, but it’s not a screen that appears relevant. Wait, I need a therapeutic reboot. We still can’t see your screen. Can everyone go on mute? There’s a lot of background noise. What did you say? Say that again? I didn’t hear anything after “holy crap” – can you say everything again after “holy crap”? OK, I can almost hear you; I’m getting about every fourth packet. I give up, I’m going to hang up and dial in again – I hope that’s better. Or what I just did on a call: silently hang up and fail to dial back in. It just wasn’t worth it.

Individual calls – just two people – are not far behind in quality. They’re just poor quality most of the time. I long for the good old days. Yes, my phone wasn’t in my pocket, but every time you picked up the phone, it worked, and it was high quality. By the way, I get pretty good service from Skype, of all things; it usually works pretty well. However, now that it has been acquired by Microsoft, I’m not expecting great things going forward. We’ll get more features and poorer quality.

Collaboration between people in far flung locales requires excellent voice, screen share and video capabilities. We just don’t seem to have that – not even close. I hope one day these facilities improve. Building great things requires seamless collaboration.

Impediments Log

#impedimentsLog

At a client site … “Hello. Are you the person who replenishes supplies?” (I’m betting he is. He’s standing in front of the supplies locker with a cart full of supplies.) “Why, yes I am.”

“Great,” I say. “Just wanted to let you know, we’re taking on some new development practices on this floor and a couple others. You’ll probably find that we’re going through more sticky notes and marker pens.”

“Oh,” he says. “Well, I have limits. I can’t leave you with more than 6 sticky-note pads of each kind. If you want more than that, you have to order them on your own.”

Lots of different possible replies occurred to me! I went with, “Well, thanks, appreciate that. I know you’ll do what you can.”

How far up one reporting chain and down another would I have to go to fix this? Who knows. #buyYourOwn #BringYourAgileKit #pickTheBattles #theyllFigureItOut …

The Power of 3

I’ve always been a fan of 3. Third time’s a charm, at least 3 questions to indicate that you’ve been thoughtfully listening, 3 strikes you’re out, 3 ring circus. 3 seems to be pervasive, if not cool.

I heard a talk at an Agile conference a couple years back claiming that the minimum viable product, when it comes to Agile, is these 3 elements. If you don’t have all 3 of these, your ability to be Agile is severely compromised:

  • a genuine team
  • a single-source and all-inclusive backlog
  • frequently executing stuff.

I find that a very useful set of 3 comprising an MVP for Agile, and I use it all the time. Now Mike Cohn of Mountain Goat Software has a wonderful 3-question assessment he published recently. And I like it, so of course I’m giving you the reference, and I’ll summarize it briefly. Nice article, Mike!

Let us assess whether an organization is at least moderately Agile, and whether trying to get better would be helpful.

  1. Assess Agile team functionality. How frequently do your teams fully integrate their products?
  2. Gauge leadership commitment to Agile. How do you respond if there’s a crisis or problem that makes a deadline or planned milestone no longer possible to achieve?
  3. Uncover hidden Agile dysfunctions (yellow or red flags). Tell me about your best Scrum Master or Product Owner.

Wow, that’s easy. Well, sorta; obviously it depends on what you do with the 3 answers. Of course, Mike goes into more depth in his article. It’s an easy read, and I recommend it. (I recommend it so much I wrote a blog entry recommending it!)

https://www.mountaingoatsoftware.com/blog/three-questions-to-determine-if-an-organization-is-agile


Agile PMOs and COEs

I have been conversing with a colleague recently about Agile PMOs[1] and COEs[2]. For those of you who voraciously read all my posts (I may be the only one) allow me to share my thinking.

I tend to agree with managedagile.com[3] as to the basic functions of an Agile PMO which, by inspection, are markedly different from a traditional PMO:

Agile PMO functions on behalf of the development and delivery organization: 

  • process/progress monitoring
  • process guidance
  • process support
  • liaison from the overall IT organization to teams, via their Scrum Masters

We note that PMO members are typically a separate small team. For me, their value at this level of operation isn’t clear, as expressed by scrum.org[4]. If they are, in addition, the Scrum Master Chapter or Guild (to use the Spotify parlance[5]) then they are adding a bit more value to the organization. If they’re pretty focused on measurements and feedback, then they add even more value. That makes them pretty close to an Agile COE, but without significant support for Agile adoption. These days, I find adoption support to be crucial for most organizations, so the COE is a better model than the PMO and supports most, if not all, the PMO’s functions:

COE functions (in addition to the above):

  • communicating – no, let’s instead say socializing – the business need, urgency, vision, strategy, and roadmap for change (i.e. organization-wide adoption of Agile); these are the artifacts driving change
  • developing the implementation plan for transformation to Agile, managing the transformation backlog, and managing dashboards regarding it to make progress visible
  • establishing a measurement plan, with metrics, and facilitating the execution of that plan
  • developing and executing a training plan to train personnel in Agile methodologies, e.g. Product Manager, Scrum Master, team member, etc., also continuing education
  • identifying value streams and products around which to base teams and teams-of-teams
  • fostering appropriate communities of practice (Spotify model might say guilds and chapters)
  • maintaining a highly visible presence, probably somewhere well known on the internal intranet – the one-stop-shopping concept
  • maintaining a cadre of good mentors, coaches, consultants to assist wherever needed
  • fostering a relentless improvement attitude with innovation at its root

Some of the COE members typically comprise a separate small team, but their membership also includes many people who might not be considered part of the team – the guiding coalition who developed the artifacts driving change, the steering committee driving the change day-to-day, the mentors/coaches/consultants who are on the ground making it happen for the organization.

To sum up, Mindtree[6] says it’s all about culture, community, innovation and expertise. Some say that a COE should NOT be monitoring; your choice, depends on the organization. Finally, here is a SAFe® specific version of the COE, or LACE[7], that I’ve found helpful also.


[1]Program Management Office or Project Management Office – one is never sure!

[2]Center of Excellence, an ambiguous phrase if I ever saw one

[3]http://managedagile.com/what-is-an-agile-pmo/

[4]https://www.scrum.org/resources/blog/agile-pmo

[5]https://labs.spotify.com/2014/03/27/spotify-engineering-culture-part-1/ – by the way, I think Spotify is a really good and visible example that can be used to show an IT department that their value to the company is significant and that IT should never be thought of as a cost center.

[6]https://www.mindtree.com/about/investors/annual-reports/annual-report-2015-2016/cultivating-future/agile-center-excellence

[7]https://www.scaledagileframework.com/LACE/

Productivity and Required Stuff

So I just tried to take an online quiz that is required for access to certain needed infrastructure in a particular environment I happen to be involved with. The quiz is a way that the environment’s principals can understand that those who access the infrastructure are aware of certain security consequences for poor behavior.

It turns out this quiz has two prerequisite courses, A and B. I’d taken A. On to B. B has no prerequisites. However … I was unable to take course B, because I kept receiving an error message that I must take its prerequisites first. Only it states that there aren’t any …

After a bit, on a lark, I thought, why don’t I just try to take the quiz? So I tried that. However, in order to take the quiz, one must first enroll to take it. I could not find a page allowing me to press an “enroll” button, only pages containing a “take the quiz” button. I finally found a way: one can search on the quiz’s course number; this gives one a different screen than before; the new screen had an “enroll” button.

I enrolled. I was now 20 minutes into a process to take a 10 minute quiz.

So finally, I took the quiz. Again, it has prerequisite courses A and B, but the system didn’t require me to have taken them this time. I clearly have not taken B but it did not seem to matter.

Anyway, I didn’t pass: 68%, with 70% required. I figure this is par for taking a quiz where I have been unable to take one of the two prerequisite courses! There were acronyms I did not understand. There was English so horribly written that no one could understand it. There were multi-selects where I got 2 selected but a 3rd was required, and where I got 3 selected but one was incorrect – and both of these situations count as “entirely incorrect” answers in this quiz’s architecture.

No matter, I simply took the quiz again. Immediately. Now that I knew how to take it, I scored a 96%. What, not 100%? Hah! There were some new questions asked in place of old ones, but not too many. Of course, I still do not know the acronyms. I still do not understand some of the concepts.

Total time to take 10 minute quiz and pass: 45 minutes. Almost no new knowledge gained. No wonder productivity is poor and frustration high.

Measurements

I have been working slowly on this “big” measurements post. Don’t run away at the sight of the word “big”! Measurements are really important – without them you are operating open loop. And yet I see so many teams and organizations who either don’t measure, or measure stuff they don’t need, usually at the expense of measuring stuff they do need. I verify Douglas Hubbard’s Measurement Inversion law[i] at nearly every client I visit, and that is really sad. Perhaps this post will help someone, somewhere, improve their measurements and objective feedback discipline. Speaking of which, I would love your feedback on this post – thanks.

Contents

  • Three-tier Measurement Loop – this brief framework discussion is necessary to some of the rest of this post’s content
  • Simple Steps to Measuring (Closing the Loop!) – I really wanted to start here; too many measurement posts start with theory and are less helpful to those who simply want to measure the right stuff
  • Practical Measuring Guidelines – ok, now some guidance, I’ve tried to keep it practical; please pay special attention to GQM and go see the GQM reference if that’s new to you
  • Disciplined Agile Delivery’s Principles on Measurement – Scott Ambler has some good advice on measurement and I’ve repeated some of it here for good measure
  • A list of potential (mostly control) measures for Agile – Agile leverages control measures a lot, and they can be useful; one of the best sources is Agility Health Radar (for more on AHR see near the bottom of the Guidelines section) but here is my list as well

The Three-tier Measurement Loop

I and some of my colleagues in a former life frequently used a three-tier measurement framework to reason about needed measurements, and I still do so. The loop is straightforward and is a necessary precursor to the discussion that follows:

  • Business Goals: first, determine/understand your desired business level results, such as revenue, profit, etc.
  • Operational Objectives: set operational objectives by reflecting on your situation and your business goals to determine what short-term outcomes you wish to achieve. For example, “we want to improve our profit margin, so reducing cost would help with that; therefore we want to reduce defects and move them to the left, since that improves our cost profile.” Operational objectives might include quality, predictability, perhaps time-to-market (though this is sometimes perceived instead as a business goal), and productivity (though productivity is notoriously difficult to measure without adversely affecting other things like team cohesiveness and capability).
  • Process Directives: if these operational objectives are to be achieved, what process changes should we make to do so? A common answer right now is for shops to decide to take on one or more elements of an Agile process such as Scrum. This is a reasonable process decision.
  • Process Control Measures: if we have decided on process directives, how do we know that we are successfully executing in accordance with these decisions? We need measures, and perhaps a dashboard, to tell us these things. If we’ve decided, for example, to “become more Agile over the next N months”, then presumably we’ve defined what that means and we need to know how we’re doing toward that goal. Control measures should be designed to tell us this, and they should be placed on a dashboard where they are easily found so that question can be answered by anyone at any time.
  • Operational Outcome Measures: whatever our operational objectives are, we need to measure those outcomes to understand whether we are achieving those objectives, e.g. defects and defect rates for quality.
  • Business Result Measures: my guess is your company already has these. If not, then you’re probably pretty small and privately owned; nonetheless, now is a good time to put them in place!
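The three tiers above can be sketched as a simple data structure that ties each measure to the tier it feeds – a minimal illustration only, and every goal, objective, and measure name here is a hypothetical example, not a prescription:

```python
# Minimal sketch of the three-tier measurement loop.
# All goals, objectives, directives, and measure names are hypothetical.

tiers = {
    "business_goals": {
        "improve profit margin": ["quarterly revenue", "gross margin %"],
    },
    "operational_objectives": {
        "reduce escaped defects": ["defects per release", "defect-removal rate"],
    },
    "process_directives": {
        "adopt Scrum on all teams": ["sprints completed on cadence",
                                     "% teams running retrospectives"],
    },
}

def dashboard_rows(tiers):
    """Flatten the loop into (tier, goal, measure) rows for a dashboard."""
    return [
        (tier, goal, measure)
        for tier, goals in tiers.items()
        for goal, measures in goals.items()
        for measure in measures
    ]

for row in dashboard_rows(tiers):
    print(row)
```

The point of the structure is traceability: every control measure at the bottom can be walked back up to the business goal it ultimately serves, which is exactly the loop the potato-chip vignette below closes.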

Three-tier measurements framework

My mentor Dr. Cantor would speak of the following vignette as a way to motivate this three-tier scheme. Suppose you wish to manufacture tasty potato chips (crisps if you’re from Great Britain). You put together a conveyor that carries the potato slices through a salt shaker at some conveyor speed, and then through an oven. An important element of selling more chips is flavor, and the salt content of each chip is part of that. You can vary the speed of the conveyor, or vary how much the salt shaker shakes – those are the two things you can do to vary the chip flavor that results from salt content. (There is no flavor knob you can twiddle – clockwise for better flavor! – much as we might wish we had one.)

Thus, your desired business result is to sell more chips (revenue). Your operational objective is to get the salt content just right (so that you sell more chips). Your taster might be telling you, “Um, not quite salty enough.” Therefore, you decide to decrease the conveyor speed a little, and/or increase the shake on the salt shaker. You note the shake frequency gauge and the conveyor speed gauge, adjust the two knobs a little, and note the changes to the two gauges to confirm your actions. In doing this you’re hoping that the taste will improve, but you don’t know. You do know that you’re making things a bit saltier. This first step is feedback from process control measures of shake and speed in light of the process decision regarding them.

Now your taster tastes the affected chips. “Tastes better,” she says. Now you have feedback from an operational outcome measure to your operational objective of better taste. You hope this translates into improved revenue. Finally, after a few months (business measures often lag!) you see your new revenue numbers and determine whether the taster’s expert opinion is reflected in your business performance, completely closing the feedback loop relevant to your process changes. Note that no matter how expert your taster is, she might not have gotten it right. It could be you need to try again, and without business measures you can’t know for sure. Thus you need all three levels of measurement to properly close a process feedback loop.

    Murray’s potato chip example

Simple Steps to Measuring (Closing the loop!)

With the three-tier measurement loop understood, we can outline some steps to measuring. These steps leverage GQM[ii] – Goal/ Question/ Metric, an important tool for getting measurements right.

  1. Take stock. What are you measuring today? If the answer is “nothing” then skip to step 4.
  2. Of those things you are measuring, which of them are you using to answer important questions about your capability, or your progress toward a process decision/directive, or operational objective, or business goal? Keep those.
  3. Of those things you are measuring, which are you not using in a way that can help improve measured results? If they’re completely automatic and won’t go stale, you can do nothing with them for now. However, if you invest effort to measure them, now is the right time to stop performing those measurements.
  4. Now that you have surveyed your existing measurement landscape, what more are you asking about (GQM’s “Q” for Questions) whose knowledge would improve your business results against your business goals (GQM’s “G” for Goal)? This will require knowledge of how you translated your desired business goals into operational objectives, and in turn how those have been translated into process directives. What do you need to measure to know that your process decisions are being carried out? What do you need to measure to know that your outcomes against your operational objectives are improving? What do you need to measure to know that your desired business results are being achieved?
  5. From the list in step 4, pick no more than one item in each tier – because we should start small and simple. (GQM’s “M” for Metric.) Pick at least one item in the bottom (controls) tier.
  6. Implement the (up to) three measures you just picked. The implementation can be manual if that’s easier until it is understood that you have it right – that it’s answering your questions. Once you know you have it right, you must automate each measure if you can.
  7. If you cannot automate a measure that you need, then understand the business case for the measure in monetary terms. It won’t be long before someone is asking after the high cost of the measurement, and you will need to justify the expense by demonstrating the value that accrues from the measure.
  8. Do it now. It is most important to realize that most companies are measuring very little that they need to measure. Without closing the loop, you have no objective way to improve. Go measure something useful!
  9. Add or remove about one measure at a time with some reasonable frequency. Keep it simple, keep it small, but no smaller than needed.
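Steps 2 and 3 above – keep measures that answer a question, retire the rest – are the heart of GQM’s “Q”. A tiny sketch of that triage, with entirely made-up measure names and questions:

```python
# Hedged sketch of steps 2-3: keep only measures tied to a question (GQM's "Q").
# Measure names and questions are invented for illustration.

existing_measures = {
    "velocity":        "are we delivering at a predictable pace?",
    "lines of code":   None,   # answers no question we care about
    "escaped defects": "is quality improving toward our objective?",
    "hours logged":    None,
}

def triage(measures):
    """Split measures into those answering a question and those to retire."""
    keep = {m: q for m, q in measures.items() if q is not None}
    stop = [m for m, q in measures.items() if q is None]
    return keep, stop

keep, stop = triage(existing_measures)
print("keep:", sorted(keep))   # measures tied to a question (step 2)
print("stop:", sorted(stop))   # candidates to retire (step 3)
```

The same structure works in reverse for step 4: start from a question you cannot currently answer, and the gap tells you which metric to add next.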

Practical Measuring Guidelines

  • Measure what you need to measure. This sounds simple but it is not. Many things are easy to measure; they are your tool’s default or whatever. However they often aren’t worth much. As previously mentioned, Douglas Hubbard notes the high cost of getting this backwards. Corollary: look often for measures you need, and start measuring them.
  • If what you need measured is expensive to measure, then develop a (lightweight) business case for it; don’t just abandon it due to cost. If you think you need it, your gut is telling you that it’s worth the expense, so do the analysis to know, and fight for it if it’s worth the investment. Suggestion: you may need to account for substantial uncertainty in the business case. That is a separate topic …
  • Do not measure what you don’t need. Most measures that you don’t need waste time and effort. They also get stale and present false or misleading information. Corollary: it’s entirely reasonable to stop measuring something no longer needed. Corollary: look for stale measures often, and remove them, especially if there is any chance they can become stale and/or present false/misleading info.
  • Measure a little, not a lot. One or two good measurements are more valuable than 10 poor measurements.
  • Anything can be measured. Douglas Hubbard asserts this[iii], and I have found it to be true. Corollary: if you need it, then don’t give up.
  • All measures should be visible and available. Make a good dashboard. Your measures are trying to tell everyone something, but if they’re hard to find, no one will hear, which is both a waste and a failure to close necessary feedback loops. Thus, a reasonable dashboard is always worth the expense of creating it.
  • No measurement need be automated until you know it is a measure you need. But …
  • All measurements you need shall be automated as soon as possible. Failure to do so leads to expense, drudgery and fudgery, especially fudgery, otherwise known as making stuff up.
  • If you measure nothing else, measure the Cost of Delay (COD). This is Don Reinertsen’s principle[iv], oft-repeated by Dean Leffingwell and for good reason. This may be Reinertsen’s most important principle, as it leads to less waste, better flow, and system-optimal behaviors. It is also the basis for the prioritization tool WSJF[v].
  • To determine what you need to measure, use Vic Basili’s GQM – Goal, Question, Metric. The Wikipedia page on GQM is as good a place to start as any and is referenced previously. What are your goals? Use them to determine your questions. Determine the metrics needed to answer your questions and then implement them. Wikipedia specifies 6 steps and you should follow all 6. I often think of the word metric as the identifier for the elements needed to answer questions, and then measurement is the mechanism and implementation for actual data collection of that which the metric identified. It can pay to be precise in this way about the nomenclature.
    1. Identify improvement goals
    2. Generate questions
    3. Identify potential measures
    4. Introduce mechanisms for data collection
    5. Collect, validate, analyze the data to provide feedback
    6. Take action
  • Think of measures as being in three tiers – the business results, operational objectives (both are outcome measures), and process control measures (i.e. not outcomes) of which I wrote in a section above. By the way, an extensive set of Agile-related control measures can be found behind the Agility Health Radar (AHR) intellectual capital, see here[vi].
  • Post and well socialize these guidelines. Everyone needs to know and follow them. Your Center of Excellence might be a good “place”.
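Since the guidelines lean on Cost of Delay and WSJF, here is a minimal sketch of WSJF prioritization. The jobs and scores are hypothetical; the only real content is the formula itself, Cost of Delay divided by job size, with higher scores done sooner:

```python
# Sketch of WSJF (Weighted Shortest Job First) driven by Cost of Delay.
# Job names and scores are hypothetical. In SAFe, Cost of Delay is often
# estimated as user/business value + time criticality + risk reduction.

jobs = [
    # (name, cost_of_delay, job_size)
    ("checkout redesign", 21, 8),
    ("audit logging",      8, 3),
    ("dark-mode theme",    5, 5),
]

def wsjf(cost_of_delay, job_size):
    """Weighted Shortest Job First score: higher means do it sooner."""
    return cost_of_delay / job_size

ranked = sorted(jobs, key=lambda j: wsjf(j[1], j[2]), reverse=True)
for name, cod, size in ranked:
    print(f"{name}: WSJF = {wsjf(cod, size):.2f}")
```

Note how the small, moderately urgent job outranks the big, high-value one: that is the whole point of dividing by job size, and it is why COD without size (or size without COD) gives the wrong ordering.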


CapEx / OpEx and Agile

For decades it’s been easy. Once the application is going to be real, developing it is a capital expenditure. Waterfall made determining that pretty easy, but Agile can upend the usual considerations. As we behave in a completely iterative manner, which of the things we’re doing are capitalizable? Some of the concerns Finance often has with Agile are quite clearly described in Dean Leffingwell’s SAFe post (link below), which is quite prescriptive. Scott Ambler and Disciplined Agile have an excellent, less prescriptive post on this topic as well. Scott and Mark identify a very useful yet objective milestone on which to base the decision to move from operational to capitalizable expense, and they consider aggressive vs. conservative strategies too. I recommend reading both articles (neither is long), then making your decisions about capitalizing expenses using a short set of clear rules that you prepare, share with Finance, agree on with Finance, and then strictly adhere to unless Finance agrees on an exception. There ya go, wasn’t that easy? Read on …

A list of Agile control measures

I’ve been aware for a while of a great list of Agile control measures, well, mostly control measures. By control, I mean that they are measures that tell you how well you are doing at being (or at least doing) Agile. By itself that does not guarantee better outcomes, but it does tell you if you are doing what you promised yourself you’d do when you said, “We want to adopt a more Agile paradigm.” A few measures under the heading Performance are outcome measures, like predictability and quality. They tell you whether the changes to your process are having the desired effect.

Anyway, that list of Agile control measures is from Agility Health Radar. They have a great program and tool to help you measure the items on the list. But the list itself they publish and hand out to people. It’s pictorial though, and I wanted a text list … so here it is. Up to you to figure out how to measure … or go buy their product, it is a good one.

Foundation
  • Agility: Sustainable Pace, Self organization, Technical Excellence, Planning and Estimation, Effective Meetings
  • Team Structure: Size and Skills, Allocation and Stability, Environment

Clarity
  • Vision: Vision and Purpose, Measures of Success
  • Planning: Short-term, Mid-term, Long-term
  • Roles: Roles, Generalizing Specialists

Performance
  • Confidence: Product Owner, Team, Stakeholder
  • Measures: Predictable Velocity, Time-to-Market, Value Delivered, Quality, Response to change

Leadership
  • Team Facilitator: Effective facilitation, Servant leadership, Impediments management
  • Technical Lead: Servant Leadership, Technical Leadership
  • Product Owner: Engagement, Backlog Management
  • Manager: Servant Leadership, People Development, Process Improvement

Culture
  • Team Dynamics: Happiness, Collaboration, Trust and Respect, Creativity, Accountability
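If you do want to “figure out how to measure,” one lightweight option is to hold the list as nested data and roll simple 1–5 self-assessment scores up to category averages for a dashboard. This is only a sketch: the scores below are invented, and a real assessment (AHR’s or your own) would cover every item, not this subset:

```python
# Sketch: AHR-style categories as nested data, with hypothetical 1-5 scores
# rolled up to a per-category average for a simple dashboard.

assessment = {
    "Foundation": {"Sustainable Pace": 4, "Self organization": 3,
                   "Technical Excellence": 2},
    "Clarity":    {"Vision and Purpose": 5, "Measures of Success": 2},
    "Culture":    {"Happiness": 4, "Trust and Respect": 3},
}

def category_averages(assessment):
    """Average the item scores within each category."""
    return {
        category: sum(scores.values()) / len(scores)
        for category, scores in assessment.items()
    }

for category, avg in category_averages(assessment).items():
    print(f"{category}: {avg:.1f}")
```

Even this crude roll-up surfaces the conversation starters: a low item (Technical Excellence at 2, here) dragging down an otherwise healthy category is exactly the kind of yellow flag the measures are meant to expose.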

Distributed Agile – Executing Agile with non-collocated teams; a call to Coaches

I am motivated to write this entry on geographically distributed Agile teams because most clients now, at least those of significant size, seem to have issues stemming from this situation. Indeed, a timely tweet from Scott Ambler on 27-April-2018, referencing his 2016 Agile At Scale survey, says it all: “Less than one-third of #agileteams are co-located! http://dld.bz/fxnJr Isn’t it amazing what surveys discover?”

I have seen three major reasons/scenarios for geographically distributed teams, and I am sure there are more reasons I’ve not seen yet:

  1. Our company is ginormous (technical term!). Most of us jumped onto the big “offshoring”[1] bandwagon because our CFO was so excited to see the reduced development costs that we projected would result. However, we didn’t effectively consider the impact on value associated with offshoring, including the reduction in collaboration that would result from offshoring. Collaboration between team members “on the other side of the world” is very difficult.
  2. Our company is ginormous, and we got this way by buying other companies. Sometimes their offices were located many timezones away. As we merged, people were placed in fragmented fashion onto teams based on functional expertise. Now we have teams where members are in multiple timezones, some having been at the acquired company and others at the acquiring company, and the cultures are still far apart.
  3. The technology was developed in, say, China, but it is applicable only or mostly to, say, a US and/or European market. Our dev teams are in China because that is where the technology expertise resides. However, our product owner needs to be in the US and Europe, where the market for the product is understood. Therefore we have split our teams up in that fashion. It is a constant struggle to keep the bandwidth of collaboration sufficient between the dev teams and the product owners.

A common coaching pattern that arises from this situation occurs when an Agile coach consulting to such a company inevitably perceives collaboration difficulties. S/he asserts that one of the root causes of the collaboration difficulties is that the teams are not collocated, and/or that they spend insufficient time interacting face-to-face. After all, the agilemanifesto.org principles[2] are clear:

  • “Business people and developers must work together daily throughout the project”,
  • “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation”.

My call to the coaching community is simple: this is no longer a useful answer. That may be the beautiful horse we rode in on, but the horse has a broken leg and we need to put it out of its misery. Too few clients can, or will, take the advice. Scott Ambler’s numbers bear this out. To help make our client successful, we need to recommend something else. We can start by asking, “What if I said, ‘You have to collocate your teams?'” but they are usually going to tell you that’s not an option, and sadly, they mean it. If you persist, then their response may well be, “We don’t need you – you are no help to us.”




What shall we tell them instead? How do we make our client successful? I don’t yet know. I haven’t gone down this road enough times yet. But I’ve developed some options, and I’m interested in knowing if you’ve tried any of these options, how successful they have been, and what other options we need to begin also recommending to our clients. Some definitions help to get us started, and Scott Ambler in this article ([3]) does a good job with the definitions – indeed, the entire article is excellent.

Coaching Options for non-collocated teams

  • Invest in really good collaboration technology. I mean the kind that just works, and initializes in seconds. It’s available, I have seen it, and it really helps. You walk into a conference room, you push a button or dial a number, and there on the screen in the front of the room is another conference room, half way around the world, and your mates are there, and the video is clear, and the audio is clear, and screen sharing is easy. It just works, and it takes 15 seconds not 15 minutes to initialize. I don’t know how much it costs to get that, but its justification is probably there in the time being wasted, along with the frustration and distraction. It’s certainly there if you miss a market window or fail a quality gate because you can’t get the needed level of collaboration. An Agile principle covers this: “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.”
  • Look for an early opportunity for people frequently collaborating to meet in person, face-to-face, at least once. And again when possible. It makes a huge difference.
  • Look for patterns to improve collocation and cross-functionality. Agile teams that are not cross-functional are problematic. The way I often tell it, “If Fred isn’t here today, you can wait until tomorrow for Fred, who will do the task by himself in an hour, or you can assign two people to do the task Fred normally does. Even though they’ll take twice the time and four times the effort, they’ll learn how Fred does it, might even figure out a better way, and the rest of the team will be awaiting the result for only 2 hours instead of 8 (or 24).” Once again, Agile principles cover the situation: “The best architectures, requirements, and designs emerge from self-organizing teams.” And (derived from Lean) “Simplicity–the art of maximizing the amount of work not done–is essential.” A Scaled Agile principle[4] also applies, “Apply systems thinking.”, i.e. optimize globally, not locally.

How do these principles manifest here? Well, suppose we have two distributed teams, and both are half in a Philadelphia office and half in a Mumbai office. Take the parts of the two teams in Philadelphia and make them a team. Take the parts of the two teams in Mumbai and make them a team. For a while, it may be that neither team performs well. They’re missing skills! But they will acquire those skills. A pattern like this for reorganizing teams to make them more collocated and more cross-functional can usually be found amongst the teams in a distributed organization.

  • Be extremely loath to place individuals in situations where they are working by themselves, e.g. with no office, or in a different city from everyone else. They never get to collaborate with high bandwidth, and honestly that’s just depressing. Actually, I’ve seen this several times recently, and it didn’t seem depressing at all to the people involved, and I wondered why. I eventually realized that the people in these situations do not realize what they are missing. Contacting someone several times before finally getting them to cooperate, then calling several more times before finally getting them to do it right, seems normal to these people. Those involved are often the ones the company believes are true experts and SMEs, so it doesn’t seem unusual to anyone that conveying what’s really needed is difficult and will take time. I think that’s hogwash. When they collaborate in person it does not take nearly as long, and the result is usually higher quality.
  • Moreover, such people fragment their capabilities by having too many things on their plate – what Agile would call a high WIP limit – so their efficiency is reduced that way too. The group of people known as architects seems to have this problem very often. I’ve spoken to many architects who wish they could do their day job, but instead they spend the whole day telling the next team what architectural guidelines, patterns and constraints apply to that team. There is too much information in their heads, and too much of it is known only to them! (I’ve heard this referred to as a low truck number – if they get run over by a truck…) Experts/SMEs must write down the basics, refer practitioners and teams to what’s written first, and socialize the location of that information. That way, when there is a conversation, it doesn’t start with the basics; it starts with the exception of interest – this is higher bandwidth.
  • When you have a distributed team, you need the skills of a Scrum Master in each location. The Scrum Masters should meet regularly. One should be a chief of sorts for the team.
  • When you have a distributed team, you need the skills of a Product Owner in each location. The Product Owners should meet regularly. One should be a chief, who absolutely must have absolute final say (did I sufficiently emphasize that?). However, there is no issue with one of the Product Owners making a decision that is later undone by the chief Product Owner. She was doing her best, and most of the time she’ll make the right decision, avoiding blocking the team. When she has a miss, not much work must be undone because she and the chief PO meet regularly. (SAFe® principle #9: “Decentralize decision making.”)

There are also a number of large-organization antipatterns we can recognize, and coach to improve:

  • Antipattern: IT is a Cost Center. This is never true. To prove it, next time you hear “My IT Department is treated as a Cost Center”, offer that they should all turn the lights out and go home. The lowest cost I can offer to my company is to cost them nothing, right? The refrain will be immediate: you can’t do that! Well why not? You wanted me to lower costs and I have done so. But the IT Department provides… whatever they say next, it’s value. It’s worth money. Determine the approximate monetary value if you must, to make your point. IT is a Value Center, and everyone needs to treat it that way.
  • Antipattern: we must measure Productivity and Utilization. Gaaah. If you measure utilization, you stifle your teams. Lots of studies out there demonstrate this. I spent much time in my youth going after productivity measures. They’re worthless. Measure predictability and value accrued. Measure cost of delay. (Don Reinertsen: “If you measure nothing else, measure the cost of delay.”) But don’t measure productivity or utilization of knowledge workers.[5]
  • Antipattern: SMEs and other True Experts can work alone. Nope, not really. It’s not very efficient and it’s ineffective. See my coaching discussion bullet above.
  • Antipattern: SMEs and other True Experts don’t need to document their knowledge. Yes they do! Use the SAFe® concept of capacity allocation[6] to ensure that documentation of the things the Expert seems to say all the time is performed regularly. Socialize the location of that information. Use it to raise the bandwidth of the next conversation.
  • Antipattern: We support our meetings with frustrating collaboration technology. Oh my golly, this happens so often you would think they actually word their goal exactly that way. You walk into the conference room and the technology isn’t ready for another 15 minutes. Incredibly wasteful, and very frustrating and distracting too. This comedy video is just depressing, ain’t it? ([7])
  • Antipattern: We’ll ignore the latest acquisition’s effect on overall product architecture; they’ll figure it out. This is a simple application of Conway’s Law[8], which, roughly stated, is “Organization drives architecture.” Mel Conway said, “If you have four teams writing a compiler, you’ll get a four-pass compiler.” You can’t ignore the acquisition in this way. Figure out the end architecture you want, and mold the whole development and delivery organization around that architecture. Just do it.
  • Antipattern: We ignore cultural differences during an acquisition. This is what killed Daimler-Chrysler.[9]
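
The “measure cost of delay” advice in the Productivity and Utilization bullet above can be made concrete with a back-of-the-envelope prioritization. The sketch below uses the cost-of-delay-divided-by-duration scheme (which SAFe® calls WSJF); the feature names, dollar figures, and durations are all invented for illustration:

```python
# A toy illustration of prioritizing by cost of delay, per Reinertsen's advice.
# WSJF (Weighted Shortest Job First) = cost of delay / job duration:
# do the highest-ratio items first to minimize total delay cost.

features = [
    # (name, cost of delay in $/week, estimated duration in weeks) - invented numbers
    ("checkout-redesign", 30_000, 6),
    ("fraud-alerts",      12_000, 2),
    ("dark-mode",          2_000, 1),
]

def wsjf(feature):
    """Cost of delay divided by duration; higher means schedule it sooner."""
    _, cod, weeks = feature
    return cod / weeks

# Sequence the work highest WSJF first.
schedule = sorted(features, key=wsjf, reverse=True)
for name, cod, weeks in schedule:
    print(f"{name}: WSJF = {cod / weeks:,.0f}")
```

Note that the biggest feature (checkout-redesign) does not automatically win: fraud-alerts has a lower cost of delay but is so much shorter that delaying it costs more per week of queue time.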

Finally, an update to this post: my colleague Bob posted this blog entry in May 2020, and I find it quite germane. I hope this helps as you work to improve the effectiveness of your distributed Agile teams.


[1]Offshore is a term I dislike, but it is in common usage so I am using it in these pseudo-quotes. In India, it is the US that is offshore. Let’s just say where the people are, e.g. U.S., India, China, Ireland … wherever they are. While we’re at it, let’s call those resources people, or human beings, or team members, since that is what and who they are.

[2]agilemanifesto.org and its page www.agilemanifesto.org/principles.html

[3]http://www.disciplinedagiledelivery.com/agility-at-scale/geographically-distributed-agile-teams/

[4]https://www.scaledagileframework.com/safe-lean-agile-principles/

[5]References include SAFe® Principle #8: “Unlock the intrinsic motivation of knowledge workers”, and Dan Pink’s books, or even just his video: https://www.youtube.com/watch?v=u6XAPnuFjJc.

[6]Search for “Optimizing Value and Solution Integrity with Capacity Allocation” here: https://www.scaledagileframework.com/program-and-solution-backlogs/

[7]https://www.youtube.com/watch?v=kNz82r5nyUw

[8]http://www.melconway.com/Home/Conways_Law.html

[9]https://www.forbes.com/sites/georgebradt/2015/06/29/the-root-cause-of-every-mergers-success-or-failure-culture/#7812cdcd305b

Rock Engineering, Gold Plating, and Features

I used to have this post somewhere else, but it was inside a “previous life”. It got impressively long (see gold-plating below), and even talked about Beethoven. I’ll start this out small. Maybe I’ll enhance it with some of those additional stories at some point.

The intent is simply to identify and describe anti-patterns associated with Requirements. Mostly this is for fun, because we don’t do Requirements anymore in Agile (although the lessons within these anti-patterns certainly do have applicability in the Lean/Agile world). Instead, in Agile, we often use the XP constructs user stories and epics, or sometimes just a simple and regular flow of enhancements and other issues. In OO we use use cases and scenarios. In any case, we’ve recognized that they’re not really requirements; they are – to quote one of my mentors, Walker Royce – desirements. That is, they’re negotiable.

Contents

  • Rock Engineering anti-pattern
  • Gold-Plating anti-pattern
  • Features

Our first anti-pattern is Rock Engineering. It is a form of crafting a solution to a problem that doesn’t exist, or a problem that is poorly understood. It goes like this …

King: “I need a rock. You there, go bring me a rock.”

You: “Yes, Sire, I shall bring you the finest rock in the land.” (and so you go find a rock.)

You: “Sire, I have brought you a rock, here it is, it is the finest rock in the land.” (Present with flourish)

King: “This is not the rock I need. It is too small.”

You: “Yes, Sire, I shall go find you the rock you need, the finest rock in the land.” (and so you go find another rock)

You: “Sire, I have brought you another rock, here it is, it is the finest rock in the land.”

King: “This is not the rock I need. It needs to be brown, and this rock is gray.”

You: “Yes, Sire, I shall …”

Of course, we can go on like this for hours until we accidentally bring the King a rock that he finds suitable, and even then it might not be exactly what he was looking for. What we need to do is elicit some requirements – shorten the cycle by focusing on a solution that meets the actual expressed need. To do this, one is going to have to ask questions! Of course, if every time one asks the King a question one’s head is removed, that might be a cause of the wasteful behavior we obviously refer to as Rock Engineering.

Gold-Plating

A poor practice accentuated by Waterfall is called gold-plating. It occurs in conjunction with developing some capability that we perceive is needed or has been asked for. We have plenty of time to develop it, because it’s Waterfall and there’s never any urgency early in Waterfall. Let’s be sure we get this right and Wow our customer. So in addition to the bare minimum need we go over the top and really add some bells and whistles.

Yup, this is called gold-plating. Several results can accrue, and only a few are good. Among the not-so-good:

  1. Customer says, wow, that’s really cool, but I don’t need that and I’m not going to pay for that. All I needed was the functionality I asked you for.
  2. In your zeal to build this cool stuff, you injected Mr. Debt. Mr. Technical Debt. It’s buggy. It doesn’t run, or it runs slowly or otherwise unreliably.
  3. You overrun the time budget and/or the dollars budget.
  4. Your supervisor discovers that you’re overrunning the time/dollars budgets.
  5. Customer says, ugh, that’s really awful, that sucks, and I really don’t need it so I’m not going to pay for that. All I needed was the functionality I asked you for.

One reason for Agile’s focus on values like sustainability, the customer, customer collaboration, short feedback loops and minimum viable product (MVP) is to avoid this anti-pattern.

Features

This one is short but I found it amusing once upon a time.

Definitions:

  • Bug: informal name for a software defect. The name arose because Adm. Grace Hopper, tracking down a defect during some of her early work, found a physical bug – a moth – in the hardware that was shorting it out and yielding erroneous results. Often depicted graphically as the image of an insect.
  • Feature: a bug that has been lovingly documented and memorialized in the software and its documentation artifacts. Depicted as an insect wearing a tuxedo.