The Rudiments of Asset Management

There are two topics herein: the first on reusability with Microsoft™ tooling – powerpint in particular – and the second on initiating a reusable asset program.

In one of my recent “previous lives”, there was a pressing need to produce reusable intellectual capital (IC) to permit our consulting and coaching practice to grow. The shame of it was, we just couldn’t do it. Because of time constraints (gotta be productive!! gotta be chargeable!!) we were unable to effectively codify what we learned so that subsequent practitioners could build on that learning. We also tended to put everything into PowerPoint™ [1]. (I call it powerpint, I hope that doesn’t get me in trouble with Microsoft trademark enforcement!) 

As it transpires, it appears to be very difficult to develop effective reuse patterns for powerpint; we didn’t find a way. The elements of reuse are single slides or small groups of slides, and the elements of configuration management are larger and essentially indivisible groups of slides called decks. This impedance mismatch makes a simple reuse task boringly and tediously difficult. (And it’s much more difficult when the only “asset repository” your organization makes available is email – s-m-h.) I’ve heard there is a composition manager available to pull the current versions of single and small collections of slides into “the next deck” but I’ve never been in an organization that uses this composition manager, if it even exists. I suppose it would be possible to code it up in Visual Basic™, but I’ve not seen that done. 

A conclusion: yes, graphics are important, but use them effectively. If you are building lots of pictures, consider changing your paradigm from glossy powerpint to meaningful semantics, using a genuine model-building application whose diagrams are views into the underlying, all-important semantic model. When I was at IBM, our original go-to for this was Rational Rose; later it was Rational Software Architect. I’ve not found a free UML modeling tool that will do this job with any clarity, but all the ones you pay for probably will do it. Unfortunately, this results in the need for a considerable investment in training time and tools. I’ll consider this aspect more carefully another time. Meanwhile …

Developing Reusable IC while Engaged

The best way to develop reusable capital is for its immediate use in the context of one’s current consulting/coaching engagement. One is on the ground, executing, and realizes the need for something, for the benefit of the client. Sometimes it’s a new way of doing business. Sometimes it is an artifact or collection thereof that can be used to facilitate some processes. Usually, it’s both, actually. You build what you need, you try it out and experiment with the client or customer, you refine it based on that experience, but not too much to avoid gold-plating[2], and then you submit it to the asset repository. And there it sits, waiting for the next need.

The key to its eventual excellence is its reuse, and the first criterion for that is someone who needs it actually finds it in the repository – we’ll get to that in a minute. Effective reuse does not always look like, “Oh, here! This is perfect! This is exactly what I need!” This is often called pulling the asset off the shelf. It’s rare, but it’s nice when it happens because you’ve found something that will save you work. More often though, effective reuse looks like this: “This might work. I’ll make some refinements to suit my client’s situation and then give it a try.” Once it’s been refined and used, it is reviewed and possibly refined again – but once again not too much to avoid gold-plating[2] – and placed back into the asset repository as an update or new version for the next consultant/coach to use.

Note that now, this asset is useful in two different engagement contexts. When a third context arises, it is likely to be better fit-for-purpose and require less work to get it to the point where it is usable in that new context. (It is important to ensure that you don’t break it for purposes of previous contexts to which it has already been applied.) In the fourth context, it’ll be even better. It might never be reusable just by pulling it off the shelf and starting it up. Some small amount of configuration or customization might be required every single time it is used. But it is better every time due to the continuous improvement applied to it, and due to the time and especially thought savings associated with its reuse.

Finding the Asset

I learned the above Build-while-Executing pattern for building and maintaining quality reusable engagement assets while I was at IBM. Most of what I learned came from the gentleman who led the small staff of, well, I’ll call them librarians, who made the asset repository sing; his name is Darrel Rader. Believe me, I consider the role name librarian a compliment. If I recall correctly, our consulting staff was about 500 worldwide and his staff numbered maybe 5. Maintaining the collection of reusable assets in a suitable repository – indeed a repository designed and developed by a team at IBM Rational led by another mentor Grant Larsen, for expressly this purpose – was hard work. This investment paid off extremely well. Hundreds of engagement assets were pioneered and then refined in engagement after engagement, and it served to save consultants lots of time and improved engagement quality significantly as time passed. I can only hope that the repository still exists and is maintained within IBM. I’m afraid I don’t know.

Darrel spent an impressive amount of time on Search and Find. It turns out, for Darrel and his crew, the assets were the easy part: the consultants built them on their own and were happy to do so! On submission his crew would review an asset, possibly suggest minor improvements, and then categorize, package and tag the asset within the tool, called Rational Asset Manager (RAM). This was really important, because if it was poorly packaged it would be hard to reuse, and if it was poorly tagged, no one would ever find it.

Packaging

Packaging proved to be extremely important in order to avoid duplication of artifacts. It is worth pausing here to provide a few definitions:

Artifact: a document, for example a powerpint deck, a Word document, a spreadsheet, a text file, a model file. A single thing. If a group of files belongs together tightly, then it might be a zip file containing the files in that group. But an artifact is a single thing. An artifact can also be a reference – perhaps a URL or some other pointer to the physical file located in some other repository such as Jive™, SharePoint™, or git™. And finally, an artifact should be (or soon become) a formal member of one or more assets. Artifacts can be tagged.

Asset: a governed and audited collection of related artifacts with a declared purpose. We endow the asset with a lifecycle (a state machine, probably derived from the asset’s type) and a state within that lifecycle, to promote an asset-based form of governance: we make decisions on how the asset can and should be used by reviewing the quality of the asset when or after work is done on it (not just on whether the work is “done”). As mentioned above, an artifact might be found in more than one asset; as such, it should be equally usable when sought by practitioners looking to use any asset which references that artifact. Assets must be tagged and categorized; one must be able to find them when they are a possible contributor to the solution to a consulting/coaching issue.
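As a sketch of these definitions – the state names, transitions, and field names here are illustrative assumptions, not Rational Asset Manager’s actual lifecycle – an asset with a governed lifecycle might be modeled like this:

```python
from dataclasses import dataclass, field
from enum import Enum

class AssetState(Enum):
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    APPROVED = "approved"
    RETIRED = "retired"

# Allowed lifecycle transitions: a simple state machine the governance
# process can audit. (Hypothetical states, for illustration only.)
TRANSITIONS = {
    AssetState.DRAFT: {AssetState.IN_REVIEW},
    AssetState.IN_REVIEW: {AssetState.APPROVED, AssetState.DRAFT},
    AssetState.APPROVED: {AssetState.DRAFT, AssetState.RETIRED},
    AssetState.RETIRED: set(),
}

@dataclass
class Artifact:
    name: str
    location: str                 # a file path, or a URL into another repository
    tags: set = field(default_factory=set)

@dataclass
class Asset:
    name: str
    purpose: str
    artifacts: list = field(default_factory=list)   # one artifact may appear in many assets
    tags: set = field(default_factory=set)
    state: AssetState = AssetState.DRAFT

    def advance(self, new_state: AssetState) -> None:
        """Move through the lifecycle; illegal jumps are rejected."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

The point of the state machine is the governance hook: a review happens at each transition, not merely a check that the work is “done”.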

The flexibility implied by the above packaging implies that there is work to maintain the packaging. A category hierarchy must exist, but much more importantly, a tagging taxonomy must exist. We found as this capability was being developed at IBM that we quickly lost control of tags. We solved this problem by enforcing that tags be part of a managed tag taxonomy. If you needed a tag that wasn’t in the taxonomy, you went to Darrel’s team, and either they would find you the tag you needed, or they would invent one with your concurrence and place it into the taxonomy, and finally tag your asset with that tag.
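A managed tag taxonomy is easy to sketch (the class and method names below are hypothetical): the essential idea is that tagging goes through a gate the librarians control, so the tag vocabulary cannot sprawl.

```python
class TagTaxonomy:
    """A managed set of allowed tags; only librarians grow it."""

    def __init__(self, tags):
        self._tags = set(tags)

    def add_tag(self, tag):
        # The librarians' job: extend the taxonomy deliberately,
        # with the requester's concurrence.
        self._tags.add(tag)

    def apply(self, asset_tags, tag):
        """Tag an asset, but only with a tag from the managed taxonomy."""
        if tag not in self._tags:
            raise KeyError(f"'{tag}' is not in the managed taxonomy; ask the librarians")
        asset_tags.add(tag)
```

A free-for-all tagging scheme would skip the membership check; that check is precisely what kept the IBM taxonomy under control.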

Searching and Finding

The taxonomy was the single most important element of the librarians’ efforts making the reusable asset repository useful in the long term. Put succinctly, if you were looking for something, you consulted the taxonomy, chose your search terms, did your search, and behold … assets worthy of your consideration for use/reuse.
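As a minimal sketch of that search step (the data shapes here are assumptions, not RAM’s real query interface), ranking assets by how many chosen taxonomy terms they match might look like:

```python
def find_assets(assets, search_tags):
    """Rank assets by the number of matching taxonomy terms.

    assets: iterable of dicts with "name" and "tags" (a set) keys.
    search_tags: set of terms chosen from the managed taxonomy.
    Returns asset names, best matches first; non-matches are dropped.
    """
    scored = []
    for asset in assets:
        hits = len(asset["tags"] & search_tags)   # set intersection
        if hits:
            scored.append((hits, asset["name"]))
    return [name for hits, name in sorted(scored, reverse=True)]
```

Because the search terms come from the same managed taxonomy used at submission time, a match is meaningful rather than a lucky keyword collision.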

Discipline and Evangelism

A most important role of the librarians was evangelism. Encouraging practitioners to build and submit, or find and refine, assets in conjunction with their work never stopped. This would not have worked had the practitioners (coaches and consultants) lacked permission and the resources to build, refine, and share ownership over our reusable assets. For example, if one is continuously “full time” on engagements, asset development rarely occurs. It was also common for the librarians to review a proposed submission and suggest combining it with a previous submission or two to make a refined asset rather than a new asset. It remained difficult to find what you were searching for even with all the effort on tagging. But it was possible and worth the effort.

Summary

The above constitutes the beginnings of a genuine intellectual capital development and leveraging strategy, backed up by an ongoing investment in growing capability, both in tools (repository) and labor (librarians and consultants). The result was a robust reusable asset management program that yielded higher quality consulting and continuous improvement. It also made IBM a joy to work at, because your work was acknowledged and supported. Indeed, contribution was specifically measured by reuse, and rewards acknowledging contributions were possible based on those measurements. Indeed my friends, those were the days.


[1] with apologies to Microsoft™, because PowerPoint isn’t really a bad product, just a horribly misused one. I do have a friend who says “we are all subject to the vast mediocrity of Microsoft”, but that isn’t quite true: we can rise above it with some effort, although doing so often requires organizational buy-in, and it’s often quite valuable to do so.

[2] Gold-plating, see https://densmore.lindenbaum.us/dysfunction/rock-engineering/

Systems, Complexity, Crashes, Oh My

This blog entry is written together with my old IBM buddy Mike Mott, Distinguished Engineer, Retired [1].

As my colleagues and friends are aware, I am fond of discovering unbridled and poorly managed complexities that result in bad stuff. I don’t look for such issues that have killed people, though it has happened. But pretty much everything else is fair game, including the YF-22 accident where the test pilot left the prototype airplane via the ejection seat because he was suddenly, utterly unable to control the airplane. As it transpires, there was a section of computer code tasked with translating pilot commands into control-surface actions that had eight discrete modes, and the pilot had unwittingly flown into the one of those eight that had never been tested until then. There was a sign error. Up was down, down was up. Close to the ground, there was no time to figure it out. Scratch one really expensive fighter jet.

A recent, and very public, complexity issue presents itself as a target-rich case study in how not to do business with software: the results reporting from the Iowa Caucuses. There are much better approaches to the management of high-tech programs. There is a huge difference between hacking out an app for a smart phone and building a system to support a caucus. The app is used in support of a process of vote counting, reporting and summary. All of the people involved in this process must be properly trained to perform their roles. The app must be realized in hardware, which requires loading and testing of the app in the production environment. The networks and servers that realize the results must provide enough capacity to perform the computing tasks within the timelines. Use Case modeling of the end-to-end system operation is an excellent way to flesh out the preparation work for the caucus and the performance of the precincts on caucus night.
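The capacity point above can be made concrete with a back-of-envelope estimate. Every number below is an assumption chosen purely for illustration, not an actual Iowa figure; the point is that this five-line calculation is the kind of sizing work that must happen before caucus night:

```python
# Back-of-envelope load estimate for a results-reporting system.
# All inputs are illustrative assumptions, not actual Iowa figures.
precincts = 1700                 # assumed number of reporting sites
reports_per_precinct = 3         # initial report plus retries/corrections
window_minutes = 30              # assumed burst window when most results land
burstiness = 10                  # assumed ratio of peak rate to average rate

avg_rps = precincts * reports_per_precinct / (window_minutes * 60)
peak_rps = avg_rps * burstiness  # size servers and networks for the peak

print(f"average {avg_rps:.1f} req/s, plan for ~{peak_rps:.0f} req/s at peak")
```

Even with generous assumptions the raw rates are tiny; the hard part is verifying, in the production environment, that the whole chain (app, network, server, people) actually sustains the peak.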

Surveying websites such as sysml.org from time to time, one sees a good trend: modern system management technologies are evolving well. Reviewing pmi.org, however, one finds it still stuck in the past, with the same old work breakdown structuring and risk management approaches that we know lead to long feedback loops, poor quality and performance issues. There is a gap between the technologies available for the system itself and for system development, and the approach used by program managers to pull it off.

Now, as my buddy Mike writes …

Looking at Iowa from afar, it seems obvious there was a total failure to plan the scenarios, the intended results, the usage models … or maybe anything at all, as though it was just going to work. I would bet the house, farm, and alley cat that no clear-cut statement of objectives was ever written or socialized. They needed measurable outcomes to shoot for, something like “we shall have the caucus results complete within 2 hours of when the doors close to start the caucuses”. Another bet: not a single Use Case (or Epic, or group of scenarios, or whatever you want to call it) will be found in their statements of need that lays out the entire process of taking votes to result. And stunningly, it appears that no thought whatever was given to the hardware realization. They left it at “hey, we have an app”. According to Chad Wolf, Acting Secretary of DHS, during an interview Tuesday on Fox and Friends, there was no viable test plan, and it wasn’t for lack of resources: the DNC actually declined an offer from DHS’s Cybersecurity and Infrastructure Security Agency to test the system. It also appears that the folks involved even now don’t understand what happened. All these issues are being spoken of by the DNC as a simple app failure. Indeed, that’s a piece of it, but astute observers know that the larger failure was one of project management and/or product development management. Success on a crucial, scaled system must consider the people, process, tools, and technology. But no … they believed an app purchased from Mrs. Clinton’s former campaign manager would ensure the caucus ran well.

Alas, this seems very typical of how government spends our money, even after so many lessons, in the form of failed systems, have been rolled out for the world to see. The government lays out huge projects that fail time after time, with billions wasted. Is there no self-reflection or retrospective at all? There doesn’t seem to be. Would it not make some sense to seek assistance from those who have actually delivered on big efforts? On this point, it is notable that Vivek Kundra, young Ph.D. IT wunderkind, President Obama’s information czar and the first CIO for the federal government, invited major players in the industry to come and explain to him how the government could improve its IT performance. I developed material on the topic for my colleague Dave McQueeney, still at IBM, who pitched the slides to Kundra. To his credit, Kundra eventually published a 25-point plan to improve the federal government’s IT performance. He leaned more heavily on Agile principles and methodologies than on the architecture quality-driven Model Driven Systems Development (MDSD [2]) process framework we developed at IBM Rational to manage such high-complexity programs. Alas, I understand that IBM just didn’t quite connect with Dr. Kundra. Today, he is out of government, and the initiatives he started spent around $1 billion – they never die! – but are effectively complete.

As a number of outlets reported, the testing of the app was limited (ref. Slate [3]). Indeed. Pulling on this notion of a system – the app was part of the larger process of the caucus. The caucus leadership had a process to follow. Was it documented? Was its use rehearsed? Obviously not. According to Slate, “The problem with the caucuses is that we don’t run them except in a major national election, so there’s no way to ramp up to it. Imagine going to war with only war games under your belt, without facing an actual battle.” Although they knew this, the Iowa DNC failed to properly train their people in the use of a new app; ensure that the new app was installed and ready to go; or verify that the system hardware was scaled properly to support the load.

Ok, Jim here. To summarize, it seems certain that eventually … eventually … the ideas of incremental development yielding short feedback loops, ongoing risk mitigation, more objective and measurable criteria, end-to-end testing of the technology with the processes, and explicit management of system architectures as represented in multiple stakeholder viewpoints will get through to the Project Management field. It appears to Mike and me that the expected path for that will be via Agile. Indeed, as it evolves, it is adopting many of the tenets of MDSD.


[1]  https://www.linkedin.com/in/michael-mott-3590997/

[2]  https://www.academia.edu/9790859/Model-driven_systems_development

[3]  https://slate.com/technology/2020/02/iowa-caucus-app-fail-shadow.html

Distributed Agile – Executing Agile with non-collocated teams; a call to Coaches

I am motivated to write this entry on geographically distributed Agile teams because most clients now, at least those of significant size, seem to have issues stemming from this situation. Indeed, a timely tweet from Scott Ambler on 27-April-2018, referencing his 2016 Agile At Scale survey, says it all: “Less than one-third of #agileteams are co-located! http://dld.bz/fxnJr Isn’t it amazing what surveys discover?”

I have seen three major reasons/scenarios for geographically distributed teams, and I am sure there are more reasons I’ve not seen yet:

  1. Our company is ginormous (technical term!). Most of us jumped onto the big “offshoring”[1] bandwagon because our CFO was so excited to see the reduced development costs that we projected would result. However, we didn’t effectively consider the impact on value associated with offshoring, including the reduction in collaboration that would result from offshoring. Collaboration between team members “on the other side of the world” is very difficult.
  2. Our company is ginormous, and we got this way by buying other companies. Sometimes their offices were located many timezones away. As we merged, people were placed in fragmented fashion onto teams based on functional expertise. Now we have teams where members are in multiple timezones, some having been with the acquired company, and others with the acquiring company, and the cultures are still far apart.
  3. The technology was developed in, say, China, but it is applicable only or mostly to, say, a US and/or European market. Our dev teams are in China because that is where the technology expertise resides. However, our product owner needs to be in the US and Europe, where the market for the product is understood. Therefore we have split our teams up in that fashion. It is a constant struggle to keep the bandwidth of collaboration sufficient between the dev teams and the product owners.

A common coaching pattern that arises from this situation occurs when an Agile coach consulting to such a company inevitably perceives collaboration difficulties. S/he asserts that one of the root causes of the collaboration difficulties is that the teams are not collocated, and/or that they spend insufficient time interacting face-to-face. After all, the agilemanifesto.org principles [2] are clear:

  • “Business people and developers must work together daily throughout the project”,
  • “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation”.

My call to the coaching community is simple: this is no longer a useful answer. That may be the beautiful horse we rode in on, but the horse has a broken leg and we need to put it out of its misery. Too few clients can, or will, take the advice. Scott Ambler’s numbers bear this out. To help make our client successful, we need to recommend something else. We can start by asking, “What if I said, ‘You have to collocate your teams?'” but they are usually going to tell you that’s not an option, and sadly, they mean it. If you persist, then their response may well be, “We don’t need you – you are no help to us.”


What shall we tell them instead? How do we make our client successful? I don’t yet know. I haven’t gone down this road enough times yet. But I’ve developed some options, and I’m interested in knowing if you’ve tried any of these options, how successful they have been, and what other options we need to begin also recommending to our clients. Some definitions help to get us started, and Scott Ambler in this article ([3]) does a good job with the definitions – indeed, the entire article is excellent.

Coaching Options for non-collocated teams

  • Invest in really good collaboration technology. I mean the kind that just works, and initializes in seconds. It’s available, I have seen it, and it really helps. You walk into a conference room, you push a button or dial a number, and there on the screen in the front of the room is another conference room, half way around the world, and your mates are there, and the video is clear, and the audio is clear, and screen sharing is easy. It just works, and it takes 15 seconds not 15 minutes to initialize. I don’t know how much it costs to get that, but its justification is probably there in the time being wasted, along with the frustration and distraction. It’s certainly there if you miss a market window or fail a quality gate because you can’t get the needed level of collaboration. An Agile principle covers this: “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.”
  • Look for an early opportunity for people frequently collaborating to meet in person, face-to-face, at least once. And again when possible. It makes a huge difference.
  • Look for patterns to improve collocation and cross-functionality. Agile teams that are not cross-functional are problematic. The way I often tell it, “If Fred isn’t here today, you can wait until tomorrow for Fred, who will do the task by himself in an hour, or you can assign two people to do the task Fred normally does. Even though they’ll take twice the time and four times the manpower, they’ll learn how Fred does it, might even figure out a better way, and the rest of the team will be awaiting the result for only 2 hours instead of 8 (or 24).” Once again, Agile principles cover the situation: “The best architectures, requirements, and designs emerge from self-organizing teams.” And (derived from Lean) “Simplicity–the art of maximizing the amount of work not done–is essential.” A Scaled Agile principle [4] also applies: “Apply systems thinking,” i.e., optimize globally, not locally.

How do these principles manifest here? Well, suppose we have two distributed teams, and both are half in a Philadelphia office and half in a Mumbai office. Take the parts of the two teams in Philadelphia and make them a team. Take the parts of the two teams in Mumbai and make them a team. For a while, it may be that neither team performs well. They’re missing skills! But they will acquire those skills. A pattern like this for reorganizing teams to make them more collocated and more cross-functional can usually be found amongst the teams in a distributed organization.

  • Be extremely loath to place individuals in situations where they are working by themselves, e.g. with no office or in a different city from everyone else. They never get to collaborate with high bandwidth, and honestly that’s just depressing. Actually, I’ve seen this several times recently and it didn’t seem depressing at all to the people involved, and I wondered why. I eventually realized that the people in these situations do not realize what they are missing. Contacting someone several times and finally getting them to cooperate, followed by calling the person several more times and finally getting them to do it right, seems normal to these people. The people involved often are those who the company believes are true experts and SMEs, so it doesn’t seem unusual to anyone that conveying what’s really needed is difficult and will take time. I think that’s hogwash. When they collaborate in person it does not take nearly as long, and the result is usually higher quality.
  • Moreover, such people are fragmenting their capabilities by having too many things on their plate – what Agile would call high WIP (work in process) – so their efficiency is reduced that way too. That group of people known as architects seems to have this problem very often. I’ve spoken to many architects who wish they could do their day job, but instead they spend the whole day telling the next team what the architectural guidelines, patterns and constraints are that apply to that team. There is too much information in their heads, and too much of it is known only to them! (I’ve heard this referred to as low truck number – if they get run over by a truck…) Experts/SMEs must write down the basics, and refer practitioners and teams to what’s written first, socializing the location of that information. That way when there is a conversation, it doesn’t start with the basics, it starts with the exception of interest – this is higher bandwidth.
  • When you have a distributed team, you need the skills of a Scrum Master in each location. The Scrum Masters should meet regularly. One should be a chief of sorts for the team.
  • When you have a distributed team, you need the skills of a Product Owner in each location. The Product Owners should meet regularly. One should be a chief, who absolutely must have absolute final say (did I sufficiently emphasize that?). However, there is no issue with one of the Product Owners making a decision that is later undone by the chief Product Owner. She was doing her best, and most of the time she’ll make the right decision, avoiding blocking the team. When she has a miss, not much work must be undone because she and the chief PO meet regularly. (SAFe® principle #9: “Decentralize decision making.”)

There are a number of large-organization antipatterns we can recognize also, and coach to improve:

  • Antipattern: IT is a Cost Center. This is never true. To prove it, next time you hear “My IT Department is treated as a Cost Center”, offer that they should all turn the lights out and go home. The lowest cost I can offer to my company is to cost them nothing, right? The refrain will be immediate: you can’t do that! Well why not? You wanted me to lower costs and I have done so. But the IT Department provides… whatever they say next, it’s value. It’s worth money. Determine the approximate monetary value if you must, to make your point. IT is a Value Center, and everyone needs to treat it that way.
  • Antipattern: we must measure Productivity and Utilization. Gaaah. If you measure utilization, you stifle your teams. Lots of studies out there demonstrate this. I spent much time in my youth going after productivity measures. They’re worthless. Measure predictability and value accrued. Measure cost of delay. (Don Reinertsen: “If you measure nothing else, measure the cost of delay.”) But don’t measure productivity or utilization of knowledge workers.[5]
  • Antipattern: SMEs and other True Experts can work alone. Nope, not really. It’s not very efficient and it’s ineffective. See my coaching discussion bullet above.
  • Antipattern: SMEs and other True Experts don’t need to document their knowledge. Yes they do! Use the SAFe® concept of capacity allocation [6] to ensure that documentation of things the Expert seems to say all the time is getting performed regularly. Socialize the location of that information. Use it to raise the bandwidth of the next conversation.
  • Antipattern: We support our meetings with frustrating collaboration technology. Oh my golly this happens so often you would think they actually word their goal exactly that way. You walk into the conference room and the technology isn’t ready for another 15 minutes. Incredibly wasteful, very frustrating and distracting too. This comedy video is just depressing, ain’t it? ( [7])
  • Antipattern: We’ll ignore the latest acquisition’s effect on overall product architecture; they’ll figure it out. This is a simple application of Conway’s Law[8], which, roughly stated, is “Organization drives architecture.” Mel Conway said “If you have four teams writing a compiler, you’ll get a four pass compiler.” You can’t ignore the acquisition in this way. Figure out what the end architecture is that you want, and mold the whole development and delivery organization around that architecture. Just do it.
  • Antipattern: We ignore cultural differences during an acquisition. This is what killed Daimler-Chrysler.[9]
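The cost-of-delay advice in the measurement antipattern above can be made concrete. A common heuristic (Reinertsen’s CD3: cost of delay divided by duration) sequences work to minimize total delay cost; the backlog items and numbers below are hypothetical, for illustration only:

```python
# Prioritize by CD3 (cost of delay divided by duration), a scheduling
# heuristic from Don Reinertsen's flow-based product development work.
def cd3_order(features):
    """features: list of (name, cost_of_delay_per_week, duration_weeks).

    Returns the features sorted so that the highest cost-of-delay per
    week of effort is done first, minimizing total delay cost.
    """
    return sorted(features, key=lambda f: f[1] / f[2], reverse=True)

# Hypothetical backlog: B is small but urgent, so it jumps the queue
# even though A has the largest absolute cost of delay.
backlog = [("A", 10, 5), ("B", 3, 1), ("C", 8, 4)]
print([name for name, *_ in cd3_order(backlog)])  # B first: 3/1 > 10/5
```

Note what is being measured here: the economic cost of waiting, not how busy anyone is. That is the shift away from productivity and utilization metrics.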

Finally, an update to this post: my colleague Bob posted this blog entry in May 2020, and I find it quite germane. I hope this helps as you work to improve the effectiveness of your distributed Agile teams.


[1] Offshore is a term I dislike, but it is in common usage so I am using it in these pseudo-quotes. In India, it is the US that is offshore. Let’s just say where the people are, e.g. U.S., India, China, Ireland … wherever they are. While we’re at it, let’s call those resources people, or human beings, or team members, since that is what and who they are.

[2] agilemanifesto.org and its page www.agilemanifesto.org/principles.html

[3] http://www.disciplinedagiledelivery.com/agility-at-scale/geographically-distributed-agile-teams/

[4] https://www.scaledagileframework.com/safe-lean-agile-principles/

[5] References include SAFe® Principle #8: “Unlock the intrinsic motivation of knowledge workers”, and Dan Pink’s books, or even just his video: https://www.youtube.com/watch?v=u6XAPnuFjJc.

[6] Search for “Optimizing Value and Solution Integrity with Capacity Allocation” here: https://www.scaledagileframework.com/program-and-solution-backlogs/

[7] https://www.youtube.com/watch?v=kNz82r5nyUw

[8] http://www.melconway.com/Home/Conways_Law.html

[9] https://www.forbes.com/sites/georgebradt/2015/06/29/the-root-cause-of-every-mergers-success-or-failure-culture/#7812cdcd305b

Decentralizing decisions, motivating workers

Scenario 1

It’s 2012, and it’s a Friday. I’m in Colorado Springs. My boss/manager calls. We need you in Tampa on Tuesday. Ok, what’s going on? Fred was consulting at our client there, and he will no longer be able to be there due to another unavoidable, overlapping, kinda-last-minute business commitment. Stuff happens. Well that’s fine, but I can’t leave the office here on Monday until 5pm due to my own previous business commitments. Do what you can, my boss says. Let me know if you find a flight and a way to be there.

I dialed up the corporate travel website. 5 minutes later, it had found me a nonstop flight, a room at a Hilton (I’m a #HiltonZealot), and a car. The airline roundtrip was expensive, “out of policy” even. Since there were no “in policy” options given me, I booked it. The software asked why I was out of policy, and I answered that. I sent an email to my boss: I’m “in”, it was “out of policy”, and mentioned the price. (Never blindside your boss if you can help it!) No reply, nor was one needed. I don’t know what happened to the “out of policy” issue after that. It could have been that he was “pinged”, or his supervisor was “pinged”. If so, they answered truthfully, and there was no further problem. I went to the client. I did my job. Client was happy.

Scenario 2

It’s 2016 now, a Thursday. Different company. I’m in Denver at one of their offices. My engagement supervisor and technical lead, we’ll call him Fred, texts me. We need you in Charlotte. Ok, what’s going on? Fred needs to return to Denver on Tuesday night for a meeting in Denver on Wednesday. It’s unavoidable; the business wants him present due to his much longer and deeper experience with my current client. He asks if I can teach his training class, which starts on Wednesday at lunchtime and concludes Thursday mid-day. Yup, stuff happens. Fine, except that I can’t leave the office here on Tuesday until 5pm. Do what you can, Fred says; let me know when you’ve made your travel request.

10 minutes later, the corporate travel website had my travel request. It showed me the required approvals lifecycle: first, my first line manager, then the second line, then a third manager at the VP level who resides in another country; her approval was needed because the travel date was within 7 days. And then, alas, we wait.

On Friday afternoon the lack of progress results in our engagement executive getting into the loop. He “pings” my first line manager by text message, asking that he immediately approve the travel request. He does so, but not until Saturday morning.

On Monday morning, the request is still not approved by the second line, so the engagement executive tries a different tack. He asks me to submit a second travel request on a different charge number because he has more immediate access to the first line manager and the second line manager who are on the second charge number. I do so. This time, I get very specific about what flight I will demand if the travel request is approved, because if the flight isn’t the one remaining nonstop available that leaves at 7:45pm, I will not get any sleep. My own leads, at least, understand completely that it won’t be very useful for me to show up to teach a class that I sleep through.

It’s Wednesday morning. The first and second lines approved the travel request, but the third approval was never granted. I have no idea what communications transpired behind the scenes. Obviously we abandoned the travel plans and went to a backup plan. That really important Wednesday meeting for Fred started spot on time at noon. One minute after it began, the client postponed it to another date, a fortuitously good result given the process we had engaged in to get to that point.

Thoughts

Two scenarios. What do Lean, Agile and SAFe® teach here? I quickly perceive at least two lessons. First, Reinertsen is screaming at us to review the cost of delay. As we waited for approvals, the travel cost was going up nonlinearly, the uncertainty in client success was rising unchecked, and employee morale was degrading; these are all costs. Then more time and money were spent trying to accomplish the approvals using a different tack.

Second, a corporate escalation culture leaves an employee facing a strong business need with no way to respond to it confidently. This is a great example of the value of distributed/decentralized decision making (i.e. SAFe Principle #9 – http://www.scaledagileframework.com/decentralize-decision-making/ ). SAFe even has a decision decentralization calculator:

Given the decision to make:

  1. Is it frequent? Yes=2, No=0
  2. Is it time-critical? Yes=2, No=0
  3. Does it have economies of scale? Yes=0, No=2

Sum these three numbers; if the sum >= 4 then decentralize it.
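The calculator is simple enough to sketch in a few lines of code (an illustration only; the function names and the yes/no interface are mine, not SAFe®’s):

```python
# Sketch of the decision-decentralization calculator described above:
# score three yes/no questions, decentralize when the sum >= 4.

def decentralization_score(frequent: bool, time_critical: bool,
                           economies_of_scale: bool) -> int:
    score = 2 if frequent else 0             # 1. frequent? Yes=2, No=0
    score += 2 if time_critical else 0       # 2. time-critical? Yes=2, No=0
    score += 0 if economies_of_scale else 2  # 3. economies of scale? Yes=0, No=2
    return score

def should_decentralize(frequent: bool, time_critical: bool,
                        economies_of_scale: bool) -> bool:
    return decentralization_score(frequent, time_critical,
                                  economies_of_scale) >= 4

# The corporate travel-approval decision: frequent, time-critical,
# and with economies of scale (2 + 2 + 0 = 4).
print(should_decentralize(True, True, True))  # -> True
```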

For this decision, remember we’re not looking at the individual case, we’re looking at the corporate travel policy. We have

  1. 2 = yes, it’s frequent – surely many employees face similar situations, and often enough,
  2. 2 = yes, it’s clearly time-critical, and
  3. 0 = yes, there are indeed economies of scale, I’m sure.

Total: 4, so decentralize it. Besides, what is the cost of someone’s poor behavior, booking a flight three days out for no good business reason? One’s first-line manager can easily be put into the loop by the automation. The result would be you’d get away with it once, be chastised, and likely wouldn’t get away with a poor decision a second time. Weigh that against the savings from employees making travel reservations as soon as they become aware of the business need, not to mention being more confident and responsive for your customer, and it’s a no-brainer.

The Dangers of Normalized Story Point Estimation

Summary

The Scaled Agile Framework® (SAFe)[i] contains a method for initializing and normalizing an Agile team’s effort and/or complexity estimates, the use of which can result in poor behavior by Agile teams. In defending this claim of danger, this paper first discusses Planning Poker and story pointing in Scrum as background, highlighting the importance of relative and unanchored estimating. A brief discussion of SAFe®’s normalized story point estimation method follows. Poor behaviors observed on teams at a recent client of the author’s are then discussed, and SAFe®’s normalized story point estimation initialization technique is hypothesized to be part of the cause. Finally, a brief discussion of a proposed solution is offered. A follow-up paper discussing solutions that have been tried, and their level of success, is proposed if warranted.

Planning Poker and Story Points – background

In Agile/Scrum[ii], story pointing is a method for estimating the amount of work to be done by a development team over a period of time in a predictable manner. It is usually done by a team via an estimating procedure called Planning Poker[iii], which yields a relative estimate for the work to complete a requirement based on a small reference requirement whose baseline point value is arbitrarily assigned, usually 1, 2 or 3. It is called story pointing because story point values have no units (this means they do not refer to hours or any other duration or cost), and because the requirements in Agile/Scrum take the form of use case-like statements called user stories[iv] which contain a user role, a statement of functional need, and a statement of value.

Planning Poker is a variant of an estimation method developed in the 1950s-60s at the Rand Corporation called Delphi. The Delphi Method[v] is a systematic, structured communication method that includes participant anonymity and simultaneity (avoiding the influence of other participants), a consensus basis, and regular feedback (each of which contributes to gaining agreement and commitment). Barry Boehm and John Farquhar originated the Wideband[vi] variant of the Delphi method in the 1970s, calling it wideband because the new method involved greater collaboration among those participating. Finally, Planning Poker is a “gamified” form of Wideband Delphi.

Estimates in Planning Poker take the form of a number in a (modified) Fibonacci sequence[vii]. That is, suppose our reference story is assigned 2 story points; then a relative estimate of the work for some other user story might be 2 (roughly the same effort and/or complexity[viii]), or 3 (a bit more), or 5 (more), or 8 (a lot more[ix]), etc. Many cite that the reason for using relative estimating and Fibonacci is to reflect the inherent uncertainty in estimating larger items[x] and to avoid equating the relative estimates with specific time units like hours. The industry also has found empirically that relative estimates yield better predictability properties for a team[xi].

The Fibonacci sequence has the interesting property that the ratio F(n+1)/F(n) of consecutive terms converges (i.e. in the limit as n approaches infinity) to an irrational number called the Golden Ratio[xii], phi = (1 + 5^0.5)/2 = 1.6180339887…
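The convergence is easy to verify numerically; here is a throwaway sketch:

```python
# Check that the ratio of consecutive Fibonacci terms approaches phi.
fib = [1, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])

phi = (1 + 5 ** 0.5) / 2         # 1.6180339887...
ratio = fib[-1] / fib[-2]        # F(n+1)/F(n) for a large-ish n
print(abs(ratio - phi) < 1e-10)  # -> True
```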

Phi appears surprisingly often in nature, such as in the arrangement of leaves and branches in plants, the proportions of chemical compounds and the geometry of crystals. Its use in Planning Poker (via the Fibonacci sequence) stems, perhaps due to its frequent appearance in nature, from the observation that the human mind perceives ratios larger than phi as significant in some sense, and ratios smaller than phi as insignificant[xiii]. A second reason is that it forces participants to avoid simple ratios like “twice as big”, “four times as big”, or “half as big”[xiv]. Using hours or days in lieu of Fibonacci-based points leaves a team free to use such simple ratios and to quibble over relatively insignificant differences unnecessarily and wastefully.

SAFe®’s Story Point Initialization

Tucked into the intellectual capital on SAFe® team-level iteration planning[xv] is the concept of Normalized Story Point Estimating. First it is acknowledged that in Scrum, each team’s velocity[xvi] is associated only with that team. However, it is asserted, in SAFe®, story point estimation shall be normalized. The reason given is that estimates for requirements such as features whose development comes from multiple teams must be based on the same story point definition. This, in turn, is said to provide a way to perform ART[xvii] and Solution-level economic decision-making on a common basis.

The following algorithm for normalizing story point estimating across multiple teams is offered by SAFe® on its team-level iteration planning page:

1. Normalize story points:

Find a story that will consume about ½ day in development and ½ day for test and validation; assign this story 1 story point; estimate your stories relative to this baseline story

2. Establish the team velocity Vteam prior to the existence of historical data:

Let the effective team size be Nteam, i.e. the total number of developers and testers on the team

Let DL be the total number of effective team-member vacation, holiday, sick and other leave days anticipated for the iteration or sprint (for all the team members)

Then:

Vteam = 8 × (A1 + A2 + … + ANteam) − DL

where At is the fractional allocation – At is in (0,1] – for each team member t, e.g. each FTE[xviii] on the team who is allocated full-time to that team has an At of 1.0.

In 1. above, it is readily seen that 1 story point is equated to 1 day’s effort. The justification for the constant 8 in 2. above is similar, at least in the SAFe® SPC training class attended by the author: a two-week sprint has 10 working days; subtract 2 days for meetings and other miscellaneous inefficiencies, leaving 8. In other words, in order to normalize story pointing for collaboration during cross-team story point estimating, such as in ARTs, SAFe® asks that a time-based method of estimation initialization be used.
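Putting that rule of thumb (8 points per full-time developer/tester per two-week iteration, scaled by allocation, minus one point per anticipated leave day) into code, as a sketch; the function name and interface are mine:

```python
# Sketch of the SAFe-style initial velocity arithmetic: 8 points per
# full-time team member per two-week iteration, scaled by each member's
# allocation, minus one point per anticipated leave day.

def initial_velocity(allocations, leave_days):
    """allocations: fractional allocation At in (0, 1] for each
    developer/tester; leave_days: total leave days (DL) anticipated
    for the iteration across all team members."""
    return 8 * sum(allocations) - leave_days

# Four full-time members plus one half-time member, 3 leave days:
print(initial_velocity([1.0, 1.0, 1.0, 1.0, 0.5], 3))  # -> 33.0
```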

Story Points should not be about hours or days

The first issue with this advice is that story points, while they are about effort and complexity, are not about hours or days. While it is clear that a story with more effort and complexity takes more time, how much more varies from team to team and with the situation. Let’s hear it from one of the acknowledged experts, Mike Cohn[xix]:

I’ve been quite adamant lately that story points are about time, specifically effort. But that does not mean you should say something like, “One story point = eight hours.”

Doing this obviates the main reason to use story points in the first place. Story points are helpful because they allow team members who perform at different speeds to communicate and estimate collaboratively.

Two developers can start by estimating a given user story as one point even if their individual estimates of the actual time on task differ. Starting with that estimate, they can then agree to estimate something as two points if each agree it will take twice as long as the first story.

When story points [are] equated to hours, team members can no longer do this. If someone instructs team members that one point equals eight (or any number of) hours, the benefits of estimating in an abstract but relatively meaningful unit like story points are lost.

When told to estimate this way, the team member will mentally estimate first in number of hours and then convert that estimate to points. Something the developer estimates to be 16 hours will be converted to 2 points.

Contrast this with a team member’s thought process when estimating in story points as they are truly intended. In this case, team members will consider how long each new story will take in comparison to other stories. For example, you and I might agree that a new story will take twice as long as a one-point story, and so we agree it’s a two.

Knowledge and use of the SAFe® normalization approach is leading to poor behaviors

The second issue with SAFe®’s advice stems from my own consulting team’s experience with clients using the SAFe® story point normalization and initialization process. In our experience it demonstrably leads to

  • anchored behavior, i.e. non-anonymous and non-simultaneous effort and/or complexity estimating by teams,
  • non-relative estimating, i.e. the use of hours as a means to derive story points, which means of course that one might as well just use hours (at least that is more honest), and
  • management imposition of target velocities on teams as a misguided productivity motivator[xx].

With regard to the last bullet, let’s remind ourselves that in order to double a team’s velocity so that they can meet a target velocity imposed on them, all the team needs to do is halve the size of the reference requirement or user story, or double the number of story points assigned to that reference story.
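The arithmetic of that gaming is worth spelling out (a contrived illustration; effort is measured in made-up “work units”):

```python
# Re-anchoring on a reference story half the size doubles every
# estimate, and therefore the velocity, with no change in output.

stories = [2, 4, 8, 4]  # true effort of the sprint's stories, in work units

def points(story_effort, reference_effort):
    # Relative estimate: how many reference stories "fit" in this story.
    return story_effort / reference_effort

velocity_before = sum(points(s, 2) for s in stories)  # reference = 2 units
velocity_after = sum(points(s, 1) for s in stories)   # reference halved
print(velocity_before, velocity_after)  # -> 9.0 18.0
```

Same team, same work, double the velocity.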

Solution

“Help Teams excel, don’t punish them.”[xxi]

SAFe® claims that story point normalization is needed “so that estimates for Features or Epics that require the support of multiple teams are based on the same story point definition, allowing a shared basis for economic decision making.”[xxii] The author does not buy this argument. Each team has a run-rate (cost per unit time), and each team commits to developing a certain set of requirements, and therefore value, in each 2-week iteration and/or in each 10-week program increment. That value is sufficient to determine the economics of the situation where tradeoffs are necessary; such tradeoffs take place no lower than at the team level anyway. Moreover, a team with a history of performing Scrum that is subsequently assigned to an Agile Release Train arrives at the Train’s first PI Planning[xxiii] meeting with an unnormalized velocity already in place. One should be reluctant to disturb the team’s existing velocity.

Suppose a team is assigned to an ART and is also just starting to use Scrum. How should such a team initialize its velocity? Several expert Scrum sites warn against anchoring with time, only to propose a time-based initialization method just as SAFe® does (e.g. [xxiv]). VersionOne suggests what may be a better procedure: “Initially, teams new to Agile software development [with Scrum] should just dive in and select an initial velocity using available guidelines and information.”[xxv] That is, you know your team; just give it your best shot! Remember, this exercise starts with a reference user story, the story to which an arbitrary story point value was assigned, be that value 1, 2, or 3 (different Agile sites suggest each of these three values early in the Fibonacci sequence). Will your initial velocity be right? Quite unlikely! The goal is not the impossible one of being predictable in your very first sprint. The goal is the continuous improvement of the team’s predictability over time. Predictability is valuable[xxvi] because it generates trust. This is a good goal.

Epilogue … not everything transcribed well from the original Word document. Please let me know if you see any errors, thank you.

[i] Dean Leffingwell’s framework for scaling Agile development – see http://www.scaledagile.com (corporate/administrative) and http://www.scaledagileframework.com (technical, and by the way, highly “clickable”)

[ii] What is Scrum? : https://www.scrum.org/resources/what-is-scrum?gclid=Cj0KCQiAyZLSBRDpARIsAH66VQItwbMIu3mxrGvzBy2P-ZWhn9AhkWLTbN7yY7q3fYr_Z8-9vnBRrogaAnl0EALw_wcB

[iii] https://en.wikipedia.org/wiki/Planning_poker

[iv] https://www.mountaingoatsoftware.com/agile/user-stories

[v] https://en.wikipedia.org/wiki/Delphi_method

[vi] https://en.wikipedia.org/wiki/Wideband_delphi

[vii] The Fibonacci sequence, defined by F(n+2) = F(n+1) + F(n) where F(1)=1 and F(2)=1 (or optionally F(0)=0 and F(1)=1), starts with (optionally) 0, then 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, etc. Regarding “modified”: one always modifies the sequence for use in estimating by including only a single 1. Additionally, perhaps because it’s easier to think about these numbers, larger numbers can be rounded, e.g. 20, 40, 100 instead of 21, 34, 55, 89, and sometimes more esoteric values are included such as 0 (meaning trivial), ½, infinity, “?” and the flippant “I’ll go make some coffee”. One such scheme is codified in a commercial card deck product: https://store.mountaingoatsoftware.com .

[viii] This intentionally avoids the current discussion in the literature about whether story pointing should be based on effort (per Cohn and others, e.g. https://www.mountaingoatsoftware.com/blog/dont-equate-story-points-to-hours) or complexity (per Giddings and others, e.g. https://www.clearvision-cm.com/blog/why-story-points-are-a-measure-of-complexity-not-effort/)

[ix] Why the phrase “a lot more” instead of “four times more”? After all, 8/2 is 4. The answer is that some experts/authors don’t believe it is correct to make that assumption, in particular because of the presence of uncertainty in the estimate. As with the complexity vs. effort argument referenced earlier, discussion of that topic is being intentionally avoided.

[x] It has been difficult to find where this was originally stated. Wikipedia’s Planning_poker page says “citation needed”. Several other references were consulted, and they either make this statement without citation, or they cite Wikipedia. A reasonable guess is that it’s in one of Mike Cohn’s books. Stack Overflow, at https://stackoverflow.com/questions/9362286/why-is-the-fibonacci-series-used-in-agile-planning-poker, contains the amusing statement that this description on Wikipedia holds “the mysterious sentence” and then echoes the phrase, “reflect the inherent uncertainty in estimating larger items”. Regardless, the author believes the statement to be reasonably accurate.

[xi] http://blogs.collab.net/agile/perfectly-predictable-why-story-points-are-better-than-detailed-estimates  and http://gettingpredictable.com/the-attitude-of-estimation/

[xii] https://en.wikipedia.org/wiki/Golden_ratio

[xiii] I swear I have read this before! and it was in a decent reference; I am searching desperately for the citation, yes indeed … but I have not yet found it

[xiv] https://www.scrum.org/forum/scrum-forum/7897/why-do-we-use-fibonacci-series-estimation

[xv] http://www.scaledagileframework.com/iteration-planning/

[xvi] Velocity: as used here, velocity is a key to improving the predictability of an Agile development team. Velocity is an assessment of how many story points a single team can commit to achieving, or performing, in a single iteration or sprint. When a team has a history of prior sprints’ story point achievement, velocity is some reasonable function of that history; the function is determined by the team, but an average is a good start. When the team has no such history, SAFe®’s normalization/initialization process might be applied. Scrum Inc. has a good page on velocity: https://www.scruminc.com/velocity/ .

Another good page on velocity is: https://www.scrumalliance.org/community/articles/2014/february/velocity .

[xvii] ART: a SAFe® Agile Release Train, SAFe®’s organizational structure for multiple, persistent Agile development teams; see http://www.scaledagileframework.com/agile-release-train

[xviii] FTE: full-time equivalent

[xix] https://www.mountaingoatsoftware.com/blog/dont-equate-story-points-to-hours

[xx] https://vimeo.com/49263000 is a superb video by Dan Pink which speaks to how real motivation of knowledge workers arises.

[xxi] https://www.scrumalliance.org/community/articles/2014/february/velocity

[xxii] http://www.scaledagileframework.com/iteration-planning/

[xxiii] http://www.scaledagileframework.com/pi-planning/

[xxiv] https://stackoverflow.com/questions/1232281/how-to-measure-estimate-and-story-points-in-scrum “… start out by assuming a story point is a single ‘ideal day’ …”

[xxv] https://www.versionone.com/agile-101/agile-management-practices/agile-scrum-velocity/

[xxvi] https://dzone.com/articles/predictability-really-what-we and https://uxmag.com/articles/being-predictable ; also information on predictability metrics: https://www.leadingagile.com/2013/07/agile-health-metrics-for-predictability/ and http://www.scaledagileframework.com/metrics/#P2