Dumbbell Delivery; Antifragile Software

Not online fitness shopping. Not the brogrammer pumping iron. This is a brief discussion of Antifragile – the latest book by Nassim Nicholas Taleb – and relevant insights for software delivery or other complex work.

This isn’t meant to be an exhaustive exploration of the topics. It’s more a join-the-dots exercise, and it’s left up to the reader to explore topics of interest.

Antifragile

Antifragile is a word coined by Taleb to describe things that gain from disorder. Not things that are impervious to disorder; the words for that are robust, or resilient. Of course, things that are harmed by disorder are fragile. Consideration of the fragile, the robust, and the antifragile leads Taleb in many interesting directions.

Fragile, Robust, and Antifragile Software

A running software program is fragile. It is harmed by the most minor variations in its source code, its build process, its dependencies, its runtime environment and its inputs.

But software is eating the world. The global software ecosystem has grown enormously over an extended time – time being a primary source of variation – and hence appears to be antifragile. How do we reconcile this apparent paradox?

Here is a grossly simplified perspective.

First, software code can evolve very quickly, passing on an improved design to the next generation of runtime instances. In this way, tools, platforms, libraries and products rapidly become more robust. However, human intervention is still required for true operational robustness.

Second, humans exercise optionality in selecting progressively better software. In this way, beneficial variation can be captured, deleterious variation discarded, and software goes from robust to antifragile.

So – as fragile parts create an antifragile whole – runtime software instances are fragile, but fragile instances that are constantly improved and selected by humans create an antifragile software ecosystem. (If software starts doing this for itself, we may be in trouble!)

Some Delivery Takeaways

Yes, I know that’s an oxymoron. Nonetheless, here are some of my highlights. It’s a while now since I read the book, and I might add to this in future, so don’t take it as the last word.

Dumbbell Delivery

The idea of “dumbbell”/“barbell” risk management is that you place your bets in one of two places, but not in between. You first ensure that you are protected from catastrophic downside, then expose yourself to a portfolio of potentially large upsides. In such cases, you are antifragile.

If, instead, you spread yourself across the middle of the dumbbell, you carry both unacceptably large downside exposure and insufficiently large upside exposure. In such cases, you are fragile.

For me, “dumbbell delivery” is how we counter insidious elements of the construct of two-speed IT (insidious because no one has ever asked to go slow, or asked for high risk as the alternative). We ensure any project is as protected as possible from catastrophic downside – by decoupling the commission of error from any impact on operations or reputation – and as exposed as possible to potentially large upsides – by providing maximum freedom to teams to discover and exploit opportunities in a timely manner.

Donald Reinertsen makes a similar argument for exploiting the asymmetries of product development payoffs in The Principles of Product Development Flow.
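
To make the asymmetry concrete, here is a toy Monte Carlo sketch. The numbers are invented purely for illustration – nothing below comes from Taleb or Reinertsen – but they show the structural point: the barbell caps the downside while keeping plenty of upside exposure, whereas the “middle” carries a hidden catastrophic tail.

```python
import random

def barbell(trials=100_000):
    """90% of capital protected from loss, 10% spread across long-shot bets.
    (Invented, illustrative numbers.)"""
    outcomes = []
    for _ in range(trials):
        safe = 0.90                                           # cannot be lost
        bets = 0.10 * (20 if random.random() < 0.10 else 0)   # occasional 20x payoff
        outcomes.append(safe + bets)
    return min(outcomes), sum(outcomes) / trials

def middle(trials=100_000):
    """Everything in a 'moderate' position with a small chance of total loss."""
    outcomes = [0.0 if random.random() < 0.05 else 1.10 for _ in range(trials)]
    return min(outcomes), sum(outcomes) / trials

print("barbell (worst case, average):", barbell())   # worst case is 0.90
print("middle  (worst case, average):", middle())    # worst case is 0.00
```

The averages come out comparable; the worst cases do not – and the worst case is the thing dumbbell delivery asks you to deal with first.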

Via Negativa

Those who intervene in complex systems may cause more harm than good. This is known as iatrogenics in medicine. To manage complex systems, removing existing interventions is more likely to be successful than making additional interventions, as each additional intervention produces unanticipated (side) effects by itself, and unanticipated interactions with other interventions in tandem. Via negativa is the philosophy of managing by taking things away.

Software delivery, and organisations in general, are complex in that they are difficult to understand and respond unpredictably to interventions. What’s an example of an intervention we could take away?  Well, let’s say a project is “running late”. Instead of adding bodies to the team or hours to the schedule, start by trying to eliminate work through a focus on scope and quality. Also, why not remove targets?

Big Projects, Monolithic Systems

Anything big tends to be fragile. Break it into smaller pieces for greater robustness. Check.

Waterfall and Agile

Waterfall done by the book is fragile. Agile done as intended is antifragile.

Procrustean Beds

Forcing natural variation into pre-defined, largely arbitrary containers creates fragility. Velocity commitments and other forms of management by performance target come to mind.

Skin in the Game

Of course, anyone making predictions should have skin in the game. On the other hand, Hammurabi’s code is the converse of the safe-to-fail environment.

The Lindy Effect on Technology Lifespan

The life expectancy of a technology increases the longer it has been around. Remember this the next time you want to try something shiny.

Phenomenology and Theory

Phenomenology may be superior to theory for decision-making in complex work. Phenomenology says “if x, then we often observe y”. Theory says “if x, then y, because z”. Theory leads to the illusion of greater certainty, and probably a greater willingness to intervene (see above).

Flaneurs and Tourists

Chart your own professional journey. Allow yourself the time and space for happy discoveries.

Narrative Visualisation Tools

I use narrative visualisations a lot. I like to frame evidence so that it commands attention, engages playful minds, and tells its own story (see also Corporate Graffiti). I’ll put new tools on GitHub as I create them. Here are three to start.

Visualising Stand-Up Attendance

I used the Space Invader metaphor with a busy leadership team to explain how things would slip through the gaps from day to day if they didn’t attend stand-up in sufficient numbers and with sufficient regularity. The invaders represent the team members present each day, and each advancing row is a new day. The goal of the game is reversed in this case – we want the invaders to win! The team loved it and loved seeing their improved attendance reflected in a denser mesh of invaders.

Standup Space Invaders

Source on GitHub.

Aggregating Retrospectives

Useful if you want to aggregate multiple retrospectives – either the same team over time, or multiple teams on a common theme – and present them back while preserving the sincerity of the original outputs.

Re-retro screenshot

Source on GitHub.

Cycle Times from Trello

Trello is a wonderful tool for introducing visual management. It is not, however, great for reporting. Trycle (source on GitHub) will calculate cycle times for all cards transitioning between two lists using the JSON export of a Trello board (or the dwell time if just one list). Visuals and narrative not included.
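
As a rough illustration of the approach (a minimal sketch, not the Trycle source itself), the code below assumes the board export’s actions array records card moves as updateCard actions carrying listBefore/listAfter names and a date; the list names “In Progress” and “Done” are placeholders.

```python
import json
from datetime import datetime

START_LIST, END_LIST = "In Progress", "Done"   # placeholder list names

def parse_date(s):
    # Trello action dates look like "2014-03-01T10:15:30.000Z"
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")

def cycle_times(board_json_path):
    with open(board_json_path) as f:
        board = json.load(f)

    entered, finished = {}, {}
    for action in board.get("actions", []):
        data = action.get("data", {})
        if action.get("type") != "updateCard" or "listAfter" not in data:
            continue                                   # keep only moves between lists
        card = data["card"]["id"]
        when = parse_date(action["date"])
        name = data["listAfter"]["name"]
        if name == START_LIST:
            entered[card] = min(when, entered.get(card, when))     # first entry
        elif name == END_LIST:
            finished[card] = max(when, finished.get(card, when))   # last arrival
    # cycle time in days for cards that passed through both lists
    return {card: (finished[card] - entered[card]).days
            for card in entered if card in finished}

print(cycle_times("board.json"))
```

Dwell time for a single list falls out the same way, by differencing entry to and exit from that one list.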

Visual Knowledge Cycles

Visualisation is a key tool for the management of knowledge, especially knowledge from data. We’ll explore different states of knowledge, and how we can use visualisation to drive knowledge from one state to another, as individual creators of visualisation and agents within an organisation or society.

Visualisation Cycle

(There’s some justifiable cynicism about quadrant diagrams with superimposed crap circles. But, give me a chance…)

Awareness and Certainty about Knowledge

We’re used to thinking about knowledge in terms of a single dimension: we know something more or less well. However, we’ll consider two dimensions of knowledge. The first is certainty – how confident are you that what you know is right? (Or wrong?) The second is awareness – are you even conscious of what you know? (Or don’t know?)

These two dimensions define four states of knowledge – a framework you might recognise – from “unknown unknowns” to “known knowns”. Let’s explore how we use visualisation to drive knowledge from one state to another.
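
Spelled out as a trivial lookup, purely for reference and using the labels from this piece:

```python
# (awareness, certainty) -> state of knowledge
KNOWLEDGE_STATES = {
    ("low",  "low"):  "unknown unknowns",   # where discovery starts
    ("high", "low"):  "known unknowns",     # questions awaiting answers
    ("high", "high"): "known knowns",       # answered questions
    ("low",  "high"): "unknown knowns",     # biases (see Inductive Drift below)
}
```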

Knowledge states

(Knowledge is often conceived along other dimensions, such as tacit and explicit, due to Nonaka and Takeuchi. I’d like to include a more detailed discussion of this model in future, but for now will note that visualisation is an “internalisation” technique in this model, or an aid to “socialisation”.)

Narrative Visualisation

I think this is the easiest place to start, because narrative visualisation helps us with knowledge we are aware of. Narrative visualisation means using visuals to tell a story with data.

Narrative Visualisation

We can use narrative visualisation to drive from low certainty to high certainty. We can take a “known unknown”, or a question, and transform it to a “known known”, or an answer.

“Where is time spent in this process?” we might ask. A pie chart provides a simple answer. However, it doesn’t tell much of a story. If we want to engage people in the process of gaining certainty, if we want to make the story broader and deeper, we need to visually exploit a narrative thread. Find a story that will appeal to your audience and demonstrate why they should care about this knowledge, then use the narrative to drive the visual display of data. Maybe we emphasise the timeliness by displaying the pie chart on a stopwatch, or maybe we illustrate what is done at each stage to provide clues for improvement. (NB. Always exercise taste and discretion in creating narrative visualisations, or they may be counter-productive.)
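
For a sense of the starting point, here is a minimal matplotlib sketch with invented stage timings: the pie answers the question, and the annotation is a first, crude step towards the narrative layer.

```python
import matplotlib.pyplot as plt

# Invented timings for "where is time spent in this process?"
stages = ["Waiting for approval", "Analysis", "Build", "Test", "Deploy"]
days = [12, 3, 5, 4, 1]

fig, ax = plt.subplots()
ax.pie(days, labels=stages, autopct="%1.0f%%", startangle=90)
ax.set_title("Where the 25 elapsed days go")
# The narrative layer: point at the slice the audience should care about.
ax.annotate("Nearly half the elapsed time is queueing, not working",
            xy=(-0.5, 0.5), xytext=(1.1, 1.1),
            arrowprops=dict(arrowstyle="->"))
plt.show()
```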

Here is a brilliant and often cited narrative visualisation telling a powerful story about drone strikes in Pakistan.

Screen shot of Pitch Interactive Drones Visualisation

The story also provides a sanity check for your analysis – is the story coherent, is it plausible? This helps us to avoid assigning meaning to spurious correlation (eg, ski accidents & bed-sheet strangulation), but do keep an open mind all the same.

Discovery Visualisation

But where do the questions to be answered come from? This is the process of discovery, and we can use visualisation to drive discovery.

Discovery Visualisation

Discovery can drive from low awareness, low certainty to high awareness, low certainty – from raw data to coherent questions. Discovery is where to start when you have “unknown unknowns”.

But how do you know you have “unknown unknowns”? Well, the short answer is: you do have them – that’s the thing about awareness. However, we’ll explore a longer answer too.

If someone drops a stack of new data in your lap (and I’m not suggesting that is best practice!), it’s pretty clear you need to spend some time discovering it, mapping out the landscape. However, when it’s data in a familiar context, the need for discovery may be less clear – don’t you already know the questions to be answered? We’ll come back to that question later.

A classic example of this kind of discovery can be found at Facebook Engineering, along with a great description of the process.

Facebook friends visualisation

In discovery visualisation, we let the data lead, we slice and dice many different ways, we play with the presentation, we use data in as raw a form as possible. We don’t presuppose any story. On our voyage of discovery, we need to hack through undergrowth to make headway and scale peaks for new vistas, and in that way allow the data to reveal its own story.
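
In practice, that restless slicing can be as plain as a scratchpad script. A sketch, assuming a hypothetical tickets.csv with opened/closed dates plus team, category and priority columns:

```python
import pandas as pd

# Hypothetical raw export: one row per support ticket.
df = pd.read_csv("tickets.csv", parse_dates=["opened", "closed"])
df["days_open"] = (df["closed"] - df["opened"]).dt.days

# Slice the same raw data several ways and just look - no story presupposed.
print(df.groupby("team")["days_open"].describe())
print(df.groupby(df["opened"].dt.dayofweek).size())    # intake by weekday
print(pd.crosstab(df["category"], df["priority"]))

# Small multiples: one histogram per team, letting any odd shape stand out.
df.hist(column="days_open", by="team", bins=20, sharex=True)
```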

Inductive Drift

What if you’ve done your discovery and done your narration? You’re at “known knowns”, what more need you do?

If the world was linear, the answer would be “nothing”. We’d be done (ignoring the question of broader scope). The world is not linear, though. Natural systems have complex interactions and feedback cycles. Human systems, which we typically study, comprise agents with free will, imagination, and influence. What happens is that the real world changes, and we don’t notice.

We don’t notice because our thinking process is inductive. What that means is that our view of the world is based on an extrapolation of a very few observations, often made some time in the past. We also suffer from confirmation bias, which means we tend to downplay or ignore evidence which contradicts our view of the world. This combination makes it very hard to shift our superstitious beliefs. (The western belief that men had one fewer rib than women persisted until the 16th century CE due to the biblical story of Adam and Eve.)

So where does this leave us? It leaves us with knowledge of which we are certain, but unaware. These are the slippery “unknown knowns”, though I think a better term is biases.

Unlearning Visualisation

Unlearning visualisation is how we dispose of biases and embrace uncertainty once more. This is how we get to a state of “unknown unknowns”.

Unlearning Visualisation

However, as above, unlearning is difficult, and may require overwhelming contradictory evidence to cross an “evidentiary threshold”. We must establish a “new normal” with visuals. This should be the primary concern of unlearning visualisation – to make “unknown unknowns” look like an attractive state.

Big data is particularly suited to unlearning, because we can – if we construct our visualisation right – present viewers with an overwhelming number of sample points.

Unlearning requires both data-following and story-telling approaches. If we take away one factually-based story viewers tell themselves about the world, we need to replace it with another.

Recap

Visualisation Cycle

Your approach to visualisation should be guided by your current state of knowledge:

  • If you don’t know what questions to ask, discovery visualisation will help you find key questions. In this case, you are moving from low awareness to high awareness of questions, from “unknown unknowns” to “known unknowns”.
  • If you are looking to answer questions and communicate effectively, narrative visualisation helps tell a story with data. In this case, you are moving from low certainty to high certainty, from “known unknowns” to “known knowns”.
  • If you have thought for some time that you know what you know and know it well, you may be suffering from inductive drift; use unlearning visualisation to establish a new phase of inquiry. In this case, you are moving from high certainty and awareness to low certainty and awareness, returning to “unknown unknowns”.

Of course, it may be difficult to assess your current state of knowledge! You may have multiple states superimposed. You may only be able to establish where you were in hindsight, which isn’t very useful in the present. However, this framework can help to cut through some of the fog of analysis, providing a common language for productive conversations, and providing motivation to keep driving your visual knowledge cycles.

Corporate Graffiti – Being Disruptive with Visual Thinking

As you go about your work you’ll come up against walls. Some walls will be blank and boring blockers to progress. These need decoration; spraying with layers that confer meaning. So pick a corner and start doodling. With a new perspective, you’ll find a way around the blockers. Other walls will come with messages – by design or default – leading you in a certain direction. If this isn’t where you want to go, you’ll need to plot your own course by subverting or overwhelming the prevailing visuals.

This is your challenge, and your opportunity for innovation: to disrupt the established visual environment with new ways of looking at the world that, in turn, unlock new ways of thinking. If you think you could make your organisation more agile with some disruptive visual thinking, read on for my experience [on the Organisational Agility channel of ThoughtWorks Insights].

Seeing Stars – Bespoke AR for Mobiles

I presented on the development of the awesome Fireballs in the Sky app (iOS and Android) at YOW! West with some great app developers. See the PDF. (NB. there were a lot of transitions)

Abstract

We’ll explore the development of the Fireballs in the Sky app, designed for citizen scientists to record sightings of meteorites (“fireballs”) in the night sky. We’ll introduce the maths for AR on a mobile device, using the various sensors, and we’ll throw in some celestial mechanics for good measure.

We’ll discuss the prototyping approach in Processing. We’ll describe the iOS implementation, including: libraries, performance tuning, and testing. We’ll then do the same for the Android implementation. Or maybe the other way around…
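
As a flavour of the maths – a minimal numpy sketch of the core geometry only, not the app’s actual code, and assuming the Android-style convention in which the sensor rotation matrix maps device coordinates to world coordinates (X east, Y north, Z up):

```python
import numpy as np

def sky_direction(azimuth_deg, altitude_deg):
    """Unit vector in world coordinates (X east, Y north, Z up) for a point
    in the sky at the given azimuth (clockwise from north) and altitude."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    return np.array([np.cos(alt) * np.sin(az),   # east
                     np.cos(alt) * np.cos(az),   # north
                     np.sin(alt)])               # up

def to_screen(world_dir, R, focal_px, cx, cy):
    """Project a world-space direction onto the screen. R is the 3x3 rotation
    matrix from the device sensors (device -> world), so R.T takes us back to
    device coordinates (X right, Y up the screen, Z out of the screen)."""
    d = R.T @ world_dir
    if d[2] >= 0:
        return None                      # behind the rear-facing camera
    x = cx + focal_px * (d[0] / -d[2])   # pinhole projection
    y = cy - focal_px * (d[1] / -d[2])   # screen y grows downwards
    return x, y
```

Overlaying a marker for a reported fireball is then just a matter of drawing at (x, y) whenever the projection returns a point.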

Playing Games is Serious Business

Simple game scenarios can produce the same outcomes as complex and large-scale business scenarios. Serious business games can therefore reduce risk and improve outcomes when launching new services. Gamification also improves alignment and engagement across organisational functions.

This is a presentation on using games to understand and improve organisational design and service delivery, which I presented at the Curtin University Festival of Teaching and Learning.

(Don’t be concerned by what looks like a bomb-disposal robot in the background.)

The slides provide guidance on applying serious business games in your context.

Iterative vs Incremental Flashcard

Sometimes, the difference between incremental and iterative (software) product development is subtle. Often it is crucial to unlocking early value or quickly eliminating risk – an iterative approach will do this for you, while an incremental approach will not.

Incremental vs Iterative

Let’s review the distinction. Incremental means building something piece by piece, like creating a picture by finishing a jigsaw puzzle. This is great for visibility of progress, especially if you make the pieces very small. However, the inherent risk is that an incremental build is not done until the last piece is in place. Splitting something into incremental pieces implies the finished whole is understood (by the jigsaw designer, at least). If something changes during the build, like a bump to the table, all of your work to date is at risk. Future work – to finish the whole – is also at risk of delivering less than optimal value, if our understanding of value changes during an incremental build. Much development work done under an agile banner is in fact incremental, and therefore more like a mini-waterfall approach than an essentially agile approach.

Iterative, on the other hand, means building something through successive refinements, starting from the crudest implementation that will do the job, and at each stage refining in such a way that you maintain a coherent whole. You might think of this like playing Pictionary. When you are asked to draw the Mona Lisa, you start with (perhaps) a rectangle with a circle inside. If your partner guesses at this point, great! If not, you might add the smile, the hair, the eyes. Hopefully, your partner has guessed by now.  If not, embellish the frame, add the landscape, draw da Vinci painting it, show it hanging in the Louvre, etc. Your risk exposure (that time will expire before your partner guesses) is far lower with an iterative approach. With each iteration, you have captured some value. If your understanding of value changes (eg, your partner shouts something unhelpful like “stockade”), you still retain your captured value, and you can also adjust your future activities to accommodate your new understanding.

I think I capture all of this in the diagram above. If you’re having trouble articulating the difference between incremental and iterative (because both show similar signs of progress at times), or you’re concerned about the risk profile of your delivery, refer to this handy pocket guide.

The Like-for-Like Project Antipattern

Like-for-like replacement.

Sounds pretty simple, doesn’t it? That’s an easy project to deliver, right?

Wrong.

Why would we do a like-for-like (L4L) project? The IT group may want to upgrade to a new system, because the old one is broken, or because they’ve found something better. Maybe we want to avoid re-training users. Or, maybe our L4L project is hiding in a larger project. It could be phase 1, in which functionality is replicated, while new value is delivered in phase 2. There’s no strong pull from users for this L4L project, so to avoid ‘disruption’, the project plans to hot swap a technology layer while otherwise preserving functionality, much like a magician yanking off a tablecloth without disturbing any of the tableware. However, someone is about to have their dinner spoiled.

A clarification. L4L exhibits some of the characteristics of refactoring. But refactoring deliberately tries to stay small, in time and cost. The success criteria are also easily established, for instance, as unit tests. I’m talking here about replacing one non-trivial IT system with another, especially if the target system is primarily bought rather than built.

The Key Problem

Too many constraints

Framing a project as L4L is not just demonstrably incorrect, it is lazy to the point of negligence. The L4L framing is incorrect because it can never be achieved. The current technology solution has constraints, and business processes have evolved under the current technology constraints. But the new solution will have new technological constraints. There is no chance the two sets of constraints will be equivalent (if they were equivalent, why would you bother changing systems?). Therefore, business processes will be forced to evolve under the new solution. If business processes are changing, this cannot be a L4L project, and the framing is incorrect.

It’s irrelevant whether we’re talking about L4L functionality or L4L business outcomes. As above, L4L functionality is a logical impossibility. If we’re talking L4L business outcomes, which users are going to support a disruptive change in functionality in order to be able to achieve exactly what they do today, and no more?

The L4L framing is lazy because it discourages critical thinking. Stakeholders will confuse the apparent simplicity of framing with simplicity of execution, meaning they will be less engaged in resolving the thorny problems that will inevitably crop up. This is especially important for project sponsors and other senior stakeholders. The business will not be engaged in a process that gives them no voice. This will be doubly so if, as above, the project is not actually like for like, and the deleterious changes in business process are being driven by IT.

The real negligent laziness, however, is in assuming that collectively we haven’t learnt any more about how to deliver business value since we implemented the current system. The current system is probably five to ten years old, and inevitably has shortcomings – why would we copy it? We might, at great effort, be able to figure out what the system is capable of, then build this, but this is far more than users actually use, and entirely different from what they want. This lazy framing leads to much more work than would be required if we simply went to the business/customers to understand what they really need at this time.

You’ll end up taking longer, costing more, and delivering poorer outcomes if you frame your project as like-for-like.

More Problems

Like for like framing is the key problem, but it spawns a host of other problems:

  • Inflexible execution means no ability to respond to change
  • Analysis by reverse engineering is very wasteful
  • Sysadmin as the customer precludes insight
  • Prioritisation is backwards to avoid destroying value
  • Value delivered in a Phase 2, which never happens
  • Reporting misses the fact that there are two different things to report

These are substantial topics in their own right. I’d like to finish this post sometime, so I’ll try to pick up these threads in detail in future posts.

What Might Happen?

So, how might an L4L project play out?

Well, it’s hard to predict the future, but like plutonium, L4L projects are unstable and tend to disintegrate. While a typical agile project is self-correcting, in that everyone sees the value, scope can be adjusted to meet time, and so on, a L4L project has no give.

When estimates are discovered to be optimistic – as complex workarounds will inevitably be required to deliver old results on new technology – the only option for a true L4L project is to run late. Or, we expose the L4L fallacy by making functional or non-functional compromises. Likely, it will be a combination of both.

Stakeholders who were reluctant from the start are now in an even worse position. They originally stood to gain nothing but disruption, but now they definitely lose out. They may begin political manoeuvring to make the project go away. Governance will probably start sniffing around if this late, costly project is not delivering value.

Again, at this point, there’s not much room to move in a L4L project. There may be nothing of value delivered. You’re stuck with either writing the project off or toughing it out to an expensive and unsatisfactory conclusion. You may even tough it out only to write it off later. Of course, you can change the way you are doing things, but that basically means starting over. So, why not get it right from the start?

The Solution

The solution is, of course, to frame the project as delivering new value.

Project framing report card
Like-for-like:

  • Technically can’t be achieved
  • Business disengaged – everyone wants to go last
  • Analysis & design = reverse engineering
  • Wasteful and high risk

Delivering value:

  • Value drives right behaviours
  • Business engaged – everyone wants to be first
  • Break current constraints for better solution
  • Efficient and low risk

Even if you’re being forced to replace a system, make sure you go out to the stakeholders and ask them what they want to make their lives better, right now. This is the only way you really get their buy-in, the only way they’ll make do with less here and there because they’re getting more overall, the only way they’ll be engaged in resolving difficult delivery problems, the only way they’ll back you up instead of sell you out when things get tough. It’s also the only way you’ll avoid perpetuating those arbitrary constraints you inherit with a L4L project.

Next time you’re asked to start a L4L project, start by changing the framing.

Backwards Prioritisation

Imagine you’re in the middle of a big software project. Maybe, you’re replacing an internal system, or something like that. You make an observation: when asked to prioritise, everyone wants to go last. They want to hang on to the status quo for as long as possible.

Instead of beating down your door to get their hands on desirable new features, they are all running for the exits in the hope that your project fails before it impacts them.

This is logical from an economic or financial perspective. If value is to be destroyed, then you should destroy the items of least value first, and of most value last, as you maximise value-in-use in this scenario. When your action would result in reduced future cash flows, the longer you wait before acting, the better.
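
A toy calculation with invented numbers makes the point. Suppose three features are worth 10, 5 and 1 value units per month while still in use, and one is switched off each month:

```python
def value_in_use(order):
    """Total value enjoyed across the retirement schedule, assuming one item is
    switched off per month and each item keeps paying out until the month it is
    retired. (Invented numbers, for illustration only.)"""
    return sum(value * month for month, value in enumerate(order, start=1))

print(value_in_use([1, 5, 10]))   # least valuable first: 1*1 + 5*2 + 10*3 = 41
print(value_in_use([10, 5, 1]))   # most valuable first: 10*1 + 5*2 + 1*3 = 23
```

Retiring the least valuable items first keeps far more value in use, which is exactly why everyone wants to go last.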

Imagine your home was being gradually inundated; floodwater seeping in and rising toward the ceiling. Imagine that you were waiting for rescue and knew that you could take some things with you. You might pile up your possessions in the living room. Where would you put your most treasured possessions? At the top of the pile, of course, where they would be last to suffer water damage. Though you don’t know when help will arrive, when it does you will know you have saved only the most important stuff.

Maybe, though, the new solution is just as good as – if not better than – the old, but people fear change and disruption. If the level of disruption is high, and the new solution is no better, this is indeed value destruction. However, if the disruption is less than feared, and the new solution does offer benefits that haven’t been effectively sold, then this needs to be demonstrated to stakeholders so they come seeking change. This requires senior leaders to change the framing of the project to highlight the value created, not the value destroyed. It also requires the delivery team to support the new framing by delivering a high-profile change that adds value.

Just as value-creating projects maximise value delivered by creating the items of largest value first, value-destroying projects minimise value destroyed by destroying the items of largest value last. So, if your prioritisation looks backwards, ask seriously if your project is destroying value.

Data Visualisation: Good for Business

It was great to be part of the recent ThoughtWorks data visualisation event in Perth. There’s a summary on the ThoughtWorks Insights channel.

Visualisation is a topic I love talking about – especially demonstrating why it’s good for business – and presenting with Ray Grasso was a lot of fun.

Here’s the full video of the presentation.

If you want to pick and choose:

  • I start with the historical perspective and current state
  • 5.40, Ray starts the IMO story
  • 28.55, I start the call centre story
  • 41.53, Ray starts the NOPSEMA story
  • 54.39, We take questions

I’ve been talking to people about the event, and they always say something like:

“I’m such a visual person. I love it when people explain things to me visually.”

No-one ever says:

“Don’t show me a picture.”

Words are important, of course, as are other means of communicating. We all have multiple ways of processing information. However, visual processing is almost always a key component. Consider my friend the lawyer, who remembered cases because her lecturer pinned them on a map and illustrated them with holiday snapshots. I’m sure you have a similar example.

So we “see” that data visualisation is good for humans. And what’s good for humans is good for business. Key business outcomes include engaging communications, operational clarity, and unexpected insights.

Enough words. Browse the slides below or watch the presentation above.

Thanks to Diana Adorno for the feature pic.