The life-changing magic of tidying your work

Surprise! Managing work in a large organisation is a lot like keeping your belongings in check at home.

Get it wrong at home and you have mess and clutter. Get it wrong in the organisation and you have excessive work in progress (WIP), retarding responsiveness, pulverising productivity, and eroding engagement.

Reading Marie Kondo’s The Life-Changing Magic of Tidying Up (Amazon), I was struck by a number of observations about tidying personal belongings that resonated with how individuals, teams and organisations manage their work.

First, reading TLCMOTU helped me tidy my things better. Second, it reinforced lean and agile management principles.

I won’t review the book here. Maybe the methods and ideas resonate with you, maybe they don’t. However, because I think tidying is something that everyone can relate to, I will compare some of KonMari’s (as Marie Kondo is known) explanations of the management of personal belongings with the management of work in organisations. The translation heuristic is to replace “stuff” with “work”, and “clutter” with “excessive WIP”, to highlight the parallels.

I’d love to know if you find the comparison useful.

On the complexity of work storage systems

KonMari writes:

Most people realise that clutter is caused by too much stuff. But why do we have too much stuff? Usually it is because we do not accurately grasp how much we actually own. And we fail to grasp how much we own because our storage methods are too complex.

Organisations typically employ complex storage methods for their work: portfolio and project management systems with myriad arcane properties, intricate plans, baselines and revisions, budget and planning cycle constraints, capitalisation constraints, fractional resource allocations, and restricted access to specialists who are removed from the outcomes but embrace the management complexity.

And this is just the work that’s stored where it should be. Then there’s all the work that’s squirrelled away into nooks and crannies that has to be teased out by thorough investigation (see below).

Because organisations don’t comprehend the extent of their work, they invent ever-more complex systems to stuff work into storage – to maximise utilisation of capacity – which continues to hide the extent of the work.

Thus, we fail to grasp how much work is held in the organisation, and the result is excessive WIP, which inflates lead times and reduces productivity, failing customers and leaving workers disengaged. Simplifying the storage of work – as simple as cards on a wall, with the information we actually need to deliver outcomes – allows us to comprehend the work we hold, and allows us to better manage WIP for responsiveness and productivity.
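The claim that excessive WIP inflates lead times isn’t just rhetorical; it follows from Little’s Law (average lead time = average WIP ÷ average throughput). A minimal sketch in Python, using entirely hypothetical numbers, makes the arithmetic concrete:

```python
# Little's Law: average lead time = average WIP / average throughput.
# The throughput and WIP figures below are hypothetical, chosen only to
# illustrate how lead time grows when WIP grows and throughput does not.

def lead_time_weeks(wip_items: int, throughput_per_week: float) -> float:
    """Average lead time (weeks) for a stable system, per Little's Law."""
    return wip_items / throughput_per_week

throughput = 5.0  # items finished per week, held constant
for wip in (10, 30, 90):
    print(f"WIP = {wip:2d} items -> average lead time = {lead_time_weeks(wip, throughput):4.1f} weeks")
```

With the same team finishing five items a week, tripling the work held in progress triples the time any given item takes to get through.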

On making things visible

KonMari observes that you cannot accurately assess how much stuff you have without seeing it all in one place. She recommends searching the whole house first, bringing everything to the one location, and spreading the items out on the floor to gain visibility.

Making work visible, in one place, to all stakeholders is a tenet of agile and lean delivery. It reveals amazing insights, many unanticipated, about the volume, variety and value (or lack thereof) of work in progress. The shared view helps build empathy and collaboration between stakeholders and delivery teams. You may need to search extensively within the organisation to discover all the work, but understanding the sources of demand (as below) will guide you. A great resource for ideas and examples of approaches is Agile Board Hacks.

So get your work on cards on a wall so you can see the extent of your WIP.

On categories

KonMari observes that items in one category are stored in multiple different places, spread out around the house. Categories she identifies include clothes, books, etc. She contends that it’s not possible to assess what you want to keep and discard without seeing the sum of your belongings in each category. Consequently, she recommends thinking in terms of category, rather than place.

If we think organisationally in terms of place, we think of silos – projects, teams, functions. We can’t use these storage units to properly assess the work we hold in the organisation. Internal silos don’t reflect how we serve customers.

Instead, if we think organisationally in terms of category, we are thinking strategically. With a cascading decomposition of strategy, driven by the customer, we can assess the work in the organisation at every level for strategic alignment (strategy being emergent as well as explicit). Strategy could be enterprise level themes, or the desired customer journey at a product team level.

With work mapped against strategy, we can see in one place the sum of efforts to execute a given branch of strategy, and hence assess what to keep and what to discard. We can further assess whether the entire portfolio of work is sufficiently aligned and diversified to execute strategy.

So use your card wall to identify how work strategically serves your customers.

On joy

KonMari writes:

The best way to choose what to keep and what to throw away is to … ask: ‘Does this spark joy?’ If it does, keep it. If not, throw it out.

We may ask of each piece of work: ‘Is this work valuable?’ ‘Is it aligned to the purpose of the organisation?’ ‘Is it something customers want?’ If it is, keep it. If not, throw it out.

KonMari demonstrates why this is effective by taking the process to its logical conclusion. If you’ve discarded everything that doesn’t spark joy, then everything you have, everything you interact with, does spark joy.

What better way to spark joy in your people than to reduce or eliminate work with no value and no purpose?

On discarding first

KonMari observes that storage considerations interrupt the process of discarding. She recommends that discarding comes first, and storage comes second, and the activities remain distinct. If you start to think about where to put something before you have decided whether to keep or discard it, you will stop discarding.

Prioritisation is the act of discarding work we do not intend to pursue. Prioritisation comes first, based purely on value, before implementation considerations. Sequencing can be done with knowledge of effort and other dependencies. Then scheduling, given capacity and other constraints, is the process of deciding which “drawers” to put work in.
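As a sketch of that ordering – prioritise on value alone, then sequence with effort in view – here is a small Python example. The items, values, efforts and the value-per-effort sequencing heuristic are my own illustrative assumptions, not something the post prescribes:

```python
# Discard first, then sequence: a toy backlog with invented values and efforts.

work = [
    {"name": "A", "value": 8, "effort": 5},
    {"name": "B", "value": 2, "effort": 1},
    {"name": "C", "value": 9, "effort": 2},
    {"name": "D", "value": 1, "effort": 4},
]

# 1. Prioritise (discard) on value alone - no implementation considerations yet.
keep = [item for item in work if item["value"] >= 5]

# 2. Sequence the kept work, now that effort (and dependencies) can be considered.
#    Value per unit of effort is one common heuristic; others would do.
sequence = sorted(keep, key=lambda item: item["value"] / item["effort"], reverse=True)

print([item["name"] for item in sequence])  # ['C', 'A'] - B and D were discarded first
```

Scheduling – which “drawer”, that is, which team and time slot, each item goes into – then follows as a separate step, given capacity and other constraints.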

On putting things away

KonMari observes that mess and clutter is a result of not putting things away. Consequently she recommends that storage systems should make it easy to put things away, not easy to get them out.

Excessive WIP may also be caused by a failure to rapidly stop work (or perceived inability to do so). Organisational approaches to work should reduce the effort needed to stop work. For instance, with continuous delivery, a product is releasable at all times, and can therefore be stopped after any deployment. Work should be easily stoppable in preference to easily startable. (This could also be framed as “stop starting and start finishing”.)

Further, while many organisations aim for responsiveness with a stoppable workforce (of contractors), they should instead aim for a stoppable portfolio, and workforce responsiveness will follow.

On letting things go

A client of KonMari’s comments:

Up to now, I believed it was important to do things that added to my life …  I realised for the first time that letting go is even more important than adding.

I have written about the importance of letting go of work from the perspective of via negativa management in Dumbbell Delivery; Antifragile Software, and managing socialisation costs in Your Software is a Nightclub.

However, KonMari also observes that, beyond the mechanics of managing stuff (or work), there is a psychological cost of clutter (or excessive WIP). Her clients often report feeling constrained by perceived responsibility to stuff that brings them no joy. I suspect the same is true in the organisation: we fail to recognise and embrace possibilities because we are constrained by perceived responsibilities to work that ultimately has no value.

Imagine if we could throw off those shackles. That’s worth letting a few things go.

No Smooth Path to Good Design

The path to good design is bumpy, as we will demonstrate with four teapots. (Yes, teapots. Teapots are a staple of computer science and philosophy.)

The path to good design matters, because if you are trying to build a design capability, the journey will be smoother if you understand that the path is bumpy.

Leaders who appreciate the bumpy path can facilitate far greater value creation and support a more engaged group of workers.

What is design?

Design is an activity, but also a result: the specification for a product (service), which determines how it is made or delivered.

Performance is a measure of how a product actually functions, for a given task in a given context. Performance in the broadest sense includes emotional responses, static and dynamic physical characteristics, service characteristics, etc. For simplicity, let’s measure performance in monetary terms; eg. lifetime economic value.

Design is important as an activity and a result, because it is the prime determinant of performance that is within your control.

The smooth path

Teapot by Norman [1]
Consider the distinctive teapot from the cover of Don Norman’s Design of Everyday Things, where the handle – instead of opposing – is aligned with the spout.

We know a thing or two about teapots, so we assume this design has very poor performance!

However, we also assume that a traditional design with handle opposed to the spout produces the best performance.

We can plot our smooth model of how performance varies as a function of the angle between spout and handle.

Performance of teapot design variants

And it’s pretty clear how to find the best design. The more opposing the handle and spout, the better the performance, the more value created, and hence the better the design.

The first bump in the path

Yokode kyusu [2]
However, this model is broken. We can’t interpolate smoothly (linearly) between design points, as demonstrated by the Japanese yokode kyusu, which features a handle at right angles to its spout, to extract every last drop of tea.

With this new insight, and a further assumption that handles in between the points we’ve plotted (eg, 45 degrees) are much worse due to awkward twisting motions when pouring, we can draw a new model, which is already much less smooth.

Teapot performance with new information

What’s interesting about this landscape is that most design variants perform pretty poorly, and you must be close to a good design to find it. If you didn’t have the insight into teapot performance that we have assumed – if you had only tested performance at the awkward angles, and you had assumed smooth behaviour in between – you would likely miss the best designs and leave significant value on the table. (Note that the scale of this diagram should be greatly exaggerated to demonstrate the true size of value creation opportunities.)

Value created by exploration

So, this is the first lesson of the bumpy path to good design. We need to explore the performance of multiple design variants, and understand that small changes in design can have enormous impacts on performance, to be confident we are approaching our potential to create value.
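To make the sampling point concrete, here is a toy Python model of a bumpy performance landscape (the function and its peaks are entirely invented for illustration). Sampling only a few awkward angles misses the narrow peaks that fine-grained exploration finds:

```python
import math

# An invented "bumpy" landscape: performance vs handle angle, with narrow
# peaks near 90 degrees (yokode kyusu) and 180 degrees (opposed handle),
# and poor performance almost everywhere else.
def performance(angle_deg: float) -> float:
    def peak(centre: float, width: float, height: float) -> float:
        return height * math.exp(-((angle_deg - centre) / width) ** 2)
    return peak(180, 10, 1.0) + peak(90, 8, 0.9) + 0.1

coarse = [performance(a) for a in range(10, 180, 45)]  # a few awkward angles: 10, 55, 100, 145
fine = [performance(a) for a in range(0, 181, 5)]      # every 5 degrees

print(f"best design found with coarse sampling: {max(coarse):.2f}")
print(f"best design found with fine sampling:   {max(fine):.2f}")
```

The coarse search tops out around 0.3 while the fine search finds designs above 1.0: most of the value was sitting in peaks the coarse search never visited.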

Teapot with handle on top [3]
So far, we have only explored the impact of one design variable, but for any product we have effectively infinitely many design variables (if we can just conceive them). For instance, the handle of a teapot could also be on top, but we could also consider the shape, material, fixtures, etc. Then we could move beyond the handle to the design of the rest of the teapot!

Now consider the design and delivery of digital products and services. Constraints do exist, but infinite design variants still exist within those constraints. Further, like the rolled up dimensions of string theory, there are extra dimensions of design that are easy to miss, but once discovered can be expanded and explored to create ever more value.

The first lesson

How do leaders get this wrong? By failing to encourage the exploration of a sufficient number of design variants, and by failing to encourage the exploration of minor changes that have outsize impact.

As a leader, you must be prepared to carve out time and space, embrace uncertainty and ambiguity, and bring creativity, compassion and patience to the exploration process. As important as this is to creating value, it is also key to maintaining the engagement of teams involved in or interacting with design.

I’m often told that exploration feels inefficient. Or, rather, felt inefficient. The distinction is important. Hindsight bias obscures the reality that, before starting an exploration of a sufficiently bumpy landscape, we simply cannot know what we will find. So how do we measure efficiency of exploration? Certainly not by how quickly we arrive at a design, or by how many designs are discarded. Should we even measure efficiency of exploration? That is a better question. We should focus on net value creation, and do enough exploration to mitigate the risk that we are leaving significant value on the table.

This design sensibility, however, may not be apparent to the whole team. Designers will be frustrated being managed to a smooth path, while others who perceive the challenge to be simple may become frustrated when the bumpiness is allowed to surface. The team’s various activities may have different cadences that sometimes align, and sometimes don’t. This can create friction and dissatisfaction in teams. Some functional conflict is healthy in this regard, but as a leader, you must support and enable a team to focus on what it takes to create value.

The second bump in the path

I have used the word “assume” liberally and deliberately above. I have assumed a large number of things about the tasks that users of the teapots are seeking to achieve, and the broader contexts of use. I have further assumed that my readers share a traditional western notion of teapots and their use. I have done this to keep the explanation of the first bump simple – I hope.

But “assume” is at the root of the second bump. During product development, we can’t assume performance; we must test designs with users engaged in a task in a context. We may take shortcuts by prototyping, simulating, etc, but we must test as objectively as possible to obtain a meaningful prediction of a product’s performance, and of its potential to create value.

In a bumpy design landscape, poor predictions of actual performance carry significant opportunity cost.

Value created by testing

(Note also that during the development of a typical digital product/service, we are typically iteratively discovering the task and the context in parallel.)

We assumed, with our teapots above, that a spout aligned with the handle would lead to poor performance, but we didn’t test it (with a minor tweak in a hidden dimension). If we’d tested this traditional oriental design (as UX Designer Mike Eng did), we would have discovered that, for the task of serving oneself, in a solitary context, the aligned handle actually produces superior performance.

Aligned handle teapot [4]
I was surprised to find this teapot design existed when I stumbled upon the post from above. I suspect this teapot design has a specific name or an interesting story behind it, but I haven’t been able to track it down. However, it serves as an excellent demonstration that the best design paths are bumpy.

The second lesson

The second lesson is that assumptions about performance, task and context hide the inherent bumpiness in design. As a leader, you must recognise and challenge assumptions, encourage the testing of designs under the correct conditions, and appreciate that our understanding of task and context may evolve with testing.

There are many resources that discuss lightweight and effective approaches to UX research and testing; you could do worse than to start here.

Conclusion

We have discussed two major value creation activities in design:

  • Exploration and consequent discovery of performant designs
  • Testing and consequent selection of more performant designs

But these activities are overlooked or de-prioritised with a smooth mindset. While there is uncertainty, ambiguity and friction along the path, and sometimes progress is difficult to discern, as a leader, you must embrace the bumps because – if you are in the business of creating value – there is no smooth path to good design.

Image credits
  1. http://www.amazon.co.uk/Design-Everyday-Things-Donald-Norman/dp/0262640376
  2. http://commons.wikimedia.org/wiki/File:JapaneseTeapot.jpg
  3. http://www.wilkinpottery.com/product/teapot-top-handle/
  4. Ebay listing from seller http://www.ebay.com/usr/mitch8670

Concrete Culture Change

Culture is often difficult to define, and culture change even more so – what concrete actions do we need to take to change a culture?

Despite this apparent difficulty, it is possible to spend an hour or two with a group, and leave with consensus on practical actions for culture change.

This exercise achieves that by making culture change concrete. We look to the questions we ask every day as reinforcing values, and thus as drivers of culture. Then we challenge ourselves to find better questions, and explore what it will take to adopt those better questions in our specific context.

Questions driving culture

Let’s keep our definition of culture really simple: the sum of our everyday behaviours as a group.

To give an example: typically, you and your colleagues juggle many tasks at once. Multitasking is part of your culture.

What is driving this behaviour though? One strong driver is the questions that are asked in your group. For instance, in this environment, you probably find people explicitly asking something like “can you take this on?” The multitasking behaviour is a natural response to that question. Especially if all parties are, consciously or otherwise, implicitly asking themselves “how do we get everything done?”

Now let’s assume that you want to change your multitasking culture to one where people limit their work in progress to become more productive overall.

Making change more concrete

To change the behaviour, we can look for the driving questions and change those.

For instance, we might aim to change “how do we get everything done?” to “how do we do a great job of the most important things?”

And that is the heart of the change. If everyone is asking themselves, consciously or otherwise, “how do we do a great job of the most important things?”, their behaviours will follow that question. In this case (and with training and support as required), we expect they will try to identify priorities, understand success and deliver on that before moving on to the next thing. People can helpfully answer “no” to the old question “can you take this on?”, but more importantly, that question will no longer be asked as frequently, because it will cease to make sense.

However, that’s still not as concrete a recipe as we would like. The exercise (below) helps us get down to the concrete actions required in a given context to change one driving question to another.

Before we go any further, though, a reminder that questions do not exist in isolation, and that we must tackle a consistent set of questions simultaneously:

Today’s orthodoxy has institutionalised a set of internally consistent but dysfunctional beliefs. This has created a tightly interlocking and self-reinforcing system, a system from which it is very difficult to break free. Even when we change one piece, the other pieces hold us back by blocking the benefits of our change. When our change fails to produce benefits, we revert to our old approaches.

Donald G. Reinertsen, The Principles of Product Development Flow

The Exercise

This exercise can be run with the group whose culture we are looking to change.

At the end of the exercise, you will have a list of concrete actions that can be taken to change driving questions, and will have identified potential blockers to plan around.

To prepare:

  1. Observe the group and its behaviours
  2. Identify instances of counter-productive behaviours
  3. Analyse these behaviours to propose driving questions
  4. Pair current, undesirable driving questions with new, desirable driving questions
  5. Find examples to illustrate why each question should change

You should have something like the table below:

Example set of driving questions to change

The exercise can then be run as follows:

  1. Discuss the premise of changing culture by changing questions
  2. Share your first example of a pair of driving questions, and the instance of the behaviour (this should be an instance widely understood and accepted by the group)
  3. Work through the other question pairs in your list, and ask the group to come up with examples themselves. They will generally do so enthusiastically! It’s unlikely, but if they don’t, you have your prepared examples to fall back on.
  4. Because you won’t be able to solve everything in this session, prioritise as a group (through dot voting, etc) the question pairs to focus on (no more than 3 for the first session). Allow 30 mins to 1 hour to get to this point.
  5. Now for each question pair, run an “anchors and engines” exercise to identify – in the group’s context – the potential blockers (“anchors”) and the supporting factors or concrete actions (“engines”). Take 15-30 minutes per pair. Synthesise individual contributions into themes.

You now have a set of concrete actions to support, and real issues that might hinder, the type of culture change you are seeking to achieve. It might look something like:

Culture change anchors and engines

Of course, effort remains to make this change happen, but it can be directed very precisely, and that is valuable when dealing with culture.

Jetty to Jetty app

I released an app 🙂 – for iOS and Android.

It’s a self-guided audio tour of historic sites in Broome, Western Australia, including beautiful stories told by locals. Nyamba Buru Yawuru developed the concept, curated the media, engaged local stakeholders, and were product owners for the app.

Jetty to Jetty screenshots

This work was exciting for its value to the Broome and Yawuru community, but also because it was an opportunity to innovate under the constraint of building the simplest thing possible. The simplest thing possible was in stark contrast to the technical whizbangery (though lean delivery) of my previous app project – Fireballs in the Sky.

I had fun working on the interaction and visual design challenges under the constraints, and I think the key successes were:

  • Simplifying presentation of the real-world and in-app navigation as a hand-rolled map (drawn in Inkscape), showing all the sites, that scrolls in a single direction.
  • Hiding everything unnecessary during playback of stories, to allow the user to focus on the place and the story.
  • Playback control behaviour across sites and the main map.
  • Not succumbing to the temptation to add geo-location, background audio, or anything else that could have added to the complexity!

My colleague Nathan Jones laid the technical foundations – Phonegap/Cordova wrapping a static site built by Middleman and using CoffeeScript, knockout.js, HAML, Sass and HTML5/Cordova plugin for media. He later went on to extend and open-source (as Jila) this framework for the Yawuru Ngan-ga language app. Most of the development work by Nathan and me was done in early 2014.

While intended to be used in Broome (and yet another reason to visit Broome), the app and its beautiful stories can be enjoyed anywhere.

Health Hack Perth 2015

HealthHack is a three-day event bringing medical researchers and health practitioners together with software creators to prototype a new generation of health products.

Business News Western Australia covered the Perth 2015 event in: HealthHack – ailments, remedies in equal doses.

I helped organise this event with assistance from sponsors ThoughtWorks and Curtin University (among numerous other generous sponsors). It was a great event, with important and challenging problems presented, innovative solution concepts delivered, and new relationships formed between individuals and organisations in health and technology.

Health Hack summary

Please refer to the report and the catalogue of products for detailed information on this event, and resources for hackathons in general. Health Hack is an Open Knowledge Foundation Australia event, so is predicated on sharing open source deliverables.

Some Highlights and Lessons Learned

We focussed on curated problems for this event, approaching a large number of potential “problem owners” with a checklist to recruit those with the most appropriate challenges for the weekend hackathon format. We then worked with the problem owners to shape their challenges and pitches for the “ideas market”. This was a very substantial effort (primarily by the fabulous Diana Adorno) in the lead-up to the weekend, but the well-formed problems were key to the success of the hack.

Health Hack pitch posters

We attracted a diverse set of participants, with skills ranging from design, to software development, to data science, and these individuals organised themselves into teams around the problems most suited to their collective skill set. As organisers, we made only one substitution to balance teams.

We started with fewer participants than expected, because the drop-off rate from registrations was substantially higher (50%) than previous years at other sites (30%). However, attrition over the weekend was virtually zero, as the participants were uniformly enthusiastic and energised by their challenges.

The ideas market built great energy around the challenges and the potential for the weekend. We posted the challenges around the room prior to the event. Then the problem owners took turns to pitch, in just 2 minutes each, from their challenge posters. The pitches were clear and concise, and the cumulative effect was really energising. When the pitches were done, participants had time to walk the room, seek more information from problem owners, and organise their own teams.

Coaching and regular check-ins on team progress helped keep the teams focussed on solving key problems and having a demonstrable product at the end of the weekend. No team failed to showcase. However, we had feedback that access to more coaching would have been valuable.

Health Hack showcase

The venue at Curtin University Chemistry Precinct was ideal, with team tables, breakout spaces and bean bags, and surrounded by gardens. However, it was the only Health Hack venue not in the CBD of the host city, and this may have presented transport challenges (though we didn’t collect any data on this). The plan at the time was to rotate the venue through various supporting institutions in future years.

Food trucks and coffee vans were a great way to service participants! Although it required some coordination ahead of the event, and may not be possible in CBD sites, it was very easy on the weekend, and lots of fun.

For more, see the full report.

Your Software is a Nightclub

Why a nightclub? Well, it’s a better model than a home loan. I’m talking here about technical debt, the concept that describes how retarding complexity (cost) builds up in software development and other activities, and how to manage this cost. A home loan is misleading because product development cost doesn’t spiral out of control due to missed interest payments over time. Costs blow out due to previously deferred or unanticipated socialisation costs being realised with a given change.

So what are socialisation costs? They are the costs incurred when you introduce a new element to an existing group: a new person to a nightclub, or a new feature into a product. Note that we can consider socialisation at multiple levels of the product – UX design, information architecture, etc – not just source code.

Why is socialisation so costly? Because in general you have to socialise each new element with all existing elements, and so you can expect each new element you add to cost more than the last. If you keep adding elements, and even if each pair socialises very cheaply, eventually socialisation cost dominates marginal cost and total cost.

What is the implication of poor socialisation? In a nightclub, this may be a fight, and consequent loss of business. In software, this may be delayed releases or operational issues or poor user experience, and consequent lack of business. If you build airplanes, it could cost billions of dollars.

What does this mean for software delivery, or brand management, or product management, or organisational change, or hiring people, or nightclub management, or any activity where there is continued pressure to add new elements, but accelerating cost of socialisation?

Well, consider that production (of stuff) achieves efficiencies of scale by shifting variable cost to fixed for a certain volume. But software delivery is not production, it is design, and continuous re-design in response to change in our understanding of business outcomes.

Change can be scaled by shifting socialisation costs to variable; we take a variable cost hit with each new element to reduce the likelihood we will pay a high price to socialise future elements. Then we can change and change again in a sustainable manner. We can also segment elements to ensure pairwise cost is zero between segments (architecture). But, ideally, we continue to jettison elements that aren’t adding sufficient value – this is the surest way to minimise marginal socialisation cost and preserve business agility. We can deliver a continuous MVP.
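A rough sketch of the segmentation point, with a constant (and invented) cost per interacting pair: one undivided group of N elements pays for N(N−1)/2 pairwise socialisations, while independent segments only pay for pairs within each segment.

```python
# Pairwise socialisation cost: invented numbers, constant cost per pair.

def pairs(n: int) -> int:
    """Number of element pairs that must be socialised within one group."""
    return n * (n - 1) // 2

cost_per_pair = 1.0
n_elements = 60

monolith_cost = cost_per_pair * pairs(n_elements)            # everyone meets everyone
segmented_cost = cost_per_pair * 3 * pairs(n_elements // 3)  # 3 segments of 20, zero cost across segments

print(monolith_cost, segmented_cost)  # 1770.0 vs 570.0
```

Architecture, in this framing, is what makes the cross-segment pairwise cost genuinely zero; jettisoning low-value elements shrinks N itself, which is better still.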

So what does this add to the technical debt discussion? All models are wrong; some are useful. Technical debt is definitely useful, and reaches some of the same management conclusions as above.

For me, the nightclub model is a better holistic model for product management, not just for coding. It is more dynamic and reflective of a messy reality. Further, with an economic model of marginal cost, we can assess whether the economics of marginal value stack up. Who do we want in our nightclub? How do we ensure the mix is good for business? Who needs to leave?

What do you think?

Postscript: The Economic Model

We write total cost (C) as the sum of fixed costs (f), constant variable cost per-unit (v) and a factor representing socialisation cost per pair (s):

\[ C = f + vN + sN^2\]

Then marginal cost (M) may be written as:

\[ M = v + 2sN \]

Socialisation cost against fixed and variable costs
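For the curious, here is the model above transcribed directly into Python; the parameter values are placeholders, not calibrated to anything.

```python
# C = f + v*N + s*N^2, M = v + 2*s*N, as defined above.
f, v, s = 100.0, 10.0, 0.5  # fixed cost, variable cost per element, socialisation cost per pair

def total_cost(n: int) -> float:
    return f + v * n + s * n ** 2

def marginal_cost(n: int) -> float:
    return v + 2 * s * n

for n in (1, 10, 50, 100):
    print(f"N = {n:3d}  total = {total_cost(n):7.1f}  marginal = {marginal_cost(n):6.1f}")
```

Once 2sN exceeds v, socialisation dominates the cost of each new element: the nightclub gets more expensive to add to with every admission.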

Note: This post was originally published August 2014, and rebooted April 2015

Visual Knowledge Cycles

Visualisation is a key tool for the management of knowledge, especially knowledge from data. We’ll explore different states of knowledge, and how we can use visualisation to drive knowledge from one state to another, as individual creators of visualisation and agents within an organisation or society.

Visualisation Cycle

(There’s some justifiable cynicism about quadrant diagrams with superimposed crap circles. But, give me a chance…)

Awareness and Certainty about Knowledge

We’re used to thinking about knowledge in terms of a single dimension: we know something more or less well. However, we’ll consider two dimensions of knowledge. The first is certainty – how confident are you that what you know is right? (Or wrong?) The second is awareness – are you even conscious of what you know? (Or don’t know?)

These two dimensions define four states of knowledge – a framework you might recognise – from “unknown unknowns” to “known knowns”. Let’s explore how we use visualisation to drive knowledge from one state to another.

Knowledge states

(Knowledge is often conceived along other dimensions, such as tacit and explicit, due to Nonaka and Takeuchi. I’d like to include a more detailed discussion of this model in future, but for now will note that visualisation is an “internalisation” technique in this model, or an aid to “socialisation”.)

Narrative Visualisation

I think this is the easiest place to start, because narrative visualisation helps us with knowledge we are aware of. Narrative visualisation means using visuals to tell a story with data.

Narrative Visualisation

We can use narrative visualisation to drive from low certainty to high certainty. We can take a “known unknown”, or a question, and transform it to a “known known”, or an answer.

“Where is time spent in this process?” we might ask. A pie chart provides a simple answer. However, it doesn’t tell much of a story. If we want to engage people in the process of gaining certainty, if we want to make the story broader and deeper, we need to visually exploit a narrative thread. Find a story that will appeal to your audience and demonstrate why they should care about this knowledge, then use the narrative to drive the visual display of data. Maybe we emphasise the timeliness by displaying the pie chart on a stopwatch, or maybe we illustrate what is done at each stage to provide clues for improvement. (NB. Always exercise taste and discretion in creating narrative visualisations, or they may be counter-productive.)
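As a trivial starting point, the pie chart itself is a few lines of matplotlib; the stages and durations below are invented, and the narrative layer (the stopwatch framing, the per-stage annotations) would be built on top of something like this:

```python
import matplotlib.pyplot as plt

# Hypothetical elapsed days per process stage - the raw answer to
# "where is time spent in this process?", before any narrative framing.
stages = ["Waiting", "Analysis", "Build", "Test", "Release"]
days = [22, 5, 8, 4, 1]

fig, ax = plt.subplots()
ax.pie(days, labels=stages, autopct="%1.0f%%", startangle=90)
ax.set_title("Where is time spent in this process?")
plt.show()
```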

Here is a brilliant and often cited narrative visualisation telling a powerful story about drone strikes in Pakistan.

Screen shot of Pitch Interactive Drones Visualisation

The story also provides a sanity check for your analysis – is the story coherent, is it plausible? This helps us to avoid assigning meaning to spurious correlation (eg, ski accidents & bed-sheet strangulation), but do keep an open mind all the same.

Discovery Visualisation

But where do the questions to be answered come from? This is the process of discovery, and we can use visualisation to drive discovery.

Discovery Visualisation

Discovery can drive from low awareness, low certainty to high awareness, low certainty – from raw data to coherent questions. Discovery is where to start when you have “unknown unknowns”.

But how do you know you have “unknown unknowns”? Well, the short answer is: you do have them – that’s the thing about awareness. However, we’ll explore a longer answer too.

If someone drops a stack of new data in your lap (and I’m not suggesting that is best practice!), it’s pretty clear you need to spend some time discovering it, mapping out the landscape. However, when it’s data in a familiar context, the need for discovery may be less clear – don’t you already know the questions to be answered? We’ll come back to that question later.

A classic example of this kind of discovery can be found at Facebook Engineering, along with a great description of the process.

Facebook friends visualisation

In discovery visualisation, we let the data lead, we slice and dice many different ways, we play with the presentation, we use data in as raw form as possible. We don’t presuppose any story. On our voyage of discovery, we need to hack through undergrowth to make headway and scale peaks for new vistas, and in that way allow the data to reveal its own story.

Inductive Drift

What if you’ve done your discovery and done your narration? You’re at “known knowns”, what more need you do?

If the world was linear, the answer would be “nothing”. We’d be done (ignoring the question of broader scope). The world is not linear, though. Natural systems have complex interactions and feedback cycles. Human systems, which we typically study, comprise agents with free will, imagination, and influence. What happens is that the real world changes, and we don’t notice.

We don’t notice because our thinking process is inductive. What that means is that our view of the world is based on an extrapolation of a very few observations, often made some time in the past. We also suffer from confirmation bias, which means we tend to downplay or ignore evidence that contradicts our view of the world. This combination makes it very hard to shift our beliefs (superstitions, even). (The western belief that men had one less rib than women persisted until the 16th century CE due to the biblical story of Adam and Eve.)

So where does this leave us? It leaves us with knowledge of which we are certain, but unaware. These are the slippery “unknown knowns”, though I think a better term is biases.

Unlearning Visualisation

Unlearning visualisation is how we dispose of biases and embrace uncertainty once more. This is how we get to a state of “unknown unknowns”.

Unlearning Visualisation

However, as above, unlearning is difficult, and may require overwhelming contradictory evidence to cross an “evidentiary threshold”. We must establish a “new normal” with visuals. This should be the primary concern of unlearning visualisation – to make “unknown unknowns” look like an attractive state.

Big data is particularly suited to unlearning, because we can – if we construct our visualisation right – present viewers with an overwhelming number of sample points.

Unlearning requires both data-following and story-telling approaches. If we take away one story viewers tell themselves about the world, we need to replace it with another, factually based one.

Recap

Visualisation Cycle

Your approach to visualisation should be guided by your current state of knowledge:

  • If you don’t know what questions to ask, discovery visualisation will help you find key questions. In this case, you are moving from low awareness to high awareness of questions, from “unknown unknowns” to “known unknowns”.
  • If you are looking to answer questions and communicate effectively, narrative visualisation helps tell a story with data. In this case, you are moving from low certainty to high certainty, from “known unknowns” to “known knowns”.
  • If you have thought for some time that you know what you know and know it well, you may be suffering from inductive drift. In this case, use unlearning visualisation to establish a new phase of inquiry: you are moving from high certainty and awareness to low certainty and awareness, returning to “unknown unknowns”.

Of course, it may be difficult to assess your current state of knowledge! You may have multiple states superimposed. You may only be able to establish where you were in hindsight, which isn’t very useful in the present. However, this framework can help to cut through some of the fog of analysis, providing a common language for productive conversations, and providing motivation to keep driving your visual knowledge cycles.

Iterative vs Incremental Flashcard

Sometimes, the difference between incremental and iterative (software) product development is subtle. Often it is crucial to unlocking early value or quickly eliminating risk – an iterative approach will do this for you, while incremental will not.

Incremental vs Iterative

Let’s review the distinction. Incremental means building something piece by piece, like creating a picture by finishing a jigsaw puzzle. This is great for visibility of progress, especially if you make the pieces very small. However, the inherent risk is that an incremental build is not done until the last piece is in place. Splitting something into incremental pieces implies the finished whole is understood (by the jigsaw designer, at least). If something changes during the build, like a bump to the table,  all of your work to date is at risk. Future work – to finish the whole – is also at risk of delivering less than optimal value, if our understanding of value changes during an incremental build. Much development work done under an agile banner is in fact incremental, and therefore more like a mini-waterfall approach than an essentially agile approach.

Iterative, on the other hand, means building something through successive refinements, starting from the crudest implementation that will do the job, and at each stage refining in such a way that you maintain a coherent whole. You might think of this like playing Pictionary. When you are asked to draw the Mona Lisa, you start with (perhaps) a rectangle with a circle inside. If your partner guesses at this point, great! If not, you might add the smile, the hair, the eyes. Hopefully, your partner has guessed by now.  If not, embellish the frame, add the landscape, draw da Vinci painting it, show it hanging in the Louvre, etc. Your risk exposure (that time will expire before your partner guesses) is far lower with an iterative approach. With each iteration, you have captured some value. If your understanding of value changes (eg, your partner shouts something unhelpful like “stockade”), you still retain your captured value, and you can also adjust your future activities to accommodate your new understanding.
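The difference in risk profile can be sketched with a toy model (the per-period numbers are invented): an incremental build realises its value only when the final piece lands, while an iterative build banks most of its value early.

```python
# Value captured if work stops after k periods, under two delivery styles.
# The per-period figures are invented; both streams sum to the same total of 100.
incremental = [0, 0, 0, 0, 100]  # nothing usable until the last piece is placed
iterative = [60, 20, 10, 6, 4]   # crude-but-whole version first, refinements after

def value_if_stopped_after(per_period, k):
    return sum(per_period[:k])

for k in range(1, 6):
    print(f"stop after period {k}: incremental = {value_if_stopped_after(incremental, k):3d}, "
          f"iterative = {value_if_stopped_after(iterative, k):3d}")
```

If the table is bumped, or time expires, the iterative builder walks away with most of the value; the incremental builder walks away with a pile of pieces.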

I think I capture all of this in the diagram above. If you’re having trouble articulating the difference between incremental and iterative (because both show similar signs of progress at times), or you’re concerned about the risk profile of your delivery, refer to this handy pocket guide.

The Like-for-Like Project Antipattern

Like-for-like replacement.

Sounds pretty simple, doesn’t it? That’s an easy project to deliver, right?

Wrong.

Why would we do a like-for-like (L4L) project? The IT group may want to upgrade to a new system, because the old one is broken, or because they’ve found something better. Maybe we want to avoid re-training users. Or, maybe our L4L project is hiding in a larger project. It could be phase 1, in which functionality is replicated, while new value is delivered in phase 2. There’s no strong pull from users for this L4L project, so to avoid ‘disruption’, the project plans to hot swap a technology layer while otherwise preserving functionality, much like a magician yanking off a tablecloth without disturbing any of the tableware. However, someone is about to have their dinner spoiled.

A clarification. L4L exhibits some of the characteristics of refactoring. But refactoring deliberately tries to stay small, in time and cost. The success criteria are also easily established, for instance, as unit tests. I’m talking here about replacing one non-trivial IT system with another, especially if the target system is primarily bought rather than built.

The Key Problem

Too many constraints
Too many constraints
Framing a project as L4L is not just demonstrably incorrect, it is lazy to the point of negligence. The L4L framing is incorrect because it can never be achieved. The current technology solution has constraints, and business processes have evolved under those constraints. But the new solution will have new technological constraints. There is no chance the two sets of constraints will be equivalent (if they were equivalent, why would you bother changing systems?). Therefore, business processes will be forced to evolve under the new solution. If business processes are changing, this cannot be an L4L project, and the framing is incorrect.

It’s irrelevant whether we’re talking about L4L functionality or L4L business outcomes. As above, L4L functionality is a logical impossibility. If we’re talking L4L business outcomes, which users are going to support a disruptive change in functionality in order to be able to achieve exactly what they do today, and no more?

The L4L framing is lazy because it discourages critical thinking. Stakeholders will confuse the apparent simplicity of framing with simplicity of execution, meaning they will be less engaged in resolving the thorny problems that will inevitably crop up. This is especially important for project sponsors and other senior stakeholders. The business will not be engaged in a process that gives them no voice. This will be doubly so if, as above, the project is not actually like for like, and the deleterious changes in business process are being driven by IT.

The real negligent laziness, however, is in assuming that collectively we haven’t learnt any more about how to deliver business value since we implemented the current system. The current system is probably five to ten years old, and inevitably has shortcomings – why would we copy it? We might, at great effort, be able to figure out what the system is capable of, then build this, but this is far more than users actually use, and entirely different from what they want. This lazy framing leads to much more work than would be required if we simply went to the business/customers to understand what they really need at this time.

You’ll end up taking longer, costing more, and delivering poorer outcomes if you frame your project as like-for-like.

More Problems

Like for like framing is the key problem, but it spawns a host of other problems:

  • Inflexible execution means no ability to respond to change
  • Analysis by reverse engineering is very wasteful
  • Sysadmin as the customer precludes insight
  • Prioritisation is backwards to avoid destroying value
  • Value delivered in a Phase 2, which never happens
  • Reporting misses the fact that there are two different things to report

These are substantial topics in their own right. I’d like to finish this post sometime, so I’ll try to pick up these threads in detail in future posts.

What Might Happen?

So, how might an L4L project play out?

Well, it’s hard to predict the future, but like plutonium, L4L projects are unstable and tend to disintegrate. While a typical agile project is self-correcting, in that everyone sees the value, scope can be adjusted to meet time, and so on, an L4L project has no give.

When estimates are discovered to be optimistic – as complex workarounds will inevitably be required to deliver old results on new technology – the only option for a true L4L project is to run late. Or, we expose the L4L fallacy by making functional or non-functional compromises. Likely, it will be a combination of both.

Stakeholders who were reluctant from the start are now in an even worse position. They originally stood to gain nothing but disruption, but now they definitely lose out. They may begin political manoeuvring to make the project go away. Governance will probably start sniffing around if this late, costly project is not delivering value.

Again, at this point, there’s not much room to move in an L4L project. There may be nothing of value delivered. You’re stuck with either writing the project off, or toughing it out to an expensive and unsatisfactory conclusion. You may even tough it out only to write it off later. Of course, you can change the way you are doing things, but that basically means starting over. So, why not get it right from the start?

The Solution

The solution is, of course, to frame the project as delivering new value.

Project framing report card

Like-for-like | Delivering value
Technically can’t be achieved | Value drives right behaviours
Business disengaged – everyone wants to go last | Business engaged – everyone wants to be first
Analysis & design = reverse engineering | Break current constraints for better solution
Wasteful and high risk | Efficient and low risk

Even if you’re being forced to replace a system, make sure you go out to the stakeholders and ask them what they want to make their lives better, right now. This is the only way you really get their buy-in, the only way they’ll make do with less here and there because they’re getting more overall, the only way they’ll be engaged in resolving difficult delivery problems, the only way they’ll back you up instead of sell you out when things get tough. It’s also the only way you’ll avoid perpetuating those arbitrary constraints you inherit with a L4L project.

Next time you’re asked to start an L4L project, start by changing the framing.

Backwards Prioritisation

Imagine you’re in the middle of a big software project. Maybe, you’re replacing an internal system, or something like that. You make an observation: when asked to prioritise, everyone wants to go last. They want to hang on to the status quo for as long as possible.

Instead of beating down your door to get their hands on desirable new features, they are all running for the exits in the hope that your project fails before it impacts them.

This is logical from an economic or financial perspective. If value is to be destroyed, then you should destroy the items of least value first, and of most value last, as you maximize value-in-use in this scenario. When your action would result in reduced future cash flows, the longer you wait before acting, the better.

Imagine your home was being gradually inundated; floodwater seeping in and rising toward the ceiling. Imagine that you were waiting for rescue and knew that you could take some things with you. You might pile up your possessions in the living room. Where would you put your most treasured possessions? At the top of the pile, of course, where they would be last to suffer water damage. Though you don’t know when help will arrive, when it does you will know you have saved only the most important stuff.
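A tiny worked example, with invented systems and values, shows why the maths rewards going last:

```python
# Three systems are switched off one per period; value-in-use is the annual
# value of a system times the number of periods it remains in service.
values = {"payroll report": 10, "ad-hoc exports": 5, "legacy dashboard": 1}

def value_in_use(order):
    # The system switched off i-th (0-based) stays in service for i + 1 periods.
    return sum(v * (i + 1) for i, (_, v) in enumerate(order))

least_value_first = sorted(values.items(), key=lambda kv: kv[1])
most_value_first = sorted(values.items(), key=lambda kv: kv[1], reverse=True)

print(value_in_use(least_value_first), value_in_use(most_value_first))  # 41 vs 23
```

Destroying the least valuable items first preserves nearly twice the value-in-use, which is exactly why every stakeholder wants their own system to be last in the queue.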

Maybe, though, the new solution is just as good as – if not better than – the old, but people fear change and disruption. If the level of disruption is high, and the new solution is no better, this is indeed value destruction. However, if the disruption is less than feared, and the new solution does offer benefits that haven’t been effectively sold, then this needs to be demonstrated to stakeholders so they come seeking change. This requires senior leaders to change the framing of the project to highlight the value created, not the value destroyed. It also requires the delivery team to support the new framing by delivering a high profile change that  adds value.

Just as value-creating projects maximize value delivered by creating the items of largest value first, value-destroying projects minimize value destroyed by destroying the items of largest value last. So, if your prioritisation looks backwards, ask seriously if your project is destroying value.