Once upon a time, scaling production may have been enough to be competitive. Now, the most competitive organisations scale change to continually improve customer experience. How can we use what we’ve learned scaling production to scale change?
I recently presented a talk titled “Scaling Change”. In the talk I explore the connections between scaling production, sustaining software development, and scaling change, using metaphors, maths and management heuristics. The same model of change applies from organisational, marketing, design and technology perspectives. How can factories, home loans and nightclubs help us to think about and manage change at scale?
Read on with the spoiler post if you’d rather get right to the heart of the talk.
When software engineers think about scaling, they think in terms of the order of complexity, or “Big-O”, of a process or system. Whereas production is O(N) and can be scaled by shifting variable costs to fixed, I contend that change is O(N²) due to the interaction of each new change with all previous changes. We could visualise this as a triangular matrix heat map of the interaction cost of each pair of changes (where darker shading is higher cost).
Therefore, a nightclub, where each new patron potentially interacts with all other denizens, is an apt metaphor. Many of us can also relate to changes that have socialised about as well as drunk nightclub patrons.
The thing about change being O(N²) is that the old production management heuristics of shifting variable cost to fixed no longer work, because the dominant term is interaction cost. The nightclub metaphor suggests the following management heuristics:
Socialise: we take a variable cost hit for each change to help it play more nicely with every other change. This reduces the cost coefficient but not the number of interactions (N²).
Screen: we only take in the most valuable changes. Screening out half our changes (leaving N/2) reduces change interactions by three quarters (to N²/4).
Seclude: we arrange changes into separate spaces and prevent interaction between spaces. Using n spaces reduces the interactions to N²/n.
Surrender: like screening, but at the other end. We actively manage out changes to reduce interactions. Surrendering half our changes (leaving N/2) reduces change interactions by three quarters (to N²/4).
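To see why these heuristics pay off, here is a quick sketch (illustrative numbers only) counting pairwise interactions between changes:

```python
# Pairwise interactions among n changes: n choose 2, which grows as O(N^2).
def interactions(n: int) -> int:
    return n * (n - 1) // 2

N = 100
print(interactions(N))           # → 4950; baseline: every change interacts with every other
print(interactions(N // 2))      # → 1225; screen (or surrender) half: about a quarter remain
print(4 * interactions(N // 4))  # → 1200; seclude into 4 spaces: again roughly a quarter
```

Screening and secluding deliver similar savings here; the difference is whether the excluded interactions are changes you never admitted, or changes you kept but walled off from each other.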
Where do we see these approaches being used? Just some examples:
Start-ups screen or surrender changes, and hence are more agile than incumbents because they have less history of change.
Product managers screen changes in design and seclude changes across a portfolio – for example, the separate apps of Facebook, Messenger, Instagram, Hyperlapse, Layout, Boomerang, etc.
To manage technical debt, good developers socialise via refactoring, better developers seclude through architecture, and the best surrender.
In hiring, candidates are screened and socialised through rigorous recruitment and training processes.
Brand architectures also seclude changes – Unilever’s Dove can campaign for real beauty while Axe/Lynx offends Dove’s targets (and many others).
The path to good design is bumpy, as we will demonstrate with four teapots. (Yes, teapots. Teapots are a staple of computer science and philosophy.)
The path to good design matters, because if you are trying to build a design capability, the journey will be smoother if you understand that the path is bumpy.
Leaders who appreciate the bumpy path can facilitate far greater value creation and support a more engaged group of workers.
What is design?
Design is an activity, but also a result: the specification for a product (service), which determines how it is made or delivered.
Performance is a measure of how a product actually functions, for a given task in a given context. Performance in the broadest sense includes emotional responses, static and dynamic physical characteristics, service characteristics, etc. For simplicity, let’s measure performance in monetary terms; eg. lifetime economic value.
Design is important as an activity and a result, because it is the prime determinant of performance that is within your control.
The smooth path
Consider the distinctive teapot from the cover of Don Norman’s Design of Everyday Things, where the handle – instead of opposing – is aligned with the spout.
We know a thing or two about teapots, so we assume this design has very poor performance!
However, we also assume that a traditional design with handle opposed to the spout produces the best performance.
We can plot our smooth model of how performance varies as a function of the angle between spout and handle.
And it’s pretty clear how to find the best design. The more opposing the handle and spout, the better the performance, the more value created, and hence the better the design.
The first bump in the path
However, this model is broken. We can’t interpolate smoothly (linearly) between design points, as demonstrated by the Japanese yokode kyusu, which features a handle at right angles to its spout, to extract every last drop of tea.
With this new insight, and a further assumption that handles in between the points we’ve plotted (eg, 45 degrees) are much worse due to awkward twisting motions when pouring, we can draw a new model, which is already much less smooth.
What’s interesting about this landscape is that most design variants perform pretty poorly, and you must be close to a good design to find it. If you didn’t have the insight into teapot performance that we have assumed – if you had only tested performance at the awkward angles, and you had assumed smooth behaviour in between – you would likely miss the best designs and leave significant value on the table. (Note that the scale of this diagram should be greatly exaggerated to demonstrate the true size of value creation opportunities.)
So, this is the first lesson of the bumpy path to good design. We need to explore the performance of multiple design variants, and understand that small changes in design can have enormous impacts on performance, to be confident we are approaching our potential to create value.
So far, we have only explored the impact of one design variable, but for any product we have effectively infinitely many design variables (if we can just conceive them). For instance, the handle of a teapot could also be on top, but we could also consider the shape, material, fixtures, etc. Then we could move beyond the handle to the design of the rest of the teapot!
Now consider the design and delivery of digital products and services. Constraints do exist, but infinite design variants still exist within those constraints. Further, like the rolled up dimensions of string theory, there are extra dimensions of design that are easy to miss, but once discovered can be expanded and explored to create ever more value.
The first lesson
How do leaders get this wrong? By failing to encourage the exploration of a sufficient number of design variants, and by failing to encourage the exploration of minor changes that have outsize impact.
As a leader, you must be prepared to carve out time and space, embrace uncertainty and ambiguity, and bring creativity, compassion and patience to the exploration process. As important as this is to creating value, it is also key to maintaining the engagement of teams involved in or interacting with design.
I’m often told that exploration feels inefficient. Or, rather, felt inefficient. The distinction is important. Hindsight bias distorts the reality that before starting an exploration into a sufficiently bumpy landscape, we simply cannot know what we will find. So how do we measure efficiency of exploration? Certainly not by how quickly we arrive at a design, or by how many designs are discarded. Should we even measure efficiency of exploration? That is a better question. We should focus on net value creation, and do enough exploration to mitigate the risk that we are leaving significant value on the table.
This design sensibility, however, may not be apparent to the whole team. Designers will be frustrated being managed to a smooth path, while others who perceive the challenge to be simple may become frustrated when the bumpiness is allowed to surface. The team’s various activities may have different cadences that sometimes align, and sometimes don’t. This can create friction and dissatisfaction in teams. Some functional conflict is healthy in this regard, but as a leader, you must support and enable a team to focus on what it takes to create value.
The second bump in the path
I have used the word “assume” liberally and deliberately above. I have assumed a large number of things about the tasks that users of the teapots are seeking to achieve, and the broader contexts of use. I have further assumed that my readers share a traditional western notion of teapots and their use. I have done this to keep the explanation of the first bump simple – I hope.
But “assume” is at the root of the second bump. During product development, we can’t assume performance; we must test designs with users engaged in a task in a context. We may take shortcuts by prototyping, simulating, etc, but we must test as objectively as possible, for a meaningful prediction of a product’s performance and potential to create value.
In a bumpy design landscape, poor predictions of actual performance carry significant opportunity cost.
(Note also that during the development of a typical digital product/service, we are typically iteratively discovering the task and the context in parallel.)
We assumed, with our teapots above, that a spout aligned with the handle would lead to poor performance, but we didn’t test it (with a minor tweak in a hidden dimension). If we’d tested this traditional oriental design (as UX Designer Mike Eng did), we would have discovered that, for the task of serving oneself, in a solitary context, the aligned handle actually produces superior performance.
I was surprised to find this teapot design existed when I stumbled upon the post from above. I suspect this teapot design has a specific name or an interesting story behind it, but I haven’t been able to track it down. However, it serves as an excellent demonstration that the best design paths are bumpy.
The second lesson
The second lesson is that assumptions about performance, task and context hide the inherent bumpiness in design. As a leader, you must recognise and challenge assumptions, encourage the testing of designs under the correct conditions, and appreciate that our understanding of task and context may evolve with testing.
There are many resources that discuss lightweight and effective approaches to UX research and testing; you could do worse than to start here.
We have discussed two major value creation activities in design:
Exploration and consequent discovery of performant designs
Testing and consequent selection of more performant designs
But these activities are overlooked or de-prioritised with a smooth mindset. While there is uncertainty, ambiguity and friction along the path, and sometimes progress is difficult to discern, as a leader, you must embrace the bumps because – if you are in the business of creating value – there is no smooth path to good design.
Culture is often difficult to define, and culture change even more so – what concrete actions do we need to take to change a culture?
Despite this apparent difficulty, it is possible to spend an hour or two with a group, and leave with consensus on practical actions for culture change.
This exercise achieves that by making culture change concrete. We treat the questions we ask every day as reinforcing values and thus driving culture. Then we challenge ourselves to find better questions, and explore what it will take to adopt those better questions in our specific context.
Questions driving culture
Let’s keep our definition of culture really simple: the sum of our everyday behaviours as a group.
To give an example: typically, you and your colleagues juggle many tasks at once. Multitasking is part of your culture.
What is driving this behaviour though? One strong driver is the questions that are asked in your group. For instance, in this environment, you probably find people explicitly asking something like “can you take this on?” The multitasking behaviour is a natural response to that question. Especially if all parties are, consciously or otherwise, implicitly asking themselves “how do we get everything done?”
Now let’s assume that you want to change your multitasking culture to one where people limit their work in progress to become more productive overall.
Making change more concrete
To change the behaviour, we can look for the driving questions and change those.
For instance, we might aim to change “how do we get everything done?” to “how do we do a great job of the most important things?”
And that is the heart of the change. If everyone is asking themselves, consciously or otherwise “how do we do a great job of the most important things?”, their behaviours will follow that question. In this case (and with training and support as required), we expect they will try to identify priorities, understand success and deliver on that before moving on to the next thing. People can helpfully answer “no” to the old question “can you take this on?”, but more importantly, that question will no longer be asked as frequently, because it will cease to make sense.
However, that’s still not as concrete a recipe as we would like. The exercise (below) helps us get down to the concrete actions required in a given context to change one driving question to another.
Before we go any further, though, a reminder that questions do not exist in isolation, and that we must tackle a consistent set of questions simultaneously:
Today’s orthodoxy has institutionalised a set of internally consistent but dysfunctional beliefs. This has created a tightly interlocking and self-reinforcing system, a system from which it is very difficult to break free. Even when we change one piece, the other pieces hold us back by blocking the benefits of our change. When our change fails to produce benefits, we revert to our old approaches.
Donald G. Reinertsen, The Principles of Product Development Flow
This exercise can be run with the group whose culture we are looking to change.
At the end of the exercise, you will have a list of concrete actions that can be taken to change driving questions, and will have identified potential blockers to plan around.
Observe the group and its behaviours
Identify instances of counter-productive behaviours
Analyse these behaviours to propose driving questions
Pair current, undesirable driving questions with new, desirable driving questions
Find examples to illustrate why each question should change
You should have something like the table below:
The exercise can then be run as follows:
Discuss the premise of changing culture by changing questions
Share your first example of a pair of driving questions, and the instance of the behaviour (this should be an instance widely understood and accepted by the group)
Work through the other question pairs in your list, and ask the group to come up with examples themselves. They will generally do so enthusiastically! It’s unlikely, but if they don’t, you have your prepared examples to fall back on.
Because you won’t be able to solve everything in this session, prioritise as a group (through dot voting, etc) the question pairs to focus on (no more than 3 for the first session). Allow 30 mins to 1 hour to get to this point.
Now for each question pair, run an “anchors and engines” exercise to identify – in the group’s context – the potential blockers (“anchors”) and the supporting factors or concrete actions (“engines”). Take 15-30 minutes per pair. Synthesise individual contributions into themes.
You now have a set of concrete actions to support, and real issues that might hinder, the type of culture change you are seeking to achieve. It might look something like:
Of course, effort remains to make this change happen, but it can be directed very precisely, and that is valuable when dealing with culture.
Here are slides from my talk at LASTconf 2015. The title is “Bring Your A-Game to Arguments for Change”. The premise is that there are different types of arguments, more or less suited to various organisational and delivery scenarios, and the best ones have their own agency. In these respects, you can think of them like Pokemon – able to go out and do your bidding, with the right preparation.
The content draws heavily from ideas shared on this blog:
Why a nightclub? Well, it’s a better model than a home loan. I’m talking here about technical debt, the concept that describes how retarding complexity (cost) builds up in software development and other activities, and how to manage this cost. A home loan is misleading because product development cost doesn’t spiral out of control due to missed interest payments over time. Costs blow out due to previously deferred or unanticipated socialisation costs being realised with a given change.
So what are socialisation costs? They are the costs incurred when you introduce a new element to an existing group: a new person to a nightclub, or a new feature into a product. Note that we can consider socialisation at multiple levels of the product – UX design, information architecture, etc – not just source code.
Why is socialisation so costly? Because in general you have to socialise each new element with all existing elements, and so you can expect each new element you add to cost more than the last. If you keep adding elements, and even if each pair socialises very cheaply, eventually socialisation cost dominates marginal cost and total cost.
What is the implication of poor socialisation? In a nightclub, this may be a fight, and consequent loss of business. In software, this may be delayed releases or operational issues or poor user experience, and consequent lack of business. If you build airplanes, it could cost billions of dollars.
What does this mean for software delivery, or brand management, or product management, or organisational change, or hiring people, or nightclub management, or any activity where there is continued pressure to add new elements, but accelerating cost of socialisation?
Well, consider that production (of stuff) achieves efficiencies of scale by shifting variable cost to fixed for a certain volume. But software delivery is not production, it is design, and continuous re-design in response to change in our understanding of business outcomes.
Change can be scaled by shifting socialisation costs to variable: we take a variable cost hit with each new element to reduce the likelihood we will pay a high price to socialise future elements. Then we can change and change again in a sustainable manner. We can also segment elements to ensure pairwise cost is zero between segments (architecture). But, ideally, we continue to jettison elements that aren’t adding sufficient value – this is the surest way to minimise marginal socialisation cost and preserve business agility. We can deliver a continuous MVP.
So what does this add to the technical debt discussion? All models are wrong; some are useful. Technical debt is definitely useful, and reaches some of the same management conclusions as above.
For me, the nightclub model is a better holistic model for product management, not just for coding. It is more dynamic and reflective of a messy reality. Further, with an economic model of marginal cost, we can assess whether the economics of marginal value stack up. Who do we want in our nightclub? How do we ensure the mix is good for business? Who needs to leave?
What do you think?
Postscript: The Economic Model
We write total cost (C) as the sum of fixed costs (f), constant variable cost per-unit (v) and a factor representing socialisation cost per pair (s):
\[ C = f + vN + sN^2\]
Then marginal cost (M) may be written as:
\[ M = v + 2sN \]
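As a sketch, the model is easy to play with in code. The coefficients below are made up purely for illustration:

```python
# Total cost C = f + v*N + s*N^2: fixed cost, per-unit variable cost,
# and socialisation cost across all pairs of elements.
def total_cost(n, f=1000.0, v=10.0, s=0.5):
    return f + v * n + s * n ** 2

# Marginal cost M = dC/dN = v + 2*s*N: the socialisation term grows with N,
# and dominates once 2*s*N > v, i.e. beyond N = v / (2*s) elements.
def marginal_cost(n, v=10.0, s=0.5):
    return v + 2 * s * n

print(marginal_cost(5))   # → 15.0 (variable cost still dominates)
print(marginal_cost(50))  # → 60.0 (socialisation dominates)
```

With these made-up coefficients, every element past the tenth costs more to socialise than to build, which is the nightclub dynamic in miniature.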
Note: This post was originally published August 2014, and rebooted April 2015
Not online fitness shopping. Not the brogrammer pumping iron. This is a brief discussion of Antifragile – the latest book by Nassim Nicholas Taleb – and relevant insights for software delivery or other complex work.
This isn’t meant to be an exhaustive exploration of the topics. It’s more a join-the-dots exercise, and it’s left up to the reader to explore topics of interest.
Antifragile is a word coined by Taleb to describe things that gain from disorder. Not things that are impervious to disorder; the words for that are robust, or resilient. Of course, things that are harmed by disorder are fragile. Consideration of the fragile, the robust, and the antifragile leads Taleb in many interesting directions.
Fragile, Robust, and Antifragile Software
A running software program is fragile. It is harmed by the most minor variations in its source code, its build process, its dependencies, its runtime environment and its inputs.
But software is eating the world. The global software ecosystem has grown enormously over an extended time – time being a primary source of variation – and hence appears to be antifragile. How do we reconcile this apparent paradox?
Here is a grossly simplified perspective.
First, software code can evolve very quickly, passing on an improved design to the next generation of runtime instances. In this way, tools, platforms, libraries and products rapidly become more robust. However, human intervention is still required for true operational robustness.
Second, humans exercise optionality in selecting progressively better software. In this way, beneficial variation can be captured, deleterious variation discarded, and software goes from robust to antifragile.
So – as fragile parts create an antifragile whole – runtime software instances are fragile, but fragile instances that are constantly improved and selected by humans create an antifragile software ecosystem. (If software starts doing this for itself, we may be in trouble!)
Some Delivery Takeaways
Yes, I know that’s an oxymoron. Nonetheless, here are some of my highlights. It’s a while now since I read the book, and I might add to this in future, so don’t take it as the last word.
The idea of “dumbbell”/”barbell” risk management is that you place your bets in one of two places, but not in between. You first ensure that you are protected from catastrophic downside, then expose yourself to a portfolio of potentially large upsides. In such cases, you are antifragile.
If, instead, you spread yourself across the middle of the dumbbell, you carry both unacceptably large downside exposure and insufficiently large upside exposure. In such cases, you are fragile.
For me, “dumbbell delivery” is how we counter insidious elements of the construct of two-speed-IT (insidious because no one has ever asked to go slow, or asked for high risk as the alternative). We ensure any project is as protected as possible from catastrophic downside – by decoupling the commission of error from any impact on operations or reputation – and as exposed as possible to potentially large upsides – by providing maximum freedom to teams to discover and exploit opportunities in a timely manner.
Those who intervene in complex systems may cause more harm than good. This is known as iatrogenics in medicine. To manage complex systems, removing existing interventions is more likely to be successful than making additional interventions, as each additional intervention produces unanticipated (side) effects by itself, and unanticipated interactions with other interventions in tandem. Via negativa is the philosophy of managing by taking things away.
Software delivery, and organisations in general, are complex in that they are difficult to understand and respond unpredictably to interventions. What’s an example of an intervention we could take away? Well, let’s say a project is “running late”. Instead of adding bodies to the team or hours to the schedule, start by trying to eliminate work through a focus on scope and quality. Also, why not remove targets?
Big Projects, Monolithic Systems
Anything big tends to be fragile. Break it into smaller pieces for greater robustness. Check.
Waterfall and Agile
Waterfall done by the book is fragile. Agile done as intended is antifragile.
Forcing natural variation into pre-defined, largely arbitrary containers creates fragility. Velocity commitments and other forms of management by performance target come to mind.
Skin in the Game
Of course, anyone making predictions should have skin in the game. On the other hand, Hammurabi’s code is the converse of the safe-to-fail environment.
The Lindy Effect on Technology lifespan
The life expectancy of a technology increases the longer it has been around. Remember this the next time you want to try something shiny.
Phenomenology and Theory
Phenomenology may be superior to theory for decision-making in complex work. Phenomenology says “if x, then we often observe y“. Theory says “if x, then y, because z“. Theory leads to the illusion of greater certainty, and probably a greater willingness to intervene (see above).
Flaneurs and Tourists
Chart your own professional journey. Allow yourself the time and space for happy discoveries.
I use narrative visualisations a lot. I like to frame evidence so that it commands attention, engages playful minds, and tells its own story (see also Corporate Graffiti). I’ll put new tools on GitHub as I create them. Here are three to start.
Visualising Stand-Up Attendance
I used the Space Invader metaphor with a busy leadership team to explain how things would slip through the gaps from day to day if they didn’t attend stand-up in sufficient numbers and with sufficient regularity. The invaders represent the team members present each day, and each advancing row is a new day. The goal of the game is reversed in this case – we want the invaders to win! The team loved it, and loved seeing their improved attendance reflected in a denser mesh of invaders.
Useful if you want to aggregate multiple retrospectives – either the same team over time, or multiple teams on a common theme – and present them back while preserving the sincerity of the original outputs.
Trello is a wonderful tool for introducing visual management. It is not, however, great for reporting. Trycle (source on GitHub) will calculate cycle times for all cards transitioning between two lists using the JSON export of a Trello board (or the dwell time if just one list). Visuals and narrative not included.
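For a flavour of how such a calculation might work, here is a minimal sketch (not Trycle itself – see GitHub for the real tool). It assumes the Trello JSON export’s `actions` array, with `updateCard` moves carrying `data.listAfter.name` and an ISO-8601 `date`:

```python
from datetime import datetime

def cycle_times(board_json, start_list, end_list):
    """Cycle time per card: from entering start_list to entering end_list."""
    entered, times = {}, {}
    # Trello exports actions newest-first; sort oldest-first by ISO date.
    for action in sorted(board_json.get("actions", []), key=lambda a: a["date"]):
        if action.get("type") != "updateCard":
            continue
        data = action.get("data", {})
        moved_to = data.get("listAfter", {}).get("name")
        card_id = data.get("card", {}).get("id")
        when = datetime.fromisoformat(action["date"].replace("Z", "+00:00"))
        if moved_to == start_list:
            entered[card_id] = when
        elif moved_to == end_list and card_id in entered:
            times[card_id] = when - entered[card_id]
    return times
```

Usage would be along the lines of `cycle_times(json.load(open('board.json')), 'Doing', 'Done')`, returning a `timedelta` per card.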
Visualisation is a key tool for the management of knowledge, especially knowledge from data. We’ll explore different states of knowledge, and how we can use visualisation to drive knowledge from one state to another, as individual creators of visualisation and agents within an organisation or society.
(There’s some justifiable cynicism about quadrant diagrams with superimposed crap circles. But, give me a chance…)
Awareness and Certainty about Knowledge
We’re used to thinking about knowledge in terms of a single dimension: we know something more or less well. However, we’ll consider two dimensions of knowledge. The first is certainty – how confident are you that what you know is right? (Or wrong?) The second is awareness – are you even conscious of what you know? (Or don’t know?)
These two dimensions define four states of knowledge – a framework you might recognise – from “unknown unknowns” to “known knowns”. Let’s explore how we use visualisation to drive knowledge from one state to another.
(Knowledge is often conceived along other dimensions, such as tacit and explicit, due to Nonaka and Takeuchi. I’d like to include a more detailed discussion of this model in future, but for now will note that visualisation is an “internalisation” technique in this model, or an aid to “socialisation”.)
I think this is the easiest place to start, because narrative visualisation helps us with knowledge we are aware of. Narrative visualisation means using visuals to tell a story with data.
We can use narrative visualisation to drive from low certainty to high certainty. We can take a “known unknown”, or a question, and transform it to a “known known”, or an answer.
“Where is time spent in this process?” we might ask. A pie chart provides a simple answer. However, it doesn’t tell much of a story. If we want to engage people in the process of gaining certainty, if we want to make the story broader and deeper, we need to visually exploit a narrative thread. Find a story that will appeal to your audience and demonstrate why they should care about this knowledge, then use the narrative to drive the visual display of data. Maybe we emphasise the timeliness by displaying the pie chart on a stopwatch, or maybe we illustrate what is done at each stage to provide clues for improvement. (NB. Always exercise taste and discretion in creating narrative visualisations, or they may be counter-productive.)
Here is a brilliant and often cited narrative visualisation telling a powerful story about drone strikes in Pakistan.
The story also provides a sanity check for your analysis – is the story coherent, is it plausible? This helps us to avoid assigning meaning to spurious correlation (eg, ski accidents & bed-sheet strangulation), but do keep an open mind all the same.
But where do the questions to be answered come from? This is the process of discovery, and we can use visualisation to drive discovery.
Discovery can drive from low awareness, low certainty to high awareness, low certainty – from raw data to coherent questions. Discovery is where to start when you have “unknown unknowns”.
But how do you know you have “unknown unknowns”? Well, the short answer is: you do have them – that’s the thing about awareness. However, we’ll explore a longer answer too.
If someone drops a stack of new data in your lap (and I’m not suggesting that is best practice!), it’s pretty clear you need to spend some time discovering it, mapping out the landscape. However, when it’s data in a familiar context, the need for discovery may be less clear – don’t you already know the questions to be answered? We’ll come back to that question later.
A classic example of this kind of discovery can be found at Facebook Engineering, along with a great description of the process.
In discovery visualisation, we let the data lead, we slice and dice many different ways, we play with the presentation, we use data in as raw form as possible. We don’t presuppose any story. On our voyage of discovery, we need to hack through undergrowth to make headway and scale peaks for new vistas, and in that way allow the data to reveal its own story.
What if you’ve done your discovery and done your narration? You’re at “known knowns”, what more need you do?
If the world was linear, the answer would be “nothing”. We’d be done (ignoring the question of broader scope). The world is not linear, though. Natural systems have complex interactions and feedback cycles. Human systems, which we typically study, comprise agents with free will, imagination, and influence. What happens is that the real world changes, and we don’t notice.
We don’t notice because our thinking process is inductive. What that means is that our view of the world is based on an extrapolation of a very few observations, often made some time in the past. We also suffer from confirmation bias, which means we tend to downplay or ignore evidence that contradicts our view of the world. This combination makes it very hard to shift our superstitious beliefs. (The western belief that men had one less rib than women persisted until the 16th century CE due to the biblical story of Adam and Eve.)
So where does this leave us? It leaves us with knowledge of which we are certain, but unaware. These are the slippery “unknown knowns”, though I think a better term is biases.
Unlearning visualisation is how we dispose of biases and embrace uncertainty once more. This is how we get to a state of “unknown unknowns”.
However, as above, unlearning is difficult, and may require overwhelming contradictory evidence to cross an “evidentiary threshold”. We must establish a “new normal” with visuals. This should be the primary concern of unlearning visualisation – to make “unknown unknowns” look like an attractive state.
Big data is particularly suited to unlearning, because we can – if we construct our visualisation right – present viewers with an overwhelming number of sample points.
Unlearning requires both data-following and story-telling approaches. If we take away one factually-based story viewers tell themselves about the world, we need to replace it with another.
Your approach to visualisation should be guided by your current state of knowledge:
If you don’t know what questions to ask, discovery visualisation will help you find key questions. In this case, you are moving from low awareness to high awareness of questions, from “unknown unknowns” to “known unknowns”.
If you are looking to answer questions and communicate effectively, narrative visualisation helps tell a story with data. In this case, you are moving from low certainty to high certainty, from “known unknowns” to “known knowns”.
If you have thought for some time that you know what you know and know it well, you may be suffering from inductive drift. In this case, use unlearning visualisation to establish a new phase of inquiry, moving from high certainty and awareness to low certainty and awareness, returning to “unknown unknowns”.
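Purely as an illustrative summary (the state names follow the quadrant framework above), the guidance reduces to a small lookup:

```python
# Which visualisation approach to reach for, given your current knowledge state.
# "Unlearning" points back to "unknown unknowns" and restarts the cycle.
APPROACH = {
    "unknown unknowns": "discovery",   # low awareness, low certainty
    "known unknowns": "narrative",     # high awareness, low certainty
    "known knowns": "unlearning",      # high awareness, high certainty
}

def next_approach(state: str) -> str:
    return APPROACH.get(state, "first assess your state of knowledge")

print(next_approach("known unknowns"))  # → narrative
```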
Of course, it may be difficult to assess your current state of knowledge! You may have multiple states superimposed. You may only be able to establish where you were in hindsight, which isn’t very useful in the present. However, this framework can help to cut through some of the fog of analysis, providing a common language for productive conversations, and providing motivation to keep driving your visual knowledge cycles.