End-to-end simulation hello world!

I’ve talked to many people about how to maximise the utility of a simulator for business decision-making, rather than focussing on the fidelity of reproducing real phenomena. This generally means delivering a custom simulator project lean, in thin, vertical, end-to-end slices. This approach maximises putting learning into action and minimises risk carried forward.

For practitioners, and to sharpen my own thinking, I have provided a concrete example in code: a Hello, World! for custom simulator development. The simulated phenomena will be simple, but we’ll drive all the way through a very thin slice to supporting a business decision.

Introduction to simulation

To understand possible futures, we can simulate many things; physical processes, digital systems, management processes, crowd behaviours, combinations of these things, and more.

When people intend to make changes to a system under simulation, they want to understand how the system would perform given the actions they might take or the features that might exist in the real world. The simulator’s effectiveness is determined by how much it accelerates positive change in the real world.

Even as we come to understand systems through simulation, people frequently continue to interact with those systems in novel and unpredictable ways – sometimes influenced by the system design and typically impacting the measured performance – so it remains desirable to perform simulated and real world tests in some combination, sequentially or in parallel.

For both of these reasons – because simulation is used to shorten the lead time to interventions, and because those interventions may evolve based on both simulated and exogenous factors – it is desirable to build simulation capability in an iterative manner, in thin, vertical slices, in order to respond rapidly to feedback.

This post describes a thin vertical slice for measuring the performance of a design in a simple case, and hence supporting a business decision that can be represented by the choice of design, and the outcomes of which can be represented by performance measures.

This post is a walk-through of this simulation_hello_world notebook and, as such, is aimed at a technical audience, but one that cares about aligning responsive technology delivery and simulation-augmented decision making for a wider stakeholder group.

If you’re just here to have fun observing how simulated systems behave, this post is not for you, because we choose a very simple, very boring system as our example. There are many frameworks for the algorithmic simulation of real phenomena in various domains, but our example here is so simple, we don’t use a framework.

The business decision

How much should we deposit fortnightly into a bank account to best maintain a target balance?

Custom simulator structure

To preserve agility over multiple thin slices through separation of concerns, we’ll consider the following structure for a custom simulator:

  1. Study of the real world
  2. Core algorithms
  3. Translation of performance and design
  4. Experimentation
  5. Translation of an environment

We’ll look at each stage and then put them all together and consider next steps. As a software engineering exercise, the ongoing iterative development of a simulator requires consideration of Continuous Delivery, and in particular shares techniques with Continuous Delivery for Machine Learning (CD4ML). I’ll expand on this in a future post.

Study of the real world

We study how a bank account works and aim to keep the core algorithms really simple – as befits a Hello, World! example. In our study, we recognise the essential features of: a balance, deposits and withdrawals. The balance is increased by deposits and reduced by withdrawals, as shown in the following Python code.

class Account:

  def __init__(self):
    self.balance = 0

  def deposit(self, amount):
    self.balance = self.balance + amount

  def withdraw(self, amount):
    self.balance = self.balance - amount

Core algorithms

Core algorithms reproduce real world phenomena, in some domain, be it physical processes, behaviour of agents, integrated systems, etc. We can simulate how our simple model of a bank account will behave when we perform some set of transactions on it. In the realm of simulation, these are possible rather than actual transactions.

def simulate_transaction(account, kind, amount):
  if kind == 'd':
    account.deposit(amount)
  elif kind == 'w':
    account.withdraw(amount)
  else:
    raise ValueError("kind must be 'd' or 'w'")

def simulate_balance(transactions):
  account = Account()
  balances = [account.balance]
  for t in transactions:
    simulate_transaction(account, t[0], t[1])
    balances.append(account.balance)
  return balances

When we simulate the bank account, note that we are interested in capturing a fine-grained view of its behaviour and how it evolves (the sequence of balances), rather than just the final state. The final state can be extracted in the translation layer if required.

tx = [('d', 10), ('d', 20), ('w', 5)]
simulate_balance(tx)
[0, 10, 30, 25]

Visualisation is critical in simulator development – it helps to communicate function, understand results, validate and tune implementation and diagnose errors, at every stage of development and operation of a simulator.

This could be considered a type of discrete event simulation, but as above, we’re more concerned with a thin slice through the whole structure than the nature of the core algorithms.

Translation of performance and design

The core algorithms are intended to provide a fine-grained, objective reproduction of real observable phenomena. The translation layer allows people to interact with a simulated system at a human scale. This allows people to specify high-level performance measures used to evaluate the system designs, which are also specified at a high level, while machines simulate all the rich detail of a virtual world using the core algorithms.

Performance

We decide for our Hello, World! example that this is a transactional account, and as such we care about keeping the balance close to a target balance, so that sufficient funds will be available but we don’t leave too much money in a low-interest account. We therefore measure performance as the average (mean) absolute difference between the balance at each transaction and the target balance. Every transaction that leaves us over or under the target amount is penalised by the difference, and this penalty is averaged over transactions. A smaller measure means better performance.

This may be imperfect, but there are often trade-offs in how we measure performance. Discussing these trade-offs with stakeholders can be a fruitful source of further insight.

def translate_performance_TargetBalance(balances, target):
  return sum([abs(b - target) for b in balances]) / len(balances)
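
Applying this measure to the balance sequence from the earlier simulate_balance example, with a target of 100, gives a quick sanity check:

```python
# worked example: the balance sequence [0, 10, 30, 25] against a target of 100
deltas = [abs(b - 100) for b in [0, 10, 30, 25]]  # [100, 90, 70, 75]
print(sum(deltas) / len(deltas))  # 83.75
```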

Design

While we could make any number of transactions and decide about each one individually, for simplicity we consider setting up a fortnightly deposit schedule and making one decision: the amount of that fortnightly deposit.

As the performance translation distills complexity into a single figure, so the design translation causes the inverse, with complexity blooming from a single parameter. We translate the single design parameter into a list of every individual transaction, suitable for simulation by the core algorithms.

ANNUAL_FORTNIGHTS = 26  # one deposit per fortnight for a year

def translate_design_FortnightlyDeposit(design_parameter):
  return [('d', design_parameter)] * ANNUAL_FORTNIGHTS

translate_design_FortnightlyDeposit(10)[:5]
[('d', 10), ('d', 10), ('d', 10), ('d', 10), ('d', 10)]

Performance = f(Design)

Now we can connect human-relevant design to human-relevant performance measure via the sequence:

design -> design translation -> core algorithm simulation -> performance translation -> performance measure

If we set our target balance at 100, we see how performance becomes a function of design by chaining translations and simulation.

def performance_of_design(design_translator, design_parameters):
  return translate_performance_Target100(
      simulate_balance(
          design_translator(design_parameters)))

This example above (and from the notebook) specialises the target balance performance translator to Target100 and makes the design translator configurable.
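
The Target100 specialisation isn’t shown in the post; it is just a fixed-target wrapper, sketched here (restating the general measure so the snippet stands alone):

```python
def translate_performance_TargetBalance(balances, target):
  # restated from above so this snippet stands alone
  return sum(abs(b - target) for b in balances) / len(balances)

def translate_performance_Target100(balances):
  # specialise the general measure to a fixed target balance of 100
  return translate_performance_TargetBalance(balances, 100)
```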

Remembering that we’re interested in the mean absolute delta between the actual balance and the target, here’s what it might look like to evaluate and visualise a design:

evaluating account balance target 100
with FortnightlyDeposit [9]
the mean abs delta is 61.89
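
That evaluation can be reproduced end-to-end in a condensed, self-contained form (assuming ANNUAL_FORTNIGHTS = 26, i.e. one deposit per fortnight for a year):

```python
ANNUAL_FORTNIGHTS = 26  # assumed: one deposit per fortnight for a year
TARGET = 100

# running balance after each of 26 fortnightly deposits of 9, starting from 0
balances = [9 * k for k in range(ANNUAL_FORTNIGHTS + 1)]

mean_abs_delta = sum(abs(b - TARGET) for b in balances) / len(balances)
print(round(mean_abs_delta, 2))  # 61.89
```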

Experimentation

Experimentation involves exploring and optimising simulated results through the simple interface of performance = f(design). This means choosing a fortnightly deposit amount and seeing how close we get to our target balance over the period. Note that while performance and design are both single values (scalars) in our Hello, World! example, in general they would both consist of multiple, even hundreds of, parameters.

This performance function shows visually the optimum (minimum) design is the point at the bottom of the curve. We can find the optimal design automatically using (in this instance) scipy.optimize.minimize on the function performance = f(design).
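
The notebook uses scipy.optimize.minimize; as a dependency-free illustration of the same search, a coarse scan over integer deposit amounts finds the same minimum region (assuming ANNUAL_FORTNIGHTS = 26 and the Target100 measure):

```python
ANNUAL_FORTNIGHTS = 26  # assumed: one deposit per fortnight for a year

def performance(deposit, target=100):
  # mean absolute delta of the running balance from the target,
  # for a constant fortnightly deposit starting from an empty account
  balances = [deposit * k for k in range(ANNUAL_FORTNIGHTS + 1)]
  return sum(abs(b - target) for b in balances) / len(balances)

# simple stand-in for scipy.optimize.minimize: scan candidate deposits
best = min(range(1, 20), key=performance)
print(best, round(performance(best), 2))  # 5 42.78
```

This matches the optimum reported below: a fortnightly deposit of ~5 with a mean abs delta of ~42.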

We can explore designs in design space, as above, and we can also visualise how an optimal design impacts the simulated behaviour of the system, as below.

Note that in the optimal design, the fortnightly deposit amount is ~5 for a mean abs delta of ~42 (the optimal performance) rather than a deposit of 9 (an arbitrary design) for a mean abs delta of ~62 (the associated performance) in the example above.

Translation of an environment

The example so far assumes we have complete control of the inputs to the system. This is rarely the case in the real world. This section introduces environmental factors that we incorporate into the simulation, but consider out of our control.

Just like design parameters, environmental factors identified by humans need to be translated for simulation by core algorithms. In this case, we have a random fortnightly expense that is represented as a withdrawal from the account.

import numpy as np

def translate_environment_FortnightlyRandomWithdrawal(seed=42, high=5):
  rng = np.random.RandomState(seed)
  random_withdrawals = rng.randint(0, high=high, size=ANNUAL_FORTNIGHTS)
  return list(zip(['w'] * ANNUAL_FORTNIGHTS, random_withdrawals))

translate_environment_FortnightlyRandomWithdrawal()[:5]
[('w', 3), ('w', 4), ('w', 2), ('w', 4), ('w', 4)]

If we were to simulate this environmental factor without any deposits being made, it would look like below.

Now we translate design decisions and environmental factors together into the inputs for the simulation core algorithms. In this case we interleave or “zip” the events together.

def translate_FortnightlyDepositAndRandomWithdrawal(design_parameters):
  interleaved = zip(translate_design_FortnightlyDeposit(design_parameters),
                    translate_environment_FortnightlyRandomWithdrawal())
  return [val for pair in interleaved for val in pair]

translate_FortnightlyDepositAndRandomWithdrawal(design_1)[:6]
[('d', 9), ('w', 3), ('d', 9), ('w', 4), ('d', 9), ('w', 2)]

Putting it all together

We’ll now introduce an alternative design and optimise that design by considering performance when the simulation includes environmental factors.

Alternative design

After visualising the result of our first design against our performance measurement, an alternative design suggests itself. Instead of having only a fixed fortnightly deposit, we could make an initial large deposit, followed by smaller fortnightly deposits. Our design now has two parameters.

def translate_design_InitialAndFortnightlyDeposit(design_pars):
  return [('d', design_pars[0])] + [('d', design_pars[1])] * ANNUAL_FORTNIGHTS

design_2 = [90, 1]
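
Mirroring the earlier translation examples, the first few translated transactions for design_2 would be (assuming ANNUAL_FORTNIGHTS = 26):

```python
ANNUAL_FORTNIGHTS = 26  # assumed: one deposit per fortnight for a year

def translate_design_InitialAndFortnightlyDeposit(design_pars):
  return [('d', design_pars[0])] + [('d', design_pars[1])] * ANNUAL_FORTNIGHTS

design_2 = [90, 1]
print(translate_design_InitialAndFortnightlyDeposit(design_2)[:3])  # [('d', 90), ('d', 1), ('d', 1)]
```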

In the absence of environmental factors, our system would evolve as below.

However, we can also incorporate the environmental factors by interleaving the environmental transactions as we did above.

Experimentation

Our performance function now incorporates two design dimensions. We can run experiments on any combination of the two design parameters to see how they perform. Now incorporating environmental factors, we can visualise performance and optimise the design in two dimensions.

Instead of the low point on a curve, the optimal design is now the low point in a bowl shape, the contours of which are shown on the plot above.

This represents our optimal business decision based on what we’ve captured about the system so far: an initial deposit of about $105 and a fortnightly deposit of $2.40. If we simulate the evolution of the system under the optimal design, we see a result like below, and can see visually why it matches our intent to minimise the deviation from the target balance at each time.
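
A sketch of that two-dimensional search, using Python’s random module as a stand-in for the numpy-based environment above (names and grid are illustrative, so the exact optimum will differ from the notebook’s ~105 and ~2.40):

```python
import random

ANNUAL_FORTNIGHTS = 26  # assumed
TARGET = 100

def stand_in_withdrawals(seed=42, high=5):
  # stand-in for the numpy-based environment translation above
  rng = random.Random(seed)
  return [rng.randrange(high) for _ in range(ANNUAL_FORTNIGHTS)]

def performance(initial, fortnightly):
  # initial deposit, then an interleaved deposit and withdrawal each fortnight
  balance, balances = initial, [0, initial]
  for w in stand_in_withdrawals():
    balance += fortnightly
    balances.append(balance)
    balance -= w
    balances.append(balance)
  return sum(abs(b - TARGET) for b in balances) / len(balances)

# coarse grid over both design parameters; design_2 = [90, 1] is on the grid
grid = [(i, f) for i in range(0, 151, 5) for f in range(0, 11)]
best = min(grid, key=lambda pars: performance(*pars))
print(best, round(performance(*best), 2))
```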

The next thin slice

We could increase fidelity by modelling transactions to the second, but it may not substantially change our business decision. We could go on adding design parameters, environmental factors, and alternative performance measures. Some would only require changes at the translation layer, some would require well-understood changes to the core algorithms, and others would require us to return to our study of the world to understand how to proceed.

We only add these things when required to support the next business decision. We deliver another thin slice through whatever layers are required to support this decision by capturing relevant design parameters and performance measures at the required level of fidelity. We make it robust and preserve future flexibility with continuous delivery. And in this way we can most responsively support successive business decisions with a thinly sliced custom simulator.

Weighting objectives and constraints

We used a single, simple measure of performance in this example, but in general there will be many measures of performance, which need to be weighted against each other, and often come into conflict. We may use more sophisticated solvers as the complexity of design parameters and performance measures increases.

Instead of looking for a single right answer, we can also raise these questions with stakeholders to understand where the simulation is most useful in guiding decision-making, and where we need to rely on other or hybrid methods, or seek to improve the simulation with another development iteration.

Experimentation in noisy environments

The environment we created is noisy; it has non-deterministic values in general, though we use a fixed random seed above. To properly evaluate a design in a noisy environment, we would need to run multiple trials.
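
To estimate expected performance in a noisy environment, a design can be evaluated across many seeds and the results averaged; a stand-in sketch using Python’s random module (the helper names here are illustrative):

```python
import random

ANNUAL_FORTNIGHTS = 26  # assumed

def trial_performance(deposit, seed, target=100):
  # one simulated year: a fortnightly deposit followed by a random withdrawal
  rng = random.Random(seed)
  balance, balances = 0, [0]
  for _ in range(ANNUAL_FORTNIGHTS):
    balance += deposit
    balances.append(balance)
    balance -= rng.randrange(5)
    balances.append(balance)
  return sum(abs(b - target) for b in balances) / len(balances)

# average over independent trials to estimate expected performance
trials = [trial_performance(9, seed) for seed in range(30)]
print(sum(trials) / len(trials))
```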

Final thoughts

Simulation scenarios become arbitrarily complex. We first validate the behaviour in the simplest point scenarios that ideally have analytic solutions, as above. We can then calibrate against more complex real world scenarios; once we are sufficiently confident the core algorithms are correct, we accept that the simulation produces the most definitive result for the most complex or novel scenarios.

Most of the work is then in translating human judgements about design choices and relevant performance measures into a form suitable for exploration and optimisation. This is where rich conversations with stakeholders are key to draw out their understanding and expectations of the scenarios you are simulating, and how the simulation is helping drive their decision-making. Focussing on this engagement ensures you’ll continue to build the right thing, and not just build it right.

Visualising System Dynamics Models

Simulation is a powerful tool for understanding and solving complex problems, but sometimes simulations themselves can be hard to understand. Visualisation is a powerful tool for understanding what simulations are telling us, and also for socialising the limitations and assumptions built into predictions.

An approach

System dynamics is a simulation paradigm that can be used to model certain types of natural, social and business scenarios with interacting features.

The interactive visualisation above shows a system dynamics model of a pond, with seasonal inflow, evaporation, and outflow when an overflow level is reached. Hover over the nodes for the equations that link these quantities. You can also pan & zoom the view and drag the nodes to new locations. This type of visual language for models enables multidisciplinary teams to collaborate around simulations.

Augmenting open tools

While many commercial tools support visualisation of models, I haven’t found as much support for visualisation as I expected in open source system dynamics tools.

If I’m missing something, please let me know! But to fill the gap I wrote a simple visualisation module for BPTK_Py models, the output of which you see above. Usage is illustrated in this demonstration notebook; it’s just a few lines of code to transform BPTK model -> NetworkX graph -> interactive PyVis visualisation. The NetworkX graph can also be visualised statically, or exported for other uses.

I targeted BPTK_Py as the simulation framework because I liked its Python DSL for model definition. The visualisation is not a visual design environment, but it does support the visualisation of models defined in code, which still enables short feedback cycles in model design.

Pond example in detail

The model visualised above is a simple river system, where seasonal stream inflow feeds a pond (the arrow from inflow to pond). Water in the pond is lost to evaporation, depending on how much water there is in the pond (hence arrows in both directions between evaporation and pond). If the water in the pond reaches a certain level it is drained by outflow (hence pond level depends on outflow, and outflow depends on excessVolume of the pond above overflowLevel).

Time-series chart of stocks and flows for the pond example

The model graph for visualisation has nodes for each class of system dynamics object defined with a label: stocks, flows, constants and converters. Where these objects are related by equations, edges are added to the graph to show the dependencies. Edges are added by recursively inspecting terms in the equations until an object is reached. As I use a DiGraph and not a MultiDiGraph, a maximum of one edge will be created between each pair of nodes even if multiple references are found in the equation.
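
The recursive edge-building can be sketched in a few lines, using a hypothetical term-tree representation of the pond equations (these names are illustrative, not BPTK_Py’s actual API):

```python
# hypothetical term-tree representation of the pond equations: each value is a
# tuple of (operator, *sub_terms); a bare string is a named model object
pond_model = {
  'pond': ('add', 'inflow', ('neg', 'evaporation'), ('neg', 'outflow')),
  'evaporation': ('mul', 'evapRate', 'pond'),
  'excessVolume': ('sub', 'pond', 'overflowLevel'),
  'outflow': ('max', 'excessVolume', 0),
}

def collect_dependencies(term, found):
  # recurse into sub-terms until a named model object (a string) is reached
  if isinstance(term, str):
    found.add(term)
  elif isinstance(term, tuple):
    for sub in term[1:]:  # skip the operator name in position 0
      collect_dependencies(sub, found)

edges = set()
for node, equation in pond_model.items():
  deps = set()
  collect_dependencies(equation, deps)
  for dep in deps:
    edges.add((dep, node))  # at most one edge per pair, as with a DiGraph
```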

In system dynamics visualisations, flows are typically drawn connected to sources or sinks, but I decided to leave that construct implicit. My visualisations show the direction of dependency between stocks and flows, rather than the (nominal) direction of flow. To see the detail of dependencies, the equations can be overlaid on each node. The static version of the visualisation is shown below.

Graph representation of a system dynamics model of a pond. Stocks, flows, constants and their dependencies are shown. The dependency equations are shown on the nodes

The code could benefit from: further testing, additional support for all the equation types in BPTK_Py.sd_functions (reference), and better layout support for the static view. But maybe it helps fill a gap that would otherwise exist.

BPTK population tutorial example

This second example shows the visualisation of the BPTK introductory population tutorial.

First the interactive version.

And the static version (with first generation hexagonal converters).

Graph representation of a system dynamics model of population. Stocks, flows, constants and their dependencies are shown. The dependency equations are shown on the nodes

You can compare both generated visualisations to the visual design from the tutorial.

Visual design of system dynamics model from BPTK population tutorial

What do you think? Let me know if you have any success using it or any thoughts on how important this type of visualisation is to building shared understanding in complex analytics projects.

Note this post was updated in September 2023 with the interactive version of the visualisation.

Corporate Graffiti – Being Disruptive with Visual Thinking

As you go about your work you’ll come up against walls. Some walls will be blank and boring blockers to progress. These need decoration; spraying with layers that confer meaning. So pick a corner and start doodling. With a new perspective, you’ll find a way around the blockers. Other walls will come with messages – by design or default – leading you in a certain direction. If this isn’t where you want to go, you’ll need to plot your own course by subverting or overwhelming the prevailing visuals.

This is your challenge, and your opportunity for innovation: to disrupt the established visual environment with new ways of looking at the world that, in turn, unlock new ways of thinking. If you think you could make your organisation more agile with some disruptive visual thinking, read on for my experience [on the Organisational Agility channel of ThoughtWorks Insights].

Playing Games is Serious Business

Simple game scenarios can produce the same outcomes as complex and large-scale business scenarios. Serious business games can therefore reduce risk and improve outcomes when launching and optimising services. Gamification also improves alignment and engagement across organisational functions.

This is a presentation on using games to understand and improve organisational design and service delivery, which I presented at the Curtin University Festival of Teaching and Learning.

(Don’t be concerned by what looks like a bomb-disposal robot in the background.)

The slides provide guidance on applying serious business games in your context.

Leave Product Development to the Dummies

This is the talk I gave at Agile Australia 2013 about the role of simulation in product development. Check out a PDF of the slides with brief notes.

Description

"Dummies" talk at Agile Australia

Stop testing on humans! Auto manufacturers have greatly reduced the harm once caused by inadvertently crash-testing production cars with real people. Now, simulation ensures every new car endures thousands of virtual crashes before even a dummy sets foot inside. Can we do the same for software product delivery?

Simulation can deliver faster feedback than real-world trials, for less cost. Simulation supports agility, improves quality and shortens development cycles. Designers and manufacturers of physical products found this out a long time ago. By contrast, in Agile software development, we aim to ship small increments of real software to real people and use their feedback to guide product development. But what if that’s not possible? (And can we still benefit from simulation even when it is?)

The goal of trials remains the same: get a good product to market as quickly as possible (or pivot or kill a bad product as quickly as possible). However, if you have to wait for access to human subjects or real software, or if it’s too costly to scale to the breadth and depth of real-world trials required to optimise design and minimise risk, consider simulation.

Learn why simulation was chosen for the design of call centre services (and compare this with crash testing cars), how a simulator was developed, and what benefits the approach brought. You’ll leave equipped to decide whether simulation is appropriate for your next innovation project, and with some resources to get you started.

Discover:

  • How and when to use simulation to improve agility
  • The anatomy of a simulator
  • A lean, risk-based approach to developing and validating a simulator
  • Techniques for effectively visualising and communicating simulations
  • Implementing simulated designs in the real world