Project Slackpose

Another lockdown, another project for body and mind. Slackpose allows me to track my slackline walking and review my technique. Spending 5 minutes on the slackline between meetings is a great way to get away from my desk!

I had considered pose estimation for wheelies last year, but decided slackline walking was an easier start, and something the whole family could enjoy.


I mount my phone on a tripod at one end of the slackline and start recording a video. This gives a good view of side-to-side balance, and is most pixel-efficient in vertical orientation.

Alternatively, duct tape the phone to something handy, or even hand-hold or try other angles. The 2D pose estimation (location of body joints in the video image) will be somewhat invariant to the shooting location, but looking down the slackline (or shooting from another known point) may help reconstruct more 3D pose data using only a single camera view. Luckily, it’s also invariant to dogs wandering into frame!

I use OpenPose from the CMU Perceptual Computing Lab to capture pose data from the videos. See below for details of the keypoint pose data returned and some notes on setting up and running OpenPose.

I then analyse the keypoint pose data in a Jupyter notebook that you can run on Colab. This allows varied post-processing analyses such as the balance analysis below.

Keypoint Pose Data

OpenPose processes the video to return data on 25 body keypoints for each frame, representing the position of head, shoulders, knees, and toes, plus eyes, ears, nose, and other major joints (mouth only if you explicitly request facial features).

These body keypoints are defined as (x, y) pixel locations in the video image, for one frame of video. We can trace the keypoints over multiple frames to understand the motion of parts of the body.

Keypoints also include a confidence measure [0, 1], which is pretty good for the majority of keypoints from the video above.
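For each frame, OpenPose writes a JSON file whose people entries contain a flat pose_keypoints_2d array of 75 numbers (25 keypoints × x, y, confidence). A minimal parser might look like the sketch below (the min_conf threshold is my own choice, not an OpenPose default):

```python
import json

def load_keypoints(json_text, min_conf=0.1):
    """Parse one OpenPose frame JSON into a list of 25 (x, y, conf) keypoints.

    Keypoints below min_conf (including the (0, 0) defaults OpenPose
    emits for undetected joints) are returned as None.
    """
    frame = json.loads(json_text)
    if not frame["people"]:
        return [None] * 25
    flat = frame["people"][0]["pose_keypoints_2d"]  # 25 x (x, y, conf)
    keypoints = []
    for i in range(0, len(flat), 3):
        x, y, conf = flat[i:i + 3]
        keypoints.append((x, y, conf) if conf >= min_conf else None)
    return keypoints
```

Undetected joints come back as (0, 0) with zero confidence, which is why filtering on confidence matters for the balance analysis below.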

Balance Analysis

I wanted to look at balance first, using an estimate of my body’s centre of mass. I calculated this from the proportional mass of body segments (sourced here) with estimates of the location centre of mass for each segment relative to the pose keypoints (I just made these up, and you can see what I made up in the notebook).

This looks pretty good from a quick eyeball, although it’s apparent that it is sensitive to the quality of pose estimation of relatively massive body segments, such as my noggin, estimated at 8.26% of my body mass. When walking away, OpenPose returns very low confidence and a default position of (0, 0) for my nose for many frames, so I exclude it from the centre of mass calculation in those instances. You can see the effect of excluding my head in the video below.
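The calculation is just a mass-weighted average over segment centres, skipping any segment with a missing keypoint. This sketch uses a hypothetical two-segment body with illustrative mass fractions and midpoint centres – not the values from the notebook:

```python
def centre_of_mass(segments, keypoints):
    """Mass-weighted average of segment centres, excluding segments whose
    keypoints were not detected (e.g. the nose when walking away).

    segments: list of (mass_fraction, keypoint_index_a, keypoint_index_b),
      with the segment centre taken halfway between the two keypoints
      (illustrative placement; real offsets would vary per segment).
    keypoints: list of (x, y) or None, per BODY_25 index.
    """
    total_mass, cx, cy = 0.0, 0.0, 0.0
    for mass, a, b in segments:
        if keypoints[a] is None or keypoints[b] is None:
            continue  # exclude segments with missing keypoints
        x = (keypoints[a][0] + keypoints[b][0]) / 2
        y = (keypoints[a][1] + keypoints[b][1]) / 2
        total_mass += mass
        cx += mass * x
        cy += mass * y
    return (cx / total_mass, cy / total_mass)

# Hypothetical two-segment body: head 8.26% between nose (0) and neck (1),
# trunk ~50% between neck (1) and mid-hip (8); remaining mass ignored here.
SEGMENTS = [(0.0826, 0, 1), (0.497, 1, 8)]
```

Because the result is renormalised by the total mass actually used, dropping the head when the nose is undetected shifts the estimate rather than breaking it, which is the effect visible in the comparison video.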

Not much more to report at this point; I’ll have a better look at this soon, now that I’m up and walking with analysis.

OpenPose Notes


I ran OpenPose on my Mac, following the setup guide, and also referred to two tutorials. These instructions are collectively pretty good, but note:

  • 3rdparty/osx/ doesn’t exist; you will instead find it at scripts/osx/
  • I had to manually pip install numpy and opencv for OpenPose with pip install 'numpy<1.17' and pip install 'opencv-python<4.3', but this is probably due to my neglected Python 2 setup.
  • Homebrew cask installation syntax has changed from the docs; now invoked as brew install --cask cmake


Shooting iPhone video, I need to convert the format for input to OpenPose. Use ffmpeg as below (replace the input.mov placeholder with the name of your video).

ffmpeg -i input.mov -vcodec h264 -acodec mp2 slackline.mp4

I then typically invoke OpenPose to process video and output marked-up frames and JSON files using the supplied example executable, as below (again, replace input video and output directory with your own):

./build/examples/openpose/openpose.bin --video slack_video/slackline.mp4 --write_images slack_video/output --write_json slack_video/output

7 Wastes of Data Production

I realised recently that this is one of the lenses through which I look at the data engineering world, but I had never expressed these (lean) wastes explicitly. This post might be useful for data engineers exploring lean concepts, or lean practitioners trying to make sense of data & analytics processes. This isn’t just a theoretical view; these wastes are real, and have a real impact on organisational success, which I try to quantify. These wastes also impact our ability to do good work, and to enjoy it!

When I talk about data production, I’m talking about building and running a factory that transforms data signals from the world into useful insights, improved operations and great experiences. This means connecting data sources from suppliers through transformations to consumers, who might be customers, team members, or partners.

Data production factory, showing bits flowing from supplier to consumer through a machine. Developers and operators change the factory and keep it running

In this post, I’ll look at lean wastes through the lens of building the factory and running the factory. Building the factory – modifying the processing pathways for data – is a software development exercise. Running the factory – propagating new inputs along processing pathways – is a manufacturing operations exercise, but for data rather than physical products, and hence the manufacturing machines are all software. NB. one team can look through multiple lenses – see data + dev + ops!

Lean wastes (muda) were originally defined with reference to physical manufacturing, though there are analogues for knowledge work, including Mary and Tom Poppendieck’s mapping to software development. So the translation is roughly:

  • the software development analogue for the build phase, and
  • a manufacturing analogue with bits rather than atoms for the run phase

Both are illustrated with examples specific to data. The drawings only show bits flowing through a running factory, but you can imagine ideas flowing through a development team as the equivalent for build.

I talk about data products as the end result of this development and manufacturing (or data production) activity, but I also consider the complementary design or marketing perspective – i.e., what problems do these products solve for a consumer, and how well (regardless of how they are made)? This brings us to…


Overproduction

Overproduction is delivering things consumers don’t need and haven’t asked for.

Build: Unnecessary products, over-designed products
For instance, a report that no-one will read, a 3D widget where a table will do, or a unified data model that doesn’t suit any one consumer.
Run: Unused products
A report that people stopped reading long ago. A data set no-one ever accesses.
Data production factory spewing bits that aren't consumed by a consumer

The consequence of overproduction is that productive capacity is consumed with no business impact. These products are useless and prevent us creating value.

Considering some studies have found 50% of product features are rarely or never used, the cost of overproduction may be 50% of your data budget, but many organisations have very limited visibility of overproduction in data.

Overproduction is with reference to finished goods, but until finished, they are …


Inventory

Inventory is partially completed work that causes a drain on resources while embodying little or no realisable value.

Build: Work in progress
Batching up development effort on data ingest activities, or platform features, without validating the use cases that motivate this data or functionality.
Run: Incomplete pipelines (data not connected to consumers)
Data delivered to a data platform but not being used.
Data production factory accumulating bits inside the factory

The consequence of inventory is that no value is realised from effort to date, and as a result, unbounded effort may be expended before delivering value.

The cost that can be sunk into building and populating a data platform that is not connected to consumers (representing data production inventory) is, for all intents and purposes, unbounded. Very large initiatives have reached their conclusion with substantial data inventory (which ironically, until close to that point, may be considered a success measure), but marginal to no business value delivered.

A possible cause of failing to deliver finished goods from inventory is the additional effort associated with …

Over-Processing

Over-Processing is doing more work than is necessary to deliver on an objective.

Build: Reinventing products, working with untrusted data
Duplicated, divergent reports. Excessive logic to manage poor data quality.
Run: Correcting errors from upstream, propagating redundant data
Filling missing data and correcting schema violations with best guesses. Passing on data that isn’t valuable downstream.
Data production factory with many processing machines

The consequence of over-processing is the expenditure of unnecessary effort to realise value. This may feel like everything is harder than it should be.

Any code that exists to correct errors downstream of a source is over-processing, so is any duplicated reporting. Consider how much of this over-processing you may be doing, and what might change if you were to measure this.

Work must move between processing stages, but in doing so, we should minimise …


Transportation

Transportation is moving products around in a way that is costly and may cause damage.

Build: Handoffs between siloed teams
Source app team → ingest team → platform team → analytics team → consuming app team… each team loses context from the last.
Run: Data replication without reproducibility
Creating unnecessary backups because systems aren’t trusted. Copying Excel files. You can damage a digital copy by losing its provenance.
Data production factory where bits disappear and reappear at different locations

The consequence of transportation is expending further additional effort that may reduce quality. There’s no clear single place to find what you need, and the more it moves, the less you trust it.

How much time have you lost to misunderstandings between teams, or to establishing the provenance of data? It may lower productivity, or may be catastrophic if auditability is sufficiently degraded. This is the cost of transportation in data production.

In addition to transportation, the act of processing may include unnecessary …


Motion

Motion is extra activity that doesn’t add to the product, creates additional opportunities for defects, and takes a toll on workers.

Build: Context switching
For example, a dedicated data ingest team working across multiple sources (work in progress), which may also frequently break (incident toil).
Run: Manual intervention or finishing of products
Copy this here, rename file X and save over there, run script Y, …
Data production factory with workers moving between multiple streams and stages in a stream

The consequence of motion is that work is complicated, in a way that is bad for people. Every job requires more actions than it should.

How much time and energy do you lose to picking up and putting down work – this can increase dramatically as the number of concurrent tasks increases – including switching to manual intervention in data production? This is the cost of motion.

All of the above reduce the flow of build or run to an extent. Collectively and in interaction with batching up work, they cause …


Waiting

Waiting occurs when people or resources aren’t ready to pick up work as it arrives.

Build: Delays due to handoffs between siloed teams, long feedback cycles in development
Waiting for requirements or feedback on deliverables. Exacerbated in data-intensive applications by functionally-specialised handoffs (see transport), long-running batch jobs, and a promote-then-test approach.
Run: Lead time to discover data, and from business event to insight or action
No-one owns this data or can tell you definitively if it exists, what’s in it, and where to find it. Reports come on a fixed schedule set by processing capabilities, not business needs.
Data production factory with waiting spinners in the flow of bits

The consequence of waiting is that business value realisation is delayed, in an environment where value decays rapidly. We might summarise this as: it would have been nice to have this data yesterday.

Where hand-offs occur between teams, tasks may take 12 times as long to complete, as a median measure, and much longer in extreme cases. Cascading scheduled batch jobs with buffers and retries due to uncontrolled variability and quality issues can quickly add up to insights lead times measured in weeks.

The factors above contribute to and are also caused by the introduction of …


Defects

Defects are the failure to do something, or the failure to do it right.

Build: Defects in processing code
A query specifies ‘m’ for minutes instead of the intended ‘M’ for month.
Run: Defects in data produced
People get the wrong emails.
Data production factory with some defective bits on input, a defective processing machine, and many defective bits on output

The consequence of defects is that the organisation increases risk exposure, while reducing consumer value delivered, and creating effort to remediate. Thus they have the potential to damage business and create yet more effort.

The cost of defects can be catastrophic, especially when related to personal information. If defects cause significant ongoing toil, reducing defects is a major lever for increasing productive capacity (e.g., if defects consume ~30% of capacity, eliminating them yields a marginal improvement in productive capacity of ~50%).


Look out for these wastes in your work with data; find your own examples. Being able to recognise the various wastes is important in itself, and being able to quantify their potential impact is important to prioritise improvement efforts. There are approaches and solutions to reduce these wastes, and you can research those yourselves while I prepare for future posts. But take some time to understand the problem before you go rushing for solutions; there’s a lot you can do with knowledge of waste in data production to define and drive change for the better.

Thanks to Ned Letcher and Yekaterina Khomyakova for feedback on these wastes, which were included as part of their presentation on Data Mesh for the 2021 LAST Conference Melbourne.

LEGO and Software – Part Roles

This is the fifth post in a series exploring LEGO® as a Metaphor for Software Reuse. A key consideration for reuse is the various roles that components can play when combined or re-combined in sets. Below we’ll explore how we can use data about LEGO parts and sets to understand the roles parts play in sets.

I open a number of lines of investigation, but this is just the start, rather than any conclusion, of understanding the roles parts play and how that influences outcomes including reuse. The data comes from the Rebrickable data sets, image content & API and the code is available at

Hero Parts

Which parts play the most important roles in sets? Which parts could we least easily substitute from other sets?

We could answer this question in the same way as we determine relevant search results from documents, for instance with a technique called TFIDF (term frequency-inverse document frequency). We can find hero parts in sets with set frequency-inverse part frequency, which in the standard formulation requires a corpus with one “document” per part, whose “terms” are the sets that include that part (repeated once per inclusion), as below.

part 10190: "10403-1 10403-1 10404-1 10404- ... "
part  3039: "003-1 003-1 003-1 003-1 021-1  ... "
part  3023: "021-1 021-1 021-1 021-1 021-1  ... "
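As a concrete sketch of the scoring – with a toy inventory and a simplified weighting, not the notebook’s exact TFIDF formulation – a part scores highly for a set when its appearances are concentrated in that set and few other sets contain it:

```python
from collections import defaultdict
from math import log

# Toy inventory: set -> {part: quantity}. Real data would come from the
# Rebrickable inventories; the quantities here are made up for illustration.
inventories = {
    "60012-1": {"10190": 2, "3023": 10, "3039": 4},
    "003-1":   {"3039": 6, "3023": 8},
    "021-1":   {"3023": 12, "3039": 2},
}

def hero_scores(inventories, target_set):
    """Score parts of target_set by set frequency x inverse part frequency:
    the part's share of appearances in this set, weighted up for parts
    that few sets contain."""
    n_sets = len(inventories)
    sets_with_part = defaultdict(int)
    total_qty = defaultdict(int)
    for parts in inventories.values():
        for part, qty in parts.items():
            sets_with_part[part] += 1
            total_qty[part] += qty
    scores = {}
    for part, qty in inventories[target_set].items():
        sf = qty / total_qty[part]                    # share of appearances here
        ipf = log(n_sets / sets_with_part[part]) + 1  # rarer parts weigh more
        scores[part] = sf * ipf
    return scores
```

On this toy data the flipper 10190 scores highest for set 60012-1, since all of its appearances are concentrated in that one set – the “hero part” behaviour described below.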

Inverse part frequency is closely related to the inverse of the reuse metric from part 4, hence we can expect it will find the least reused parts. Considering again our sample set 60012-1, LEGO City Coast Guard 4×4, (including 4WD, trailer, dinghy, and masked and flippered diver), we find the following “hero” parts.

Gallery of hero parts from LEGO Coast Guard set (60012-1) including stickers, 4WD tyres, a dinghy, flippers and mask

This makes intuitive sense. These “hero parts” are about delivering on the specific nature of the set. It’s much harder to substitute (or reuse other parts) for these hero parts – you would lose something essential to the set as it is designed. On the other hand, as you might imagine, the least differentiating parts (easiest to substitute or reuse alternatives) overlap significantly with the top parts from part 4. Note that while it may not be mechanically possible – in the sense of connecting parts together – to substitute these parts, they do little to differentiate the set from other sets.

Gallery of least differentiated parts from LEGO Coast Guard set (60012-1) including common parts like plates, tiles, blocks and slopes.

Above, we consider sets as terms (words) in a document for each part. We can also reverse this by considering a set as a document, and included parts as terms in that document. Computing this part frequency-inverse set frequency measure across all parts and sets gives us a sparse matrix.

Visualisation of TFIDF or part-frequency-inverse-set-frequency as a sparse 2D matrix for building a search engine for sets based on parts

This can be used as a search engine to find the sets most relevant to particular parts. For instance, if we query the parts "2431 2412b 3023" (you can see these in Recommended Parts below), the top hit is the Marina Bay Sands set, which again makes intuitive sense – all those tiles, plates, and grilles are the essence of the set.

Recommended Parts

Given a group of parts, how might we add to the group for various outcomes including reuse? For instance, if a new set design is missing one part that is commonly included with other parts in that design, could we consider redesigning the set to include that part to promote greater reuse?

A common recommendation technique for data in the shape of our set-part data is Association Rule Learning (aka “Basket Analysis”), which will recommend parts that are usually found together in sets (like items in baskets).

An association rule in this case is an implication of the form {parts} -> {parts}. Together, these rules form a directed graph, which we can visualise. I used the Efficient Apriori package to learn rules. In the first pass, this gives us some reasonable-looking recommendations for many of the top parts we saw in part 4.

Visualisation of discovered association rules as a directed graph showing common parts

You can read this as the presence of 2431 in a set implies (recommends) the presence of 3023, as does 2412b, which also implies 6141. We already know these top parts occur in many sets, so it’s likely they occur together, but we do see some finer resolution in this view. The association rules for less common parts might also be more insightful; this too may come in a future post.
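To make the rule-learning step concrete, here is a minimal pure-Python stand-in for what the Efficient Apriori package does – single-item rules only, no candidate pruning, with made-up transactions:

```python
from itertools import combinations

def single_item_rules(transactions, min_support=0.5, min_confidence=0.9):
    """Mine rules {a} -> {b} from part 'baskets' (sets).

    A rule (a, b) is kept when the pair {a, b} appears in at least
    min_support of all transactions, and b appears in at least
    min_confidence of the transactions containing a.
    """
    n = len(transactions)
    counts, pair_counts = {}, {}
    for t in transactions:
        items = set(t)
        for item in items:
            counts[item] = counts.get(item, 0) + 1
        for a, b in combinations(sorted(items), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            if c / counts[lhs] >= min_confidence:
                rules.append((lhs, rhs))  # presence of lhs implies rhs
    return rules
```

Note the asymmetry: a rule 2431 -> 3023 can hold while 3023 -> 2431 does not, because 3023 appears in many sets without 2431.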

Relationships Between Parts

How can we discover more relationships between parts that might support better outcomes including reuse?

We can generalise the part reuse analysis from part 4 and the techniques above by capturing the connections between sets and parts as a bipartite graph. The resultant graph contains about 63,000 nodes – representing both parts and sets – and about 633,000 edges – representing instances of parts included in sets. A small fragment of the entire graph, based on the flipper part 10190, the sets that include this part, and all other parts included in these sets, is shown below.

Visualisation of set neighbours (count 79) of flipper 10190 and their part neighbours (count 1313) as two parallel rows of nodes with many connections between them
Visualisation of selected set neighbours (count 3) of flipper 10190 and selected of their part neighbours (count 14) as two parallel rows of nodes with some connections between them

This bipartite representation allows us to find parts related by their inclusion in LEGO sets using a projection, which is a derived graph that only includes parts nodes, linked by edges if they share a set. In this projection, our flipper is directly linked to the 1312 other parts with which it shares any set.
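The projection itself is simple to sketch without a graph library (in practice a library would also track edge weights, such as the number of shared sets; the set and part IDs below are illustrative):

```python
from collections import defaultdict

def part_projection(set_parts):
    """Project a bipartite set-part graph onto parts: two parts are
    linked if they appear together in at least one set.

    set_parts: dict mapping set id -> iterable of part ids.
    Returns adjacency as dict part -> set of neighbouring parts.
    """
    adj = defaultdict(set)
    for parts in set_parts.values():
        parts = list(parts)
        for i, a in enumerate(parts):
            for b in parts[i + 1:]:
                adj[a].add(b)
                adj[b].add(a)
    return adj
```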

Visualisation of 1312 immediate neighbours of flipper 10190 in the part projection of the set-part graph, shows only 1% of connections but this is very dense nonetheless

You can see this is a very densely connected set of parts, and more so on the right side, from 12 o’clock around to 6 o’clock. We could create a similar picture for each part, but we can also see the overall picture by plotting degree (number of connections to parts with shared sets) for all parts, with a few familiar examples.

Degree of nodes in part projection, with plate 1x2, slope 45 2x2 and flipper highlighted. Steep drop-off from maximum and long flat tail

This is the overall picture of immediate neighbours, and it shows the familiar traits of a small number of highly connected parts, and a very long tail of sparsely connected parts. We can also look beyond immediate neighbours to the path(s) through the projection graph between parts that don’t directly share a set, but are connected by common parts that themselves share a set. Below is one of the longest paths, from the flipper 10190 to multiple Duplo parts.

Visualisation of a path through the part connection graph spanning 7 nodes, with some neighbouring nodes also shown and parts drawn on

With a projection graph like this, we could infer that parts that are designed to be used together are closer together. We could use this information to compile groups of parts to specific ends. Given some group of parts, we could (1) add “nearby” missing parts to that group to create flexible foundational groups that could be used for many builds, or we could (2) add “distant” parts that could allow us to specialise builds in particular directions that we might not have considered previously. In these cases, “nearby” and “distant” are measured in terms of the path length between parts. There are many other ways we could use this data to understand part roles.
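The “nearby”/“distant” distinction can be measured with a plain breadth-first search over the projection’s adjacency structure – a sketch, equivalent to a graph library’s unweighted shortest-path routine:

```python
from collections import deque

def path_length(adj, start, goal):
    """Breadth-first search distance between two parts in the projection
    graph; returns None if they are unconnected. Small values correspond
    to 'nearby' parts, large values to 'distant' ones."""
    if start == goal:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr == goal:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None
```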

(When I first plotted this, I thought I had made a mistake, but it turns out there are indeed sets including both regular and Duplo parts, in this case this starter kit.)

The analysis above establishes some foundational concepts, but doesn’t give us a lot of new insight into the roles played by parts. The next step I’d like to explore here is clustering and/or embedding the nodes of the part graph, to identify groups of similar parts, which may come in a future post.


As I said above, there are no firm conclusions in this post regarding reuse in LEGO or how this might influence our view of and practices around reuse in software. However, if we have data about our software landscape in a similar form to the set-part data we’ve explored here, we might be able to conduct similar analyses to understand the roles that reusable software components play in different products, and, as a result, how to get better outcomes overall from software development.

Coming Next

I think the next post, and it might just be the last in this series, is going to be about extending these views of the relationships between parts and understanding how they might drive or be driven by the increase in variety and specialisation discussed in part 2.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

LEGO and Software – Part Reuse

This is the fourth post in a series exploring LEGO® as a Metaphor for Software Reuse. The story is evolving as I go because I keep finding interesting things in the data. I’ll tie it all up with the key things I’ve found at some point.

In this post we’re looking from the part perspective – the reusable component – at how many sets or products it’s [re]used in, and how this has changed over time. All the analysis from this post is available at

Inventory Data

The parts inventory data from the Rebrickable data sets, image content & API gives us which parts appear in which sets, in which quantities and colours. For instance, if we look at the minifigure flipper from part 3, we can chart its inclusion in sets as below.

The parts inventory data includes minifigures. I may later account for the effect of including or excluding minifigures in these various analyses. We can also tell if a part in a set is a spare; according to the data, 2% of LEGO parts in sets are spares.

Parts Included in Sets

The most reused parts are included in thousands of sets. Here is a gallery of the top 25 parts over all time – from number 1 Plate 1 x 2 (which is included in over 6,200 sets), to our favourite from last time, the venerable Slope 45° 2 x 2 (over 2,700 sets).

These parts appear in a significant fraction of sets, but we can already see a 50% reduction in the number of sets in just the top 25 used parts. Beyond this small sample, we can plot the number of sets that include a given part (call it part set count), as below.

This time we have used log scales on both axes, due to the extremely uneven distribution of part set counts. In contrast to the thousands of sets including top parts, a massive 60% of parts are included in only one set and only 10% of parts are included in more than ten sets. The piecewise straight line fit indicates a power law applies, for instance approximately count = 10000 / sqrt(rank) for the top 100 ranked parts.
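To illustrate that fit, a least-squares regression in log-log space recovers the exponent; the counts below are synthetic, generated from the approximate law rather than taken from the Rebrickable data:

```python
from math import log, exp

def fit_power_law(ranks, counts):
    """Least-squares fit of log(count) = a + b*log(rank); returns
    (prefactor, exponent) for count ~ prefactor * rank**exponent."""
    xs = [log(r) for r in ranks]
    ys = [log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return exp(a), b

# Synthetic counts following the approximate law count = 10000 / sqrt(rank).
ranks = range(1, 101)
counts = [10000 / r ** 0.5 for r in ranks]
```

On real data the fit would be done piecewise, since the plot shows different slopes for the head and tail of the ranking.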

Uneven distributions are often expressed in terms of the Pareto principle or 80/20 rule. If we define reuse instances as every time a part is included in a set, after the first set, then we can plot the contribution of each part to total reuse instances and see if this is more or less uneven than the 80/20 rule.

This shows us that reuse of LEGO parts in sets is much more uneven than the 80/20 rule. While the 80/20 rule says 80% of reuse would be due to 20% of parts, in fact we find by this definition that 80% of reuse is due to only 3% of parts, and 20% of parts account for 98% of reuse!
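The concentration figure follows directly from per-part set counts under the definition above (reuse instances = set count minus one); a sketch:

```python
def reuse_concentration(set_counts, share=0.8):
    """Fraction of parts accounting for `share` of total reuse instances,
    where a part's reuse instances = (sets it appears in) - 1."""
    reuse = sorted((c - 1 for c in set_counts), reverse=True)
    total = sum(reuse)
    running, parts_needed = 0, 0
    for r in reuse:
        if running >= share * total:
            break
        running += r
        parts_needed += 1
    return parts_needed / len(reuse)
```

A perfectly even distribution returns 0.8 (80% of parts drive 80% of reuse); the skewed LEGO data returns roughly 0.03.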

We find a similar phenomenon if we consider the quantities of parts included in sets, rather than just the count of sets per part. We could repeat the whole analysis based on quantities (and the notebook has some options for doing this), but I was fairly satisfied the results would be similar given the degree of correlation we find, below.

I was intrigued, though, by the part that appeared many times in a single set, then never again (the uppermost point of the lower left column). It turns out it is a Window 1 x 2 x 1 (old type) with Extended Lip included in a windows set from 1966 that probably looked a bit like this one.

This brings us neatly to the “long tail” to round out this view of reuse as parts included in multiple sets. As per the distribution above, the tail (of parts that belong to only one set) is really long and full of curiosities. The tail is over 22,000 parts long, though these belong to only about 10,000 unique sets. The parts belong about 60/40 to sets proper vs minifigure packs. Here’s a tail selection – you’ll see they are fairly specialised items like stickers, highly custom parts, minifigure items with unique designs, and even trading cards!

There’s even, perhaps my nemesis, a minifigure set called “Chainsaw Dave”!

Parts Included in Sets Over Time

Part reuse might vary with time, and might be dependent on time. In previous posts we’ve seen an exponential increase in new parts and sets over time and an exponential decay in part lifetimes. We can plot part reuse (set count) against lifespan, as below.

This shows some correlation – which we might expect as longevity is due to reuse and vice versa – but I was also intrigued by the many relatively short-lived parts with high set counts (in the mid-top left region). Colouring points by the year released shows that these are relatively recent parts (at the yellow end of the spectrum). This shows that, as well as long-lived parts, more recent parts also appear in many sets, which is good news for reuse.

However, it’s hard to determine the distribution of set count from the scatter plot, and hence how significant the reuse is. We can see the distribution better with a violin plot, which shows the overall range (‘T’ ends), the distribution of common values (shading), and the median (cross-bar), much like the box plot.

We see that although many parts released in the last few decades are reused in 100s or even 1000s of sets, the median or typical part appears in only a handful of sets. With 100s to 1000s of sets released each year recently, the sets are reusing both old and new parts, but the vast majority of parts are not significantly reused.

Top Parts for Reuse Over Time

Above, we introduced the top 25 parts by set count, based on all time set count, but how has this varied over time? We can chart – for each of the top 25 parts – how many sets they appeared in each year. However, as the number of sets released each year has increased exponentially, we see a clearer pattern if we chart the proportion of sets released each year including the top 25 parts, as below.

This shows variation in proportional set representation for the top parts from about 10% to 50% over the last 40 years. However, there was a very noticeable drop around the year 2000 to a maximum of only 20% of sets including top reused parts. Interestingly, this corresponds to a documented period of financial difficulty for The LEGO Group, but further research would be required to demonstrate any relationship. Prior to 1980, the maximum proportional representation was even higher, indicating a major intention of sets was to provide reusable parts.

From 2000 onwards, the all time ranking becomes more apparent, as the lines generally spread to match the rankings. We can also see this resolution of ranks in a bump chart for the top 4 parts over the time period from just before 2000 to the present.

Lessons for Software

As in previous posts, here’s where I speculate on what this data-directed investigation of LEGO parts and sets over the years might mean for software development. This is only on the presumption that – as often cited informally in my experience – LEGO products are a good metaphor for software. I take this as a given for this series, and as an excuse to play with some LEGO data, but I don’t really test it.

Given the reuse of LEGO parts across sets and time, we might expect that for software:

  • Most reuse will likely come from a small number of components, and this may be far more extreme than the 80/20 heuristic
  • If this is the case, then teams should build for use before reuse, or design very simple foundational components (see the top 25) for widespread reuse
  • Building for use before reuse means optimising for fast development and retirement of customised components that won’t be reusable
  • Reuse may vary significantly over time depending on your product strategy
  • It’s possible to introduce new reusable components at any time, but their impact might not be very noticeable with a product strategy that drives many customised components

Next Time

I plan to look further into the relationship between parts and sets, and how this has evolved over time.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

LEGO and Software – Lifespans

This is the third post in a series exploring LEGO as a Metaphor for Software Reuse through data (part 1 & part 2).

In this post, we’ll look at reuse through the lens of LEGO® part lifespans. Not how long before the bricks wear out, are chewed by your dog, or squashed painfully underfoot in the dark, but for what period each part is included in sets for sale.

This is a minor diversion from looking further into the reduction in sharing and reuse themes from part 2, but lifespan is a further distinct concept related to reuse, worthy I think of its own post. All the analysis from this post, which uses the Rebrickable API, is available at

Ages in a Sample Set

To understand sets and parts in the Rebrickable API, I first raided our collection of LEGO build instructions for a suitable set to examine, and I came up with 60012-1, LEGO City Coast Guard 4×4.

Picture of LEGO build instructions for Coast Guard set
The sample set

In the process I discovered parts data contained year_from and year_to attributes, which would enable me to chart the ages of each part in the set when it was released, as a means of understanding reuse.
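As a sketch of that calculation, assuming a pandas DataFrame of the set's parts with Rebrickable's year_from attribute (the part numbers and years here are made up for illustration):

```python
import pandas as pd

# Hypothetical parts from set 60012-1 (released 2013); part numbers and
# year_from values are illustrative, not real Rebrickable data.
parts = pd.DataFrame({
    "part_num":  ["3039", "11477", "3020", "99206"],
    "year_from": [1958, 2012, 1966, 2012],
})

SET_RELEASE_YEAR = 2013

# Age of each part when the set was released.
parts["age_at_release"] = SET_RELEASE_YEAR - parts["year_from"]
print(parts[["part_num", "age_at_release"]])
```

Binning age_at_release then gives the histogram below.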

Histogram of ages of parts in set 60012-1, with 12 parts of <1 year age, gradually decreasing to 7 parts 50-55 years of age

In line with the exponential increase of new parts we’ve already seen, the most common age bracket is 0-5 years, but a number of parts in this set from 2013 were 50-55 years old when it was released! Let’s see some examples of new and old parts…

Image of a new flipper (0-1 yrs age) and sloping brick (55 years age)

The new flipper is pretty cool, but is it even worthy to be moulded from the same ABS plastic as Slope 45° 2 x 2? The sloping brick surely deserves a place in the LEGO Reuse Hall of Fame. It has depth and range – from computer screen in moon base, to nose cone of open wheel racer, to staid roof tile, to minifigure recliner in remote lair. Contemplating this part took me back to my childhood, all those marvellous myriad uses.

And yet I also recalled the slightly unsatisfactory stepped profile that resulted from trying to build a smooth inclined surface with a stack of these parts. As such, this part captures the essence and the paradox of reuse in LEGO products; a single part can do a lot, but it can’t do everything. Let’s look at lifespan of parts more broadly.

Lifespans Across All Parts

The distribution of lifespan across LEGO parts is very uneven.

Distribution of lifespans of LEGO parts - histogram

The vast majority of parts are in use for less than 1 year, and only a small fraction are used for more than 10 years. Note that I calculate lifespan = year_to + 1 - year_from, and that this uses strict parts data rather than also including minifigures.
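The lifespan calculation is a one-liner; here's a minimal sketch with hypothetical part records:

```python
import pandas as pd

# Hypothetical part records in Rebrickable's year_from / year_to form.
parts = pd.DataFrame({
    "part_num":  ["3039", "u9494", "92709"],
    "year_from": [1958, 2012, 2012],
    "year_to":   [2023, 2012, 2014],
})

# A part sold in only one year has lifespan 1, hence the +1.
parts["lifespan"] = parts["year_to"] + 1 - parts["year_from"]
print(parts)
```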

Distributions of lifespans of LEGO parts, pie chart with ranges

Exponential Decay

This distribution looks like exponential decay. To see it more clearly, it's back to the logarithmic domain, where we can fit approximations in two regimes: the first five years (>80% of parts) and the remaining life.

LEGO part lifespans, counts plotted on vertical log scale

The half-life of parts over the first 5 years is about 1 year: in each of the first five years, only about half the parts survive to the next year. However, the first year remains an outlier, as only 34% of parts make it to their second year and beyond. After 5 years, just under 1,000 parts survive of the 25,000 at 1 year, and from that point the half-life extends to about 7 years. So, while some parts like Slope 45° 2 x 2 live long lives, the vast majority are winnowed out much earlier.
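A line fit in the log domain makes the half-life easy to read off. Here's a minimal sketch with synthetic survivor counts (not the real data) that decay with a half-life of roughly one year:

```python
import numpy as np

# Synthetic survivor counts by lifespan year, decaying with a
# half-life of roughly one year (not the real data).
years = np.array([1, 2, 3, 4, 5])
counts = np.array([25000, 12000, 6000, 3000, 1500])

# Fit a straight line to log2(counts); a slope of -1 means the count
# halves each year, so half-life = -1 / slope.
slope, intercept = np.polyfit(years, np.log2(counts), 1)
half_life = -1 / slope
print(round(half_life, 2))  # ≈ 1 year for this synthetic decay
```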


We can also look at the count of parts released (year_from) and the count of parts retired (year_to + 1) each year.

LEGO parts released and retired each year - line chart

As expected, the count of parts released shows exponential growth, but the count of parts retired grows too, almost in synchrony, so the net change is small compared to the totals in and out. By summing the difference each year, we can chart the number of active parts over time.
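That summing might look like this in pandas, with a handful of made-up part records:

```python
import pandas as pd

# A handful of made-up part records with release and retirement years.
parts = pd.DataFrame({
    "year_from": [2000, 2000, 2001, 2002, 2002],
    "year_to":   [2000, 2002, 2001, 2003, 2002],
})

released = parts.groupby("year_from").size()
# A part last sold in year_to is counted as retired the following year.
retired = parts.groupby(parts["year_to"] + 1).size()

# Net change per year, summed cumulatively to give active parts by year.
active = released.sub(retired, fill_value=0).cumsum()
print(active)
```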

Active LEGO parts by year and change by year - line and column plot

Active parts are a small proportion of all parts released to date: about one seventh, or approximately 5,500 of 36,500. Comparing total changes to the active set size each year also shows a high and increasing rate of churn.

LEGO part churn by year - line chart

So even as venerable stalwarts such as Slope 45° 2 x 2 persist, in recent years about 80% of active LEGO parts have churned each year! Interestingly, the early 1980s to late 1990s was a period of much lower churn. Note also that churn percentages prior to 1970 were high and varied widely (not shown before 1965 for clarity), probably reflecting a much smaller base of parts and maybe artefacts in the older data.
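The churn charts imply a measure relating total turnover to the active set size; assuming that definition (my guess, not stated in the data source), with illustrative figures:

```python
# Churn as total turnover (releases plus retirements) relative to the
# active part count; the figures and the formula are my assumptions.
released, retired, active = 2400, 2000, 5500

churn = (released + retired) / active
print(f"{churn:.0%}")  # 80%
```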

Lifespans vs Year Release and Retired

We’ve got a lot of insight from just year_from and year_to. One last way to look at lifespans is how they have changed over time, such as lifespan vs year released or retired.

LEGO part lifespan scatter plots

Obvious in these charts is that we only have partial data on the lifespan of active parts (we don't know how long until they're retired), but as above, they are a small proportion. We can discern a little more by using a box plot.

LEGO part lifespan box plots

The plot shows, for each year, median lifespans (orange), the middle range (box), the typical range (whiskers) and lifespan outliers (smoky grey). We see here again that the 1980s and 1990s were a good period in relative terms for releasing long-lived parts that have only just been retired. However, with the huge volume of more short-lived parts being retired in recent years, we don't see their impact in the late 2010s on the retired plot, except as outliers. In general, the retired (left) plot, like the later years of the released (right) plot, shows lower lifespan distributions because the long-lived parts are overwhelmed by ever-increasing numbers of contemporaneous short-lived parts.
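For reference, the box-plot elements described above can be computed directly. A minimal sketch with made-up lifespans, using the usual Tukey convention for whiskers (which I assume the plot follows):

```python
import numpy as np

# Made-up lifespans (years) for parts retired in a single year.
lifespans = np.array([1, 1, 1, 2, 2, 3, 5, 8, 13, 40])

q1, median, q3 = np.percentile(lifespans, [25, 50, 75])
iqr = q3 - q1
# Tukey convention: whiskers reach at most 1.5 × IQR beyond the box;
# points beyond that (here, 40) are drawn as outliers.
upper_whisker = q3 + 1.5 * iqr
print(median, q3, upper_whisker)
```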

Lessons for Software Reuse

If LEGO products are to be a metaphor and baseline for reuse in software products, this analysis of part lifespans is consistent with the observations from part 1, while further highlighting:

  • Components that are heavily reused may be a minority of all components, and in an environment of frequent and increasing product releases, many components may have very short lifetimes, driven by acute needs.
  • There may be a “great filter” for reuse, such as the one year or five year lifespan for LEGO parts. This may also be interpreted as “use before reuse”, or that components must demonstrate exceptional performance in core functions, or support stable market demands, before wider reuse is viable.
  • Our impressions and expectations for reuse of software components may be anchored to particular time periods. We see that the 1980s and 1990s (when there were only ~10% of the LEGO parts released to 2020) were a period of much lower churn and the release of relatively more parts with longer lifespans. The same may be true for periods of software development in an organisation’s history.
  • Retirement of old components can be synchronised with introduction of new components, and in fact, this is probably essential to effectively manage complexity and to derive benefits of reuse without the burden of legacy.

Further Analysis

We’ll come back to the reduction in sharing and reuse theme, and find a lot more interesting ways to look at the Rebrickable data in future posts.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

LEGO and Software – Variety and Specialisation

Since my first post on LEGO as a Metaphor for Software Reuse, I have done some more homework on existing analyses of LEGO® products, to understand what I could myself reuse and what gaps I could fill with further data analysis.

I’ve found three fascinating analyses that I share below. However, I should note that these analyses weren’t performed considering LEGO products as a metaphor or benchmark for software reuse. So I’ve continued to ask myself: what lessons can we take away for better management of software delivery? For this post, the key takeaways are market and product drivers of variety, specialisation and complexity, rather than strategies for reuse as such. I’m hoping to share more insight on reuse in LEGO in future posts, in the context of these market and product drivers.

I also discovered the Rebrickable downloads and API, which I plan to use for any further analysis – I do hope I need to play with more data!

Reuse Concepts

I started all this thinking about software reuse, which is not an aim in itself, but a consideration and sometimes an outcome in efficiently satisfying software product objectives. In thinking about reuse and reviewing the existing analyses, I found it helpful to define a few related concepts:

  • Variety – the number of different forms or properties an entity under consideration might take. We might talk about variety of themes, sets, parts, and colours, etc.
  • Specialisation – of parts in particular, where parts serve only limited purposes.
  • Complexity – the combinations or interactions of entities, generally increasing with increasing variety and specialisation.
  • Sharing – of parts between sets in particular, where parts appear in multiple sets. We might infer specialisation from limited sharing.
  • Reuse – sharing, with further consideration of time, as some reuse scenarios may be identified when a part is introduced, some may emerge over time, and some opportunities for future reuse may not be realised.

Considering these concepts, the first two analyses focus mainly on understanding variety and specialisation, while the third dives deeper into sharing and reuse.

Increase in Variety and Specialisation

The Colorful Lego

Visualisation of colours in use in LEGO sets over time
Analysis of LEGO colours in use over time. Source: The Colorful Lego Project

Great visualisations and analysis in this report and public dashboard from Edra Stafaj, Hyerim Hwang and Yiren Wang, driven primarily by the evolving colours found in LEGO sets over time, and considering colour as a proxy for complexity. Some of the key findings:

  • The variety of colours has increased dramatically over time, with many recently introduced colours already discontinued.
  • The increase in variety of colours is connected with growth of new themes. Since 2010, there has been a marked increase in co-branded sets (“cooperative” theme, eg, Star Wars) and new in-house branded sets (“LEGO commercial” theme, eg, Ninjago) as a proportion of all sets.
  • Specialised pieces (as modelled by Minifig Heads – also noted as the differentiating part between themes) make up the bulk of new pieces, compared to new generic pieces (as modelled by Bricks & Plates).

Colour is an interesting dimension to consider, as it is arguably an aesthetic, rather than mechanical, consideration for reuse. However, as noted in the diversification of themes, creating and satisfying a wider array of customer segments is connected to the increasing variety of colour.

So I see variety and complexity increasing, and more specialisation over time. The discontinuation of colours suggests reuse may be reducing over time, even while generic bricks & plates persist.

67 Years of Lego Sets

Visualisation of the LEGO, in the LEGO, for the LEGO people. Source: 67 Years of Lego Sets

An engaging summary from Joel Carron of the evolution of LEGO sets over the years, including Python notebook code, and complete with a final visualisation made of LEGO bricks! Some highlights:

  • The number of parts in a set has in general increased over time.
  • The smaller sets have remained a similar size over time, but the bigger sets keep getting bigger.
  • As above, colours are diversifying, with minor colours accounting for more pieces, and themes developing distinct colour palettes.
  • Parts and sets can be mapped in a graph or network showing the degree to which parts are shared between sets in different themes. This shows some themes share a lot of parts with other themes, while some themes have a greater proportion of unique parts. Generally, smaller themes (with fewer total parts) share more than larger themes (with more total parts).
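That part-sharing network can be sketched in miniature. Here's a toy example with made-up themes and parts, where the weight of a theme-to-theme edge is the number of parts the two themes share:

```python
from itertools import combinations

# Made-up mapping from theme to the parts used across its sets.
theme_parts = {
    "City":   {"brick_2x4", "plate_1x2", "slope_45"},
    "Space":  {"brick_2x4", "slope_45", "cockpit"},
    "Castle": {"brick_2x4", "plate_1x2", "portcullis"},
}

# Edge weight between two themes = number of parts they share.
sharing = {
    (a, b): len(theme_parts[a] & theme_parts[b])
    for a, b in combinations(theme_parts, 2)
}
print(sharing)
```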

So here we add to variety and specialisation with learning about sharing too, but without the chronological view that would help us understand more about reuse – were sets with high degrees of part sharing developed concurrently or sequentially?

Reduction in Sharing and Reuse

LEGO products have become more complex

A comprehensive paper, with dataset and R scripts, analysing increasing complexity in LEGO products, with a range of other interesting-looking references to follow up on, though it acknowledges that scientific investigations of the development of LEGO products remain scarce.

This needs a thorough review in its own post, with further analysis and commentary on the implications for software reuse and management. That will be the third post of this trilogy in N parts.

Lessons for Software Reuse

If we are considering LEGO products as a metaphor and benchmark for software reuse, we should consider the following.

Varied market needs drive variety and specialisation of products, which in turn can be expected to drive variety and specialisation of software components. Reuse of components here may be counter-productive from a product-market fit perspective (alone, without further technical considerations). However, endless customisation is also problematic and a well-designed product portfolio will allow efficient service of the market.

Premium products may also be more complex, with more specialised components. Simpler products, with lesser performance requirements, may share more components. The introduction of more premium products over time may be a major driver of increased variety and specialisation.

These market and product drivers provide context for reuse of software components.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

LEGO as a Metaphor for Software Reuse – Does the Data Stack Up?

LEGO® products are often cited as a metaphor for software reuse; individual parts being composable in myriad ways.

I think this is a bit simplistic and may miss the point for software, but let’s assume we should aim to make software in components that are as reusable as LEGO parts. With that assumption, what level of reuse do we actually observe in LEGO sets over time?

I’d love to know if anyone else has done this analysis publicly, but a quick search didn’t reveal much. So, here’s a first pass from me. I discuss further analysis below.

The data comes from the awesome catalogue at Bricklink. I start with the basic catalogue view by year.

More New Parts Every Year

My first observation is that there are an ever-increasing number of new parts released every year.

Chart of Lego parts over time, showing 10x increase in parts from late 1980s to early 2020s

This trend in new parts has been exponential until just the last few years. The result is that there are currently 10 times the number of parts as when I was a kid – that’s why the chart is on a log scale! These new parts are reusable too.

Therefore, the existence of reusable parts doesn’t preclude the creation of new reusable parts. Nor should we expect the creation of new parts to reduce over time, even if existing parts are reusable. If LEGO products are our benchmark for software reuse, we should expect to be introducing new components all the time.

More New Parts in New Products

My second observation is that the new parts (by count) in new products have generally increased year by year.

The increase has been most pronounced from about 2000 to 2017, rising from about two to over five new parts per new set on average. The increase occurs because – even though the number of new sets is also increasing over time (below) – new parts are increasing faster (as above).

Chart showing new Lego sets over time

Therefore, the existence of an ever-increasing number of reusable parts doesn’t preclude an increase in the count of new parts in new sets. If LEGO products are our benchmark for software reuse, we should expect to continue introducing new components in new products, possibly at an increasing rate.

This is only part of the picture, though. I have been careful to specify count of new parts in new sets only, as that data is easy to come by (from the basic Bricklink catalogue view by year). The average count of new parts in new sets is simply the number of new parts each year divided by the number of new sets each year.
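That division is straightforward; here's a sketch with illustrative (not actual) catalogue counts, chosen to roughly match the two-to-five trend described above:

```python
# Illustrative (not actual) catalogue counts of new parts and new sets.
new_parts_by_year = {2000: 400, 2017: 2500}
new_sets_by_year = {2000: 200, 2017: 480}

for year in new_parts_by_year:
    avg = new_parts_by_year[year] / new_sets_by_year[year]
    print(year, round(avg, 1))  # 2000 → 2.0, 2017 → 5.2
```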

We also need to understand how existing parts are reused in new sets. For instance, is the total part count increasing in new products, and are new parts therefore a smaller component of new products? Which existing parts see the most reuse? We might also want to consider the varied nature and function of new and existing parts in new products, and the interaction between products. These deeper analyses could be the focus of future posts.

Provisional Conclusions

As a LEGO product aficionado, I’ve observed an explosion of new parts and sets since I was a kid, but I was surprised by the size of the increase revealed by the data.

There is more work to do on this data, and LEGO products may be a flawed metaphor for software engineering. We may make stronger claims about software reuse with other lines of reasoning and evidence. However, if LEGO products are to be used as a benchmark, the data dispels some misconceptions, and suggests that for functional software product development:

  • Reusable components don’t eliminate the need for new components
  • The introduction of new components may increase over time, even in the presence of reusable components
  • New products may contain more new components than existing products, even in the presence of reusable components

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

The Lockdown Wheelie Project, Part 3

In Melbourne’s COVID-19 lockdown, I’ve wheelied over 17km. Not all at once, though.

Over three months, I've spent 90 minutes with my front wheel raised. I'd like to keep it up, but as lockdown has gradually relaxed and routines have changed, I've landed the wheelie project, for now.

Read the full article over on Medium at The Lockdown Wheelie Project, Part 3.