Synthesising Semantle Solvers

Picking up threads from previous posts on solving Semantle word puzzles with machine learning, it’s time to compare how different versions of solvers might play along with people, and play against the game online.

Animation of a Semantle game from initial guess to completion

Substitute semantics

The solvers are designed around an encapsulated similarity model that we can easily change to give them a different view of words and how they relate. To date, we’ve used the same model as live Semantle, which is word2vec. But as this might be considered cheating, we can now also use a model based on the Universal Sentence Encoder (USE), to explore how the solvers perform with separated semantics.

Solver spec

The key elements of the solver ecosystem are now:

  • SimilarityModel – choice of word2vec or USE as above,
  • Solver methods (common to both gradient and cohort variants):
    • make_guess() – return a guess that is based on the solver’s current state, but don’t change the solver’s state,
    • merge_guess(guess, score) – update the solver’s state with information about a guess and a score,
  • Scoring of guesses by either the simulator or a Semantle game, where a game could also include guesses from other players.
Diagram illustrating elements of the solver ecosystem. Similarity model initialises solver state used to make guesses, which are scored by game and update solver state with scores. Other players can make guesses which also get scored
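
The solver interface above can be sketched in Python. The class names and the trivial baseline solver here are illustrative, not the actual implementation:

```python
# Sketch of the solver interface: make_guess() reads state, merge_guess() updates it.
from abc import ABC, abstractmethod
import random


class Solver(ABC):
    def __init__(self, vocabulary):
        self.vocabulary = list(vocabulary)
        self.guessed = {}  # guess -> score: the solver's evolving state

    @abstractmethod
    def make_guess(self) -> str:
        """Return a guess based on current state, without changing state."""

    def merge_guess(self, guess: str, score: float) -> None:
        """Update solver state with information about a guess and its score."""
        self.guessed[guess] = score


class RandomSolver(Solver):
    """Illustrative baseline: guess any word not yet tried."""

    def make_guess(self) -> str:
        remaining = [w for w in self.vocabulary if w not in self.guessed]
        return random.choice(remaining)
```

Because scoring is decoupled, the same loop works against the simulator, a live game, or alongside other players: score whatever guess was made, then `merge_guess` the result.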

It’s a simplified reinforcement learning setup. Different combinations of these elements allow us to explore different scenarios.

Solver suggestions

I’m not advocating for using these in a game of course, but it’s interesting to consider how the solvers would play along with people in hypothetical scenarios :). The base scenario, friends, is the actual history of a game played with people, completed in 109 guesses.

Word2Vec similarity

The first scenario shows how to complete a puzzle from an initial sequence of guesses from friends. When primed with the first 10 friend guesses, both solvers in this configuration generally better the friends result with ease.

Line chart comparing three irregular but increasing lines that represent the sequence of scores for guesses in a semantle game. The three lines are labelled friends, cohort, and gradient. Cohort finishes with fewest guesses, then gradient, then friends, with clear separation.

The second scenario makes a suggestion for the next guess only, based on the game history up to that point. Both solvers tend to permit a finish in slightly fewer guesses. The conclusion is that these solvers are good for hints, especially if they are followed!

Line chart comparing three irregular but increasing lines that represent the sequence of scores for guesses in a semantle game. The three lines are labelled friends, cohort, and gradient. Cohort finishes with fewest guesses, then gradient, then friends, with marginal differences.

Maybe these solvers using word2vec similarity do have an unfair advantage though – how do they perform with a different similarity model? Using USE instead, I expected the cohort solver to be more robust than the gradient solver…

USE similarity

… but it seems that the gradient descent solver is more robust to a disparate similarity model, as one example of the completion scenario shows.

Line chart comparing three irregular but increasing lines that represent the sequence of scores for guesses in a semantle game. The three lines are labelled friends, cohort, and gradient. Gradient finishes with fewest guesses, then friends, then cohort, and the separation is clear.

The gradient solver also generally offers some benefit when making a suggestion for just the next guess, but the cohort solver’s contribution is marginal at best.

Line chart comparing three irregular but increasing lines that represent the sequence of scores for guesses in a semantle game. The three lines are labelled friends, cohort, and gradient. Gradient finishes with fewest guesses, then friends, and cohort doesn't finish, but the differences are very minor.

These are of course only single instances of each scenario, and there is significant variation between runs. It’s been interesting to see this play out interactively, but a more comprehensive performance characterisation – with plenty of scope for understanding the influence of hyperparameters – may be in order.

Solver solo

The solvers can also play part or whole games solo (or with other players) in a live environment, using Selenium WebDriver to submit guesses and collect scores. The leading animation above is gradient-USE, and below is a faster game using cohort-word2vec.

Animation of a Semantle game from initial guess to completion

Seeing in space

Through all this I’ve considered various 3D visualisations of search through a semantic space with hundreds of dimensions. I’ve settled on the version below, illustrating a search for target “habitat” from first guess “megawatt”.

An animated rotating 3D view of a semi-regular collection of points joined by lines into a sequence. Some points are labelled with words. Represents high-dimensional semantic search in 3D.

This visualisation format uses cylindrical coordinates, broken out in the figure below. The cylinder (x) axis is the projection of each guess onto the line that connects the first guess to the target word. The cylindrical radius is the distance of each guess in embedding space from its projection on this line (cosine similarity seemed smoother than Euclidean distance here). The angle of rotation in cylindrical coordinates (theta) is the cumulative angle between the directions connecting guess n-1 to n and n to n+1. The result is an irregular helix that expands then contracts, all while twisting around the axis from first to last guess.

Three line charts on a row, with common x-axis of guess number, showing semi-regular lines, representing the cylindrical coordinates of the 3D visualisation. The left chart is x-axis, increasing from 0 to 1, middle is radius, from 0 to ~1 and back to 0, and right is angle theta, increasing from 0 to ~11 radians.
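
The coordinate construction can be sketched as below, treating embeddings as plain lists of floats. Note this sketch uses Euclidean distance for the radius, where the actual visualisation found cosine similarity smoother:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(a): return math.sqrt(dot(a, a))

def cylindrical_path(guesses, first, target):
    """Map each guess embedding to (x, radius, theta) coordinates."""
    axis = sub(target, first)
    axis_len = norm(axis)
    unit = [c / axis_len for c in axis]
    coords, theta, prev_dir = [], 0.0, None
    for i, g in enumerate(guesses):
        rel = sub(g, first)
        proj = dot(rel, unit)                  # length of projection onto axis
        x = proj / axis_len                    # 0 at first guess, 1 at target
        foot = [proj * c for c in unit]
        r = norm(sub(rel, foot))               # distance from the axis
        if i > 0:
            step = sub(g, guesses[i - 1])
            if prev_dir is not None and norm(step) > 0 and norm(prev_dir) > 0:
                cosang = dot(prev_dir, step) / (norm(prev_dir) * norm(step))
                theta += math.acos(max(-1.0, min(1.0, cosang)))  # cumulative turn
            prev_dir = step
        coords.append((x, r, theta))
    return coords
```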

Second Semantle Solver

In the post Sketching Semantle Solvers, I introduced two methods for solving Semantle word puzzles, but I only wrote up one. The second solver here is based on the idea that the target word should appear in the intersection between the cohorts of possible targets generated by each guess.

Finding the semantle target through overlapping cohorts. Shows two intersecting rings of candidate words based on cosine similarity.

To recap, the first post:

  • introduced the sibling strategies side-by-side,
  • discussed designing for sympathetic sequences, so the solver can play along with humans, with somewhat explainable guesses, and
  • shared the source code and visualisations for the gradient descent solver.

Solution source

This post shares the source for the intersecting cohorts solver, including notebook, similarity model and solver class.

The solver is tested against the simple simulator for semantle scores from last time. Note that the word2vec model data for the simulator (and agent) is available at this word2vec download location.

Stylised visualisation of the search for a target word with intersecting cohorts. Shows distributions of belief strength at each guess and strength and rank of target word

The solver has the following major features:

  1. A vocabulary, containing all the words that can be guessed,
  2. A semantic model, from which the agent can calculate the similarity of word pairs,
  3. The ability to generate cohorts of words from the vocabulary that are similar (in Semantle score) to a provided word (a guess), and
  4. An evolving strength of belief that each word in the vocabulary is the target.

In each step towards guessing the target, the solver does the following:

  1. Choose a word for the guess. The current choice is the word with the strongest likelihood of being the target, but it could equally be any other word from the solver’s vocabulary (which might help triangulate better), or it could be provided by a human player with their own suspicions.
  2. Score the guess. The Semantle simulator scores the guess.
  3. Generate a cohort. The guess and the score are used to generate a new cohort of words that would share the same score with the guess.
  4. Merge the cohort into the agent’s belief model. The score is added to the current belief strength for each word in the cohort, providing a proxy for likelihood for each word. The guess is also masked from further consideration.
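
The guessing loop above can be sketched as follows. Here `generate_cohort`, the similarity-band width, and the similarity function are simplified stand-ins for the solver’s actual implementation:

```python
def generate_cohort(vocabulary, similarity, guess, score, band=0.05):
    """Words whose similarity to the guess is close to the observed score."""
    return [w for w in vocabulary
            if w != guess and abs(similarity(w, guess) - score) <= band]

def solve(vocabulary, similarity, target, max_guesses=100):
    belief = {w: 0.0 for w in vocabulary}      # strength of belief per word
    guess = max(belief, key=belief.get)        # arbitrary first guess (all tied)
    for n in range(1, max_guesses + 1):
        score = similarity(guess, target)      # 2. score the guess
        if guess == target:
            return guess, n
        for w in generate_cohort(vocabulary, similarity, guess, score):
            belief[w] += score                 # 4. merge cohort into belief
        del belief[guess]                      # mask guess from consideration
        guess = max(belief, key=belief.get)    # 1. strongest belief goes next
    return guess, max_guesses
```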

Show of strength

The chart below shows how the belief strength (estimated likelihood) of the target word gradually approaches the maximum belief strength of any word, as the target (which remains unknown until the end) appears in more and more cohorts.

Intersecting cohorts solver. Line chart showing the belief strength of the target word at each guess in relation to the maximum belief strength of remaining words.

We can also visualise the belief strength across the whole vocabulary at each guess, and the path the target word takes in relation to these distributions, in terms of its absolute score and its rank relative to other words.

Chart showing the cohort solver belief strength across the whole vocabulary at each guess, and the path the target word takes in relation to these distributions, in terms of its absolute score and its rank relative to other words

Superior solution?

The cohort solver can be (de)tuned to almost any level of performance by adjusting the parameters precision and recall, which determine the tightness of the similarity band and completeness of results from the generated cohorts. The gradient descent solver has potential for tuning parameters, but I didn’t explore this much. To compare the two, we’d therefore need to consider configurations of each solver. For now, I’m pleased that the two distinct sketches solve to my satisfaction!

Sketching Semantle Solvers

Semantle is an online puzzle game in which you make a series of guesses to discover a secret word. Each guess is scored by how “near” it is to the secret target, providing guidance for subsequent guesses, but that’s all the help you get. Fewer guesses is a better result, but hard to achieve, as the majority of words are not “near” and there are many different ways to get nearer to the target.

You could spend many enjoyable hours trying to solve a puzzle like this, or you could devote that time to puzzling over how a machine might solve it for you. Guess what I did…

Scoring system

Awareness of how the nearness score is calculated can give some ideas for potential solutions. The score is based on a machine learning model of language; how frequently words appear in similar contexts. These models convert each word into a unique point in space (also known as an embedding) in such a way that similar words are literally near to one another in this space, and therefore the similarity score is higher for points that are nearer to one another.

Diagram of a basic semantic embedding example. The words "dog" and "cat" are shown close together, while the word "antidisestablishmentariansim" is shown distant from both.

We can reproduce this similarity score ourselves with a list of English words and a trained machine learning model, even though these models use 100s of dimensions rather than two, as above. Semantle uses the word2vec model but there are also alternatives like USE. Comparing these results to the scores from a Semantle session could guide a machine’s guesses. We might consider this roughly equivalent to our own mental model of the nearness of any pair of words, which we could estimate if we were asked.
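
A minimal sketch of the nearness calculation, with invented 2D embeddings standing in for the hundreds of dimensions of a real model like word2vec:

```python
import math

def cosine_similarity(u, v):
    """Nearness score: higher for embedding points closer in direction."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy 2D embeddings, echoing the diagram above (values invented)
embeddings = {
    "dog": [0.9, 0.8],
    "cat": [0.8, 0.9],
    "antidisestablishmentarianism": [-0.7, 0.1],
}

def nearness(a, b):
    return cosine_similarity(embeddings[a], embeddings[b])
```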

Sibling strategies

Two general solution strategies occurred to me to find the target word guided by similarity scores: intersecting cohorts and gradient descent.

Intersecting cohorts: the score for each guess defines a group of candidate words that could be the target (because they have the same similarity with the guessed word as the score calculated from the target). By making different guesses, we get different target cohorts with some common words. These cohort intersections allow us to narrow in on the words most likely to be the target, and eventually guess it correctly.

Diagram showing two similarity cohorts. These form halos around the axis of guess direction, based on dot product similarity, and intersect in the direction of the target word.

Gradient descent: each guess gives a score, and we look at the difference between scores and where the guesses are located relative to each other to try to identify the “semantic direction” in which the score is improving most quickly. We make our next guess in that direction. This doesn’t always get us closer but eventually leads us to the target.

Diagram showing a number of nodes and gradient directions between nodes. One is highlighted showing the maximum gradient and direction of the next guess, which is a node close to the extension of the direction vector.
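
One gradient step might be sketched as below, assuming each guess has a known embedding and score. The fixed extrapolation step stands in for the solver’s random distance, and all names are illustrative:

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def next_guess(history, vocabulary, embed, step=1.0):
    """Propose the unguessed word nearest to a point extrapolated along the
    steepest known score gradient. history: list of (word, score) pairs."""
    guessed = {w for w, _ in history}
    best_grad, best_pair = float("-inf"), None
    for wa, sa in history:                     # find steepest pairwise gradient
        for wb, sb in history:
            d = dist(embed[wa], embed[wb])
            if wa != wb and d > 0:
                grad = (sb - sa) / d
                if grad > best_grad:
                    best_grad, best_pair = grad, (wa, wb)
    a, b = best_pair
    d = dist(embed[a], embed[b])
    point = [pb + step * (pb - pa) / d         # extrapolate beyond the better word
             for pa, pb in zip(embed[a], embed[b])]
    candidates = [w for w in vocabulary if w not in guessed]
    return min(candidates, key=lambda w: dist(embed[w], point))
```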

I think human players tend more towards gradient descent, especially when close to the target, but also use some form of intersecting cohorts to hypothesise potential directions when uncertain. For a machine, gradient descent requires locations in embedding space to be known, while intersecting cohorts only cares about similarity of pairs.

Sympathetic sequences

Semantle is open source and one could create a superhuman solver that takes unfair advantage of knowledge about the scoring system. For instance, 4 significant figures of similarity (as per semantle scores) allows for pretty tight cohorts. Additionally, perfectly recalling large cohorts of 10k similar words at each guess seems unrealistic for people.

I was aiming for something that produced results in roughly the same range as a human and that could also play alongside a human should they want a helpful suggestion. Based on limited experience, the human range seems to be – from exceptional to exasperated – about 20 to 200+ guesses.

This led to some design intents:

  • that the solving agent capabilities were clearly separated from the Semantle scoring system (I would like to use a different semantic model for the agent in future)
  • that proposing the next guess and incorporating the results from a guess would be decoupled to allow the agent to play with others
  • that the agent capabilities could be [de]tuned if required to adjust performance relative to humans or make its behaviour more interpretable

Solution source

This post includes the source for the gradient descent solver and a simple simulator for semantle scores. Note that the word2vec model data for the simulator (and agent) is available at this word2vec download location.

I have also made a few iterations on the intersecting cohorts approach, which also works. The current iteration uses a Bayesian model for likelihood that each word is the target, based on the cohorts it has been observed in, and simulated annealing to balance exploration of less likely words and exploitation of more likely words.

Seeking the secret summit

The gradient descent (or ascent to a summit) approach works pretty well by just going to the most similar word and moving a random distance in the direction of the steepest known gradient. The word nearest to the resultant point that hasn’t previously been guessed is proposed as the next guess. You can see a gradual but irregular improvement in similarity as it searches.

Line chart of similarity score to target for each word in a sequence of guesses. The line moves upwards gradually but irregularly for most of the chart and shoots up at the end. The 46 guesses progress from thaw to gather.

I addressed the high dimensionality of the embedding space by discretising it with a network (or graph) of “nodes” representing words and their similarity to the target, and “spokes” representing the direction between nodes and the gradient of similarity in that direction. This network is initialised with a handful of random guesses before the gradient descent begins in earnest. Below I’ve visualised the search in this space with respect to the basis – the top node and spoke with best gradient – of each guess.

Chart showing progression of basis of guessing the target word. The horizontal axis is current best guess. The vertical axis is current reference word. A line progresses in fewer hops horizontally and more hops vertically from bottom left to top right.

The best results are about 40 guesses and typically under 200, though may blow out on occasion. I haven’t really tried to optimise the search; as above, the first simple idea worked pretty well. To [de]tune or test the robustness of this solution, I’ve considered adding more noise to the search for the nearest word from the extrapolated point, or compromising the recall of nearby words, or substituting a different semantic model. These things might come in future. At this stage I just wanted to share a sketch of the solver rather than a settled solution.

Postscript: after publishing, I played with the search visualisation in an attempt to tell a more intuitive story (from literally to nobody).

Line chart showing the similarity of each of a sequence of 44 guesses to a semantle target. The line is quite irregular but trends up from first guess “literally” at bottom left to target “nobody” at top right. The chart is annotated with best guess at each stage and reference words for future guesses.

Stop the sibilants, s’il vous plaît

Ça suffit! I’m semantically sated. After that sublime string of subheadings, the seed of a supplementary Wordle spin-off sprouts: Alliteratle anyone?

Visualising System Dynamics Models

Simulation is a powerful tool for understanding and solving complex problems, and visualisation is key to doing simulation well. Visualisation helps to communicate function, understand results, validate and tune implementation and diagnose errors, at every stage of development and operation of a simulator. System dynamics can be used to implement a certain class of simulators, and helpfully provides a visual language for defining models. While many commercial tools support visualisation of models, I haven’t found as much support for visualisation as I expected in open source system dynamics tools.

If I’m missing something, please let me know! But the upshot is that I wrote a basic visualisation module for BPTK_Py models, which I’ve found quite useful. This isn’t a visual design environment, but it supports visualisation of models defined in code.

The model visualised above is a simple river system, where seasonal stream inflow feeds a pond (the arrow from inflow to pond). Water in the pond is lost to evaporation, and the rate at which this happens depends on how much water there is in the pond (hence arrows in both directions between evaporation and pond). If the water in the pond reaches a certain level it is drained by outflow (hence pond level depends on outflow, and outflow depends on pond level and when overflow occurs).
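
The pond dynamics read roughly as follows in plain Python (BPTK_Py expresses the same model declaratively as stocks and flows). All rates and thresholds here are invented for illustration:

```python
import math

def simulate_pond(steps=120, dt=1.0):
    """Stock (pond) fed by a seasonal inflow, drained by level-dependent
    evaporation, and by outflow only when the pond overflows."""
    pond, history = 0.0, []
    for t in range(steps):
        inflow = max(0.0, 10.0 * math.sin(2 * math.pi * t / 52))  # seasonal stream
        evaporation = 0.05 * pond              # rate depends on pond level
        outflow = max(0.0, pond - 100.0)       # drains only above capacity
        pond += dt * (inflow - evaporation - outflow)
        history.append(pond)
    return history
```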

I targeted BPTK_Py as the simulation framework because I liked its Python DSL for model definition. For visualisation, the model is represented as a graph, with nodes for each class of system dynamics object defined: stocks, flows, constants and converters. Where these objects are related by equations, edges are added to the graph to show the dependencies.

Flows are typically drawn connected to sources or sinks, but I decided to leave that construct implicit. The direction of dependency, rather than the (nominal) direction of flow is shown between stocks and flows. To see the detail of dependencies, the equations can be overlaid on each node. Networkx is used to model and render the graph.

The code could benefit from: further testing, additional support for all the equations types in BPTK_Py.sd_functions, and better layout support. But maybe it helps fill a gap that would otherwise exist.

This final example also shows the visual representation of converters, and you can compare this generated visualisation to the visual design in the BPTK introductory tutorial.

Slackometer Hello World

Project Slackpose gives me one more excuse for hyperlocal exercise and number crunching in lockdown. Last time, I briefly touched on balance analysis. This time, I look at tracking slackline distance walked with my newly minted slackometer.

Inferring 3D Position

I’m working only with 2D pose data (a set of pixel locations for body joints) from my single camera angle, but I can infer something about my 3D location – distance from the camera (d) – using the pinhole camera model and further data such as:

  1. Camera lens & sensor geometry, plus known real world distance between pose keypoints (geometric approach)
  2. Consistent pose data (eg same person, similar pose) acquired at known distances from the camera (regression approach)

In this first iteration shown in the notebook, I boil all the pose data down into a single feature: the vertical pixel distance (v) between the highest tracked point (eg, an eye) and the lowest tracked point (eg, a toe). I base the calculation of distance d on this measure. This measure may be shorter than my full height by crown-to-eye height and a typically lower pose when balancing on a slackline, or an elevated arm joint might make it taller.

Body pose in pixels with vertical extent indicated
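
Computing v from a frame of pose data is straightforward. This sketch assumes keypoints as (x, y, confidence) triples in pixel coordinates:

```python
def vertical_extent(keypoints, min_confidence=0.1):
    """v: pixel distance between the highest and lowest confidently-tracked
    points. keypoints: list of (x, y, confidence) triples."""
    ys = [y for x, y, c in keypoints if c >= min_confidence]
    return max(ys) - min(ys)
```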

Geometric Approach

The geometric approach uses a similar triangles formula where one triangle has sides lens-sensor distance (known) and object pixel height (v) scaled to sensor height (known), the other has lens-object distance (d) and object height (roughly known from my height, as above). The equation has the form d = F / v, where F is determined by those known or estimated factors. These geometry factors will be specific to each physical camera device and slackline walker combination.
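
A sketch of the similar-triangles calculation; the camera parameters and object height below are illustrative defaults, not the actual values used:

```python
def distance_from_pixels(v_px, focal_mm=4.25, sensor_h_mm=4.8,
                         image_h_px=1920, object_h_m=1.7):
    """Pinhole model: pixel height v maps to distance d = F / v."""
    f_px = focal_mm / sensor_h_mm * image_h_px   # focal length in pixels
    F = f_px * object_h_m                        # metre-pixels, fixed per setup
    return F / v_px
```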

Regression Approach

The regression approach uses pose data collected at known distances from the slackline anchor – 2m, 4m, 6m, and 8m – as shown in the image below. I marked the calibration distances and walked to each point, balancing there for a few seconds, then turned around and walked back through each point to collect data from both back and front views. This approach is consistent across cameras and walkers, and works with varied sets of calibration distances.

Person standing on a slackline with known distances marked

Plotting the value of v against the reciprocal of distance (1 / d), we see a fairly linear relationship. Fitting a line (with zero intercept) to the data gives a value for 1 / F, and the resulting F is very close to that from the geometric approach – a neat result, or a convenient choice of parameters!

charts showing correlation of vertical extent and reciprocal of distance from camera
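
The zero-intercept fit can be sketched with a least-squares slope through the origin. The calibration pairs below are invented to match the shape of the data, not the measured figures:

```python
def fit_F(samples):
    """Fit 1/d = v * (1/F) through the origin; slope = sum(xy) / sum(xx).
    samples: (distance d in metres, vertical extent v in pixels) pairs."""
    xs = [v for d, v in samples]
    ys = [1.0 / d for d, v in samples]
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return 1.0 / slope   # F, so that d = F / v

calibration = [(2, 1445), (4, 722), (6, 481), (8, 362)]  # illustrative values
F = fit_F(calibration)
```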

So we have two methods for calculating distance from pose data, which produce very similar results.

Does It Work?

The figures you see in the video above match up pretty well with the reality. The video shows approximately 10m accumulated distance, while in reality I started about 2m from the camera, walked to just under 8m from the camera, then returned almost to my starting point (say 11.5m max). The discrepancy is most likely explained by an under-estimate of peak distance (at ~7m) due to decrease in precision of pixel measures for distant objects, and noise/pose-dependence in the vertical extent measure.

So this first iteration of the slackometer would be useful for approximating distance walked, and could possibly be improved with higher resolution video and by tracking more body segments, which may also reduce the dependence on smoothing. It would also be useful for comparing distances or speeds. The millimetre precision, however, is misleading; I just chose the unit so it looked better on the odometer display! (I spent some time tuning this for those cascading transitions…)

A mechanical odometer showing wheels cascading over the numbers 1998, 1999, 2000, 2001

There are certainly other ways you could track distance, from pacing out the line and keeping count of laps, to using image segmentation rather than pose estimation to calculate v, to alternative sensor setups and models, but it was fun to do it this way and it helped pass a few more days, and nights, in lockdown.

Light trace in a backyard; a long exposure photo at night that looks like daytime
slacklining at night under a full moon

Project Slackpose

Another lockdown, another project for body and mind. Slackpose allows me to track my slackline walking and review my technique. Spending 5 minutes on the slackline between meetings is a great way to get away from my desk!

I had considered pose estimation for wheelies last year, but decided slackline walking was an easier start, and something the whole family could enjoy.


I mount my phone on a tripod at one end of the slackline and start recording a video. This gives a good view of side-to-side balance, and is most pixel-efficient in vertical orientation.

Alternatively, duct tape the phone to something handy, or even hand-hold or try other angles. The 2D pose estimation (location of body joints in the video image) will be somewhat invariant to the shooting location, but looking down the slackline (or shooting from another known point) may help reconstruct more 3D pose data using only a single camera view. Luckily, it’s also invariant to dogs wandering into frame!

I use OpenPose from the CMU Perceptual Computing Lab to capture pose data from the videos. See below for details of the keypoint pose data returned and some notes on setting up and running OpenPose.

I then analyse the keypoint pose data in a Jupyter notebook that you can run on Colab. This allows varied post-processing analyses such as the balance analysis below.

Keypoint Pose Data

OpenPose processes the video to return data on 25 body keypoints for each frame, representing the position of head, shoulders, knees, and toes, plus eyes, ears, and nose and other major joints (but mouth only if you explicitly request facial features).

These body keypoints are defined as (x, y) pixel locations in the video image, for one frame of video. We can trace the keypoints over multiple frames to understand the motion of parts of the body.

Keypoints also include a confidence measure [0, 1], which is pretty good for the majority of keypoints from the video above.
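
A sketch of parsing one frame of OpenPose output: `pose_keypoints_2d` is the flat [x, y, confidence, …] array OpenPose writes per detected person, and the confidence threshold here is an illustrative choice:

```python
import json

def load_keypoints(json_text, min_confidence=0.1):
    """Parse one OpenPose frame into (x, y, confidence) triples,
    dropping low-confidence detections such as the (0, 0) defaults."""
    frame = json.loads(json_text)
    if not frame["people"]:
        return []
    flat = frame["people"][0]["pose_keypoints_2d"]  # x, y, c repeated per keypoint
    points = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    return [(x, y, c) for x, y, c in points if c >= min_confidence]
```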

Balance Analysis

I wanted to look at balance first, using an estimate of my body’s centre of mass. I calculated this from the proportional mass of body segments (sourced here) with estimates of the location of the centre of mass for each segment relative to the pose keypoints (I just made these up, and you can see what I made up in the notebook).
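
The weighted-average calculation can be sketched as below. The segment definitions passed in (masses, keypoint pairs, centre offsets) are placeholders, not the values used in the notebook:

```python
def centre_of_mass(keypoints, segments):
    """keypoints: name -> (x, y) pixel location.
    segments: (mass_fraction, (a, b), t) triples, where the segment's own
    centre of mass lies fraction t of the way from keypoint a to b."""
    total, cx, cy = 0.0, 0.0, 0.0
    for mass, (a, b), t in segments:
        if a not in keypoints or b not in keypoints:
            continue  # skip segments with missing (low-confidence) keypoints
        (ax, ay), (bx, by) = keypoints[a], keypoints[b]
        sx, sy = ax + t * (bx - ax), ay + t * (by - ay)
        total += mass
        cx += mass * sx
        cy += mass * sy
    return (cx / total, cy / total)
```

Renormalising by the total included mass is what lets a low-confidence nose simply drop out of the estimate, as described below.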

This looks pretty good from a quick eyeball, although it’s apparent that it is sensitive to the quality of pose estimation of relatively massive body segments, such as my noggin, estimated at 8.26% of my body mass. When walking away, OpenPose returns very low confidence and a default position of (0, 0) for my nose for many frames, so I exclude it from the centre of mass calculation in those instances. You can see the effect of excluding my head in the video below.

Not much more to report at this point; I’ll have a better look at this soon, now that I’m up and walking with analysis.

OpenPose Notes


I ran OpenPose on my Mac, following the setup guide, and also referred to two tutorials. These instructions are collectively pretty good, but note:

  • 3rdparty/osx/ doesn’t exist, you will instead find it at scripts/osx/
  • I had to manually pip install numpy and opencv for OpenPose with pip install 'numpy<1.17' and pip install 'opencv-python<4.3', but this is probably due to my neglected Python2 setup.
  • Homebrew cask installation syntax has changed from the docs; now invoked as brew install --cask cmake


Shooting iPhone video, I need to convert format for input to OpenPose. Use ffmpeg as below (replace slackline.mov with the name of your video).

ffmpeg -i slackline.mov -vcodec h264 -acodec mp2 slackline.mp4

I then typically invoke OpenPose to process video and output marked up frames and JSON files with the supplied example executable, as below (again replace input video and output directory with your own):

./build/examples/openpose/openpose.bin --video slack_video/slackline.mp4 --write_images slack_video/output --write_json slack_video/output

LEGO and Software – Part Roles

This is the fifth post in a series exploring LEGO® as a Metaphor for Software Reuse. A key consideration for reuse is the various roles that components can play when combined or re-combined in sets. Below we’ll explore how we can use data about LEGO parts and sets to understand the roles parts play in sets.

I open a number of lines of investigation, but this is just the start, rather than any conclusion, of understanding the roles parts play and how that influences outcomes including reuse. The data comes from the Rebrickable data sets, image content & API and the code is available at

Hero Parts

Which parts play the most important roles in sets? Which parts could we least easily substitute from other sets?

We could answer this question in the same way as we determine relevant search results from documents, for instance with a technique called TFIDF (term frequency-inverse document frequency). We can find hero parts in sets with set frequency-inverse part frequency, which in the standard formulation requires a corpus of “documents” (one per part), each listing as “terms” the sets that include that part, as below.

part 10190: "10403-1 10403-1 10404-1 10404- ... "
part  3039: "003-1 003-1 003-1 003-1 021-1  ... "
part  3023: "021-1 021-1 021-1 021-1 021-1  ... "
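
The measure can be sketched from scratch, treating each part as a document whose terms are set IDs (the counts and names below are invented):

```python
import math
from collections import Counter

def sf_ipf(part_sets):
    """part_sets: part -> list of the sets containing it (repeats allowed).
    Returns part -> {set: set frequency-inverse part frequency weight}."""
    n_parts = len(part_sets)
    part_count = Counter()                # how many parts each set appears with
    for sets in part_sets.values():
        part_count.update(set(sets))
    weights = {}
    for part, sets in part_sets.items():
        tf = Counter(sets)                # set frequency within this part's document
        weights[part] = {s: tf[s] * math.log(n_parts / part_count[s]) for s in tf}
    return weights
```

Sets shared by almost every part score near zero, while sets that only a distinctive part appears in score highly, which is what surfaces the “hero” relationship.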

Inverse part frequency is closely related to the inverse of the reuse metric from part 4, hence we can expect it will find the least reused parts. Considering again our sample set 60012-1, LEGO City Coast Guard 4×4 (including 4WD, trailer, dinghy, and masked and flippered diver), we find the following “hero” parts.

Gallery of hero parts from LEGO Coast Guard set (60012-1) including stickers, 4WD tyres, a dinghy, flippers and mask

This makes intuitive sense. These “hero parts” are about delivering on the specific nature of the set. It’s much harder to substitute (or reuse other parts) for these hero parts – you would lose something essential to the set as it is designed. On the other hand, as you might imagine, the least differentiating parts (easiest to substitute or reuse alternatives for) overlap significantly with the top parts from part 4. Note that while mechanically – in the sense of connecting parts together – it may not be possible to replace these parts, they don’t do much to differentiate the set from other sets.

Gallery of least differentiated parts from LEGO Coast Guard set (60012-1) including common parts like plates, tiles, blocks and slopes.

Above, we consider sets as terms (words) in a document for each part. We can also reverse this by considering a set as a document, and included parts as terms in that document. Computing this part frequency-inverse set frequency measure across all parts and sets gives us a sparse matrix.

Visualisation of TFIDF or part frequency-inverse set frequency as a sparse 2D matrix for building a search engine for sets based on parts

This can be used as a search engine to find the sets most relevant to particular parts. For instance, if we query the parts "2431 2412b 3023" (you can see these in Recommended Parts below), the top hit is the Marina Bay Sands set, which again makes intuitive sense – all those tiles, plates, and grilles are the essence of the set.

Recommended Parts

Given a group of parts, how might we add to the group for various outcomes including reuse? For instance, if a new set design is missing one part that is commonly included with other parts in that design, could we consider redesigning the set to include that part to promote greater reuse?

A common recommendation technique for data in the shape of our set-part data is Association Rule Learning (aka “Basket Analysis”), which will recommend parts that are usually found together in sets (like items in baskets).

An association rule in this case is an implication of the form {parts} -> {parts}. Multiple of these rules form a directed graph, which we can visualise. I used the Efficient Apriori package to learn rules. In the first pass, this gives us some reasonable-looking recommendations for many of the top parts we saw in part 4.

Visualisation of discovered association rules as a directed graph showing common parts

You can read this as the presence of 2431 in a set implies (recommends) the presence of 3023, as does 2412b, which also implies 6141. We already know these top parts occur in many sets, so it’s likely they occur together, but we do see some finer resolution in this view. The association rules for less common parts might also be more insightful; this too may come in a future post.
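The idea can be sketched without the Efficient Apriori package itself, using a minimal one-antecedent rule miner over toy transactions (the part IDs are from the discussion above, but the transactions, support and confidence thresholds are illustrative):

```python
from itertools import permutations
from collections import Counter

# Each transaction is the distinct parts in one set (illustrative data only).
transactions = [
    {"2431", "3023", "2412b", "6141"},
    {"2431", "3023", "6141"},
    {"2431", "3023"},
    {"2412b", "6141"},
    {"3023"},
]

def one_to_one_rules(transactions, min_support=0.4, min_confidence=0.8):
    """Mine {a} -> {b} rules: support = P(a and b), confidence = P(b | a)."""
    n = len(transactions)
    single = Counter(p for t in transactions for p in t)
    pair = Counter()
    for t in transactions:
        pair.update(permutations(sorted(t), 2))  # counts ordered pairs (a, b)
    rules = {}
    for (a, b), c in pair.items():
        support, confidence = c / n, c / single[a]
        if support >= min_support and confidence >= min_confidence:
            rules[(a, b)] = confidence
    return rules

rules = one_to_one_rules(transactions)
```

Packages like Efficient Apriori generalise this to multi-part antecedents and consequents while pruning the search space, which matters at the scale of the full inventory data.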

Relationships Between Parts

How can we discover more relationships between parts that might support better outcomes including reuse?

We can generalise the part reuse analysis from part 4 and the techniques above by capturing the connections between sets and parts as a bipartite graph. The resultant graph contains about 63,000 nodes – representing both parts and sets – and about 633,000 edges – representing instances of parts included in sets. A small fragment of the entire graph, based on the flipper part 10190, the sets that include this part, and all other parts included in these sets, is shown below.

Visualisation of set neighbours (count 79) of flipper 10190 and their part neighbours (count 1313) as two parallel rows of nodes with many connections between them
Visualisation of selected set neighbours (count 3) of flipper 10190 and selected of their part neighbours (count 14) as two parallel rows of nodes with some connections between them

This bipartite representation allows us to find parts related by their inclusion in LEGO sets using a projection, which is a derived graph that only includes parts nodes, linked by edges if they share a set. In this projection, our flipper is directly linked to the 1312 other parts with which it shares any set.

Visualisation of 1312 immediate neighbours of flipper 10190 in the part projection of the set-part graph, shows only 1% of connections but this is very dense nonetheless
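A minimal sketch of the projection, hand-rolled over a toy fragment of the bipartite data (a real analysis would likely use a graph library such as NetworkX; the set names here are invented):

```python
from collections import defaultdict
from itertools import combinations

# Bipartite data: which parts appear in which sets (tiny illustrative fragment).
set_parts = {
    "set-A": {"10190", "3023", "2431"},
    "set-B": {"10190", "3024"},
    "set-C": {"3023", "2431"},
}

def part_projection(set_parts):
    """Project the set-part bipartite graph onto parts:
    two parts are adjacent if they share at least one set."""
    adj = defaultdict(set)
    for parts in set_parts.values():
        for a, b in combinations(sorted(parts), 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

proj = part_projection(set_parts)
# Degree in the projection: how many parts each part shares a set with.
degree = {p: len(nbrs) for p, nbrs in proj.items()}
```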

You can see this is a very densely connected set of parts, and more so on the right side, from 12 o’clock around to 6 o’clock. We could create a similar picture for each part, but we can also see the overall picture by plotting degree (number of connections to parts with shared sets) for all parts, with a few familiar examples.

Degree of nodes in part projection, with plate 1x2, slope 45 2x2 and flipper highlighted. Steep drop-off from maximum and long flat tail

This is the overall picture of immediate neighbours, and it shows the familiar traits of a small number of highly connected parts, and a very long tail of sparsely connected parts. We can also look beyond immediate neighbours to the path(s) through the projection graph between parts that don’t directly share a set, but are connected by common parts that themselves share a set. Below is one of the longest paths, from the flipper 10190 to multiple Duplo parts.

Visualisation of a path through the part connection graph spanning 7 nodes, with some neighbouring nodes also shown and parts drawn on

With a projection graph like this, we could infer that parts that are designed to be used together are closer together. We could use this information to compile groups of parts to specific ends. Given some group of parts, we could (1) add “nearby” missing parts to that group to create flexible foundational groups that could be used for many builds, or we could (2) add “distant” parts that could allow us to specialise builds in particular directions that we might not have considered previously. In these cases, “nearby” and “distant” are measured in terms of the path length between parts. There are many other ways we could use this data to understand part roles.

(When I first plotted this, I thought I had made a mistake, but it turns out there are indeed sets including both regular and Duplo parts, in this case this starter kit.)

The analysis above establishes some foundational concepts, but doesn’t give us a lot of new insight into the roles played by parts. The next step I’d like to explore here is clustering and/or embedding the nodes of the part graph, to identify groups of similar parts, which may come in a future post.


As I said above, there are no firm conclusions in this post regarding reuse in LEGO or how this might influence our view of and practices around reuse in software. However, if we have data about our software landscape in a similar form to the set-part data we’ve explored here, we might be able to conduct similar analyses to understand the roles that reusable software components play in different products, and, as a result, how to get better outcomes overall from software development.

Coming Next

I think the next post, and it might just be the last in this series, is going to be about extending these views of the relationships between parts and understanding how they might drive or be driven by the increase in variety and specialisation discussed in part 2.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

LEGO and Software – Part Reuse

This is the fourth post in a series exploring LEGO® as a Metaphor for Software Reuse. The story is evolving as I go because I keep finding interesting things in the data. I’ll tie it all up with the key things I’ve found at some point.

In this post we’re looking from the part perspective – the reusable component – at how many sets or products it’s [re]used in, and how this has changed over time. All the analysis from this post is available at

Inventory Data

The parts inventory data from the Rebrickable data sets, image content & API tells us which parts appear in which sets, in which quantities and colours. For instance, if we look at the minifigure flipper from part 3, we can chart its inclusion in sets as below.

The parts inventory data includes minifigures. I may later account for the effect of including or excluding minifigures in these various analyses. We can also tell if a part in a set is a spare; according to the data, 2% of LEGO parts in sets are spares.

Parts Included in Sets

The most reused parts are included in thousands of sets. Here is a gallery of the top 25 parts over all time – from number 1 Plate 1 x 2 (which is included in over 6,200 sets), to our favourite from last time, the venerable Slope 45° 2 x 2 (over 2,700 sets).

These parts appear in a significant fraction of sets, but we can already see a 50% reduction in set count across just the top 25 most used parts. Beyond this small sample, we can plot the number of sets that include a given part (call it part set count), as below.

This time we have used log scales on both axes, due to the extremely uneven distribution of part set counts. In contrast to the thousands of sets including top parts, a massive 60% of parts are included in only one set and only 10% of parts are included in more than ten sets. The piecewise straight line fit indicates a power law applies, for instance approximately count = 10000 / sqrt(rank) for the top 100 ranked parts.
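A power-law exponent like this can be estimated with a least-squares fit in log-log space; this sketch uses synthetic data generated from the approximate fit quoted above, and recovers its exponent:

```python
import math

# Synthetic ranks/counts following count = 10000 / sqrt(rank), the fit above.
ranks = range(1, 101)
counts = [10000 / math.sqrt(r) for r in ranks]

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) vs log(x): the power-law exponent."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

exponent = loglog_slope(ranks, counts)  # ~ -0.5 for this synthetic data
```

On the real part set counts, a straight-line fit on the log-log plot would give the empirical exponent for each regime of the piecewise fit.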

Uneven distributions are often expressed in terms of the Pareto principle or 80/20 rule. If we define reuse instances as every time a part is included in a set, after the first set, then we can plot the contribution of each part to total reuse instances and see whether this is more or less uneven than the 80/20 rule.

This shows us that reuse of LEGO parts in sets is much more uneven than the 80/20 rule. While the 80/20 rule says 80% of reuse would be due to 20% of parts, in fact we find by this definition that 80% of reuse is due to only 3% of parts, and 20% of parts account for 98% of reuse!
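The computation behind such a Pareto comparison can be sketched as follows, using toy set counts rather than the real Rebrickable data (the numbers are illustrative):

```python
# Toy set counts per part; reuse instances = inclusions after the first set.
set_counts = [100, 40, 10, 3, 2, 1, 1, 1, 1, 1]

reuse = sorted((c - 1 for c in set_counts), reverse=True)
total = sum(reuse)

def share_of_reuse(reuse, part_fraction):
    """Fraction of total reuse instances due to the top `part_fraction` of parts."""
    k = max(1, round(part_fraction * len(reuse)))
    return sum(reuse[:k]) / total

top20 = share_of_reuse(reuse, 0.20)  # share of reuse from the top 20% of parts
```

Running the same calculation over the real part set counts produces the 3%/80% and 20%/98% figures above.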

We find a similar phenomenon if we consider the quantities of parts included in sets, rather than just the count of sets per part. We could repeat the whole analysis based on quantities (and the notebook has some options for doing this), but I was fairly satisfied the results would be similar given the degree of correlation we find, below.

I was intrigued, though, by the part that appeared many times in a single set, then never again (the uppermost point of the lower left column). It turns out it is a Window 1 x 2 x 1 (old type) with Extended Lip included in a windows set from 1966 that probably looked a bit like this one.

This brings us neatly to the “long tail” to round out this view of reuse as parts included in multiple sets. As per the distribution above, the tail (of parts that belong to only one set) is really long and full of curiosities. The tail is over 22,000 parts long, though these belong to only about 10,000 unique sets. The parts belong about 60/40 to sets proper vs minifigure packs. Here’s a tail selection – you’ll see they are fairly specialised items like stickers, highly custom parts, minifigure items with unique designs, and even trading cards!

There’s even, perhaps my nemesis, a minifigure set called “Chainsaw Dave”!

Parts Included in Sets Over Time

Part reuse might vary with time, and might be dependent on time. In previous posts we’ve seen an exponential increase in new parts and sets over time and an exponential decay in part lifetimes. We can plot part reuse (set count) against lifespan, as below.

This shows some correlation – which we might expect, as longevity is due to reuse and vice versa – but I was also intrigued by the many relatively short-lived parts with high set counts (in the mid-top left region). Colouring points by the year released shows that these are relatively recent parts (at the yellow end of the spectrum). This shows that, as well as long-lived parts, more recent parts also appear in many sets, which is good news for reuse.

However, it’s hard to determine the distribution of set count from the scatter plot, and hence how significant the reuse is. We can see the distribution better with a violin plot, which shows the overall range (‘T’ ends), the distribution of common values (shading), and the median (cross-bar), much like the box plot.

We see that although many parts released in the last few decades are reused in 100s or even 1000s of sets, the median or typical part appears in only a handful of sets. With 100s to 1000s of sets released each year recently, the sets are reusing both old and new parts, but the vast majority of parts are not significantly reused.

Top Parts for Reuse Over Time

Above, we introduced the top 25 parts by all-time set count, but how has this varied over time? We can chart – for each of the top 25 parts – how many sets they appeared in each year. However, as the number of sets released each year has increased exponentially, we see a clearer pattern if we chart the proportion of sets released each year that include the top 25 parts, as below.
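A sketch of the proportional calculation, on toy release data (the set IDs, years and inventories are invented for illustration):

```python
from collections import defaultdict

# Toy release data: year of each set, and which parts each set includes.
set_years = {"s1": 1999, "s2": 1999, "s3": 2000, "s4": 2000}
set_parts = {"s1": {"3023"}, "s2": {"3023", "3024"}, "s3": {"3024"}, "s4": {"3023"}}

def yearly_proportion(part, set_years, set_parts):
    """Proportion of the sets released each year that include `part`."""
    released = defaultdict(int)
    included = defaultdict(int)
    for s, year in set_years.items():
        released[year] += 1
        if part in set_parts[s]:
            included[year] += 1
    return {y: included[y] / released[y] for y in released}

prop = yearly_proportion("3023", set_years, set_parts)
```

Repeating this for each of the top 25 parts gives one line per part in the chart below.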

This shows variation in proportional set representation for the top parts from about 10% to 50% over the last 40 years. However, there was a very noticeable drop around the year 2000, to a maximum of only 20% of sets including top reused parts. Interestingly, this corresponds to a documented period of financial difficulty for The LEGO Group, but further research would be required to demonstrate any relationship. Prior to 1980, the maximum proportional representation was even higher, suggesting that a major purpose of early sets was to provide reusable parts.

From 2000 onwards, the all time ranking becomes more apparent, as the lines generally spread to match the rankings. We can also see this resolution of ranks in a bump chart for the top 4 parts over the time period from just before 2000 to the present.

Lessons for Software

As in previous posts, here’s where I speculate on what this data-directed investigation of LEGO parts and sets over the years might mean for software development. This is only on the presumption that – as often cited informally in my experience – LEGO products are a good metaphor for software. I take this as a given for this series, and as an excuse to play with some LEGO data, but I don’t really test it.

Given the reuse of LEGO parts across sets and time, we might expect that for software:

  • Most reuse will likely come from a small number of components, and this may be far more extreme than the 80/20 heuristic
  • If this is the case, then teams should build for use before reuse, or design very simple foundational components (see the top 25) for widespread reuse
  • Building for use before reuse means optimising for fast development and retirement of customised components that won’t be reusable
  • Reuse may vary significantly over time depending on your product strategy
  • It’s possible to introduce new reusable components at any time, but their impact might not be very noticeable with a product strategy that drives many customised components

Next Time

I plan to look further into the relationship between parts and sets, and how this has evolved over time.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

LEGO and Software – Lifespans

This is the third post in a series exploring LEGO as a Metaphor for Software Reuse through data (part 1 & part 2).

In this post, we’ll look at reuse through the lens of LEGO® part lifespans. Not how long before the bricks wear out, are chewed by your dog, or squashed painfully underfoot in the dark, but for what period each part is included in sets for sale.

This is a minor diversion from looking further into the reduction in sharing and reuse themes from part 2, but lifespan is a further distinct concept related to reuse, worthy I think of its own post. All the analysis from this post, which uses the Rebrickable API, is available at

Ages in a Sample Set

To understand sets and parts in the Rebrickable API, I first raided our collection of LEGO build instructions for a suitable set to examine, and I came up with 60012-1, LEGO City Coast Guard 4×4.

Picture of LEGO build instructions for Coast Guard set
The sample set

In the process I discovered parts data contained year_from and year_to attributes, which would enable me to chart the ages of each part in the set when it was released, as a means of understanding reuse.

Histogram of ages of parts in set 60012-1, with 12 parts of <1 year age, gradually decreasing to 7 parts 50-55 years of age

In line with the exponential increase of new parts we’ve already seen, the most common age bracket is 0-5 years, but a number of parts in this set from 2013 were 50-55 years old when it was released! Let’s see some examples of new and old parts…

Image of a new flipper (0-1 yrs age) and sloping brick (55 years age)

The new flipper is pretty cool, but is it even worthy to be moulded from the same ABS plastic as Slope 45° 2 x 2? The sloping brick surely deserves a place in the LEGO Reuse Hall of Fame. It has depth and range – from computer screen in moon base, to nose cone of open wheel racer, to staid roof tile, to minifigure recliner in remote lair. Contemplating this part took me back to my childhood, all those marvellous myriad uses.

And yet I also recalled the slightly unsatisfactory stepped profile that resulted from trying to build a smooth inclined surface with a stack of these parts. As such, this part captures the essence and the paradox of reuse in LEGO products; a single part can do a lot, but it can’t do everything. Let’s look at lifespan of parts more broadly.

Lifespans Across All Parts

The distribution of lifespan across LEGO parts is very uneven.

Distribution of lifespans of LEGO parts - histogram

The vast majority of parts are in use for only a single year, and only a small fraction are used for more than 10 years. Note I calculate lifespan = year_to + 1 - year_from, and this uses strict parts data, rather than also including minifigures.

Distribution of lifespans of LEGO parts, pie chart with ranges
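The lifespan calculation can be sketched over toy part records with Rebrickable-style year_from / year_to attributes (the part names and dates here are illustrative, not taken from the real data):

```python
from collections import Counter

# Toy part records with Rebrickable-style year_from / year_to attributes.
parts = [
    {"part": "slope-45-2x2",  "year_from": 1958, "year_to": 2012},
    {"part": "flipper-10190", "year_from": 2012, "year_to": 2013},
    {"part": "sticker-x",     "year_from": 2013, "year_to": 2013},
]

# lifespan = year_to + 1 - year_from, as defined above: a part released
# and retired in the same year has a lifespan of 1.
lifespans = {p["part"]: p["year_to"] + 1 - p["year_from"] for p in parts}
distribution = Counter(lifespans.values())  # histogram of lifespans
```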

Exponential Decay

This distribution looks like exponential decay. To see more clearly, it’s back to the logarithmic domain, where we can fit approximations in two regimes: for the first five years (>80% of parts), and then for the remaining life.

LEGO part lifespans, counts plotted on vertical log scale

The half-life of parts over the first 5 years is about 1 year: in each of those years, only about half the parts survive to the next. However, the first year is an outlier, as only 34% of parts make it to their second year and beyond. After 5 years, just under 1,000 parts survive of the 25,000 present at 1 year, and from that point the half-life extends to about 7 years. So, while some parts like Slope 45° 2 x 2 live long lives, the vast majority are winnowed out much earlier.
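A half-life estimate like this amounts to a log-linear fit; this sketch uses synthetic survivor counts constructed to have exactly a one-year half-life, and recovers it:

```python
import math

# Synthetic survivor counts with a one-year half-life: about 25,000 parts
# at year 1, halving each subsequent year.
years = range(1, 6)
counts = [25000 * 0.5 ** (t - 1) for t in years]

def half_life(years, counts):
    """Least-squares slope of ln(count) vs year gives the decay rate;
    half-life = ln(2) / rate."""
    ly = [math.log(c) for c in counts]
    n = len(ly)
    mx, my = sum(years) / n, sum(ly) / n
    slope = (sum((t - mx) * (v - my) for t, v in zip(years, ly))
             / sum((t - mx) ** 2 for t in years))
    return math.log(2) / -slope

hl = half_life(years, counts)  # ~1.0 year for this synthetic data
```

Fitting the same line to the real survivor counts in each regime gives the ~1 year and ~7 year half-lives above.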


We can also look at the count of parts released (year_from) and the count of parts retired (year_to + 1) each year.

LEGO parts released and retired each year - line chart

As expected, parts released each year show exponential growth, but parts retired also grow, almost in synchrony, so the net change is small compared to the total flow of parts in and out. By summing up the difference each year, we can chart the number of active parts over time.
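The running-sum calculation can be sketched as follows (toy yearly counts, chosen for illustration):

```python
from itertools import accumulate

# Toy counts of parts released and retired each year.
years    = [2000, 2001, 2002, 2003]
released = [500, 600, 700, 900]
retired  = [400, 550, 650, 800]

# Net change per year, then a running sum gives active parts over time.
net = [a - b for a, b in zip(released, retired)]
active = list(accumulate(net))
```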

Active LEGO parts by year and change by year - line and column plot

Active parts are a small proportion of all parts released to date; they represent about one seventh of all parts released, approximately 5,500 of 36,500. Comparing total changes to the active set size each year also shows a high and increasing rate of churn.

LEGO part churn by year - line chart

So even as venerable stalwarts such as Slope 45° 2 x 2 persist, in recent years about 80% of active LEGO parts have churned each year! Interestingly, the early 1980s to late 1990s was a period of much lower churn. Note also that churn percentages prior to 1970 were high and varied widely (not shown before 1965 for clarity), probably reflecting a much smaller base of parts and maybe artefacts of older data.

Lifespans vs Year Release and Retired

We’ve got a lot of insight from just year_from and year_to. One last way to look at lifespans is how they have changed over time, such as lifespan vs year released or retired.

LEGO part lifespan scatter plots

Obvious in these charts is that we only have partial data on the lifespan of active parts (we don’t know how much longer until they’ll be retired), but as above, they are a small proportion. We can discern a little more by using a box plot.

LEGO part lifespan box plots

The plot shows, for each year, median lifespans (orange), the middle range (box), the typical range (whiskers) and lifespan outliers (smoky grey). We see here again that the 1980s and 1990s were a good period, in relative terms, for releasing long-lived parts that have only just been retired. However, with the huge volume of more short-lived parts being retired in recent years, we don’t see their impact in the late 2010s on the retired plot, except as outliers. In general, the retired (left) plot, like the later years of the released (right) plot, shows lower lifespan distributions, because the long-lived parts are overwhelmed by ever-increasing numbers of contemporaneous short-lived parts.

Lessons for Software Reuse

If LEGO products are to be a metaphor and baseline for reuse in software products, this analysis of part lifespans is consistent with the observations from part 1, while further highlighting:

  • Components that are heavily reused may be a minority of all components, and in an environment of frequent and increasing product releases, many components may have very short lifetimes, driven by acute needs.
  • There may be a “great filter” for reuse, such as the one year or five year lifespan for LEGO parts. This may also be interpreted as “use before reuse”, or that components must demonstrate exceptional performance in core functions, or support stable market demands, before wider reuse is viable.
  • Our impressions and expectations for reuse of software components may be anchored to particular time periods. We see that the 1980s and 1990s (when there were only ~10% of the LEGO parts released to 2020) were a period of much lower churn and the release of relatively more parts with longer lifespans. The same may be true for periods of software development in an organisation’s history.
  • Retirement of old components can be synchronised with introduction of new components, and in fact, this is probably essential to effectively manage complexity and to derive benefits of reuse without the burden of legacy.

Further Analysis

We’ll come back to the reduction in sharing and reuse theme, and find a lot more interesting ways to look at the Rebrickable data in future posts.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

LEGO and Software – Variety and Specialisation

Since my first post on LEGO as a Metaphor for Software Reuse, I have done some more homework on existing analyses of LEGO® products, to understand what I could myself reuse and what gaps I could fill with further data analysis.

I’ve found three fascinating analyses that I share below. However, I should note that these analyses weren’t performed considering LEGO products as a metaphor or benchmark for software reuse. So I’ve continued to ask myself: what lessons can we take away for better management of software delivery? For this post, the key takeaways are market and product drivers of variety, specialisation and complexity, rather than strategies for reuse as such. I’m hoping to share more insight on reuse in LEGO in future posts, in the context of these market and product drivers.

I also discovered the Rebrickable downloads and API, which I plan to use for any further analysis – I do hope I need to play with more data!

Reuse Concepts

I started all this thinking about software reuse, which is not an aim in itself, but a consideration and sometimes an outcome in efficiently satisfying software product objectives. In thinking about reuse and reviewing existing analyses, I found it helpful to define a few related concepts:

  • Variety – the number of different forms or properties an entity under consideration might take. We might talk about variety of themes, sets, parts, and colours, etc.
  • Specialisation – of parts in particular, where parts serve only limited purposes.
  • Complexity – the combinations or interactions of entities, generally increasing with increasing variety and specialisation.
  • Sharing – of parts between sets in particular, where parts appear in multiple sets. We might infer specialisation from limited sharing.
  • Reuse – sharing, with further consideration of time, as some reuse scenarios may be identified when a part is introduced, some may emerge over time, and some opportunities for future reuse may not be realised.

Considering these concepts, the first two analyses focus mainly on understanding variety and specialisation, while the third dives deeper into sharing and reuse.

Increase in Variety and Specialisation

The Colorful Lego

Visualisation of colours in use in LEGO sets over time
Analysis of LEGO colours in use over time. Source: The Colorful Lego Project

Great visualisations and analysis in this report and public dashboard from Edra Stafaj, Hyerim Hwang and Yiren Wang, driven primarily by the evolving colours found in LEGO sets over time, and considering colour as a proxy for complexity. Some of the key findings:

  • The variety of colours has increased dramatically over time, with many recently introduced colours already discontinued.
  • The increase in variety of colours is connected with growth of new themes. Since 2010, there has been a marked increase in co-branded sets (“cooperative” theme, eg, Star Wars) and new in-house branded sets (“LEGO commercial” theme, eg, Ninjago) as a proportion of all sets.
  • Specialised pieces (as modelled by Minifig Heads – also noted as the differentiating part between themes) make up the bulk of new pieces, compared to new generic pieces (as modelled by Bricks & Plates).

Colour is an interesting dimension to consider, as it may be argued an aesthetic, rather than mechanical, consideration for reuse. However, as noted in the diversification of themes, creating and satisfying a wider array of customer segments is connected to the increasing variety of colour.

So I see variety and complexity increasing, and more specialisation over time. The discontinuation of colours suggests reuse may be reducing over time, even while generic bricks & plates persist.

67 Years of Lego Sets

Visualisation of the LEGO, in the LEGO, for the LEGO people. Source: 67 Years of Lego Sets

An engaging summary from Joel Carron of the evolution of LEGO sets over the years, including Python notebook code, and complete with a final visualisation made of LEGO bricks! Some highlights:

  • The number of parts in a set has in general increased over time.
  • The smaller sets have remained a similar size over time, but the bigger sets keep getting bigger.
  • As above, colours are diversifying, with minor colours accounting for more pieces, and themes developing distinct colour palettes.
  • Parts and sets can be mapped in a graph or network showing the degree to which parts are shared between sets in different themes. This shows some themes share a lot of parts with other themes, while some themes have a greater proportion of unique parts. Generally, smaller themes (with fewer total parts) share more than larger themes (with more total parts).

So here we add to variety and specialisation with learning about sharing too, but without the chronological view that would help us understand more about reuse – were sets with high degrees of part sharing developed concurrently or sequentially?

Reduction in Sharing and Reuse

LEGO products have become more complex

A comprehensive paper, with dataset and R scripts, analysing increasing complexity in LEGO products, with a range of other interesting-looking references to follow up on, though it acknowledges that scientific investigations on the development of LEGO products remain scarce.

This needs a thorough review in its own post, with further analysis and commentary on the implications for software reuse and management. That will be the third post of this trilogy in N parts.

Lessons for Software Reuse

If we are considering LEGO products as a metaphor and benchmark for software reuse, we should consider the following.

Varied market needs drive variety and specialisation of products, which in turn can be expected to drive variety and specialisation of software components. Reuse of components here may be counter-productive from a product-market fit perspective (alone, without further technical considerations). However, endless customisation is also problematic and a well-designed product portfolio will allow efficient service of the market.

Premium products may also be more complex, with more specialised components. Simpler products, with lesser performance requirements, may share more components. The introduction of more premium products over time may be a major driver of increased variety and specialisation.

These market and product drivers provide context for reuse of software components.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.