LEGO and Software – Part Roles

This is the fifth post in a series exploring LEGO® as a Metaphor for Software Reuse. A key consideration for reuse is the various roles that components can play when combined or re-combined in sets. Below, we'll explore how data about LEGO parts and sets can help us understand those roles.

I open a number of lines of investigation, but this is a starting point rather than a conclusion: a first look at the roles parts play and how those roles influence outcomes, including reuse. The data comes from the Rebrickable data sets, image content and API, and the code is available at https://github.com/safetydave/reuse-metaphor.

Hero Parts

Which parts play the most important roles in sets? Which parts could we least easily substitute from other sets?

We could answer this question in the same way we determine relevant search results from documents, for instance with a technique called TFIDF (term frequency-inverse document frequency). Here we can find hero parts with set frequency-inverse part frequency: in the standard formulation, the corpus contains one "document" per part, whose "terms" are the sets that include that part, as below.

part 10190: "10403-1 10403-1 10404-1 10404- ... "
part  3039: "003-1 003-1 003-1 003-1 021-1  ... "
part  3023: "021-1 021-1 021-1 021-1 021-1  ... "
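As a rough sketch of how such a corpus could be scored, the snippet below uses scikit-learn's TfidfVectorizer. It assumes a hypothetical inventory DataFrame of (set_num, part_num) rows built from the Rebrickable CSVs; the notebooks in the repository may do this differently.

# Minimal sketch: set frequency-inverse part frequency with scikit-learn.
# `inventory` is an assumed DataFrame with one (set_num, part_num) row per
# part occurrence, built from the Rebrickable CSVs.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# One "document" per part: the set numbers that include it.
part_docs = inventory.groupby("part_num")["set_num"].apply(" ".join)

# Set numbers like "60012-1" contain '-', so tokenise on whitespace.
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
tfidf = vectorizer.fit_transform(part_docs)  # rows are parts, columns are sets

# Hero parts for a set are the parts weighted highest in that set's column.
set_column = vectorizer.vocabulary_["60012-1"]
weights = tfidf[:, set_column].toarray().ravel()
print(pd.Series(weights, index=part_docs.index).nlargest(10))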

Inverse part frequency is closely related to the inverse of the reuse metric from part 4, hence we can expect it will find the least reused parts. Considering again our sample set 60012-1, LEGO City Coast Guard 4×4 (including 4WD, trailer, dinghy, and masked and flippered diver), we find the following “hero” parts.

Gallery of hero parts from LEGO Coast Guard set (60012-1) including stickers, 4WD tyres, a dinghy, flippers and mask

This makes intuitive sense. These “hero parts” are about delivering on the specific nature of the set. It’s much harder to substitute (or reuse other parts) for these hero parts – you would lose something essential to the set as it is designed. On the other hand, as you might imagine, the least differentiating parts (easiest to substitute or reuse alternatives for) overlap significantly with the top parts from part 4. Note that while it may not be possible to replace these parts mechanically – in the sense of connecting parts together – they don’t do much to differentiate the set from other sets.

Gallery of least differentiated parts from LEGO Coast Guard set (60012-1) including common parts like plates, tiles, blocks and slopes.

Above, we consider sets as terms (words) in a document for each part. We can also reverse this by considering a set as a document, and included parts as terms in that document. Computing this part frequency-inverse set frequency measure across all parts and sets gives us a sparse matrix.

Visualisation of TFIDF or part frequency-inverse set frequency as a sparse 2D matrix for building a search engine for sets based on parts

This can be used as a search engine to find the sets most relevant to particular parts. For instance, if we query the parts "2431 2412b 3023" (you can see these in Recommended Parts below), the top hit is the Marina Bay Sands set, which again makes intuitive sense – all those tiles, plates, and grilles are the essence of the set.
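A sketch of this reverse view, under the same assumptions about the inventory DataFrame, might look like this:

# Minimal sketch: each set as a "document" of part numbers, queried by parts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

set_docs = inventory.groupby("set_num")["part_num"].apply(" ".join)

vectorizer = TfidfVectorizer(token_pattern=r"\S+")
set_matrix = vectorizer.fit_transform(set_docs)  # rows are sets, columns are parts

# Treat a handful of parts as a query "document" and rank sets by similarity.
query = vectorizer.transform(["2431 2412b 3023"])
scores = cosine_similarity(query, set_matrix).ravel()
print(set_docs.index[scores.argsort()[::-1][:5]])  # top hits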

Recommended Parts

Given a group of parts, how might we add to the group for various outcomes including reuse? For instance, if a new set design is missing one part that is commonly included with other parts in that design, could we consider redesigning the set to include that part to promote greater reuse?

A common recommendation technique for data in the shape of our set-part data is Association Rule Learning (aka “Basket Analysis”), which will recommend parts that are usually found together in sets (like items in baskets).

An association rule in this case is an implication of the form {parts} -> {parts}. Together, these rules form a directed graph, which we can visualise. I used the Efficient Apriori package to learn rules. In the first pass, this gives us some reasonable-looking recommendations for many of the top parts we saw in part 4.

Visualisation of discovered association rules as a directed graph showing common parts

You can read this as: the presence of 2431 in a set implies (recommends) the presence of 3023, as does the presence of 2412b, which also implies 6141. We already know these top parts occur in many sets, so it’s likely they occur together, but we do see some finer resolution in this view. The association rules for less common parts might be more insightful still; this too may come in a future post.
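For reference, a minimal sketch of learning rules like these with the Efficient Apriori package follows; the support and confidence thresholds are illustrative only, and the inventory DataFrame is the same assumed input as above.

# Minimal sketch: association rules over set "baskets" with efficient-apriori.
from efficient_apriori import apriori

# One transaction per set: the distinct part numbers it contains.
transactions = [
    tuple(parts) for parts in inventory.groupby("set_num")["part_num"].unique()
]

# Illustrative thresholds; tune min_support and min_confidence to taste.
itemsets, rules = apriori(transactions, min_support=0.1, min_confidence=0.7)

# One-to-one rules like {2431} -> {3023} are the easiest to draw as a graph.
simple_rules = [r for r in rules if len(r.lhs) == 1 and len(r.rhs) == 1]
for rule in sorted(simple_rules, key=lambda r: r.confidence, reverse=True)[:10]:
    print(rule)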

Relationships Between Parts

How can we discover more relationships between parts that might support better outcomes including reuse?

We can generalise the part reuse analysis from part 4 and the techniques above by capturing the connections between sets and parts as a bipartite graph. The resultant graph contains about 63,000 nodes – representing both parts and sets – and about 633,000 edges – representing instances of parts included in sets. A small fragment of the entire graph, based on the flipper part 10190, the sets that include this part, and all other parts included in these sets, is shown below.

Visualisation of set neighbours (count 79) of flipper 10190 and their part neighbours (count 1313) as two parallel rows of nodes with many connections between them
Visualisation of selected set neighbours (count 3) of flipper 10190 and a selection of their part neighbours (count 14) as two parallel rows of nodes with some connections between them
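A minimal sketch of building this bipartite graph with networkx, again assuming the inventory DataFrame from earlier and that set and part identifiers never collide:

# Minimal sketch: the set-part bipartite graph in networkx.
import networkx as nx

B = nx.Graph()
B.add_nodes_from(inventory["set_num"].unique(), bipartite="set")
B.add_nodes_from(inventory["part_num"].unique(), bipartite="part")
# One edge per (set, part) inclusion; assumes set and part ids don't clash.
B.add_edges_from(
    inventory[["set_num", "part_num"]].drop_duplicates().itertuples(index=False)
)

print(B.number_of_nodes(), B.number_of_edges())  # roughly 63,000 and 633,000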

This bipartite representation allows us to find parts related by their inclusion in LEGO sets using a projection – a derived graph that includes only part nodes, linked by an edge if they share a set. In this projection, our flipper is directly linked to the 1312 other parts with which it shares any set.

Visualisation of 1312 immediate neighbours of flipper 10190 in the part projection of the set-part graph, shows only 1% of connections but this is very dense nonetheless
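A sketch of the projection itself, reusing the bipartite graph B from the earlier sketch:

# Minimal sketch: project the bipartite graph onto part nodes only.
from networkx.algorithms import bipartite

part_nodes = {n for n, d in B.nodes(data=True) if d["bipartite"] == "part"}
P = bipartite.projected_graph(B, part_nodes)  # parts linked if they share a set

print(P.degree("10190"))  # immediate neighbours of the flipper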

You can see this is a very densely connected set of parts, and more so on the right side, from 12 o’clock around to 6 o’clock. We could create a similar picture for each part, but we can also see the overall picture by plotting degree (the number of connections to parts with shared sets) for all parts, with a few familiar examples highlighted.

Degree of nodes in part projection, with plate 1x2, slope 45 2x2 and flipper highlighted. Steep drop-off from maximum and long flat tail
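The degree plot is straightforward once the projection P from the sketch above exists:

# Minimal sketch: rank-ordered degree of every part in the projection P.
import matplotlib.pyplot as plt

degrees = sorted((d for _, d in P.degree()), reverse=True)
plt.plot(degrees)
plt.xlabel("part rank")
plt.ylabel("degree (number of parts sharing a set)")
plt.show()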

This is the overall picture of immediate neighbours, and it shows the familiar traits of a small number of highly connected parts, and a very long tail of sparsely connected parts. We can also look beyond immediate neighbours to the path(s) through the projection graph between parts that don’t directly share a set, but are connected by common parts that themselves share a set. Below is one of the longest paths, from the flipper 10190 to multiple Duplo parts.

Visualisation of a path through the part connection graph spanning 7 nodes, with some neighbouring nodes also shown and parts drawn on
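A sketch of finding such a path in the projection P; the target part number below is a placeholder rather than the actual Duplo part in the figure.

# Minimal sketch: a path between two parts that never appear in the same set.
import networkx as nx

target = "duplo_part_id"  # placeholder: substitute a real Duplo part number
path = nx.shortest_path(P, source="10190", target=target)
print(len(path) - 1, path)  # number of hops and the intermediate parts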

With a projection graph like this, we could infer that parts that are designed to be used together are closer together. We could use this information to compile groups of parts to specific ends. Given some group of parts, we could (1) add “nearby” missing parts to that group to create flexible foundational groups that could be used for many builds, or we could (2) add “distant” parts that could allow us to specialise builds in particular directions that we might not have considered previously. In these cases, “nearby” and “distant” are measured in terms of the path length between parts. There are many other ways we could use this data to understand part roles.

(When I first plotted this, I thought I had made a mistake, but it turns out there are indeed sets including both regular and Duplo parts, in this case this starter kit.)
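As a rough sketch of ideas (1) and (2) above, distances from a seed group of parts could be computed over the projection P like this (the seed parts are illustrative):

# Minimal sketch: rank parts by distance from a seed group in the projection P.
import networkx as nx

seed_parts = ["3023", "2431", "2412b"]  # illustrative seed group
distance = {}
for part in seed_parts:
    for other, d in nx.single_source_shortest_path_length(P, part).items():
        distance[other] = min(d, distance.get(other, d))

ranked = sorted(distance, key=distance.get)
nearby = ranked[:20]    # candidates to round out a flexible foundational group
distant = ranked[-20:]  # candidates to specialise a build in a new direction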

The analysis above establishes some foundational concepts, but doesn’t give us a lot of new insight into the roles played by parts. The next step I’d like to explore here is clustering and/or embedding the nodes of the part graph, to identify groups of similar parts, which may come in a future post.

Lessons

As I said above, there are no firm conclusions in this post regarding reuse in LEGO or how this might influence our view of and practices around reuse in software. However, if we have data about our software landscape in a similar form to the set-part data we’ve explored here, we might be able to conduct similar analyses to understand the roles that reusable software components play in different products, and, as a result, how to get better outcomes overall from software development.

Coming Next

I think the next post, and it might just be the last in this series, is going to be about extending these views of the relationships between parts and understanding how they might drive or be driven by the increase in variety and specialisation discussed in part 2.

LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this site.

Rebooting AI Review

I was excited to read Rebooting AI (website), to find inspiration and tools for doing things better. Here is the book in one great quote:

For now, we are in a kind of interregnum: narrow but networked intelligences with autonomy, but too little genuine intelligence to be able to reason about the consequences of that power.

There is a lot to like. Marcus & Davis clearly map out the history and landscape of AI challenges, plus plausible elements of future solutions. They provide useful tools for thinking about problems with partial solutions to intelligence, such as the “fundamental overattribution error” and the “illusory progress gap”. They show how current ML solutions based on big data can be opaque and brittle. They demonstrate how key attributes of human intelligence instead allow the development of rich cognitive models – such as how language and the real world work – and how solutions incorporating such models would address current shortcomings, enabling AI to tackle open-ended tasks. This is great material for a general reader.

Where I felt the book fell short was that it didn’t build many bridges between our current “narrow but networked intelligences” and the authors’ posited future-state capabilities. The future state reads like Artificial General Intelligence (AGI) by another name, fleshed out by scenarios that are short on implementation detail. Though sometimes mundane from our current perspective, Arthur C Clarke might describe them as “indistinguishable from magic”, and hence Rodney Brooks would say they are “no longer falsifiable”. We know there’s a massive chasm between current ML solutions and AGI, but I didn’t find much in Rebooting AI to close or bridge it.

Some of these future capabilities are illustrated by domain-specific modelling techniques – like formal logic – that would be familiar to many computer science students. But I found this a little incongruous, because these techniques have also failed to deliver on promises of realising intelligence, and have done no more to squash the “long tail of edge cases” than other narrow intelligences. Given the diverse facets of intelligence, maybe the paradigm of “narrow but networked intelligences” is the best way to achieve or approximate intelligence, or maybe it’s ultimately illusory progress, but these illustrations didn’t help me resolve that.

There is undeniable value in the current generation of ML solutions. How do we build on these? A detailed analysis of key avenues for short- to medium-term progress was lacking. For instance, starting with current ML solutions, the authors could have explored:

  • various designs of hybrid human-machine decision-making systems that augment human abilities while remaining resilient to new scenarios that stump machines;
  • transfer learning, few-shot learning and sophisticated representation learning like transformers, that have potential to increase the representative and reasoning power of solutions;
  • the role of ecosystem design and governance, including ongoing monitoring and data curation to correct issues (for instance bias testing, CD4ML, etc).

Instead, ML was stereotyped as fully automated, tabula rasa, and end-to-end.

Finally, to know things are getting better, we need the right baseline and measures. While the language examples clearly demonstrated superficial artificial understanding, and self-driving vehicles have a ways to go, some issues raised were not assessed against incumbent human capabilities on narrow tasks in a like-for-like comparison, but rather against posited capabilities of a future AGI system. I would agree that humans can individually reflect and introspect to recognise their mistakes, but it is still the case that, in operational scenarios, humans make mistakes like artificial systems do. These operational mistakes are moderated by the wider ecosystem in which humans operate, in the same way as predictive inference mode is moderated by a wider human-machine ecosystem. I felt the core issue in some instances was structurally unsound or concentrated decision-making without proper governance, rather than whether or not mistakes were made, and this confounded the analysis. I would have liked to have seen these factors teased out so comparisons could be made in a way that would help to measure progress.

Marcus & Davis do lay out a helpful framework for building trust in AI systems, including stress testing, understanding the costs of failures, building in modularity and maintainability, and so on. This is good guidance, but it would be really helpful to see more detail or case studies under these headings, to the level of specificity of other works like Weapons of Math Destruction and Made by Humans.

So, maybe I was hoping for “Refactoring AI” rather than “Rebooting AI”. The book certainly clearly describes problems with the current state, and desirable characteristics of the future state. On balance, the technical arguments may indeed be sounder than my concerns. If you’re curious, I would encourage you to read it and draw your own conclusions. Ultimately, however, I’m disappointed because I didn’t leave inspired and equipped with new insight and new tools for improving AI today, tomorrow, and the day after.

The Lockdown Wheelie Project, Part 3

In Melbourne’s COVID-19 lockdown, I’ve wheelied over 17km. Not all at once, though.

Over three months, I’ve spent 90 minutes with my front wheel raised. I’d like to keep it up, but as lockdown has gradually relaxed and routines have changed, I’ve landed the wheelie project, for now.

Read the full article over on Medium at The Lockdown Wheelie Project, Part 3.

More Sankey for Less Confusion?

Confusion Matrixes are essential for evaluating classifiers, but for some who are new to them, they can cause, well, confusion.

Sankey Diagrams are an alternative way of representing matrix data, and I’ve found some people – who are new to matrix data, like business domain experts who are not experienced data scientists – find them easier to understand. Also, some machine learning researchers find Sankey diagrams useful for analysing data and classifiers.

So, I have posted simple code for visualising classifier evaluation or comparisons as Sankey diagrams. Maybe it will be useful for others, as well as fun for me.

The code combines large portions of Plotly Sankey Diagrams with the essence of the scikit-learn confusion matrix and lashings of list-comprehension code golf.

The scenarios supported are:

  1. Evaluating a binary classifier against ground truth or as champion-challenger,
  2. Evaluating a multi-class classifier against ground truth or as champion-challenger,
  3. Comparing multiple stages of a decision process, or multiple versions of a binary classifier, for instance over time, or hyper-parameter sweeps, and
  4. Comparing multiple versions of a multi-class classifier.

Example confusion matrixes as Sankey diagrams

See the code on Github.
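For a flavour of the approach (a minimal sketch with illustrative counts, not the repository code itself), a binary confusion matrix can be drawn as a Sankey diagram like this:

# Minimal sketch: a binary confusion matrix as a Plotly Sankey diagram.
import plotly.graph_objects as go

labels = ["Actual positive", "Actual negative",
          "Predicted positive", "Predicted negative"]
# Flows left to right: TP, FN, FP, TN (illustrative counts).
link = dict(source=[0, 0, 1, 1], target=[2, 3, 2, 3], value=[85, 15, 10, 90])

fig = go.Figure(go.Sankey(node=dict(label=labels, pad=20), link=link))
fig.show()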

Melbourne Data Visualisation Meetup – October 2020

I presented at the Melbourne Data Visualisation Meetup along with Ned Letcher, who gave an awesome overview of Python Libraries for Building Data Apps (an analytics superpower).

The topic was Data Visualisation – Good for Business.

Data Visualisation is key for gaining new knowledge, better engaging audiences, and driving meaningful action. We’ll share bespoke data visualisation case studies from our work at ThoughtWorks and examine their business impact. We’ll also discuss the ongoing role of the art and science of data visualisation in a world driven by machine learning.

Applying Software Engineering Practices to Data Science

I had fun recording this podcast on Applying Software Engineering Practices to Data Science with Zhamak Dehghani, Mike Mason and Danilo Sato.

The need for high quality information at speed has never been greater thanks to competition and the impact of the global pandemic. Here, our podcast team explores how data science is helping the enterprise respond: What new tools and techniques show promise? When does bias become a problem in data sets? What can DevOps teach data scientists about how to work?

ThoughtWorks Technology Podcasts