Category: LLM
-

Antifragile AI Architectures
AI is full of contradictions: capable but unreliable, local improvements that create externalities, generalist models evaluated against specific criteria, and so on. Antifragility is a framework that deals in contradictions too, and it seems an appropriate lens through which to explore AI systems architecture, as I had used it in an earlier era to explore hand-crafted…
-

LLMs are lineage black holes
Data lineage is important to most organisations, even if they don’t make use of it. Systematically capturing the upstream provenance and downstream consumers of any piece of data is critical to trusting the utility of that data and understanding its impacts, at any scale beyond a handful of Excel spreadsheets. The nature of lineage When…
-

LLMs as text simulators
I’ve often written here about developing systems that leverage simulation. Simulation combining physical processes, information systems, and crowd behaviours. Simulations that support organisational decision making, customer experiences, and learning. And in late 2025 we were having a moment where more people were starting to describe Large Language Models (LLMs) as simulators of text. Simulation of…
-

GenAI in Data Platforms
I was part of a panel on the Impact of GenAI on Modern Data Platforms recently, hosted by the Data Engineering Melbourne meetup. It was great to chat with MC Ryan Collingwood and fellow panellists Rahul Trikha, Peter Barnes and Tony Nicol in front of a large and curious crowd. Like the crowd, I felt…
-

GenAI stone soup
GenAI (typically as an LLM) is pretty amazing, and you can use it to help with tasks or rapidly build all kinds of things that previously weren’t feasible. Things that work some of the time. The soup But do you find yourself reworking large chunks of generated content, or face major hurdles in getting a…
-

A gentle introduction to embeddings at the inaugural GenAI Network Melbourne meetup
I was thrilled to help kick-off the GenAI Network Melbourne meetup at their first meeting recently. I presented a talk titled Semantic hide and seek – a gentle introduction to embeddings, based on my experiments with Semantle, other representation learning, and some discussion of what it means to use Generative AI in developing new products…
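The core idea behind embeddings can be sketched with a toy example. The vectors and word choices below are made up for illustration, and real models use hundreds or thousands of dimensions, but the mechanic of comparing embeddings by cosine similarity is the same:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "embeddings" for three words.
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
truck = [0.1, 0.9, 0.4]

# Semantically related words end up with more similar vectors.
print(cosine_similarity(cat, kitten))  # close to 1.0
print(cosine_similarity(cat, truck))   # noticeably lower
```

A game like Semantle works on exactly this principle: your guess is scored by how close its embedding is to the hidden word's embedding.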
-

LLM WTF
What Token Follows (WTF) when generating text with a Large Language Model (LLM)? This notebook (which you can run in Colab) and its companion slide deck are my perfunctory (don’t say tokenistic) attempt to demystify GenAI for a general technology audience, specifically: how text is generated by LLMs. The premise of the notebook is to demonstrate and…
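The "what token follows" loop can be sketched with a toy distribution. The probabilities below are invented for illustration; in a real LLM a neural network computes them over a vocabulary of tens of thousands of tokens, but the sampling step at the end works the same way:

```python
import random

# Hypothetical next-token probabilities after "The cat sat on the".
next_token_probs = {
    "mat": 0.6,
    "hat": 0.25,
    "moon": 0.15,
}

def sample_next_token(probs, temperature=1.0, rng=random):
    # Temperature reshapes the distribution: low values concentrate
    # mass on the most likely token, high values flatten towards uniform.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

random.seed(0)
print(sample_next_token(next_token_probs))
```

Generation is just this sampling repeated: append the chosen token to the context, recompute the distribution, and sample again.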