I was thrilled to help kick off the GenAI Network Melbourne meetup at its first meeting recently. I presented a talk titled Semantic hide and seek – a gentle introduction to embeddings, based on my experiments with Semantle and other representation learning projects, with some discussion of what it means to use Generative AI in developing new products and services. It was a pleasure to present alongside Rajesh Vasa from A2I2 at Deakin University.
Thanks to Ned, Orian, Scott, Alex, Leonard & co for organising. Looking forward to more fun events in this series!
Check out the slides.
Outline
Background on embeddings
- What are embeddings? Vector spaces, similarity with cosine, Word2Vec for text
- An example of image embedding and contrastive learning with triplet loss – making use of all your data, labelled or otherwise
- An example of representation learning for time-series data with autoencoders – “this wheelie does not exist [2020]” aka “WheelieGPA [2023]”
- Embeddings in LLMs tracing their lineage from Word2Vec
- Semantic algebra, sentiment vectors and ISMs
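To make the first and last points above concrete, here is a minimal sketch of cosine similarity and semantic algebra over hand-made toy vectors. The 3-dimensional "embeddings" below are invented for illustration – a real Word2Vec model would supply vectors with hundreds of dimensions learned from text.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" (invented values, not from a trained model).
vectors = {
    "king":  (0.9, 0.8, 0.1),
    "queen": (0.9, 0.1, 0.8),
    "man":   (0.5, 0.9, 0.0),
    "woman": (0.5, 0.1, 0.9),
}

# Semantic algebra: king - man + woman should land nearest to queen.
target = tuple(
    k - m + w
    for k, m, w in zip(vectors["king"], vectors["man"], vectors["woman"])
)
nearest = max(vectors, key=lambda word: cosine_similarity(target, vectors[word]))
print(nearest)
```

The same arithmetic-on-meanings trick underlies sentiment vectors and "isms": a direction in embedding space can encode a concept, which you can add to or subtract from other embeddings.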
The game Semantle and my solvers
- About the game, and playing with friends
- Live online solver demo!
- Solver project aims: experiment with embeddings, automate solutions, explore how people and machines work together on problems
- Modular solver design and search strategies, illustrated below
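The actual solver design and search strategies are illustrated in the talk slides; purely as a hedged sketch of the general shape of such a solver, here is a toy guess-and-filter loop over an invented five-word vocabulary. All names here (`VOCAB`, `solve`, `score_fn`) are hypothetical, and real Semantle reports rounded scores, so a real solver needs looser filtering than the exact-match tolerance used below.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vocabulary of invented 3-d embeddings; a real solver would
# load pretrained word vectors for a large vocabulary.
VOCAB = {
    "cat":   (0.9, 0.1, 0.2),
    "dog":   (0.8, 0.2, 0.3),
    "car":   (0.1, 0.9, 0.2),
    "truck": (0.2, 0.8, 0.3),
    "apple": (0.2, 0.2, 0.9),
}

def solve(score_fn, tolerance=1e-6):
    """Guess-and-filter strategy: each reported similarity eliminates
    every candidate whose similarity to the guess doesn't match it."""
    candidates = dict(VOCAB)
    while candidates:
        guess, guess_vec = next(iter(candidates.items()))
        score = score_fn(guess)
        if score >= 1.0 - tolerance:
            return guess  # similarity ~1.0 means we hit the secret word
        candidates = {
            word: vec for word, vec in candidates.items()
            if word != guess and abs(cosine(vec, guess_vec) - score) <= tolerance
        }
    return None

# Simulate a game whose secret word is "truck": the game's only
# feedback is the similarity of each guess to the secret.
secret = VOCAB["truck"]
answer = solve(lambda word: cosine(VOCAB[word], secret))
```

The modularity in this sketch is the split between the scoring oracle (the game) and the search strategy, so different strategies can be swapped in against the same game interface.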
Reflections on people and machines working together
- In high-dimensional search spaces, it's easy to get stuck on local extrema – people and machines can unstick one another
- A brief review of No Smooth Path to Good Design
- Discussion of the role of social machines in the Nonaka SECI cycle which I’d originally raised in Reasoning About Machine Intuition