Confusion matrices are essential for evaluating classifiers, but for some who are new to them, they can cause, well, confusion.
Sankey diagrams are an alternative way of representing matrix data, and I’ve found that some people who are new to matrix data – like business domain experts who are not experienced data scientists – find them easier to understand. Also, some machine learning researchers find Sankey diagrams useful for analysing data and classifiers.
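As a quick illustration (my own sketch with made-up numbers, not taken from the post), a two-class confusion matrix can be redrawn as a Sankey diagram using Plotly, with actual classes flowing into predicted classes:

```python
import plotly.graph_objects as go

# Hypothetical 2-class confusion matrix: rows = actual, columns = predicted
cm = [[85, 15],   # actual negative: [true negatives, false positives]
      [10, 90]]   # actual positive: [false negatives, true positives]

labels = ["Actual negative", "Actual positive",
          "Predicted negative", "Predicted positive"]

fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(
        source=[0, 0, 1, 1],   # actual class of each flow
        target=[2, 3, 2, 3],   # predicted class of each flow
        value=[cm[0][0], cm[0][1], cm[1][0], cm[1][1]],
    ),
))
fig.show()
```

Each cell of the matrix becomes a ribbon whose width is the cell count, so misclassifications stand out as crossing flows.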
Time to make a home for those occasional mathematical coding curios. I’ve kicked off with an analysis, using various Numpy approaches, of the gravity field around a square (or cubic) planet, inspired by a project my children were working on.
If you’ve ever wondered, this is what gravity looks like on the surface of a square planet (20 length units long, arbitrary gravitational units) …
… even though the surface would appear visually flat, it would only feel level in the centre of the face. Near a corner, you would feel like you were standing on a 45 degree slope, and because the surface would be visually flat, it would look like you could slide off the far end of it – weird and cool.
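For the curious, here is a minimal NumPy sketch in the same spirit (my reconstruction, not the original analysis): treat the planet as a two-dimensional square of unit density, sum the inverse-square pull of each mass element, and walk along one face from the centre to a corner.

```python
import numpy as np

L = 20.0                                    # side length, in length units
n = 200                                     # mass elements per side
cell = L / n
xs = (np.arange(n) + 0.5) * cell - L / 2    # element centres
X, Y = np.meshgrid(xs, xs)
dm = cell ** 2                              # mass per element (unit density)
eps = cell                                  # softening length, avoids singularities

def gravity_at(px, py):
    """Net gravitational acceleration (gx, gy) at a point, with G = 1."""
    dx, dy = X - px, Y - py
    inv_r3 = (dx**2 + dy**2 + eps**2) ** -1.5
    return dm * np.sum(dx * inv_r3), dm * np.sum(dy * inv_r3)

# Walk along the top face from the centre of the face to a corner
for px in np.linspace(0.0, L / 2, 5):
    gx, gy = gravity_at(px, L / 2)
    lean = np.degrees(np.arctan2(-gx, -gy))  # angle of "down" from vertical
    print(f"x = {px:5.2f}: |g| = {np.hypot(gx, gy):6.3f}, lean = {lean:4.1f} deg")
```

The lean is 0° at the centre of the face and approaches 45° at the corner, matching the description above.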
I imagine I’ll add to this over time. The bulk of learning to code for me through high school involved mathematical simulations of all kinds: motion of planets under gravity, double pendulums, Mandelbrot sets, L-systems, 3D projections, etc, etc. All that BASIC (and some C) code is lost now, but I’ll keep my eye out for more interesting problems and compile them here.
See also ThoughtWorks “Shokunin” coding problems of a mathematical nature that have piqued my interest over time:
Once upon a time, scaling production may have been enough to be competitive. Now, the most competitive organisations scale change to continually improve customer experience. How can we use what we’ve learned scaling production to scale change?
Metaphors for scaling
I recently presented a talk titled “Scaling Change”. In the talk I explore the connections between scaling production, sustaining software development, and scaling change, using metaphors, maths and management heuristics. The same model of change applies from organisational, marketing, design and technology perspectives. How can factories, home loans and nightclubs help us to think about and manage change at scale?
Read on with the spoiler post if you’d rather get right to the heart of the talk.
When software engineers think about scaling, they think in terms of the order of complexity, or “Big-O”, of a process or system. Whereas production is O(N) and can be scaled by shifting variable costs to fixed, I contend that change is O(N²) due to the interaction of each new change with all previous changes. We could visualise this as a triangular matrix heat map of the interaction cost of each pair of changes (where darker shading is higher cost).
Change interaction heatmap
Therefore, a nightclub, where each new patron potentially interacts with all other denizens, is an appropriate metaphor. Many of us can also relate to changes that have socialised about as well as drunk nightclub patrons.
Socialisation failures [BBC News]

The thing about change being O(N²) is that the old production management heuristics of shifting variable cost to fixed no longer work, because interaction costs dominate. The nightclub metaphor suggests the following management heuristics:
Socialise
Socialising change
We take a variable cost hit for each change to help it play more nicely with every other change. This reduces the cost coefficient but not the number of interactions (N²).
Screen
Screening change
We only take in the most valuable changes. Screening half our changes (N/2) reduces change interactions by three quarters (N²/4).
Seclude
Secluding change
We arrange changes into separate spaces and prevent interaction between spaces. Using n spaces reduces the interactions to N²/n.
Surrender
Surrendering change
Like screening, but at the other end. We actively manage out changes to reduce interactions. Surrendering half our changes (N/2) reduces change interactions by three quarters (N²/4).
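The arithmetic behind these heuristics is easy to sanity-check (a toy calculation of my own, not from the talk):

```python
def pairs(n):
    """Number of pairwise interactions among n changes, roughly n^2 / 2."""
    return n * (n - 1) // 2

N = 100
print("baseline: ", pairs(N))           # 4950 interactions
print("screen:   ", pairs(N // 2))      # 1225 -- about a quarter of baseline
print("seclude:  ", 4 * pairs(N // 4))  # 1200 -- about baseline/4 with n = 4 spaces
print("surrender:", pairs(N // 2))      # 1225 -- same arithmetic as screening
```

Screening and surrendering halve N and so quarter the interactions; secluding into n spaces divides them by n.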
Scenarios
Where do we see these approaches being used? Just some examples:
Start-ups screen or surrender changes and hence are more agile than incumbents because they have less history of change.
Product managers screen changes in design and seclude changes across a portfolio, for example the separate apps of Facebook/ Messenger/ Instagram/ Hyperlapse/ Layout/ Boomerang/ etc
To manage technical debt, good developers socialise via refactoring, better seclude through architecture, and the best surrender
In hiring, candidates are screened and socialised through rigorous recruitment and training processes
Brand architectures also seclude changes – Unilever’s Dove can campaign for real beauty while Axe/Lynx offends Dove’s targets (and many others).
We’ll explore the development of the Fireballs in the Sky app, designed for citizen scientists to record sightings of meteorites (“fireballs”) in the night sky. We’ll introduce the maths for AR on a mobile device, using the various sensors, and we’ll throw in some celestial mechanics for good measure.
We’ll discuss the prototyping approach in Processing. We’ll describe the iOS implementation, including: libraries, performance tuning, and testing. We’ll then do the same for the Android implementation. Or maybe the other way around…
So, you want your mobile or tablet to know where in the world you’re pointing it for a virtual reality or augmented reality application?
To draw 3D geometry on the screen in OpenGL, you can use the rotation matrices returned by the respective APIs (iOS/Android). The APIs will also give you roll, pitch and yaw angles for the device.
What’s not easy to do through the APIs is to get three angles that tell you in general where the device is pointing – that is, the direction in which the rear camera is pointing. You might want this information to capture the location of something in the real world, or to draw a virtual or augmented view of a world on the screen of the phone. The Fireballs in the Sky app (iOS, Android) does both, allowing you to capture the start and end point of a “fireball” (meteor/ite) by pointing your phone at the sky, while drawing a HUD and stars on the phone screen during the capture process, so you’re confident you’ve got the right part of the sky.
Azimuth and elevation
Roll, pitch and yaw tell you how the device sees itself – they are rotations around lines that go through the device (device axes). But in this case we want to know how the device sees the world – we need rotations around lines fixed in the real world (world axes). To know where the device is pointing, we actually want azimuth, elevation and tilt, as shown.
The azimuth, elevation pair of angles gives you enough information to define a direction, and hence capture objects in the real world (assuming the distance to the object does not need to be specified). However, if you want to draw something on the screen of your device, you need to know whether the device is held in landscape orientation, portrait orientation, or somewhere in-between; thus a third angle – tilt – is required.
Azimuth is defined as the compass angle of the direction the device is pointing. Elevation is the angle above horizontal of the direction the device is pointing. Tilt is the angle the device is rotated around the direction in which it is pointing (the direction defined by azimuth and elevation angles).
We can get azimuth, elevation and tilt with the following approach:
Define a world reference frame
Obtain the device’s rotation matrix with respect to this frame
Calculate the azimuth, elevation and tilt angles from the rotation matrix
It will really help to be familiar with the mathematical concept of a vector (three numbers defining a point or direction in 3D space), and be able to convert between radians and degrees, from here on in. Sample code may be published in future.
Define a World Reference Frame
World reference frame
We’re somewhere in the world, defined by latitude, longitude and altitude. We’ll define a reference frame with its origin at this point. For convenience, we’d like Z to point straight up into the sky, and X to point to true north. Therefore, Y points west (for a right-handed frame), as shown here. We define unit vectors i, j, k in the principal directions (or axes) X, Y, Z, and we’ll use them later.
What we want eventually is a rotation matrix made up of the components of the device axes a, b, c (also unit vectors) with reference to the world frame we defined. This matrix will allow us to convert a direction in the device frame into a direction in the world frame, and vice versa. This gives us all the information we need to derive azimuth, elevation and tilt angles.
We’ll describe the device axes as:
a is “screen right”, the direction from the centre to the right of the screen with the device in portrait
b is “screen top”, the direction from the centre to the top of the screen with the device in portrait
c is “screen normal”, the direction straight out of the screen (at right angles to the screen, towards the viewer’s eye)
We can write each device axis as a vector sum of the components in each of the principal world frame directions, or we can use the shorthand of a list of numbers:
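Reconstructed here from the definitions above (the original equation image is not reproduced), each axis is written as three components, and the matrix collects the device axes as its columns:

\[\vect{a} = a_i\,\vect{i} + a_j\,\vect{j} + a_k\,\vect{k} = (a_i, a_j, a_k)\]

\[R = \begin{bmatrix} a_i & b_i & c_i \\ a_j & b_j & c_j \\ a_k & b_k & c_k \end{bmatrix}\]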
To get a matrix of this form in iOS, request device attitude using the reference frame CMAttitudeReferenceFrameXTrueNorthZVertical and read off the rotation matrix. However, the returned matrix will be the transpose of the matrix above, so you will need to transpose the result of the API call.
In Android, you will need to correct for magnetic declination and a default frame that uses Y as magnetic north, and therefore X as east. Both corrections are rotations about the Z axis. The matrix will similarly be transposed.
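As a platform-neutral sketch of those corrections (illustrative only – the declination sign convention below is an assumption, so verify against a real device):

```python
import numpy as np

def world_rotation(api_matrix, android=False, declination_deg=0.0):
    """Device-axes-as-columns matrix in the X-true-north, Y-west, Z-up frame.

    api_matrix: 3x3 rotation matrix as returned by the platform API.
    declination_deg: magnetic declination (Android only), assumed positive
        when magnetic north lies east of true north.
    """
    R = np.asarray(api_matrix, dtype=float).T   # both platforms: transpose
    if android:
        # Android's default frame is X = east, Y = magnetic north, Z = up;
        # a single rotation about Z maps it onto X = true north, Y = west.
        theta = -np.radians(90.0 + declination_deg)
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
        R = Rz @ R
    return R
```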
Calculate View Angles
Device elevation angle
We can calculate the view angles with some vector maths. The easiest angle is elevation, so let’s start there. We find the angle that the screen normal (c) makes with the vertical (k) using the dot product cosine relationship.
Elevation is in the range [-90, 90]. Note also from the definitions above that such dot products can be extracted directly from the rotation matrix, as we can write:
\[\vect{c} \cdot \vect{k} = c_k \]
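Spelling out the step the text leaves implicit, elevation then comes straight from that matrix element:

\[\text{elevation} = 90^\circ - \arccos(c_k) = \arcsin(c_k)\]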
Device azimuth angle
Next, we calculate azimuth, for which we need the horizontal projection (cH) of the screen normal (c). We use Pythagoras’ theorem to calculate cH:
\[1 = c_H^2 + c_k^2\]
\[c_H = \sqrt{1 - c_k^2}\]
We then define a vector cP in the direction of c, such that the horizontal projection of this vector is always equal to 1, so we can use this horizontal projection to calculate angles with the horizontal vectors i & j.
\[\vect{c}_P = \frac{\vect{c}}{c_H}\]
Horizontal projection of device screen normal
We then calculate the angle the horizontal projection of the screen normal (cP) makes with the north axis (i). We get the magnitude of this angle from its dot product with i, and we get the direction (E or W of north) from its dot product with the west axis (j).
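In symbols (taking azimuth as positive to the east of north, one common convention):

\[\lvert\text{azimuth}\rvert = \arccos\left(\vect{c}_P \cdot \vect{i}\right) = \arccos\left(\frac{c_i}{c_H}\right)\]

with the azimuth lying west of north when \(\vect{c}_P \cdot \vect{j} = c_j / c_H > 0\).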
Note that because we’ve only used screen normal direction up until now, we don’t care how the phone is tilted between portrait and landscape.
Device tilt angle
Last, we calculate tilt. For this calculation we scale the screen right vector a so that the largest vertical projection it could have is equal to 1; its actual projection onto the vertical axis (k) then gives the sine of the tilt angle directly. As above, we divide a by cH.
\[\vect{a}_P = \frac{\vect{a}}{c_H}\]
We take the angle between aP and the world frame vertical axis k.
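Again in symbols:

\[\sin(\text{tilt}) = \vect{a}_P \cdot \vect{k} = \frac{a_k}{c_H}\]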
Note that as the elevation gets closer to +/-90, both the azimuth value and the tilt value will become less accurate because the horizontal projection of the screen normal approaches zero, and the vertical projection of the screen right direction approaches zero. How to handle elevation +/-90 is left as an exercise to the reader.
Sample Code
Sample code may be available in future. However, these calculations have been verified in iOS and Android.
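In the meantime, here is a minimal NumPy sketch of the whole calculation (an illustration of the maths above, not the app’s verified code):

```python
import numpy as np

def view_angles(R):
    """Azimuth, elevation and tilt (degrees) from a rotation matrix R whose
    columns are the device axes a, b, c in the world frame
    (X = true north, Y = west, Z = up)."""
    a, c = R[:, 0], R[:, 2]                  # screen right, screen normal
    c_i, c_j, c_k = c
    elevation = np.degrees(np.arcsin(c_k))   # elevation = arcsin(c . k)
    c_H = np.sqrt(1.0 - c_k**2)              # horizontal projection of c
    # atan2 combines the arccos magnitude and the sign test in one call;
    # east = -j, so azimuth measured clockwise from north = atan2(-c_j, c_i)
    azimuth = np.degrees(np.arctan2(-c_j, c_i))
    # tilt = arcsin(a . k / c_H); undefined at elevation +/-90 (c_H = 0)
    tilt = np.degrees(np.arcsin(np.clip(a[2] / c_H, -1.0, 1.0)))
    return azimuth, elevation, tilt

# Device in portrait, upright, screen normal pointing due north at the horizon:
# columns are a = j (west), b = k (up), c = i (north)
R = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(view_angles(R))  # -> (0.0, 0.0, 0.0)
```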
Stop testing on humans! Auto manufacturers have greatly reduced the harm once caused by inadvertently crash-testing production cars with real people. Now, simulation ensures every new car endures thousands of virtual crashes before even a dummy sets foot inside. Can we do the same for software product delivery?
Simulation can deliver faster feedback than real-world trials, for less cost. Simulation supports agility, improves quality and shortens development cycles. Designers and manufacturers of physical products found this out a long time ago. By contrast, in Agile software development, we aim to ship small increments of real software to real people and use their feedback to guide product development. But what if that’s not possible? (And can we still benefit from simulation even when it is?)
The goal of trials remains the same: get a good product to market as quickly as possible (or pivot or kill a bad product as quickly as possible). However, if you have to wait for access to human subjects or real software, or if it’s too costly to scale to the breadth and depth of real-world trials required to optimise design and minimise risk, consider simulation.
Learn why simulation was chosen for the design of call centre services (and compare this with crash testing cars), how a simulator was developed, and what benefits the approach brought. You’ll leave equipped to decide whether simulation is appropriate for your next innovation project, and with some resources to get you started.
Discover:
How and when to use simulation to improve agility
The anatomy of a simulator
A lean, risk-based approach to developing and validating a simulator
Techniques for effectively visualising and communicating simulations