With the current interest in generative AI, I wanted to write a short post updating the framing from my older talk Reasoning About Machine Intuition (2017), which was intended to help broad audiences understand the impact and best application of AI solutions from multiple digital delivery perspectives.
Bicycles and automobiles share some features and are used for many of the same tasks, but have important differences that must be considered by transport planners. Recently, electric bikes have created another distinct mobility category that nonetheless shares some elements with existing categories. So it is with AI solutions. While AI may share some features of human intelligence and be suitable for some of the same tasks, understanding the differences is crucial for digital professionals to be able to reason about their capabilities and applicability.
Considering that products and features introduced in the ML boom of the late 2010s allowed sufficiently good decisions to be made on complex data without precisely specified rules (e.g., image classification), I chose to characterise these solutions as “machine intuition”, in order to highlight that their narrow artificial intelligences were most comparable to human intuition. However, important differences remain. And of course I used “reasoning” in the title to highlight the capability of human intelligence that wasn’t present in these solutions.
Similarities to human intuition
Opportunities, tasks or problems amenable to both approaches share these characteristics:
- Good decisions can be made based on ambiguous inputs, but mistakes will also be made,
- The approach is useful if solutions make enough good decisions in aggregate for a given context, and the volume and nature of mistakes is tolerable,
- The decisions may have limited explainability, even where explainability is important,
- The decisions are based on past experience and therefore subject to bias.
(NB there are many examples of particularly egregious, discriminatory and harmful mistakes that were not detected or considered prior to the release of AI solutions. Understanding what constitutes a mistake, and whether the decision itself is structurally discriminatory, must consider many ethical dimensions.)
Differences from human intuition
If a machine intuition approach looks suitable based on the characteristics above, we must also consider the differences below:
- The artificial intelligence remains narrow – it can only perform one specific task and only to the degree permitted by its training data. This is different to a human, who can easily generalise to a related task or accommodate new data. However, the same or similar data may be sliced multiple ways to support multiple related narrow tasks, and individual solutions are composable – maybe embarrassingly so – and composable with other digital services, all of which may substitute as a limited form of generality.
- Machine intuition requires vastly more training instances (many more even than any human expert might see in a lifetime) and concomitantly more computing power than human intuition. NB these training instances must also be presented in a specific format and are typically labelled by humans! In contrast, human intuition may only need a handful of examples, and can fall back on reasoning or inference from related experience if direct intuition fails (generalisation again). However, machines may be trained on a volume of data that no human could consume, and any trained model can be reproduced and deployed almost infinitely, so at some scale, low variable cost may compensate for high fixed cost.
- Machine intuition is possible at superhuman scales, in particular volume of data or requests, and speed of inference – for instance, translating all of Wikipedia in fractions of a second. Machine intuition may also exceed functional human performance at the relevant task, though effective measurement of this must carefully consider the task definition and potential for bias.
- Machine intuition will fail in some proportion of predictions as a matter of course (though we assume this is manageable) and is also subject to weird/trivial (adversarial) failure modes, such as changing a single pixel, that humans are generally robust to. Mistakes at scale from a single centralised ML solution may also be less acceptable than the aggregate mistakes made by many independent humans.
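The composability point above can be sketched in code. This is a minimal illustration with hypothetical stand-in functions (`detect_language`, `classify_sentiment` and their toy logic are invented for this sketch, not real models): each "model" performs exactly one narrow task, and chaining them approximates a limited form of generality.

```python
# Minimal sketch: composing narrow "intuition" models into a pipeline.
# Each function below is a hypothetical stand-in for a trained narrow
# model; each does exactly one task.

def detect_language(text: str) -> str:
    # Stand-in for a narrow language-identification model.
    return "fr" if "bonjour" in text.lower() else "en"

def classify_sentiment(text: str, language: str) -> str:
    # Stand-in for per-language sentiment models; a real system might
    # route to a different trained model per language.
    positive_words = {"en": {"great", "good"}, "fr": {"magnifique"}}
    words = set(text.lower().replace("!", "").split())
    return "positive" if words & positive_words.get(language, set()) else "neutral"

def pipeline(text: str) -> dict:
    # Composition: the output of one narrow task feeds the next,
    # substituting as a limited form of generality.
    language = detect_language(text)
    sentiment = classify_sentiment(text, language)
    return {"language": language, "sentiment": sentiment}

print(pipeline("Bonjour, c'est magnifique!"))
# {'language': 'fr', 'sentiment': 'positive'}
```

Note that a mistake by any stage propagates to every downstream stage, which is one reason the aggregate error tolerance of the whole pipeline matters, not just per-model accuracy.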
Anyone involved in delivery of AI solutions should keep these basic factors in mind in order to reason about product and engineering concerns. There is more to consider, but this is a good starting point.
Considering the current generative AI boom, I think of these solutions as “machine creativity” in order to highlight that their narrow artificial intelligences are most comparable to human creativity in a given medium. However, important differences remain.
Creativity for our purposes is taking some simple input and creating a complex output from the input, an output that also incorporates other ideas, knowledge and techniques beyond the input. That output may be almost any form of digital content, from natural language text, to code, to images, to music, to movies, to 3D scenes, to animated 3D movies. AI that is embodied or with access to manufacturing may also exhibit creativity in the real world, through the materialisation of digital designs.
Some applications of generative AI look more like search, databases, or even back-ends, but they resemble creativity as defined above in that they produce complex outputs from simple inputs, and by similar mechanisms.
(NB legal and ethical issues remain to be resolved with respect to some current mechanisms available to machine creativity to incorporate external ideas, knowledge and techniques. These include: copyright and potential for plagiarism, safety of input and output content and safety of human moderators, attribution and compensation for original creators, and so on.)
Similarities to human creativity
Opportunities, tasks or problems amenable to both approaches share these characteristics:
- There is not a single “right” answer; multiple answers will suffice, and it may even be desirable to generate multiple valuable options to pursue,
- Assessing the goodness of the outputs may include some degree of subjectivity,
- There may be surprising or non-obvious elements in the output, and again this may be desirable, or risky, or both,
- The process is likely iterative, with multiple rounds of review and editing.
Differences from human creativity
If a machine creativity approach looks suitable based on the characteristics above, we must also consider the differences below:
- The artificial intelligence has no agency or intent in its creativity; it simply processes inputs to generate outputs that are likely or typical based on its training data, described as “next token prediction” (where a token is an element of text, or patch of an image, etc). This may also appear as misalignment or the generation of unsafe content, which can currently be difficult to detect or control.
- The artificial intelligence has no logically consistent model of the world. The outputs it generates have a high probability of following the prompt, but are not necessarily logically consistent with the prompt or even internally consistent, which can lead to articulate but nonsensical, incorrect or harmful answers. (i.e., like intuition, it lacks reasoning.)
- The artificial intelligence remains narrow. It performs one generative task but it does not subject the output to a reasoned review or critique, as might be performed by a human to detect error. However, it is again composable, and tests could be applied after the generative step, though these too are fallible. There are many examples of creative AI tool-chains being shared by human creators to support complex creative workflows.
- Machine creativity also requires more training instances, but trained models are similarly almost infinitely reproducible for creating outputs. When leveraging current tools that include third-party training data, it is important to understand the provenance of those training instances – whether they were used with permission, whether they were curated in an ethical manner, and so on.
- There is by default no explicit attribution of influences on an output, although this is an area of focus and may be improved directly in creative systems or by hybrid means.
- Machine creativity is also possible at superhuman scales of speed and volume,
- Machine creativity is also subject to weird/trivial adversarial attacks, such as prompt injection.
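The “next token prediction” mechanism described above can be sketched with a toy example. The probability table here is invented purely for illustration (real models learn distributions over huge vocabularies, and usually sample rather than always taking the most probable token), but it shows the essential point: generation is just repeatedly choosing a likely continuation given the context, with no intent or world model involved.

```python
# Toy sketch of "next token prediction". The table below is an invented,
# hand-written bigram distribution; a real model learns these
# probabilities from vast training data.

NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def generate(prompt: str, max_tokens: int = 3) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tokens[-1]
        candidates = NEXT_TOKEN_PROBS.get(context)
        if not candidates:
            break  # no learned continuation for this context
        # Greedy decoding: always pick the most probable next token.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("the"))  # the cat sat down
```

Nothing in this loop checks whether the output is true or consistent with the world; it only asks what is likely to come next, which is exactly why articulate but incorrect answers can arise.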
As I’ve been guided by the set of machine intuition considerations above for a number of years, this is the initial set of considerations that I will take forward when considering applications for machine creativity, though I will continue to review their relevance in light of future developments.
In future, I’d like to address these considerations more specifically for the various roles in a digital delivery organisation, as per the original talk.