Why the Australian COVIDSafe App Failed

It was a pleasure to collaborate with some of my colleagues on this article in The Australian newspaper, and to put my name to it. The article is titled Why the COVIDSafe App Failed and may require a subscription to access.

The tl;dr

The COVIDSafe experience has been an education. If one thing is clear, it is that if we are going to pin all our hopes on a piece of public health technology, it must be built on sound health evidence and a solid platform of trust to have any real value in protecting the communities it serves.

Data Visualisation Podcast

It was fun to join the ThoughtWorks Tech Podcast again, with Zhamak Dehghani, Alexey Boas, and Ned Letcher, this time to talk about Getting to grips with data visualization.

A vast array of powerful data visualization tools is gaining traction in enterprises looking to make sense of their data sets, for instance D3, Bokeh, Shiny and Dash. In this episode, our team explores the concept of data visualization as part of a complete digital experience, spanning the workflows and journeys of a wide variety of users.

ThoughtWorks Technology Podcasts
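
For readers who haven't used any of the tools mentioned above, here is a minimal sketch (not from the podcast, and purely illustrative) of what getting started with one of them, Bokeh, looks like: a simple line chart rendered to a standalone HTML file. The data values and output filename are placeholders.

```python
# Minimal Bokeh sketch: render a line chart to a standalone HTML page.
from bokeh.plotting import figure, output_file, show

output_file("example.html")  # placeholder output path

# Create a figure with labelled axes and plot a small illustrative series.
p = figure(title="Example trend", x_axis_label="day", y_axis_label="value")
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)

show(p)  # writes example.html and opens it in a browser
```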

Applying Software Engineering Practices to Data Science

I had fun recording this podcast on Applying Software Engineering Practices to Data Science with Zhamak Dehghani, Mike Mason and Danilo Sato.

The need for high quality information at speed has never been greater thanks to competition and the impact of the global pandemic. Here, our podcast team explores how data science is helping the enterprise respond: What new tools and techniques show promise? When does bias become a problem in data sets? What can DevOps teach data scientists about how to work?

ThoughtWorks Technology Podcasts

Step Up on AI

I provided commentary on our need to step up on AI capability and governance in Australia for this story in The Australian newspaper.

I was quoted extensively in the article, but I wrote a bunch more notes that might be of interest.

Further Commentary

  • In the regrettable case of the Knightscope security robot and the curious case of Facebook bots, we should consider governance of product development as well as unexpected behaviours of AI systems. Governance in product development means introducing beneficial innovations in a safe manner. In self-driving vehicles, we could contrast the measured approach of Waymo (Google) – incorporating test tracks, an extensive simulation program and human supervision – to the cavalier approach of Uber – flouting state regulations to rush vehicles onto public streets. The Knightscope case may be a failure to discover, design against, and test the product’s potential failure modes in a safe environment, rather than an inherent failing of AI. The Facebook case demonstrates the value of being able to actively discontinue product research without any wider adverse effects. This is not to deny sometimes unexpected behaviours of AI systems, or deny the risk posed by poorly governed product development, but rather to focus the conversation on how to safely harness the benefits of AI products.
  • Ethical and regulatory frameworks are valuable and necessary, as AI is among our most powerful technologies. There are a number of valid concerns based on bad or worst case scenarios for weaponisation, mass displacement of workers, systemic data-driven discrimination, the erosion of democratic society, hostile self-improving systems, etc. With agreed ethics and global frameworks for AI research and development, customer and citizen data regulations, and active governance of wider societal change, we can benefit from AI without exposing ourselves to worst case scenarios. Given that ethical and regulatory frameworks and broader policy changes will take some effort to establish, and that incentives will remain for actors to circumvent them, I also think education and the private sector have major roles to play in improving understanding and bridging gaps in the short term.
  • Education is key to an informed discussion about the benefits and risks of AI. The challenge for institutions is to keep up with the state of the art. We need to create forums for public and private researchers and product developers to engage with policy makers and other public institutions. And then we need evidence-based policy formulation. Australian primary schools already teach cybersecurity in early years; let’s bring AI into the curriculum too. Education at all levels should go beyond a fundamental understanding of AI to developing the skills needed to contribute to, thrive in, and continue to shape a workplace and society where many routine cognitive tasks are automated.
  • The private sector is already providing some responses to ethical challenges in the absence of regulation. For instance, Volvo has stated it will “accept full liability whenever one of its cars is in autonomous mode”. Technology companies – aware their social license is being eroded by issues central to AI such as “filter bubbles”, programmatic advertising alongside objectionable content, and mass data collection – are introducing features designed to benefit users and citizens. Examples include BuzzFeed’s Outside Your Bubble, Google’s recent YouTube ad restrictions, and Apple’s CoreML support for AI on-device to maintain data privacy. We should also be encouraging the Australian private sector to take a leadership role and to develop the technology and governance expertise to enable this.
  • Australia could be a leader in this field. We do have many of the right ingredients. However, the current reality is that the EU is leading regulatory change, with 2018’s General Data Protection Regulation set to extend existing data provisions to effectively create a “right to explanation” for users about whom algorithmic decisions are made. GDPR is already affecting Australian organisations with European operations. GDPR could require a complete overhaul of common AI approaches and is driving research into making AI systems more understandable. For Australia to take the lead in setting the global AI agenda would further require a different type of domestic politics to what we have seen over the last decade in respect of pressing global and technology issues, such as climate change and energy.