Thoughts from New Zealand

We were delighted that Kate MacDonald (New Zealand Government Fellow to the World Economic Forum and Special Digital Adviser, Ministry of Business, Innovation and Employment, Hikina Whakatutuki) was able to join our launch event as a panellist on the AI Beyond Scotland panel on Monday 22 March 2021, even though it was the middle of the morning for her.

Unfortunately, due to some unforeseen technical difficulties, the session was cut a few minutes short and we didn’t get the chance to take questions from the audience.

So we asked our panellists if they could answer some of the questions we received.

Kate (and her team) came back (just before our pre-election period!) and answered the following question from Alastair Philp:

Albert’s emphasis on human explainability seems important. If AI is seen as the product of ‘black boxes’, where people aren’t sure what has happened inside, I suspect there’ll be apprehension. What can we learn about this from around the world? How can we engage citizens, rather than experts, on this to build a ‘social licence’?

And here is what Kate (with input from her wider team) said in response.

I think there are a few things that can be done to address the issue of black boxes and explainability and build social licence for AI use.

Firstly, we need to recognise and address the concerns raised by the public.

Fundamentally, for most people, it’s a question of assurance – confidence that someone, somewhere, has checked it. Any decision-making delegated to an algorithm should be able to be explained by those using it. This assurance and explanation can occur at different levels and in different ways – through a government regulatory agency, a stamp of approval from a centre of excellence, and so on.

Getting social licence means we have to explain that assurance to people, and this means finding the right words that work for each audience. This is going to vary with technical expertise as well as across a range of different perspectives.

As well as this, we need to be realistic about the limits of explainability. AI gets a bit of a tough ride. We don’t have the same expectations of “explainability” when it comes to human decision-making – and there is a raft of evidence to demonstrate that human decision-making is almost always biased in some form or another. While a human decision-maker can account for their decision, we don’t know whether they are being truthful about how they made it, let alone what unconscious biases were at play or what physically happened to arrive at it. If a decision-making AI has been designed with excellence, following ethical, human-centred design practices, and an inclusive team has been involved in its development and use, there is a significantly higher chance that its decisions will be less biased than those made by the human process alone.

In my view, we need to start building stories about how AI is assisting in achieving outcomes so that people can start building trust. And, in some ways, the term “black box” actually encourages fear, so the language we use to describe AI and its processes is very important.

We have done a little bit of this in New Zealand – our Digital Council went out across New Zealand last year to talk to a range of people about the impacts automated decision-making (ADM) might have on their lives. The aim of the work was to get a sense of what people perceive ADM to be, how much they trust it, and what might need to happen for people to develop greater trust in its use. The report is due to be launched by our Digital Economy minister shortly.

The other point is that it's often the data that is biased, and an algorithm just represents or exposes that bias. Good data health and variety are far more important, and this should apply to all decision-making scenarios, whether human or otherwise. If we are to build social licence, we should be building that licence into the organisation as a whole and into all elements of the process, not just the code. This may take a while, as much data collection is flawed and many agencies and organisations do not have good data governance practices. Making sure you have the right environment, including good data governance, should be the first action undertaken before AI is introduced.

Along with good data practice, we also need to establish the right enabling foundations and be open and transparent about them. As a start, we need to make sure we have the necessary ethical principles embedded into the design and implementation of any AI system – and make sure these principles have been co-designed with as many people and communities as possible. We also need to make sure that the assurance processes are backed by ways for people to raise any perceived issues or unintended consequences. There are different ways to do this: have some sort of hub or centre where issues such as bias could be raised and looked at impartially; provide channels for feedback and input; or hold forums or events on the processes.

Actively seeking views and input from people, being open about what we’re doing, putting checks and balances in place, and providing opportunities to comment throughout the process should build trust in the system and how it is being used, creating social licence. And who knows – maybe one day we will find that AI is less biased than humans, and we will prefer AI to make decisions because of our own inherent bias.

If you missed our launch event on Monday, you can catch it again online.
