What’s the point of AI standards?


Authors

Guest blog authored by:

  • Callum Sinclair, Partner and Head of Technology and Commercial at Burness Paull LLP and Member of the Scottish AI Alliance Leadership Team.

  • Ansgar Koene, Global AI Ethics & Regulatory Leader at EY.

With research from Avery Epstein, Summer Intern at Burness Paull LLP.

Powerful potential

The regulation of artificial intelligence (AI) systems is a complex business.

Research and development into AI systems has been under way since the 1940s, but there have been great leaps forward over the past decade with the explosion in computational power and the enormous data sets now available. AI systems have the potential to transform our lives for the better, but such an outcome is far from inevitable. Without the necessary checks and safeguards, AI systems equally have the potential to cause untold damage to people and society.

No surprise, then, that governments, regulators and standards bodies are hard at work developing codes of practice and international standards, and even drafting new or updated legislation (most notably the EU’s draft AI Act). The focus of these efforts is generally on ensuring that AI is developed and deployed in an ethical, trustworthy and inclusive way. This, for example, is the mission of the Scottish AI Alliance, with which we are involved.

Developing global standards

In highly technical domains such as AI, the role of standards bodies, including the British Standards Institution (BSI) and the International Organization for Standardization (ISO), is especially important in helping to translate high-level concepts like ‘trustworthiness’ into practical requirements that can guide the implementation and use of technology.

ISO has been independently developing international standards since 1947, and over that time its membership has grown from 27 national bodies to 167. Although the organisation itself possesses no power of enforcement, its standards have historically carried significant weight and shaped industry practice (for example, in the automotive and aerospace industries). In the EU’s draft AI Act, ‘harmonised’ technical standards originating from recognised bodies like ISO are explicitly identified as a recommended means of demonstrating compliance with that Act’s regulatory requirements.

In June 2022, two new standards, described by some commentators as a landmark, were published by the ISO/IEC Joint Technical Committee for AI standards, ISO/IEC JTC 1/SC 42. These new standards, ISO/IEC 22989:2022 (Artificial intelligence concepts and terminology) and ISO/IEC 23053:2022 (Framework for AI systems using machine learning), are the spearhead of an array of more than 25 AI standards currently under development by ISO/IEC JTC 1/SC 42. These standards are being worked on by five working groups, focusing on areas such as:

  • foundational definitions (the focus of the two recently published standards referenced above);

  • data quality and life cycle;

  • trustworthiness, including ethics, safety and explainability;

  • use cases and applications; and

  • computational approaches and other more technical aspects.

And that’s before you add in a plethora of data standards already published and in development.

But why?

Why is this work (and my goodness it is a lot of work!) so important?

Well, as noted above, it is clear that international standards can be influential on industry development of technology, encouraging good and harmonised practices. Foundational standards like 22989 and 23053 lay down a common language and frame of reference, providing more certainty of definition (what do we mean when we talk about “machine learning” and “AI systems”?) and a basis upon which other standards can be better developed and understood. Standards also provide a baseline against which systems and organisations can be benchmarked and audited, both for conformity assessment before high-risk AI systems are placed on the market and for post-market monitoring to analyse the performance of those systems. All of this should bring assurance and benefit, not only to developers of AI systems, but also to consumers and to those whose data is processed by, or who are subject to decision-making by, those systems.

The standards can make for difficult reading, being highly technical in places, and it is not always easy to find and access those most relevant to a particular product or service. However, they are an important part of the assurance and risk management framework, and they help to ensure that all of our rights are protected and that AI systems continue to be developed and rolled out in an ethical, trustworthy and inclusive way.
