
12 Jun 2023

The importance of governance, regulation and enforcement of AI

Guest Writer: Dr Jason Grant Allen, Associate Professor of Law and Director, Centre for AI & Data Governance

Dr Jason Grant Allen, Associate Professor of Law and Director, Centre for AI & Data Governance, Singapore Management University, Yong Pung How School of Law, shares his insights with us on the outlook for AI in 2023 and approaches to governance and policy.

What are the key issues around AI Governance, in your opinion?

The key issues in AI Governance include the following:

  • Ensuring AI systems are designed and deployed in a way that respects human values like fairness. 
  • Ensuring transparency in how AI algorithms make decisions (to the extent possible) and determining who is responsible for AI decisions (especially where transparency is not possible).
  • Safeguarding personal data and preventing unauthorized access or misuse.
  • Addressing biases that may be present in AI systems (for example, because of bias in the training data set) and preventing discriminatory outcomes.
  • Mitigating risks of AI systems being hacked, manipulated, or used for malicious purposes (in the age of Generative AI, this includes things like “deep fakes”).
  • Ensuring that the pace of AI development does not outstrip our understanding or control of AI capabilities (the “AI safety” problem, which has been getting a lot of press including from some very prominent scientists lately).
  • Ensuring that AI deployment takes place in a manner that augments human intelligence and capability rather than just displacing humans (for example, in the workforce).

The key issue is really a meta-issue: how do we move from these broad and sometimes pretty bland statements towards effective governance structures and mechanisms? What will they look like, and how do we implement them?

 

What is your view on AI Governance and what are the different approaches companies can take on this?

AI Governance can and should take many different forms, including private and voluntary initiatives such as voluntary codes of conduct, responsible contractual arrangements and terms of use, or even market-based discipline such as competition and consumer choice.

However, while it is important to look at AI Governance from the perspective of companies, it is essential to stress that AI Governance is not just a matter for companies: despite some claims to the contrary, companies should not be left entirely to their own devices in choosing the approach to AI Governance they take, and we cannot rely on market dynamics (for example) to get us home. AI Governance touches all our interests, and governments clearly have skin in the game. Recent developments have brought us squarely into the realm of talking about “regulation proper” as a central aspect of the AI Governance landscape.

There is a continuum of regulatory approaches that can be taken, from self-regulation by industry (for example, through standards promulgated by self-regulatory bodies), to what we might call quasi-regulation, to regulation proper. The right approach will depend on the context, and any given jurisdiction is likely to take a mixed approach, for example imposing “hard” regulations on certain aspects of AI development and deployment and relying on softer approaches in others. One major issue with hard regulation is that it often depends on hard definitions, and concepts like “artificial intelligence” can be very hard to define. The EU AI Act is likely to face problems of this nature because its definition of “AI” is very broad. It’s also important to remember that sometimes “softer” forms of regulation like industry standards can, despite their softness, actually be very effective in other dimensions, such as international harmonisation and flexibility.

In other areas such as international financial law, we have seen soft law play an important systemic role, and there is a body of thought that sees this as a special kind of “law” made by custom rather than legislatures. Recently, we have seen calls for the creation of an international body invested with authority by participating states to oversee AI regulation, using the example of atomic energy.

The other essential point to mention is that different aspects of “AI” need regulation and may benefit from different approaches. In the first instance, there are various layers to any AI tech stack—from the hardware to the training data to the model to the software at the application layer. Regulations directed at the training data might look very different to regulations directed at the application, and if we start designing regulation with this in mind we may produce a more effective suite of regulations.

Further, any AI system actually implemented will include social and human as well as technical components—whether it is a predictive or generative system, it is used by people (whether to replace other people or augment their abilities) and those people can and should interact with it. The more complex socio-technical assemblage is actually the most relevant object of regulation, not the “tech stack”. 

 

Why is there a driving need for AI Governance in business at the present time?

Every business needs to have an AI Governance concept that is appropriate to its domain and profile—there is no one-size-fits-all approach. This is important for reasons ranging from good corporate citizenship to a proper liability risk management strategy. At the current time, I think we all need further clarity and certainty on AI regulation, too, so that businesses can implement these exciting new tools within acceptable parameters and with confidence that they are behaving lawfully.

Hopefully, the substance of AI regulations will also help to protect us as a community (including businesses) from harm, as it has done with other beneficial but potentially risky technologies such as nuclear energy.

 

Do you have any lessons learned you could share on this when working with companies/organisations to implement regulations? What are the common issues faced, and what can be done to overcome them?

There are clear problems that may result when a regulatory definition is uncertain or overbroad: it is difficult to know when something falls under the definition or not, and as soon as some hard edges begin to emerge, technologies and business models are created to circumvent it. One approach to combat this dynamic is to adopt principles or outcomes instead of standards or rules, but that can cause problems of its own.

One common problem is to determine which aspects of “AI Governance” should be translated into hard-edged regulation, which aspects should be left to “soft law” (for example, international standards or industry codes of conduct), and how the various aspects of the AI Governance landscape fit together.

Common issues include the complexity of AI technologies, the pace of innovation outpacing regulatory efforts, and the need for interdisciplinary collaboration. Companies may face challenges in interpreting and complying with regulations, particularly when they operate in multiple jurisdictions.

Sometimes, no regulation may even be better than bad regulation. Building internal expertise, conducting thorough impact assessments, and engaging in dialogue with regulators can help companies overcome these challenges.

 

What does the future of AI look like in terms of regulation and the enforcement of this in the long and short term?

The future is going to bear the imprint of movements towards regulation by major players, particularly the EU and China at the present time. There is a danger, at present, of a race to be “first past the post” and to set the tone for AI regulation globally, for example amplifying the “Brussels Effect” that has been felt since the GDPR.

Aside from this, we can expect increased efforts to establish regulations that address ethical, privacy, and bias concerns associated with AI at various levels. Several middle powers, including Singapore, Canada, the UK and Japan—which are significant players in the space, albeit not at the scale of the US or China—are promulgating their own approaches to AI as well. The first big test of these approaches is whether they are broad and flexible enough to cover Generative AI as well as predictive systems, and whether they are calibrated properly towards AI applications in context.

In the long term, I think we will move away from “AI regulation” per se towards a situation where our legal system and regulations are simply well calibrated to cover scenarios that involve AI systems. This is inevitably a longer and slower process, and it can actually be hampered by too much regulatory “clutter”. There are, of course, aspects of the AI supply chain and tech stack that will require targeted regulation, but as a general-purpose cluster of technologies, “AI” is a very difficult object to regulate effectively.

 

Dr Jason Grant Allen joined us at the recent AI Summit TechXLR8 Asia, where he joined a panel taking a deep dive into ‘AI Risk Policy & Regulation – What to Look Out For in 2023’, as well as a panel on ‘International Dialogue – Global Comparative Perspectives on Regulating AI’.

If you want to find out more about this key topic, make sure you join us at The AI Summit London, 14-15 June 2023, where we will be exploring it with the leading experts in the field of AI. Check out the agenda and see what you can be part of.

 

 
