

24 May 2023

The emergence of ChatGPT and regulatory approaches to governance and disinformation in Singapore

Lian Jye Su, Chief Analyst, Omdia

The explosion of ChatGPT is redefining the digital landscape, but with it comes a new set of challenges that need to be addressed, from how to manage disinformation to regulatory approaches and governance. All of this must be balanced against allowing the creative and innovative nature of technology like ChatGPT to drive change forward.

Lian Jye Su, Chief Analyst at Omdia, talked to Simon Chesterman, Professor at the National University of Singapore and Senior Director of AI Governance at AI Singapore, to explore this further and see what procedures are being put in place, and what their impact will be in both the short and long term.

In your opinion, how do you think ChatGPT exacerbates the speed and penetration of disinformation?

Large language models like ChatGPT are very good at producing human-like text and that's obviously beneficial in many ways — but it also raises the risk of disinformation, partly because of the volume and partly because of the persuasiveness.

On the volume side, there is now the capacity to produce enormous amounts of text that can flood the information space. That text is going to be ever more personalized and ever more human-like. Today it might be obvious when you see a phishing exercise or some sort of disinformation campaign, but as that text becomes more persuasive, and potentially more directly targeted, it's going to get harder and harder to tell falsity from truth. Given the volume that could be produced, there's a real risk you will have trouble spotting it.

Can ChatGPT be used to counter disinformation?

Of course. The dilemma with all these tools is how they're going to be used. They can be used for good or for bad.

In terms of countering disinformation, I think there’s the possibility that these large language models could enhance human understanding by giving good answers to interesting questions. The challenge is going to be the ability to distinguish not only what is true from what is false, but also what is AI-generated from what is not. One idea here is the possibility of watermarking, so that at least you would know when you're interacting with a generative AI model as opposed to a human.

It looks like that's going to be extremely hard to implement, however, at least without some kind of hard regulation on the part of states. Even then, the ability for these models to be replicated and to go out into the wild means it's going to be harder and harder to keep them under control.

What is the best regulatory approach for an emerging technology like ChatGPT, considering too much regulation may lead to stifling innovation and too little may cause misuse?

When thinking about regulation, many people go immediately to the question of what you want to regulate. One of the challenges this raises is that if you say you want to regulate “artificial intelligence”, that term refers to a whole suite of technologies, really the statistical methods that underpin much of machine learning.

The difficulty with the way it is being deployed is that, the further you get down the line, the people who are actually using some of these technologies in consumer-facing activities really don't understand how the underlying model works.

We've confronted this in the past with product liability. As industrial processes became more sophisticated, we moved from an idea of buyer beware to product liability, and I think we might see something similar in the area of artificial intelligence, including large language models. In many ways the more interesting question is not what to regulate, but when, and that’s the real dilemma we face.

Many jurisdictions around the world, including small ones like Singapore, have a real wariness of overregulating and driving innovation elsewhere. The problem here is what's called the Collingridge dilemma. This is the idea that at an early stage of innovation it's actually quite easy to regulate new technologies, because the costs and barriers are low, but you don't yet know what problems you need to prevent. The longer you wait, the clearer those problems become, but the cost of regulation also goes way up.

Another thing to consider is why we want to regulate. Here I do think it's important that we focus on the harms we're trying to prevent, why we think regulation is important, and what we think it's going to achieve.

Any other challenges when trying to identify the best approach for AI governance?

Among the challenges confronting anyone looking at AI governance is, of course, the problem of how fast the technology is changing.

People felt that the problem we needed to solve was to come up with new rules and new laws that could govern these technologies.

This misunderstands the problem as being both too hard and too easy. Too hard in that it assumes that we need to come up with entirely new rules when most laws can cover most AI use cases. Too easy because it assumes that if only you had the right set of rules, it would be easy to implement.

What we need to focus on is how to apply existing rules to new use cases and where there are gaps in those rules. The two things we really need to focus on are human control and transparency. Human control means it should be difficult, it should be impossible, it should be illegal to produce uncontrollable or uncontainable AI. We also need appropriate levels of transparency: we're going to need to know not just how some decisions are made, but why. Focusing on those two things, human control and transparency, will, I think, solve many of the problems we're looking at.

What is the approach taken by the Singapore government in AI governance? What can the rest of the world learn from Singapore?

Singapore is in many ways an ideal place to think about AI governance: a small, technically competent, reasonably well-educated, politically uncontroversial jurisdiction with strong ties to all the major stakeholders, including Europe, the United States, China, and many other countries.

The government has tried to navigate an appropriate line between getting the benefits of these new technologies and minimising or mitigating the risks. The government has also been wary about overregulating. In 2019, a review of the Penal Code explicitly said we're not going to regulate AI through the criminal law, because all that would do is drive innovation elsewhere.

There's been real engagement with many stakeholders, not just companies and countries but individuals, looking at what we need to preserve in terms of values and engagement. Two things we've really emphasized in the Model AI Governance Framework are human centricity and an appropriate level of transparency and explainability.

The government's also been quite active in its engagement with industry generally, while also looking at particular use cases such as:

  • Having regulatory sandboxes where you can experiment with new FinTech applications overseen by the Monetary Authority of Singapore.
  • Engagement with car manufacturers and autonomous vehicle manufacturers so that we could test-bed autonomous vehicles in Singapore.

This practice of active engagement and a thoughtful, deliberative approach, starting with principles and then moving slowly down the path towards regulation, is one of the reasons why Singapore is a great forum for global conversations about how we're all going to solve these problems in our different countries and different companies moving forward.

Explore the developments in ChatGPT further and join Simon at Asia Tech x Singapore, where his session 'How to Combat Disinformation in the Age of ChatGPT' takes place on June 8 at 11am on the Applied Intelligence stage. Find out more.

 
