We ask ChatGPT: Should AI be regulated?

We asked ChatGPT whether AI should be regulated, and its answer was YES! Although ChatGPT gave a very 'thoughtful' answer, it missed some interesting economic considerations around AI regulation.

By now, we have all heard about the AI boom. One rising star that has been making headlines since its launch in November 2022 is OpenAI's ChatGPT – a chatbot built on the GPT-3 family of large language models, which were developed using supervised and reinforcement learning on text datasets drawn from the internet up to the end of 2021.

The AI market is projected to surge to $641.3bn by 2028 from only $51.1bn in 2020, according to one forecaster. Given the speedy uptake, the discussion about the need for AI regulation can only intensify.

Who better to quiz about the outlook for AI regulation, we thought, than ChatGPT itself? Its answer was much more mellow than we expected.
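
For anyone who wants to reproduce the exchange, here is a minimal sketch of how the same question could be put to the model programmatically through OpenAI's API. It assumes the openai Python package (version 1.0 or later) and an OPENAI_API_KEY environment variable; the model name is illustrative rather than the exact system we chatted with.

```python
# Minimal sketch: posing our question to a ChatGPT model via OpenAI's API.
# Assumes the `openai` package (>= 1.0) is installed and OPENAI_API_KEY is set.
# The model name below is illustrative, not necessarily the system we queried.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Should AI be regulated?"}],
)

print(response.choices[0].message.content)
```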

We asked: Should AI be regulated?

ChatGPT reply:

'Yes, AI should be regulated to some extent. The increasing use of artificial intelligence (AI) technology has led to concerns about its potential impacts on society, including issues related to privacy, bias, accountability, and transparency. It is important to establish clear guidelines for the development and deployment of AI to ensure that its benefits are realized while minimizing any negative consequences.

Regulation can help ensure that AI is used in an ethical and responsible manner and that the rights of individuals are protected. This may include guidelines on data privacy and protection, standards for algorithmic bias and transparency, and requirements for accountability in the event of AI-related harm.

However, it is important to strike a balance between regulation and innovation. Overly strict regulations could stifle the development and deployment of AI, while a lack of regulation could result in its misapplication and misuse.

In summary, AI regulation is necessary to mitigate potential negative consequences and ensure that the benefits of AI are realized in an ethical and responsible manner.'

As economic advisers ourselves, we were impressed by the measured tone of the first sentence: 'AI should be regulated to some extent' – spoken like a true consultant!

More substantively, ChatGPT made some crucial points.

First, it picked up on the increasing use of AI and the concerns being raised around it. This issue was already being discussed widely across the internet before the end of 2021 – the cut-off date for ChatGPT's training data.

Second, it identified the most important, albeit most self-evident, harms that may arise from the use of AI:

  • privacy;
  • bias; and
  • lack of transparency.

Third, it recognised some of the subtler benefits that regulation can deliver: standardised codes to address and mitigate bias, assurance that AI is deployed ethically, and accountability for AI developers and providers.

Finally, ChatGPT spotted that overregulation could potentially have a chilling impact on innovation, although it failed to spell out exactly how.

Overall, it was quite an impressive answer from an AI, covering most of the obvious aims, benefits and risks of AI regulation.

But, let's remember that ChatGPT was trained on pre-2022 data. The debate over AI regulation has quickened considerably since then, so it's perhaps not surprising that ChatGPT missed several important issues that AI regulation can, and should, address.

First, ChatGPT did not discuss the potential for regulation to build users' trust in AI: more trusting users share more data with AI systems, which improves the accuracy and efficiency of AI models. A Frontier evidence review found that an increase of one point on a five-point trust scale leads to a significant rise in data sharing. This can generate an 8% gain in firm productivity, with associated economy-wide benefits.
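
To put that channel in cash terms, here is a rough back-of-the-envelope sketch. The 8% productivity figure is the one quoted above; the firm's output figure is entirely hypothetical.

```python
# Back-of-the-envelope sketch of the trust -> data sharing -> productivity channel.
# The 8% productivity gain is the figure quoted in the text; the firm's
# annual output below is a purely hypothetical illustration.
PRODUCTIVITY_GAIN = 0.08          # 8% gain in firm productivity

annual_output_gbp = 50_000_000    # hypothetical firm: GBP 50m annual output
extra_output_gbp = annual_output_gbp * PRODUCTIVITY_GAIN

print(f"Extra annual output from greater data sharing: GBP {extra_output_gbp:,.0f}")
# -> Extra annual output from greater data sharing: GBP 4,000,000
```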

Second, although ChatGPT identified that AI regulation could hurt innovation, it did not pinpoint how this might happen. AI regulation would create compliance costs for AI companies, diverting resources away from future R&D investment. Indeed, if regulation is overly complex, the costs could be so excessive that some companies are forced to exit the market. Furthermore, given that AI technology is hard to define, ambiguous regulation could catch many firms in its net, including non-AI companies with small in-house AI developments. The resulting compliance costs would be disproportionate to the value those non-AI firms extract from their in-house AI, dampening innovation.

Third, ChatGPT did not tackle the impact of regulatory asymmetry across major jurisdictions. If AI firms are operating in countries with different regulatory structures, their compliance costs will be multiplied. In that case, they might decide to operate only in regions where they can earn a higher ROI. Alternatively, they might choose to align with a regulatory regime that, because of its comprehensive nature, serves as a template for other countries with similar regulatory set-ups.

Overall, we thought ChatGPT's reply was decent, given that it has not been exposed to new data since the end of 2021. The conversation about AI regulation has progressed since then, but many areas are yet to be decided: the EU's AI regulation (a draft of which falls within ChatGPT's dataset) is still taking shape, and other countries are having similar debates. Indeed, it would be interesting to have another chat with ChatGPT once its dataset has been updated. Maybe then its answer would be more sophisticated – although perhaps not as comprehensive and nuanced as the response an economic consultant would come up with.