May 5, 2023
The Regulation of ChatGPT
Garrett Burnett

UK Regulation
As discussed in my previous post (here), the UK has been slow to regulate artificial intelligence. The white paper published in late March indicated that a major regulatory overhaul is not coming anytime soon. The newly created Department for Science, Innovation and Technology has delegated regulation to existing bodies, in the hope that each department will regulate AI according to the needs of the businesses within its jurisdiction. Unfortunately, this approach is hopeful at best. Without a strong central set of regulations or a central regulatory body, the UK cannot expect AI to be regulated properly and fairly across every sector. The UK government has set out what it considers “guiding principles,” but these principles are left to the interpretation and discretion of the reader. Will AI be fairly regulated in the UK? As it stands, no.
Troubles in Italy
ChatGPT has been banned in Italy amid mounting privacy concerns, making Italy the first Western country to outlaw the AI. ChatGPT is also unavailable in China, Iran, North Korea, and Russia, but that unavailability was a choice made by OpenAI, the company that developed ChatGPT, rather than a ban imposed by those governments.
The Italian SA (the Garante, Italy’s data protection authority) has temporarily banned the popular AI chat platform until OpenAI complies with Italy’s demands: the development of an age verification system, a reduction in inaccuracies, and an increase in privacy protections for users. According to the Italian SA, there was a “data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service” on March 20th. The order goes on to state that there is “no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
The Italian SA has imposed a 20-day deadline for OpenAI to address the concerns it has raised; if OpenAI does not comply with the order, it will face “a fine up to EUR 20 million or 4% of the total worldwide annual turnover.”
According to a New York Times article by Adam Satariano, OpenAI says:
“We actively work to reduce personal data in training our A.I. systems like ChatGPT because we want our A.I. to learn about the world, not about private individuals. We also believe A.I. regulation is necessary.”
EU Regulation
A POLITICO article from 3 March 2023 highlighted the concerns ChatGPT raised as the EU developed its AI regulations, and the chatbot has led to new debates in the European Parliament. Prior to the rise of OpenAI’s ChatGPT, the European Parliament had been developing the Artificial Intelligence Act. This proposed act would ban several applications of AI, such as social scoring and facial manipulation, and would also categorize other AI applications based on their level of risk. The introduction of ChatGPT made that risk assessment particularly difficult, because ChatGPT can be used in both negative and positive ways.
The leading lawmakers on the proposed AI Act encouraged categorizing ChatGPT as a “high-risk” application, but right-leaning groups opposed this blanket categorization, arguing that ChatGPT has many functions that should not be considered “high-risk” at all. Despite these opposing views, the European Parliament must work together to develop a set of regulations that is widely accepted and easily implemented.
In all, regulations are coming for AI, and ChatGPT has been a prominent point of conversation. With the recent temporary ban in Italy, you can expect major shifts in the way ChatGPT operates in Europe and around the globe.
