Italy Became the First Western Country to Ban ChatGPT. Here's What Other Countries Are Doing

  • Italy last week became the first Western country to ban ChatGPT, the popular AI chatbot.
  • ChatGPT has impressed researchers with its capabilities while worrying regulators and ethicists about its negative implications for society.
  • The move has highlighted the absence of concrete regulations, with the European Union and China among the few jurisdictions developing tailored rules for AI.
  • Various governments are exploring how to regulate AI, and some are thinking of how to deal with general purpose systems such as ChatGPT.

Italy has become the first country in the West to ban ChatGPT, the popular artificial intelligence chatbot from U.S. startup OpenAI.

Last week, Italy's data protection watchdog ordered OpenAI to temporarily cease processing Italian users' data amid a probe into a suspected breach of Europe's strict privacy regulations.

The regulator, which is also known as Garante, cited a data breach at OpenAI which allowed users to view the titles of conversations other users were having with the chatbot.

There "appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies," Garante said in a statement Friday.

Garante also flagged worries over a lack of age restrictions on ChatGPT, and how the chatbot can serve factually incorrect information in its responses.

OpenAI, which is backed by Microsoft, risks facing a fine of 20 million euros ($21.8 million) or 4% of its global annual revenue, whichever is greater, if it doesn't come up with remedies within 20 days.

Italy isn't the only country reckoning with the rapid pace of AI progression and its implications for society. Other governments are coming up with their own rules for AI, which, whether or not they mention generative AI, will undoubtedly touch on it. Generative AI refers to a set of AI technologies that generate new content based on prompts from users. It is more advanced than previous iterations of AI, thanks in no small part to new large language models, which are trained on vast quantities of data.

There have long been calls for AI to face regulation. But the pace at which the technology has progressed is such that it is proving difficult for governments to keep up. Computers can now create realistic art, write entire essays or even generate lines of code in a matter of seconds.

"We have got to be very careful that we don't create a world where humans are somehow subservient to a greater machine future," Sophie Hackford, a futurist and global technology innovation advisor for American farming equipment maker John Deere, told CNBC's "Squawk Box Europe" Monday.

"Technology is here to serve us. it's there to make our cancer diagnosis quicker or make humans not have to do jobs that we don't want to do."

"We need to be thinking about it very carefully now, and we need to be acting on that now, from a regulation perspective," she added.

Various regulators are concerned by the challenges AI poses for job security, data privacy, and equality. There are also worries about advanced AI manipulating political discourse through generation of false information.

Many governments are also starting to think about how to deal with general purpose systems such as ChatGPT, with some even considering joining Italy in banning the technology.

Britain

Last week, the U.K. announced plans for regulating AI. Rather than establish new legislation, the government asked regulators in different sectors to apply existing rules to AI.

The U.K. proposals, which don't mention ChatGPT by name, outline some key principles for companies to follow when using AI in their products, including safety, transparency, fairness, accountability, and contestability.

Britain is not at this stage proposing restrictions on ChatGPT, or any kind of AI for that matter. Instead, it wants to ensure companies are developing and using AI tools responsibly and giving users enough information about how and why certain decisions are taken.

In a speech to Parliament last Wednesday, Digital Minister Michelle Donelan said the sudden popularity of generative AI showed that risks and opportunities surrounding the technology are "emerging at an extraordinary pace."

By taking a non-statutory approach, the government will be able to "respond quickly to advances in AI and to intervene further if necessary," she added.

Dan Holmes, a fraud prevention leader at Feedzai, which uses AI to combat financial crime, said the main priority of the U.K.'s approach was addressing "what good AI usage looks like."

"It's more, if you're using AI, these are the principles you should be thinking about," Holmes told CNBC. "And it often boils down to two things, which is transparency and fairness."

The EU

The rest of Europe is expected to take a far more restrictive stance on AI than Britain, which has been increasingly diverging from EU digital laws since the U.K.'s withdrawal from the bloc.

The European Union, which is often at the forefront when it comes to tech regulation, has proposed a groundbreaking piece of legislation on AI.

Known as the European AI Act, the rules will heavily restrict the use of AI in critical infrastructure, education, law enforcement, and the judicial system.

It will work in conjunction with the EU's General Data Protection Regulation, the bloc's rules governing how companies can process and store personal data.

When the AI act was first dreamed up, officials hadn't accounted for the breakneck progress of AI systems capable of generating impressive art, stories, jokes, poems and songs.

According to Reuters, the EU's draft rules consider ChatGPT to be a form of general purpose AI used in high-risk applications. High-risk AI systems are defined by the European Commission as those that could affect people's fundamental rights or safety.

They would face measures including tough risk assessments and a requirement to stamp out discrimination arising from the datasets feeding algorithms. 

"The EU has a great, deep pocket of expertise in AI. They've got access to some of the top notch talent in the world, and it's not a new conversation for them," Max Heinemeyer, chief product officer of Darktrace, told CNBC.

"It's worthwhile trusting them to have the best of the member states at heart and fully aware of the potential competitive advantages that these technologies could bring versus the risks."

But while Brussels hashes out laws for AI, some EU countries are already looking at Italy's actions on ChatGPT and debating whether to follow suit.

"In principle, a similar procedure is also possible in Germany," Ulrich Kelber, Germany's Federal Commissioner for Data Protection, told the Handelsblatt newspaper.

The French and Irish privacy regulators have contacted their counterparts in Italy to learn more about its findings, Reuters reported, while Sweden's data protection authority has ruled out a ban. Italy was able to move ahead with such action because OpenAI has no establishment in the EU, meaning no single lead regulator oversees it under the bloc's privacy rules.

Ireland is typically the most active European regulator when it comes to data privacy, since most U.S. tech giants such as Meta and Google base their European headquarters there.

U.S.

The U.S. hasn't yet proposed any formal rules to bring oversight to AI technology.

The country's National Institute of Standards and Technology put out a national framework that gives companies using, designing or deploying AI systems guidance on managing risks and potential harms.

But the framework is voluntary, meaning firms face no consequences for not following it.

So far, there's been no word of any action being taken to limit ChatGPT in the U.S.

Last month, the Federal Trade Commission received a complaint from a nonprofit research group alleging GPT-4, OpenAI's latest large language model, is "biased, deceptive, and a risk to privacy and public safety" and violates the agency's AI guidelines.

The complaint could lead to an investigation into OpenAI and suspension of commercial deployment of its large language models. The FTC declined to comment.

China

ChatGPT isn't available in China, nor in various other countries with heavy internet censorship, such as North Korea, Iran and Russia. It is not officially blocked in China, but OpenAI doesn't allow users there to sign up.

Several large tech companies in China are developing alternatives. Baidu, Alibaba and JD.com, some of China's biggest tech firms, have announced plans for ChatGPT rivals.

China has been keen to ensure its technology giants are developing products in line with its strict regulations.

Last month, Beijing introduced first-of-its-kind regulation on so-called deepfakes, synthetically generated or altered images, videos or text made using AI.

Chinese regulators previously introduced rules governing the way companies operate recommendation algorithms. One of the requirements is that companies must file details of their algorithms with the cyberspace regulator.

Such regulations could, in theory, apply to any kind of ChatGPT-style technology.

- CNBC's Arjun Kharpal contributed to this report
