5 October 2023 | Features | Copyright Channel | Sarah Speight

GDPR is the foundation for the EU’s AI rulebook

Following its first proposal by the European Commission more than two years ago, the landmark EU AI Act is just months away from adoption.

EU lawmakers green-lighted the draft text in June 2023, and adoption of the finalised text is forecast for early 2024, with an approximate two-year transition period before the Act takes effect.

The European Parliament says its priority is to “ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.

“AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”

The aim of the game is harmonisation, much like the transformative General Data Protection Regulation (GDPR) enacted more than five years ago.

In fact, the EU AI Act shares other parallels with its data-privacy predecessor, building on its basic framework: the goals of accountability and governance, a risk-based approach, and extraterritorial reach.

Parallel concepts

According to Ashley Williams, a partner at Mishcon de Reya, the Act is “dubbed by a lot of people as ‘GDPR 2.0’, or ‘GDPR Turbo’, because it's building on a lot of concepts that the GDPR already had.”

The comparison between the two pieces of legislation is striking, he observes.

“Conceptually and politically, there's so much crossover between the GDPR and the new AI Act,” Williams tells WIPR.

For instance, there is the ‘Brussels effect’ of wanting to be the high watermark for protecting fundamental rights and freedoms for data subjects.


Looking at the details of the draft text, Williams says the concepts are “very similar” to GDPR and that some of the wording is “identical”, such as transparency, explainability, and conformity assessments.

“It’s a copy and paste,” he says.

He also notes that there is an automatic assumption that companies will be “up to par” and compliant with GDPR before they are compliant with the AI Act.

“Now, we're building on top of those principles.”

AI is ‘much bigger’ than privacy

For Charles Helleputte—a partner at Squire Patton Boggs and head of the firm’s EU data privacy, cybersecurity & digital assets practice—there is one big difference between privacy and AI.

“AI raises much more fundamental and broader questions than privacy. To some extent, privacy is almost easy.”

While GDPR “made things more complicated” for organisations, “it's a relatively contained field of law that applies, as we all know, to personal data,” he tells WIPR.

Helleputte—who sits on the European Advisory Board of the International Association of Privacy Professionals (IAPP), and advised on both the GDPR and the AI Act—explains that “AI is much more broad because if you want to capture AI, you need to know how privacy works.”

There are other factors to consider too, such as ethics, he adds. “You have a lot of emphasis on the quality of the dataset, for example, how to make sure that outputs are not biased.

“I think that's something which is fundamentally different and which makes the upcoming regulations even more special, in that it seeks to govern something which is much bigger than just privacy.”

The risk pyramid

The focus of each regulation is different, with GDPR obligating data controllers and data processors, and the AI Act obligating providers and users of AI systems.

Accordingly, organisations and companies will need to navigate the two regulations carefully to identify whether AI Act or GDPR requirements, or both, apply.

Similar to GDPR, the AI Act ranks AI applications and systems in a ‘pyramid’ of four risk categories: unacceptable, high, limited, and low/minimal.


“Assessing how much control or how many obligations you need to comply with depends on where you fall on that risk matrix,” explains Williams.

As with GDPR, “you're having to balance rights against the risks and fundamental freedoms of individuals”.

Generative AI: Under GDPR’s shadow

Generative AI tools, such as ChatGPT, would fall into the ‘high risk’ category under the AI Act and would be obliged to comply with transparency requirements tied to copyright law.

These include disclosure that the content was generated by AI, and publishing summaries of copyrighted data used to train large language models (LLMs).

But this provision was a late amendment to the proposed text, and undoubtedly a response to generative AI’s rampant growth.

According to Helleputte, the initial version of the AI Act “didn't consider generative AI whatsoever”, but that was because tools of its kind did not yet exist.

Since then, generative AI has ballooned, attracting many column inches and several lawsuits brought by authors and creators against the likes of ChatGPT owner OpenAI, Meta and Google.

The European Commission had no choice but to adapt, and whether or not there is unanimous agreement on provisions for generative AI, a text must be agreed—even an imperfect one, says Helleputte.

When asked whether the proposed legislation will go far enough to protect against the misuse of data, he is circumspect.

“Between now and December, roughly, the discussion will be pretty intense. And we can probably answer your question by then,” he laughs.

And again, Williams points out, the grounding for this debate isn't actually the AI Act, but GDPR.

“It's using data for a purpose that you don't have a lawful basis for, or reusing it for a secondary purpose to improve your algorithms—that is already protected from a personal data perspective by the GDPR.”

AI definition ‘a must’

Independent bodies have been invited to submit recommendations on the text.

For example, the European DIGITAL SME Alliance has suggested that conformity assessment standards may be written in a way that is impractical for SMEs.

Mareike Gehrmann, a specialist IT lawyer and partner at Taylor Wessing in Dusseldorf, Germany, believes that any amendments to the draft text should first address clarity of definitions, particularly over what constitutes ‘AI’.

“First, I think we should have a clear definition of what AI is because [it is currently] very wide,” she says.

Smart meters, for instance, could be AI systems under the current text.

Also, the Act should be made more usable in practice, she notes. “When you cannot ‘live’ the law, and it's not practicable, then it will not be accepted by the market because the requirements are too difficult to implement.”

Benedikt Kohn, an associate in the technology, media and telecoms practice who works with Gehrmann at Taylor Wessing, points out that with the forthcoming European Parliament elections, such clarifications may be elusive.

“People say it will be a very fast process because everyone wants to get the AI Act before the election. So it's hard to say if there will be a good definition of AI.”

Trouble over transparency

Gehrmann says that while GDPR has very strict requirements regarding transparency, the AI Act is a different proposition.

“Very often our clients don't have all the information yet to be as transparent with the AI Act as you should be with GDPR,” she tells WIPR.

“The best approach at the moment is to take a kind of ‘own risk’ approach, and define which risks with respect to AI are acceptable for you because at the moment, you cannot be compliant 100%.

“You can reduce the risk, but you cannot reduce it to zero.”

That said, the two laws will complement each other, and if you’re up to speed with GDPR, you’re one step ahead, she adds.

GDPR: A good foundation

Kohn notes that GDPR provides a good foundation for the AI Act: organisations with good GDPR processes already in place can build on them.

For example, documentation, responsibilities and assignments will be the same for both laws, and “if you have good processes for GDPR compliance, then it will be easier for you to comply with the AI Act”.

The two laws will largely work in tandem, with some exceptions.

AI and personal data clearly intersect, but there will be instances when only the AI Act applies, Kohn explains.

“For example, if you have a truly anonymised data set to train AI—something that doesn’t really exist yet, but that might exist—or if you're using synthetic data to train your AI model, you might fall outside of GDPR and just be covered by the AI Act.”

However, he maintains that most importantly, privacy—or GDPR compliance—is only one element of AI compliance under the Act.

“It's an important element, but it's not the only one. You will have many other things to cope with and do by the time you are within the Act’s remit.”

Difficulty of ‘whole-market’ AI rules

One advantage of the AI Act is, of course, that it will be one law applicable to any company operating or selling AI systems within the EU.

But as with all broad-ranging regulation, there is no one-size-fits-all. This means the AI Act might not be ideal for AI companies, which may operate in multiple sectors and have to work with multiple regulators too.

Gehrmann adds that, as with cybersecurity, the EU can only regulate AI systems at a very high level, because too much detail might impede technical development.

“You would have to amend the AI Act again and again,” she explains.

“You can only have basic requirements for the whole market. And that's why you cannot be as clear as you should be.

“But it must be practicable. That's the challenge.”

