A Deep Dive into the EU AI Legal Framework
Episode Description
In this episode, Philipp Hacker explores which transparency rules truly matter, how to stay compliant without stalling innovation, and why explaining AI decisions is far more complex than it appears. Whether you're involved in building, managing, or working with AI tools, this episode offers clear, actionable insights into the broader implications of AI regulations and their impact today.
Guest Appearance
Philipp Hacker is a Professor and Chair for Law and Ethics of the Digital Society at European University Viadrina and co-founder of RECSAI. A distinguished authority on the intersection of law, ethics, and AI, he has published widely on algorithmic fairness and advised global institutions on AI regulation, offering invaluable perspectives on the future of AI governance.
Two Standout Quotes by Philipp Hacker
"Transparency is not just a technical issue; it's a question of trust. Companies need to build transparency into their AI systems from the start, rather than trying to add it later."
"The EU AI Act's reputation is worse than its reality. If companies take transparency seriously, they can comply without being overwhelmed."
Transcript
James: Welcome to our webinar. We're excited to have such a great turnout today and to be joined by Professor Philipp Hacker as we explore the fast-evolving world of AI regulations. I don't know if you know me, but my name is James Heron, and I'm the Head of Growth Marketing at Vendict. With experts from Tech, GRC, Legal, and Academia in our room today, we're diving into what recent regulations mean for the future of AI.
Philipp here is our leading expert who has advised the EU Parliament on the AI Act, which helps us navigate the key challenges around AI's growing role. We'll be asking him a few poll questions throughout and opening the floor for your questions so you can engage with Philipp on the issues that matter most to you.
One of Philipp's latest papers addresses one of AI's most pressing challenges today: transparency. Explaining how models like GPT-4 make decisions is no small feat. While the EU AI Act's transparency rules are only now phasing in, existing laws like GDPR already require companies to disclose AI-driven decision-making.
Just last week, the US and UK declined to sign an international AI agreement at the Paris Summit. The US decision reflects its broader anti-regulation stance, while the UK's reluctance is largely shaped by post-Brexit considerations and UK GDPR rules, along with a desire for control over its own AI governance. This highlights two competing approaches: the EU's regulation-first model and an innovation-focused stance with minimal government interference.
With the scene set, I'll hand over to Philipp to take it away. We will dive deeper with a Q&A afterward. Philipp, over to you.
Philipp: Well, thanks, James. First of all, thank you for having me here. This is a wonderful setting. I'm excited to see so many folks online. A very warm welcome from my end as well from Berlin.
I'll share a couple of slides and start us off with some thoughts to set the scene on the challenge of transparency, particularly with respect to generative AI. If you think about the main opportunities, but also the challenges, of using these fascinating technologies, one of the key concerns is transparency, or rather its absence: opacity.
When I talk to industry leaders and regulators, a lack of transparency is a recurring theme. We will discuss several aspects and dimensions of transparency, how they align with the legal environment, and what the law can do to proactively engage with the latest technological developments.
Let me start with a simple example. Imagine a company wants to hire an engineer. Because they expect to be flooded with CVs and applications, they use an LLM-based hiring system. Technically, this takes three steps [sketched in code below]:
Extracting information from the job description to identify key requirements.
Screening incoming resumes and extracting their key information.
Matching this information and generating a ranking, essentially producing a "candidate fit score."
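[Editor's note: the following is a minimal Python sketch of the three-step pipeline Philipp describes, not code shown in the webinar. `call_llm` is a hypothetical placeholder for whatever LLM provider API such a system would use, and the prompts are purely illustrative.]

```python
# Minimal sketch of an LLM-based hiring pipeline (hypothetical placeholder API).

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper: plug in a real LLM provider client here."""
    raise NotImplementedError

def extract_requirements(job_description: str) -> str:
    # Step 1: pull the key requirements out of the job description.
    return call_llm(f"List the key requirements in this job description:\n{job_description}")

def summarize_resume(resume: str) -> str:
    # Step 2: screen an incoming resume and extract its key information.
    return call_llm(f"Summarize the skills and experience in this resume:\n{resume}")

def candidate_fit_score(requirements: str, candidate_summary: str) -> float:
    # Step 3: match the candidate's information against the requirements.
    reply = call_llm(
        "Rate from 0 to 100 how well this candidate matches the requirements. "
        f"Reply with a number only.\nRequirements:\n{requirements}\nCandidate:\n{candidate_summary}"
    )
    return float(reply.strip())

def rank_candidates(job_description: str, resumes: list[str]) -> list[tuple[float, str]]:
    # Produce the ranking: highest "candidate fit score" first.
    requirements = extract_requirements(job_description)
    scored = [(candidate_fit_score(requirements, summarize_resume(r)), r) for r in resumes]
    return sorted(scored, reverse=True)
```

Note that nothing in this pipeline records why one candidate outscores another; the reasoning is buried inside the model calls, which is precisely the opacity problem discussed next.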
Sounds straightforward, but what could go wrong? Well, as someone who has worked on questions of opacity and discrimination in AI for quite some time, I can tell you that a lot can go wrong.
If we think more broadly about responsible generative AI, I see four key legal pillars that we need to keep in mind when things go wrong:
The EU AI Act is starting to kick in, with most rules becoming applicable from August 2026. But some rules and prohibitions are already in place.
Data protection law, with its transparency requirements, applies to any AI that processes personal data.
Copyright law, which also has transparency requirements of its own and is another source of litigation.
Liability concerns if something goes wrong and someone feels harmed.
This might seem like a lot of regulation, but I firmly believe it's all manageable if you partner with good legal support. But you have to take it seriously from the start—design compliance into your products rather than treating it as an afterthought.
However, significant challenges remain, particularly with generative AI. These systems are prone to hallucinations. I know a colleague at Georgetown University who was falsely accused of sexual harassment by ChatGPT. Once such misinformation is out there, it's hard to erase.
From a legal perspective, this can be a problem under GDPR, which requires data to be accurate. If a language model generates false information about someone, that might breach the data accuracy principle. This has already led to legal challenges. In Europe, we even have active lawsuits against OpenAI because of these hallucinations.
We also see geopolitical implications. Italy’s regulator has blocked DeepSeek from the Italian market over privacy concerns, and South Korea has done the same. These regulatory actions show the growing global tension around AI governance.
If you look at the bigger picture, the US is gearing up with massive investments in AI, while the EU is balancing between regulation and innovation. This is the crucial challenge we’re facing right now—how to ensure responsible AI without stifling innovation.
James: Thank you, Philipp. That’s an incredible breakdown. Let’s talk a bit about transparency. What makes transparency in generative AI so tricky? Are the challenges technical, or do legal and business factors play a role too?
Philipp: Great question. Transparency in generative AI is tricky for three main reasons:
Technical Complexity: Generative AI models are highly dimensional, with billions or even trillions of parameters. This makes them difficult to explain using traditional methods [see the sketch after this answer].
Legal Uncertainty: We have limited case law on transparency, and the new AI Act is still being implemented. This leaves companies unsure of their obligations.
Business Interests: Some companies prefer to keep their models as 'black boxes' for competitive reasons. OpenAI and DeepSeek are examples of companies that guard their model details closely.
But despite these challenges, transparency is essential. It’s not just a technical issue—it’s a question of trust. Companies need to build transparency into their AI systems from the start rather than trying to add it later.
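[Editor's note: a minimal sketch, not from the webinar, of the "traditional methods" of explanation that Philipp contrasts with modern scale. For a small linear model, the learned coefficients themselves are the explanation; no comparably direct account exists for a model with billions of parameters. The toy data and feature names here are invented for illustration.]

```python
# Traditional explainability: read the decision rule off a small linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy screening data: [years_experience, has_degree] -> hired (1) or not (0).
X = np.array([[1, 0], [3, 1], [5, 1], [2, 0], [7, 1], [4, 0]])
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Each coefficient states directly how a feature pushes the decision,
# an account a regulator or a rejected candidate can actually inspect.
for name, coef in zip(["years_experience", "has_degree"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

A billion-parameter transformer offers no such per-feature ledger, which is why post-hoc explanation of generative models remains an open research problem.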
James: Fantastic. If you could change or improve the EU AI Act, what would you do?
Philipp: I would do three things:
Pair the AI Act with a strong investment initiative to support responsible AI development in Europe.
Provide clear guidelines from the start, helping businesses understand their compliance obligations.
Make the rules more flexible for small and medium-sized businesses, avoiding overly strict compliance burdens.
The Act’s reputation is worse than its reality. If companies take transparency seriously, they can comply without being overwhelmed.
James: Brilliant insights, Philipp. Thank you so much for sharing your expertise with us. We have a fantastic audience with plenty of questions, so let’s dive into the Q&A.
Want to join the chat?
We are always happy to chat with GRC thought leaders and market innovators. If you are one, let's talk!