Navigating AI governance shouldn't be complicated!
Episode Description
In this episode, Ravit Dotan reveals how to simplify AI policy with just four critical gap-analysis questions, showing how US and EU standards can work as one streamlined framework.
Guest Appearance
Mads Nielsen is a specialist in risk analysis, dedicated to transforming cyber risk analysis from theoretical discussion into practical solutions that strengthen the risk posture and response of top-performing businesses. With expertise in cyber risk management and quantitative risk analysis, Mads excels at clearly explaining cyber risk exposure, tolerance, and security decisions.
Transcript
James: It looks like we already have great attendance, and we're just about on time, so I'll start the introductions.
I'm James Heron, Head of Marketing here at Vendict, and I'm thrilled to be hosting our latest webinar. Today we have an incredible guest joining us, someone truly fascinating in the world of AI ethics: Ravit Dotan. Ravit is an AI ethics advisor, researcher, and speaker who has made it her mission to transform AI ethics into a powerful business asset for tech companies, security teams, GRC teams, and procurement teams. She has an impressive background, holding a PhD in Philosophy, and she describes herself as a former academic philosopher who spent years researching the social and political impact of science and machine learning. That curiosity and deep thinking led her to the AI industry with one main goal: to make AI responsible.
Ravit didn't stop there. She founded Tech Better to put her ideas into practice, and through Tech Better she has helped organizations embed AI best practices into their business strategies, ultimately boosting reputation, compliance, and even revenue. Ravit has also built an impressive LinkedIn community, which I'm sure many of you are already part of, with her recent post on aligning the EU AI Act and the NIST AI Risk Management Framework gaining thousands of engagements. Her insights into a unified framework have sparked meaningful discussions, emphasizing the need for cohesive AI and cybersecurity strategies in today's digital landscape.
On a personal note, what sparked my interest in Ravit is her unique dual approach. At Vendict, we are focused on automating compliance intelligently, and I saw the same philosophy with Ravit. She’s not only focused on what responsible AI should look like in theory but also on how AI should work in the real world—in businesses, within teams, and across departments. As AI becomes a part of our everyday work life, she’s here to help us understand how to make it work responsibly and effectively.
So, welcome to the webinar, Ravit.
Ravit: Hi, what a great intro, thank you.
James: No problem. And also joining us today, to further enrich this conversation, we're honored to have Mads Nielsen co-hosting. He's a risk lead with a profound understanding of information security, risk identification, exposure analysis, and mitigation strategy effectiveness, and don't ask me to repeat that again. Many of you will remember Mads from our previous webinars; it's fantastic to have you both with us.
Hey Mads.
So, enough of the intros. Let's get started. We have a large audience and we’re ready to kick off. To get us started, Ravit, how did your journey in philosophy and AI ethics bring you to the work you’re doing today?
Ravit: Yeah, that's a great question, because people sometimes think that you should come into this space with a computer science background. You can, but you don't have to. For me, within philosophy, I was studying science, and I was specifically studying all the ways that social and political values are integrated into science. And I was thinking, what about this new discipline of machine learning? People think it's somehow devoid of those social and political influences, but it's really not.
So I started with just asking: can we expose this a little bit? Can we see exactly how this discipline is actually very much politically loaded? And then, after a while, I realized I didn't want to just be pointing it out; I wanted to be changing it. So that's when I decided I was going to go into the industry.
James: A bit of a eureka moment for you.
Ravit: Yeah.
James: If we're looking at frameworks and regulations, which I guess fall into your remit in AI ethics, they have always been central to compliance and security. But with the rise of AI, it feels like industry regulators are rushing to put a whole array of regulations in place. On one hand, that's really encouraging: it shows regulatory bodies are trying to take the lead, something they're often criticized for failing to do. But at the same time, there seems to be a lack of coordination or a unified approach. Would you say that's a fair assessment?
Ravit: Yeah, it's a very fair assessment. And it's true and problematic on different levels. One is the global level, because different countries have different approaches. But also within each country there might be sub-approaches: in the US, for example, different states and different cities have their own approaches. And because AI is not a technology confined to one location, it gets really problematic, because companies then have to deal with multiple regulations across multiple jurisdictions.
James: Absolutely. Okay, so that's very interesting in itself, especially with Mads and me coming from a European approach; it's good to hear your perspective from over the pond, as we say. But what are the leading approaches to AI regulation worldwide right now? And perhaps you can explain how we got here, the journey from where we started to today's AI policies.
Ravit: Yes, sorry, I'm dealing with my internet. I'm trying to get rid of this lag.
James: No problem. Mads, perhaps you want to speak a little bit about your background while we wait for Ravit to reconnect. We met previously, but it would be good for the audience to understand where you're coming from, so that when we talk more with Ravit they can understand your perspective, especially from the risk side of things.
Mads: Sure. So I come from an information security, risk, and compliance background. The reason I think there's an intersection between my work and Ravit's is that a lot of these regulations fall into existing information security and compliance departments, and also legal.
Mads: But what I’ve experienced is that we’ve been confronted with a growing number of regulations—especially coming from the European context. We have the NIS2 directive, we had GDPR before that, now we have the AI Act, and even DORA (Digital Operational Resilience Act). We’re facing a flood of new regulations around the integrity, reliability, and fairness of our digital systems.
This means we are often asked some pretty difficult questions—questions that we’re probably not used to answering. So I’m looking forward to hearing some practical answers from Ravit today.
James: Exactly. At Vendict, we specialize in answering questions that are hard to answer in the first place, so I can attest to that, especially in the world of AI regulation right now.
Throwing it back to you, Ravit: looking at AI regulations worldwide, perhaps you can explain how we got here. The journey from where we started to today's AI policies, how do you see it?
Ravit: So, how did we get to the current landscape of AI regulations? I think it began with some scandals around 2016. There were a couple of high-profile stories—one was the COMPAS recidivism algorithm, exposed by ProPublica for being discriminatory. Another was Amazon’s hiring algorithm, which was biased. Those stories really highlighted the harm AI could cause, and that’s when people realized something needed to be done.
Initially, it started with journalism exposing the issues. Then academia got involved, and we saw an explosion of AI ethics principles and frameworks. Around 2020, it became mainstream, and that started to trickle down to regulators.
Companies began to feel uncomfortable as they realized this would impact them, and they started lobbying. And so regulations began to take shape, though the approach varies by region.
James: And we’ve recently seen the introduction of the EU AI Act. If we’re looking at the EU and the US together, their roles in developing separate regulations around AI—could you talk a bit about those separate approaches and how they could impact companies operating across borders?
Ravit: Sure. I’d actually add the UK to that discussion, because we’re seeing three distinct approaches.
The EU has an AI-specific, overarching regulation approach. They set rules like "AI systems should do this, should not do that." The UK has chosen a different route—they don’t want a centralized AI law because they feel it might stifle innovation. So they focus on sector-specific and voluntary frameworks.
The US is different again. People think it's similar to the UK, but it’s not. The US leans on existing laws, like non-discrimination laws, rather than creating new ones. They leverage federal agencies to enforce these rules and provide guidance. So you have federal agencies like NIST (National Institute of Standards and Technology) creating guidelines, and then you have state laws and even city laws adding their own layers.
James: So as a business, if you’re US-based but working with EU and UK clients, you’re facing this explosion of different policies. How do you manage that?
Ravit: That’s the challenge. Because of this diversity in regulations, I would recommend not sticking to a narrow compliance mindset. Don’t think, "All I need to do is comply with this one law." It’s never just one law. There are AI laws, non-AI laws, discrimination laws, consumer protection laws, privacy laws—so instead of trying to tick every box, I suggest creating a unified compliance framework.
Take the core principles from each regulation, because they often align, and build something that supports compliance broadly. This way, you’re not just trying to keep up with every single change. You’re creating a strategy that is flexible.
Mads: Yeah, I completely agree with that. Trying to meet every individual regulation is a headache, but finding common ground and creating a unified approach is really the only way forward.
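One way to picture the unified framework Ravit and Mads are describing is a single list of internal controls, each cross-mapped to the regulations it helps satisfy, so a new law means extending the map rather than rebuilding the program. Below is a minimal sketch in Python; the control names and regulation references are illustrative placeholders, not an official crosswalk.

```python
# A minimal sketch of a unified compliance framework: one internal control
# set, each control cross-mapped to the regulations it supports. Control
# names and regulation references are illustrative placeholders only.

CONTROLS = {
    "ai-system-inventory": {
        "description": "Maintain an inventory of AI systems and their intended purposes",
        "supports": ["EU AI Act", "NIST AI RMF (Map)"],
    },
    "bias-testing": {
        "description": "Test models for discriminatory outcomes before and after deployment",
        "supports": ["EU AI Act", "NIST AI RMF (Measure)", "US non-discrimination law"],
    },
    "human-oversight": {
        "description": "Define who can review, override, or shut down an AI system",
        "supports": ["EU AI Act", "NIST AI RMF (Govern)"],
    },
}

def coverage(regulation: str) -> list[str]:
    """Return the controls that contribute to compliance with one regulation."""
    return [name for name, ctrl in CONTROLS.items() if regulation in ctrl["supports"]]

if __name__ == "__main__":
    print(coverage("EU AI Act"))              # all three controls
    print(coverage("NIST AI RMF (Measure)"))  # ['bias-testing']
```

The point is the direction of the mapping: controls first, regulations second, so keeping up with a regulatory change means updating the map, not redesigning the whole program.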
James: That’s a great insight. Just checking out our polls here—it’s interesting to see that at least half of our attendees today have seen your posts about the EU and US AI frameworks, Ravit. So I think we can dive deeper into that.
If we’re looking at small and medium-sized companies, many of our audience likely represent that demographic. For them, bringing on a CISO (Chief Information Security Officer) is often the first big step towards security, compliance, and governance. But then they’re tasked with navigating a maze of frameworks and policies. Ravit, you’ve developed a gap analysis document that I’d love for you to walk us through.
Ravit: Absolutely. So, this document is my attempt to take something incredibly complex and make it manageable. I created a unified framework that aligns the EU AI Act and the NIST AI Risk Management Framework. But I know that can be overwhelming for small to medium-sized enterprises (SMEs).
So I simplified it—boiling it down to just four key questions. And then if you want to take it a step further, it can expand to twelve. But we start with the basics.
The first question is about Discovery: How do you understand the AI’s purpose, its impacts, and its requirements? Surprisingly, this is often a weak point. Companies think they have it covered, but they overlook critical aspects—fairness, privacy, explainability. So it’s about making sure you’re aware of those blind spots.
The second question is Measurement: How do you know how well you’re doing? Are you discriminating or not? The only way to know is to measure it. And don’t just measure once—measure before you deploy and monitor as you go.
The third question is Mitigation: How do you minimize negative impacts? Whether they’re already happening or just risks, how do you address them? And finally, Accountability: How do you ensure internal and external accountability? This is about policies, roles, compliance, audits, transparency—all of it.
At the bottom of the document, I’ve also outlined maturity levels, from no activities at all to ad hoc, some structure, and eventually a fully adaptive, responsive approach.
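To make the measurement question concrete: below is a minimal sketch of one common fairness check, the demographic parity gap, which is simply the difference in positive-decision rates between two groups. The sample data and the 0.1 tolerance are illustrative assumptions; as Ravit notes, the same check should run before deployment and continuously afterwards, and real programs track several metrics.

```python
# A minimal sketch of the "measure it" step: a demographic parity gap,
# i.e. the difference in positive-decision rates between two groups.
# The sample outcomes and the 0.1 tolerance are illustrative assumptions.

def positive_rate(decisions: list[int]) -> float:
    """Share of positive decisions (1 = approved/hired, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Run the same check before deployment and again on live decisions.
gap = parity_gap([1, 0, 1, 1, 0, 1], [1, 0, 0, 0, 0, 1])
if gap > 0.1:  # the tolerance is a policy choice, not a universal constant
    print(f"Parity gap {gap:.2f} exceeds tolerance; investigate before release")
```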
James: That’s a really clear framework. And what you’ve just mentioned—start small, build from there—feels like a really practical approach.
Ravit: Exactly. Start with something manageable. Don’t try to be perfect. Just get started.
James: Absolutely. Mads, anything you want to add?
Mads: I think that’s spot on. Trying to do everything at once is a recipe for failure. But starting with those key questions and then building out as you gain confidence—that’s the way to go.
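In the spirit of "just get started", here is a minimal sketch of what the four questions and maturity levels could look like as a lightweight self-assessment. The scoring scale and names are assumptions for illustration, not Ravit's published template.

```python
# A minimal sketch of the four-question gap analysis as a self-assessment.
# The maturity scale mirrors the levels described above; scores and names
# are hypothetical, not taken from Ravit's published document.

from enum import IntEnum

class Maturity(IntEnum):
    NONE = 0        # no activities at all
    AD_HOC = 1      # occasional, unstructured efforts
    STRUCTURED = 2  # some structure and repeatability
    ADAPTIVE = 3    # fully adaptive, responsive approach

QUESTIONS = {
    "discovery": "How do you understand the AI's purpose, impacts, and requirements?",
    "measurement": "How do you know how well you're doing?",
    "mitigation": "How do you minimize negative impacts?",
    "accountability": "How do you ensure internal and external accountability?",
}

def gap_report(scores: dict[str, Maturity],
               target: Maturity = Maturity.STRUCTURED) -> dict[str, Maturity]:
    """List the questions where current maturity falls below the target level."""
    return {q: level for q, level in scores.items() if level < target}

current = {
    "discovery": Maturity.STRUCTURED,
    "measurement": Maturity.AD_HOC,      # a common weak point
    "mitigation": Maturity.AD_HOC,
    "accountability": Maturity.NONE,
}
for question, level in gap_report(current).items():
    print(f"Gap in '{question}': currently {level.name}, target STRUCTURED")
```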
James: Perfect. We’ve had some great insights so far, and the audience is definitely engaged. I see some questions coming in, and we’ll try to answer as many as we can as we go along.
Want to join the chat?
We are always happy to chat with GRC thought leaders and market innovators. If you are one, let's talk!