AI Governance Is an Illusion Without This
Episode Description
In this episode of ChatGRC, we sit down with AI governance and ethics expert Fabrizio Degni for a sobering look at the unseen risks in generative AI. From flawed assumptions about trust to ethical frameworks that lag behind technology, Fabrizio warns us: the biggest dangers are the ones no one’s talking about.
Guest Appearance
Fabrizio Degni is one of Europe’s leading voices in ethical AI. He heads up digital transformation at Italy’s National AI Agency, serves as president of the Italian Chamber of the Global AI Council, and researches governance frameworks at Politecnico di Milano. His work bridges emerging tech, institutional policy, and human responsibility.
Transcript
[00:01] Host: Hello, everyone. Sarkas here with Verdict, and with me today on GRC Talks is Fabrizio Degni. He's an AI governance and ethics expert who leads AI and digital transformation at Italy's national agency for AI, serves as president of the Italian Chamber of the Global Council for Responsible AI, and is a researcher at the Politecnico di Milano, where he focuses on ethical and regulatory frameworks for emerging markets. So if there's anyone who thinks about the ethical aspects of AI, it's Fabrizio. Fabrizio, thanks so much for joining us.

[00:43] Fabrizio: Thank you so much for this space.

[00:46] Host: Brilliant.
[00:47] Host: Fabrizio, we connected primarily because of a post you made on LinkedIn about a framework that maps out the major risks and concerns with generative AI, and that post was quite popular. I would love to hear from you: what made you write it and be so vocal about this?

[01:13] Fabrizio: Yeah. I believe that for those of us involved daily with technology and frontier innovation, before being hyped or enthusiastic about the progress we see every day, we should be scared of what we don't see, of what nobody speaks about. Because everything has pros and cons. If on one side there are benefits, we should ask what the cost is. Nothing in life is free, unfortunately. And for the cost we are paying to innovate on the AI front, I believe we should question whether each one is really worth it.
[02:19] Host: So what are some of the costs you're seeing that we're paying right now? I'm reading a lot and seeing a lot of predictions about what might happen. What do you think we're already seeing?

[02:33] Fabrizio: There is a wonderful study called AI 2027; maybe we can share it with listeners later. Some researchers laid out hypotheses for where this endless race to AGI could lead. AGI and ASI are the two steps beyond ANI, which is where we are now: AGI is supposed to be an intelligence with the same capabilities as humans, and ASI, superintelligence, would have capabilities beyond humans. Now, just this comparison is pretty freaky, because how can you compare the human mind, which we still don't know how it works, with something from a completely different domain? That alone is strange. Humans live inside a context. What we are is what we feel. Every day we are different; every moment we feel different, because we are all of it. It's not just the brain. It's senses, it's memories, it's emotion, it's empathy. It's millions of things happening in the same moment. And on the other side we have an artificial intelligence trying to match us in mechanical activities.

But now the next step is to use artificial intelligence exactly as a professional. There are already many applications able to act like your psychologist. And you saw recently that a big company is planning to put AI capabilities for skills inside toys. This is pretty dangerous, because in my opinion we don't have any study on the long-term effects of this. Look at healthcare: when there is a medicine to distribute, before it goes on the market you wait years. You do studies, you do research. You wait years because the effects are not immediate; maybe after two or three years something happens. But if you don't have this delay, this decanting time, you know, like with wine, if you don't have the time to understand what happened, you simply get the side effects. And the side effects are unexpected. So you understand that every day it's like we are holding a gun, because there are many companies in the same research fields and there is no global policy, regulation, or, let's say, ethical way of approaching the research.
[06:05] Host: So, I think that's a really interesting point. You mentioned toy makers adding AI, and it sounds like a great product. I've got kids; it would be pretty cute to see my daughter playing with a doll she can talk to that could maybe help her or teach her things. But obviously it would also collect a lot of data and information that I probably... no, not probably, I definitely don't want out there. So let's say there's a toy company out there, and one probably is working on it. What do they actually need to do right now? Can they just release a toy like that, maybe connected to OpenAI, so that everything the child says to it goes online? Is there anything today (we're recording this in July 2025) that prevents them or limits what they can do in any way?

[07:09] Fabrizio: In my opinion, and this is simply my opinion, the times are not ready. There is a time for everything, and this is not that time, as simple as that. We cannot push something out just because the market or the people want it, and that is another problem, no? Why does everything go ahead? Because someone is asking for it. And this problem is like a dependency. There is a phenomenon called FOMO, fear of missing out: you must be there; if you are not there, you don't exist. It's the same now with these chatbots. Everybody chats, and if you don't chat, you're a loser, or you're someone with fewer opportunities than the next person. But that is not true at all.
[08:15] Host: Even more so: you see all the CEOs basically telling their staff, you have to use AI or you're not going to be here, because they expect higher productivity from their people.

[08:27] Fabrizio: Correct. But I like to say that AI is not the solution. It's a tool, like many others we already have. If you don't have a clear idea of what the problem is, what the method is, what the process is behind the thing you want to fix, you cannot simply expect that if you bring AI in, it will all be fixed at once. Not at all. Because if you want to onboard AI into your process, you must be aware that you first need an assessment of your resources, just to check some technical aspects, like oversharing of files: people should see only what they are meant to see, nothing more. And the data: are the data good quality? Because if the data are not good, or you don't have a way to collect them digitally, AI can perform very poorly on them or produce side effects. Companies now are trying to shift from a human-driven approach to a data-driven approach. Okay, so the idea is: you cannot trust people, trust the data, because in the end you have to base decisions on data rather than on people. Fine, let's move to data. You focus your whole business on data, but if you don't have a core strategy for that data, it becomes a problem, because you know it exactly: garbage in, garbage out. If the data are not good, the output cannot be any different.
[10:25] Host: I want to ask, really because of your work, and maybe we can talk specifically about Italy, but I assume you understand the European framework. Europe has been fairly fast in the past at adopting privacy laws, GDPR, I think in 2018, and it was always fairly advanced, even with fines and enforcement. Do you see the same approach being taken right now? I mean, if I'm an operator, a software company in Italy, are there any guidelines, limitations, laws, or even policies that have been implemented that force me, as a software company, to act differently with my AI capabilities?

[11:22] Fabrizio: I believe that in Italy, but also in the other countries in Europe, this is enforced, because we have authorities and we have the famous EU AI Act, where companies have mandatory steps to follow. So you must respect the rules. I am more concerned about outside Europe, where there is no such regulation. Because the problem, and this is really my opinion, is not that Europe has regulation; it is that other countries do not. The problem is not that companies struggle to enter Europe because it is a market where you must comply. No, it's the opposite. All countries should be governed by rules, and in that way you also get a competitive market. Otherwise, what do the European companies do? They end up limited, because they follow the rules and the policies and the others do not. We talk about ethics and principles, and it's clear that ethics and morals are related to countries and cultural aspects. But there are some basic principles that should be the same for everybody. We cannot really live in a world where everyone has something different to say about them.
[13:03] Host: Yeah. I think what's really interesting is that now there's a new set of capabilities that companies can use very easily and very fast. And you're right, it's not just about a company operating in Italy. It could be a company in Costa Rica that launches software with users in Italy and in Europe, and they have to comply with these rules. But it also means it's not going to be easy for Europe to enforce rules on them. So then it comes down to the ethical aspects, what guides the company owners to operate ethically. That's the challenge. And I can tell you my personal view: we're in a capitalistic environment, and there's a race. Like you said, there's a FOMO of how do we take this opportunity and grow as fast as we can right now, break rules and do whatever we need to be the best AI company, the fastest-growing AI company. And I think it presents a real challenge.

I want to take the conversation to one of the things you mentioned, which is bias and discrimination, and how models are inherently designed in a way that actually amplifies the biases we have. I've seen it in my personal use: even if you just want to generate an image of a woman, it's always going to be a very pretty woman, a woman who conforms to modern Western standards of beauty. And that in itself is probably reinforcing it. But what examples are you seeing of AI amplifying biases and cultural discrimination?
[15:15] Fabrizio: Yeah, I totally agree with you, really. And this is another problem, because models are trained on a specific dataset, according to what the engineers collected or what the business required, with specific features and specific weights. There are many parameters that, for the most part, we are not aware of: the famous black box effect. We have a dataset that has been collected and aligned according to some principle, and then there is the part related to training. And this is pretty interesting, because the training reflects who trained the model. I have attitudes about where I am rigorous or strict; I have opinions; I am human, after all. So when I interact with a chatbot, I'm not talking with someone agnostic, whatever the topic. It already comes with a position. There is a famous test done around the release of DeepSeek, where they asked DeepSeek and the Western models the same question. Same question, totally different perspective. But the problem is that people are not aware that this is a perspective. They think it is the answer. And that's very different: it's a point of view.
[17:03] Host: It's actually kind of like humans, right? If you ask one expert what the answer to a question is, they'll give you the answer they feel is right, based on their entire life experience. You ask another expert, they'll give you a different one, and they'll always say it with confidence, as if it's the one true answer, right? So maybe in that sense...

[17:22] Fabrizio: Yeah, but this is a computer, no? You are a human; this is a computer. A computer is supposed to have no opinion, to be super partes, impartial, but it's not.
[17:33] Host: So maybe let's make it a bit more practical. Let's say I'm the chief risk officer or the chief information security officer, and I handle governance in my company. What do you think is my biggest challenge right now? Where do you think I would really get stuck, or need to put my focus?

[18:00] Fabrizio: Okay. There are many things you must be aware of and should look at, because first of all, I believe you cannot handle everything. In the company you must have a filter for the requests. And the first filter is: why do you need AI? What is the scope of the job? What do you want to do in this process? Because most of the time AI is not required. Or, if AI, which AI? Machine learning, deep learning, generative AI? Each level needs different guardrails, effort, and time. So the first request, the first filter, comes before you ever put your hands on the technology; that is the last step. First, be clear about the business requirements: the functional, non-functional, and domain requirements, all the things related to the process, the stakeholders involved, who is accountable, who is responsible. This must be built into the process; it cannot simply be a request of "please check whether the system is compliant, whether there is something we must be careful about." Because, and you know this, AI is cross-domain. In the company it is not about one single stakeholder; it is an operation to be followed by HR, ICT, compliance, legal, of course, and the people who analyze the IT infrastructure. Because another problem is this: okay, a company comes with an AI solution, but the AI solution sits inside an infrastructure. So first let's do an assessment of the infrastructure you propose to support the business with your AI activity. Because AI has to run somewhere, and people don't think about that somewhere; they think only about the last mile, the AI itself.
[20:27] Host: That's actually, I think, an incredibly important point, because as a chief information security officer or a risk officer, I can really only audit what I have access to, which, like you said, is the last mile. It's whatever vendor I'm buying from, but it's a black box. I don't really have visibility into the whole supply chain.

[20:52] Fabrizio: Yeah, correct.

[20:56] Host: So I'm also incredibly limited in what I can do. If I use a tool that is using AI, I can ask them where they store their LLM, whether they run it on an independent, closed server, but I don't really know, and I have no way of auditing it. How would I ever know? All I see is the text output. It's not even like code, where I could audit the code. Maybe if they let me, maybe in a very big enterprise deal, they would let you do that. But if I sign up for a new tool with a free trial and give it access to my Google Drive? That's it. I don't know where it's going.

[21:39] Fabrizio: Yeah. They send all your data in and train the models on it. It's serious; it's a serious problem. And I believe we are not really taking care of the side effects of all of that.
[21:56] Host: So, first of all, this is super interesting; I think there's a lot to think about here. In your own work, what are you focusing on right now?

[22:06] Fabrizio: Governance of the process with my colleagues, monitoring the market and its trends, and supporting the business and the projects where someone asks about AI. And most of the time we say, okay, maybe you can do without it.

[22:34] Host: Interesting. So, based on what you're seeing right now, to wrap it up with the last question: you're seeing the regulation, you're seeing what companies are doing, and you're seeing what steps are actually being taken to limit the risk. Where do you think we'll be six to twelve months from now?
[22:58] Fabrizio: The future? Hard question. According to Sam Altman, within July we should have GPT-5, and this GPT-5 should be something close to AGI. The global market and governments need to sit at the same table and think, ethically and from a governance perspective, about how they can speak the same language; otherwise it will just be an everyday competition. And that kind of competition, I believe, doesn't pay off for anyone. Really, it doesn't pay off for anyone.
[23:41] Host: Fabrizio Degni, thank you so much for your time and insights. It's been super interesting. I've read the white paper you posted, and I strongly recommend it to anyone dealing with this, even if you're just curious to see what is being done and what the real risks are. There are so many aspects that are under-discussed and should get a lot more attention, and it's all coming up very, very fast. So we need to watch what's happening, and I think the solution is not going to come solely from governments and countries. It will need to come from companies themselves deciding to develop ethically, to build things responsibly, to be more transparent about how they do things, and also to be transparent about what they don't know, because their users need to be aware of what they don't know, right? It should be okay to say, actually, there's a part of our system that works and we don't know exactly how. I know that's a very uncomfortable thing to say, but I think it's probably important to do. Fabrizio, would it be okay if our listeners contacted you over LinkedIn if they have questions?

[25:06] Fabrizio: Of course.

[25:08] Host: Thank you so much. Brilliant. Have a great rest of your day, everyone.
Want to join the chat?
We are always happy to chat with GRC thought leaders and market innovators. If you are one - let's talk!