Quantifying Risk vs. Managing It: A CISO’s Perspective
Episode Description
Cyber Risk Quantification (CRQ) promised clarity. What many CISOs got instead was confusion, wasted budgets, and data no one believed. In this episode of ChatGRC, cybersecurity leader Ross Young lays out why traditional risk models fall short, and what to do instead. From procurement-driven frameworks to regulatory overload, he challenges how we measure and invest in security, especially in the age of AI.
Guest appearance
Ross Young is a veteran CISO, educator, and co-host of the popular CISO Tradecraft podcast. With a background that spans U.S. intelligence (CIA, NSA), enterprise cybersecurity (Capital One, Caterpillar Financial), and policy development (Team8, Johns Hopkins), Ross brings a rare depth of perspective to the evolving conversation around risk, compliance, and security leadership.
Transcript
All right.
Welcome to GRC Talks, where we dive into the most challenging topics of governance, risk, and compliance in the era of massive AI adoption.
I'm Asaf Katz.
And today, I'm excited to host Ross Young.
Ross has spent the last 20 years shaping how we think about cybersecurity.
His career has taken him from high-stakes intelligence work at the CIA and the NSA to senior leadership roles in the private sector, including CISO of Caterpillar Financial and Divisional CISO at Capital One.
But he's also the co-host of CISO Tradecraft, the creator of the OWASP Threat and Safeguard Matrix, and he served as CISO in Residence at Team8, which means he's seen most of the cool startups that we're seeing today, and he's taught at Johns Hopkins University.
With that unique mix of government, enterprise, and academic experience, I think Ross has a rare view on the challenges that CISOs and security teams in general are facing today.
So, Ross, thanks so much for joining me.
Yeah, it's my pleasure to be here.
Thanks again for having me on the show, Asaf.
Yeah, brilliant.
So I'd love to just dive right into how I came across your profile.
So I saw a post that you put out on cyber risk quantification, which, if I remember correctly, about 10 years ago was massive.
It was basically all the talk, and then methodologies and companies came out of it.
I've had some experience myself working in this space, and I'm very curious to learn a little more about your journey with implementing cyber risk quantification processes and views, because it's quite controversial.
Yeah.
Yeah.
So I had the chance of doing cyber risk quantification in multiple companies.
And for me, it did not work out well.
And I'll just kind of give you some of the observations or lessons learned that I have.
You know, other people may have different experiences, but for me, I've kind of changed towards some more qualitative type approaches.
So usually, if you're not familiar with cyber risk quantification, you start off with the formula, risk equals likelihood times impact, and you try to quantify that.
And you just take the example of phishing.
Hey, if I have 10 phishing attacks that are going to be successful in my organization, and each of these attacks costs $500,000 of harm, then I ultimately expect $5 million worth of annual impact or annual loss expectancy when you multiply these things out.
And that is your risk measurement.
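The formula Ross describes can be sketched in a few lines of Python. The phishing figures are the hypothetical numbers from the episode, not real breach statistics:

```python
# A minimal sketch of the classic risk formula Ross describes:
# risk = likelihood (successful events per year) * impact ($ per event),
# which yields the Annual Loss Expectancy (ALE).
# The numbers below are the hypothetical phishing example from the episode.

def annual_loss_expectancy(events_per_year: float, cost_per_event: float) -> float:
    """Expected annual loss: frequency of successful attacks times cost of each."""
    return events_per_year * cost_per_event

# 10 successful phishing attacks per year, each causing $500,000 of harm:
ale = annual_loss_expectancy(10, 500_000)
print(f"Annual loss expectancy: ${ale:,.0f}")  # Annual loss expectancy: $5,000,000
```

The arithmetic is trivial; as Ross goes on to argue, the hard part is that the two inputs are rarely known with any confidence.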
And it seems really good at first.
But what I found is every time I tried to put things on the risk register, it started becoming so hard to do because I didn't have good likelihoods and I didn't have good impacts.
And I'll just give you some examples.
Let's say we have a third party that we are going to--
Just to understand correctly, when you say you didn't have the right likelihoods, those are likelihoods drawn from what, benchmark stats around the frequency of these attacks?
Like, how did you?
Yeah, yeah.
So there are high level metrics, right?
So you can go in and say, hey, if you're a manufacturing company, you may have a 10 to 15% chance of a major breach happening to your company that year.
But that's very high level.
Once you start saying, well, what if I have a third party and that third party doesn't have a SOC 2 assessment, how much does not having a SOC 2 assessment increase or decrease the risk of using that third-party vendor to store my PII data?
There's really no information to find.
So is it a 50% likelihood or a 2% likelihood that an attack is going to happen and they lose your data?
I don't know.
And so what I found out is I have a vague likelihood times a vague impact, and then I have a very vague risk score.
And so I didn't feel comfortable giving those scores, giving those values, to my leadership team in risk meetings for risk acceptance.
I didn't trust the data.
They didn't trust the data.
And it just made me look really stupid to say, here's data that came from some arbitrary thing that no one actually believes in.
Like, I don't see why I'd put my career reputation on the line by presenting that in a meeting.
It just seems kind of dumb in my perspective.
Yeah.
And it's interesting because, the way I see it, and maybe I'm wrong, the reason you would put out a model like this is because there's a cost, right?
We want to spend now $2 million on improving our endpoint security or something.
So there's a question of ROI.
So it sounds like that question hasn't really gone away.
So how else would you justify an expense, if not by going with those Monte Carlo simulations and everything they do in order to quantify?
Yeah, and so what I like to do is, let's call it a rough order-of-magnitude type exercise.
So what I like to do is I go and I talk to the chief financial officer and I said, help me understand procurement card authorities.
How much can a manager approve?
Maybe a manager can approve 50,000.
Well, how much can a director approve?
Well, a director can approve, you know, 250,000.
Well, how much can a vice president approve?
Oh, he can approve $2 million.
And you figure out what those approval limits are.
And then you go in and you say, okay, when we assess this threat to our organization, which of these blocks do we think it needs to be in?
Is it zero to 50,000?
Is it 50,000 to 250,000?
Or is it 250,000 to 2 million?
Because now we know who has the risk authority to approve that decision based on procurement authorities, right?
Because if you can spend that, you can also approve that risk.
It's the same type of financial loss that could go either way.
And so when I started explaining this to organizations, they said, wow, that makes a lot of sense.
We're now empowering these people to make these approvals up to these specified authority limits.
And now instead of having to get the exact number, hey, is this $33,053 and some cents, which I don't believe we can get, can I actually make an educated guess to say, is this between $50,000 and $250,000?
Yeah, I think so.
I think it's in that ballpark.
And that usually is good enough versus trying to get thousands of people to do
all the likelihood and impact with exact measurements, which tends to be very, very chaotic and problematic, right?
So having this high-order-of-magnitude qualitative approach, I think, has worked really, really well.
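The bucketing Ross walks through maps cleanly to a few lines of code. A minimal sketch, using the illustrative approval limits from the conversation rather than any real company's policy:

```python
# A sketch of procurement-authority bucketing: instead of a precise dollar
# estimate, place a risk in the spending band of the role that can already
# approve that much money. The limits below are the illustrative figures
# from the conversation, not a real delegation-of-authority policy.

# (upper bound of band in dollars, role with authority to approve it), low to high
APPROVAL_BANDS = [
    (50_000, "Manager"),
    (250_000, "Director"),
    (2_000_000, "Vice President"),
]

def risk_approver(estimated_loss: float) -> str:
    """Return the lowest role whose procurement authority covers the estimated loss."""
    for limit, role in APPROVAL_BANDS:
        if estimated_loss <= limit:
            return role
    return "Board / executive committee"  # beyond VP spending authority

print(risk_approver(30_000))     # Manager
print(risk_approver(180_000))    # Director
print(risk_approver(1_500_000))  # Vice President
```

The point of the design is that the risk assessor only has to pick a band, a five-second choice, and the band itself names who is empowered to accept the risk.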
And did you find it, 'cause what I find is mostly people are either qualitative or quantitative and people who are data-driven, they wanna see data.
They're uncomfortable with anything else.
So have you found pushback on this type of approach, coming in with estimates rather than the other models that are offered?
No, I think this is the much preferred one because, you know, they selected where it fell.
So they tend to trust those numbers versus me giving them a math formula whose results they don't believe.
I think it's the really big thing, right?
The other thing that I would--
You're giving them a part in building the likelihood.
So they're building the assumptions.
Yes.
The other thing that I think is really, really key here is when they start to think about these numbers, it's valuable to them.
I think the only ones pushing for the detailed stats are the math and stats folks, which tend to be the FAIR risk quantification folks and people who really believe in cyber risk quantification.
When I went to the executive risk committees and I heard how compliance and how legal and finance talked about risk, they did not go into these like high level statistics.
They shared it in examples like this.
And so not only am I giving something that's valuable, I'm also giving something that is the norm for my risk committees.
Now, maybe if you're in a different organization, that may be different, but I think this is the way that's gonna resonate with more people in more industries.
No, I agree.
I think there's also, eventually it's a managerial decision to invest or to not invest.
And if I, I'm just kind of putting the board member hat and thinking, if I have to make a decision that is coming from a super sophisticated model,
that everyone says is correct, but I can't explain it.
I'm going to prefer the decision that I can explain if I'm asked.
Would you agree that that's the...
That's exactly right.
I think these are the things.
Like, imagine you have the most amazing neural net model that's 93% accurate, but it uses 50,000 different inputs.
Like, nobody can really understand how it actually derived that answer.
You just know it's a 93% accuracy, right?
So things like that...
As much as we'd like to use that, I feel like organizations still prefer the other method, and we haven't actually seen enough evidence to show that this cyber risk quantification outperforms it.
But I tell you what I have seen, I have seen a lot of organizations that have blown millions of dollars on cyber risk quantification and gotten horrible outputs.
And I think the risk of a bad deployment of CRQ is actually worse than not using CRQ at all.
I think the chances of that going bad are very, very high in most enterprise Fortune 500 companies.
So say more about that.
I mean, what examples are common?
So think about it this way.
If all I have to do is go in and pick, is it a $50,000 to $250,000 problem, or is it a $250,000 to $2 million problem?
That is really about a five second choice for me when I'm filling out this form.
If I have to spend the time to go and fill out, you know, 10 different variables and try to figure out what it is, and try to train everybody on what FAIR risk quantification is so they know how to fill out the formula.
That means every time that I do these risk approvals, which can be thousands in large companies, right?
I'm spending 20, 30 minutes.
Every single one of those has a cost.
We're spending man-hours that have, you know, a $250-an-hour cost behind them.
And not only that, but then we're going to go to the risk committees with data that no one believes, and we're going to debate over this crappy data and say, Hey, I don't trust this data.
I really think it's $30,000, not $250,000.
Like, they're going to do all this.
And this is all wasted time and a drag on organizations.
So when you start doing this across a large enterprise, I have seen millions of dollars being spent on cyber risk quantification, and it hasn't driven out better outcomes, which is why I prefer this more qualitative measurement based on procurement authorities that enables risk approvals at the appropriate level.
Got it.
I think going forward to the era of massive AI adoption, which, you know, there's no question, people just don't know what's coming.
We simply don't know, from a cybersecurity and AI point of view, what we're going to be enabling, because we're enabling the good guys, but also the bad guys, and also the good guys who are just making mistakes, now at scale.
So, you know,
All these quantification models, they're based on the past, on prior.
I mean, a model that tries to be a good model will try to take the most accurate piece of information it can and put together a stat that makes sense.
They will also be defendable, right?
If you're saying there's a 13% probability of being attacked, it's probably backed up by something.
It's not made-up.
But I feel, I wonder what your thoughts on this.
I feel that now that we're going forward to the era of massive AI adoption, the stats and our intuition are no longer valid.
I think what you're highlighting is the data changes too frequently and the technology does as well.
So for example, imagine car insurance.
We got, I don't know, 50 years of teenagers driving to figure out what's the likelihood of them getting in an accident.
And it's not just one teenager or 100 teenagers.
We're talking millions of teenagers in national crash data around the US alone.
Now, tell me, how much data do we have on MCP or agentic AI attacks?
It's less than a couple of years old.
And we don't have millions of companies submitting how many, you know, data breaches and attacks that they've had.
And so it's very much the wild west of trying to get good data to actually do cyber risk quantification.
And by the time that data comes around, the attacks will have changed, right?
So this is where I think in cybersecurity, it's a very difficult problem to do
high levels of risk quantification because the data isn't there to substantiate the statistics.
Let's talk about that.
So the approach of trying to quantify cyber risk exists because we're trying to do better, trying to invest and decide where.
And then there are the regulations coming in, like the EU AI Act and others, which are basically trying to tell you the general guidelines of what you should be doing.
And by the time they come out, there's already advancement to that.
So, firstly, what are your thoughts about the efforts being made at a government level and a governance level?
And how do you see them progress now?
And then how do you combine those two in terms of getting a right approach to making investments today?
So today, organizations are motivated
primarily by profit.
I think we have to understand most organizations are for-profit, right?
Which means if I have to choose between spending more dollars on marketing and sales, which can contribute to more profit, versus spending more money on cybersecurity, that's a trade-off decision.
And I only want to do that when, A, I clearly know there is a major risk that I'm reducing, or, B, I have a legal requirement to do so.
And what I will say is, when we look at the most highly regulated sectors, things like the banking sector, historically they have proven to be more secure than unregulated sectors.
So I think compliance very much does have its place.
And so I'm actually very pro compliance in the world, right?
I know there's a lot of compliance.
I don't always say that compliance made 10 years ago is the right compliance to still do today because attacks change.
But I do think we are going to see a number of AI threats and issues, right?
If I build a chatbot and that chatbot hallucinates, if that chatbot makes promises for my organization, which are binding, and it doesn't have the authority to do that, there's a lot of risk and litigation that comes with those things.
Not only that, but nobody wants their data stolen, nobody wants their biometrics lost, nobody wants to be discriminated against.
And so having laws that say, organization, show me how you're going to protect against these types of things, bias, discrimination, and other things, when these AI models are doing, you know, things with my data, I think is very important for organizations.
And so now, even though they may not have the financial incentive to do it from a risk-reduction standpoint, they're going to have the legal requirement to do it under the EU AI Act and similar laws in US states.
So I think it's going to drive a better outcome.
May not be the financially optimal outcome, but I think it's going to reduce risk.
I'm very curious, because you also lecture and you train security teams and CISOs.
I'm very curious what sort of mindset shifts you feel you need to make when you train a team, the sort of philosophy you feel we're stuck on, and what we need to be looking at when it comes to the massive challenge that lies ahead.
I think the biggest problem with organizations today
is they spend money in the wrong patterns.
And let me give you an example by this.
If you talk to an organization, you say, well, how did you spend your cybersecurity budget?
They would typically say, well, I have to perform these functions: asset management, vulnerability management, cyber threat intelligence, running a SOC, all of these things.
And then they're going to buy tools to support or increase the effectiveness in these different things.
And that's all fine.
We're going to go out and spend millions of dollars on these services and products to provide functions in our organization.
But what I think is missing is a threat-informed defense.
And here's what I mean by that.
If you told me I would spend millions of dollars to stop phishing attacks, that's great.
But what if I actually didn't worry about phishing attacks at all because I was really worried on AI risk?
Well, I've just lost all that money that I could have dedicated for AI threats.
Now, I'm not saying phishing isn't a major attack, but this year it was the number three attack, right?
So understanding that, hey, this year the number one attack was stolen identities, number two was exploits of internet-facing vulnerabilities, and number three was phishing.
So based on that, how do I make sure when I pick my defenses that I stop identity-based attacks first and vulnerable websites second, and I've aligned my budgeting that way, because those are the most likely ways I'm going to be attacked, versus putting everything on phishing first.
So understanding how the threats are going to happen this year, so that I can optimize my defense-picking strategy, I think is something that I don't see enough of in organizations today.
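The threat-informed prioritization Ross describes can be reduced to a small ranking exercise. A toy sketch, where the attack frequencies are illustrative placeholders rather than real breach statistics:

```python
# A toy sketch of threat-informed budgeting: rank this year's most common
# attack vectors and allocate defense spend in that order, rather than
# defaulting to last year's priorities. The frequencies below are
# hypothetical placeholders, not figures from any real breach report.

attack_frequency = {  # hypothetical share of breaches observed this year
    "stolen identities": 0.31,
    "exploited internet-facing vulnerabilities": 0.24,
    "phishing": 0.18,
    "third-party compromise": 0.12,
}

def spending_order(frequencies: dict) -> list:
    """Return attack vectors in the order their defenses should be funded."""
    return sorted(frequencies, key=frequencies.get, reverse=True)

for rank, attack in enumerate(spending_order(attack_frequency), start=1):
    print(f"{rank}. {attack}")
```

The mechanics are simple; the hard part, as the discussion notes, is sourcing frequencies you actually trust, from threat intelligence, ISAC sharing, or published breach data.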
So, you know, when you look at the regulation, I guess what I'm trying to get to is, is the regulation enough?
I mean, there are the general guidelines, they tend to lag behind, and they definitely don't know what's coming.
As you said, organizations are profit driven.
We know that, you know, OpenAI, Anthropic, the big players like Google, they're gonna do everything in their power to become very deeply embedded in our work.
You can see OpenAI pushing connectors, and Claude also pushing connectors, so you can now immediately connect to your database.
You can connect to your CRM.
Every employee in the company can basically connect to their CRM, and then their data is who knows where.
There aren't even guidelines.
I mean, there are the guidelines that the providers publish, but there are no guidelines for a CISO on what to do with that, because it just came out.
So do you feel that's enough?
And if not, then what is the best practice you would advise a company to adopt?
So what I'm seeing organizations doing is taking a variety of controls and mapping those to various standards.
Now, we used to do that ourselves, and now you have things like the Secure Controls Framework that do that for you.
And then what you do is you go in and say, okay, these are the five laws that my legal team has told me we need to comply with.
What subset of controls map to those requirements?
And then they go in and then they do some type of, let's call it an audit or a risk assessment to say, show me that every IT owner is doing these controls.
That's pretty much what we're doing.
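The controls-to-laws mapping Ross describes can be sketched as a simple set intersection. The control names, law abbreviations, and mappings below are made-up illustrations, not an actual framework crosswalk:

```python
# A toy sketch of mapping controls to laws: each control is tagged with the
# laws/standards it helps satisfy, and the compliance scope is the subset of
# controls touching the laws your legal team says apply. All control names
# and mappings here are hypothetical examples, not a real crosswalk.

CONTROL_MAP = {
    "MFA for all remote access": {"GLBA", "NYDFS", "SOX"},
    "Encrypt PII at rest":       {"GLBA", "CCPA"},
    "Quarterly access reviews":  {"SOX", "NYDFS"},
    "Consumer data deletion":    {"CCPA"},
}

def controls_in_scope(applicable_laws: set) -> list:
    """Controls whose mapped laws intersect the ones legal says we must meet."""
    return sorted(c for c, laws in CONTROL_MAP.items() if laws & applicable_laws)

print(controls_in_scope({"CCPA"}))
# ['Consumer data deletion', 'Encrypt PII at rest']
```

The silo problem Ross raises next shows up directly in this structure: each new law adds another tag, the mappings overlap and go stale, and the reporting burden grows with every disparate entity you answer to.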
The problem really comes down to a couple of things.
One, every one of these laws is a silo, right?
So we pass a law for banking called Gramm-Leach-Bliley.
Then we pass another law called Sarbanes-Oxley.
Then we pass another law called NYDFS.
Then we pass another law called, you know, California Consumer Privacy Act.
And so what happens is I have all these disparate laws and I have to report to all these disparate entities.
And I have all these redundant controls across all of these laws, and they're never updated, and that's where it gets really, really tricky, right?
So the laws that were passed in 2002 are very different than the laws that are being passed in 2025, right?
We didn't have mobile phones, we didn't have cloud, we didn't have all of these things that are here today.
And so my budget is stolen trying to maintain that old compliance when I need to be focused on 2025 threats.
And so I think we have to think about that.
And regulation for 2025 threats will probably only come in 2026 or '27.
Yeah.
Yeah.
So part of it is, you know, the budget lost satisfying old compliance regulations, and how much compliance is enough to keep my auditors and regulators happy, when really I need to be focused on the threat reduction piece that may not even be in a compliance requirement, right?
Yeah.
So what's your advice if I'm a member of a GRC team in a company, knowing all that and agreeing with you, right, that we're now tangled up in complying with old laws? Because these laws don't just come with a budget allocation; they also come with a set of tasks that I have to perform, which takes my time.
I don't really have the time to invest in what's coming.
So what's your advice to team members
who are trying to cope and get their organizations ready, but are caught between a rock and a hard place when it comes to compliance being part of their job.
I like to think of compliance as a minimum viable product.
How do I satisfy that compliance objective without spending a dollar more?
Right?
So if you do that, looking through each of your Secure Controls Framework controls mapped to your various standards and providing that evidence, you've ultimately satisfied those controls and those laws.
Whereas I'd like to spend more of my money on the risk reduction piece, right?
So for example, let's just say I'm really worried about stolen identities, right?
I'm going to spend a lot of time
on multi-factor authentication, on building a very effective asset inventory, making sure I build a process when we decommission systems, we actually remove them and don't leave old expired vulnerable systems out on the internet.
And all of that may not actually be required by the standards in the compliance regs.
But if I think that's the number one attack hitting me, based on Verizon data breach data, based on threat intelligence reports, based on ISAC information-sharing groups, then that's where I'm going to spend the budget, because that's the highest likelihood of causing material harm to my company.
Brilliant.
Ross, thank you so much.
How can people contact you if they want to learn more about you?
Obviously you've got a podcast, but would you be okay with people reaching out over LinkedIn?
Yeah, absolutely.
So I have a podcast called CISO Tradecraft.
Please take a look at that.
G Mark Hardy and myself, we put out weekly episodes to really help people learn more and become cybersecurity leaders.
You can e-mail me at ross@cisotradecraft.com.
I'm also on LinkedIn, so feel free to connect there.
I do private coaching, so if anybody is looking for help to elevate their career success, I do that as well, as well as some consulting opportunities.
Brilliant.
Ross, thank you so much.
It was fascinating.
Of course.
Thank you for bringing me on and letting me share my views.
And I know there's going to be someone who loves cyber risk quantification, but hopefully I just shared a different perspective to help people think through things.
Sounds good.
Have a great day.
Want to join the chat?
We are always happy to chat with GRC thought leaders and market innovators. If you are one - let's talk!
