The CISO who speaks his mind: 30+ Years in Cyber
Episode Description
In this episode, we look at how history keeps repeating itself in security and GRC. Richard Bird joins us to unpack the patterns we keep missing and the truths we avoid. From persistent myths to practical steps for real security, Richard reveals what most teams get wrong, and how to fix it.
Transcript
Richard Bird, not just a cyber security veteran with over 30 years in the field, but one of the clearest and loudest voices when it comes to what's broken and what needs to change. From leading identity at JPMorgan Chase to serving as chief security officer at Singulr AI, Richard's become a go-to speaker, advisor, and advocate for strong voices in the security space. He's also a Led Zeppelin fan and is known for being someone who doesn't pull punches.
His most recent viral post on Disney's Data Breach, where he called out the total lack of financial incentives tied to security performance, sparked major discussion, and it's just one of the many places where he's challenging the status quo. This episode is about what we're seeing now, what we've seen before, and what still isn't being said loudly enough. Richard, welcome, and let's get into it.
Glad to be on, James. Good to hear. So, to get things out of the gate, you've been unapologetically vocal about the challenges security teams face.
So, why do you do this, and do we need more CISOs that follow your lead? Well, I think I'd probably start with the back end of that question first, which is there really are a lot of CISOs out there with strong voices in the marketplace. Now, whether or not they're heard, or whether or not there are platforms available for them to be heard, or whether they can get above the noise of the status quo and the kind of kumbaya hand-holding of never questioning how things are going, that's a totally different conversation. But if we look at why I've made noise for quite a while, it's because I'm a non-traditional technologist.
When I came into technology over 30 years ago, I didn't know any different. I remember that I was a technology programme manager, and a week before we were supposed to go live, a software developer told me the project we were working on was going to go live six months later. And I had come out of construction project management, where you didn't miss dates by six months.
And so, the perspective that I brought was not as a developer, not as an architect, not as an engineer. And that put me in a position to be in IT executive management and leadership for the entirety of my technology career. And one of the things that bothers me is that we see a repeat of patterns over and over again.
And more importantly, the thing that bothers me the most, and the critical reason for being vocal about it, is that if cyber losses represent a scoreboard, it's very difficult for those of us in the security world, and equally difficult for our peers in the business part of our corporations and organisations, to suggest that we're winning, to suggest that we're improving. Heck, I don't want to win. I'd just be happy if things got better.
And instead, when you look at the cyber security losses, it's a hockey stick curve. I've said this a number of times, and it sometimes irritates folks when it jumps out of my head. But there's an argument to be made that if you looked at the compound average growth rate of valuable stuff that the bad guys steal, you'd be better off investing your money funding bad guys than putting it into the stock market because the hockey stick curve has gone from the hundreds of millions into the trillions of dollars worth of losses.
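To put rough numbers on that hockey-stick claim, here is a quick compound annual growth rate (CAGR) calculation in Python. The start figure, end figure, and time window are illustrative placeholders, not sourced data; the point is only that going from hundreds of millions to trillions over a couple of decades implies enormous annual growth.

```python
# Back-of-the-envelope CAGR for cyber losses. All figures below are
# illustrative assumptions, not sourced statistics.
start_losses = 400e6   # hypothetical: ~$400M in annual losses at the start
end_losses = 10e12     # hypothetical: ~$10T in annual losses today
years = 25             # hypothetical window

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end_losses / start_losses) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 50% per year under these assumptions
```

Under those placeholder inputs, the implied growth rate dwarfs typical stock-market returns, which is exactly the provocation being made here.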
And I think that warrants those of us who are security practitioners and professionals standing up and saying, we've got to do something different. Things need to change. We need to sacrifice some golden idols, because we have become a corporatised function.
And now there are the things that come with a corporatised function in security: protecting political turf, protecting budgets, none of which are restrictions that the bad guys have. People are always saying the bad guys are more agile than the security practitioners in the corporate world. It's not because we're not agile.
It's because we're layered under stacks of bureaucracy and rules and processes and policies that the people on the other side of the equation do not have to deal with. And I'm not saying those things aren't important, but I am saying that when those things become the key function of security organisations, and not protecting and reducing attack surface and making things better, then the bad guys obviously are going to have an advantage. So you've laid the groundwork perfectly, and maybe we can go and discuss what needs to be changed.
We made contact with each other after your post around the Disney case. So I would like to start there, because I think it's a really interesting example of not just what's gone on recently in security, but also your take on it. And I think a lot of people had their say on it too.
So when something goes wrong, like in the Disney case, CISOs are often the only ones, and I put only in capital letters, only ones accountable. For those unfamiliar with the particular scenario, maybe you could explain what happened there? Sure. There was an event that drove a resolution that was put in front of the board at Disney for a vote.
And what that resolution asked was: should basically the entire C-staff of Disney have some form of their compensation associated with security performance? I want to make sure that I put a pin in that word, performance, because I think this is where a lot of contention and argument comes in when this topic gets raised. Now, if we reel back six years, the board at Disney pretty much unanimously voted down the proposal, which is interesting because the contributing event that caused that resolution to be put in front of the board was a really embarrassing set of problems with Disney's streaming services, where people were able to hijack the streaming services without paying for them. And it really was foundational security that led to that failure.
But if we kind of bring it to now, I just simply reposted this interesting story from six years ago where I was quoted in multiple media outlets about, look, as long as we refuse to incentivise people around security behaviours, don't expect improvement. And this created some really interesting dialogue and debate. And I'll come back to the security outcomes comment.
A long time ago, I had a great boss who told me, he said, you know, one of the things that will make you successful is if you never confuse effort with results. We have a lot of effort in security, a lot. So the security function is well-defined in almost all organisations.
The security function has checklists. It has solutions. And you can make an argument that, the way security has become institutionalised over the last 30 or 40 years, it's almost become a secondary audit function, right? Audit as insurance, right? As opposed to: are we really looking at the outcomes and the processes and the results associated with security? And that's my point, right? If you look at the results and the outcomes from security, we're not making a lot of improvements. Ask a simple question: are you reducing your attack surface every day? There'll be a lot of people on the business side who will say, wow, that's really complicated.
We can't do that. And I'm like, do you know that, 25 years on, we have more data about every single thing in security than we've ever had in the history of security and business? You're telling me that you can pay bajillions of dollars for your Splunk instances, with security information being dumped in there, and we can't derive any interesting metrics-based information from all of that data?
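The point being made is that the raw material for metrics already exists. As a minimal sketch of the kind of trend being asked for, here is some Python over hypothetical exported findings; the field names and data layout are invented for illustration, not any SIEM's or scanner's actual schema.

```python
from collections import Counter
from datetime import date

# Hypothetical records exported from a SIEM or vulnerability scanner.
# Field names are illustrative, not a real vendor schema.
findings = [
    {"observed": date(2024, 6, 1), "asset": "web-01", "internet_facing": True,  "open": True},
    {"observed": date(2024, 6, 1), "asset": "db-01",  "internet_facing": False, "open": True},
    {"observed": date(2024, 6, 2), "asset": "web-01", "internet_facing": True,  "open": False},
]

# "Are you reducing your attack surface every day?" as a daily trend:
# count open, internet-facing findings per day and watch the direction.
exposure_by_day = Counter(
    f["observed"] for f in findings if f["open"] and f["internet_facing"]
)
for day in sorted(exposure_by_day):
    print(day, exposure_by_day[day])
```

Nothing about this is sophisticated; that is the point. A downward-trending count answers the attack-surface question the business claims is too complicated.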
But more interesting is the response from some security professionals, which is like, oh, well, security just rolls to the top because it's what's good for the company. Baloney. Good for the company has never been a thing that has gotten a business case approved, right? Companies make money.
They're in business for profit. They are in business for reducing operating expenses. They are in business for a number of different reasons.
None of them are associated with security. And this brings up this Disney example that I thought was so fascinating, which is really one of the first out-front, headline attempts for a company to treat security like a business function instead of a policing function inside of their organisation. And so there's this really interesting quote from Warren Buffett's longtime partner, Charlie Munger: show me the incentives, and I'll show you the outcomes.
Right. And so it raises this question about whether or not we should begin to treat security as a business within our organisations. And the rounding-off point for this is, I can tell you right now, the entire bad actor side of the equation, they do treat security as a business.
They treat breaching security as a business. There are entire economies that are driven outside of the view of the corporate world. I hate saying the dark web, but there is trading of credentials.
There are business groups that are associated with ransomware teams. There's organised crime. There are human trafficking rings, child pornography rings, all of them treating attacking us and taking our stuff as a business.
But then when you bring up the notion that we should treat security as a business within our corporations, people start to lose it and say, well, we can't do it that way. And that's the big, you know, that's the big bridge that we need to cross. Are we going to continue to do the same things and expect different results? Or are we going to take a look at this thing and say, we may have, you know, exceeded the capabilities of security, you know, a decade ago and needed to change the model.
And now we're late, but we better get moving and figure it out. I mean, looking at it just completely rationally, if you're looking at incentivising performance, that would be the go-to within sales, within business development, within marketing. When you talk about incentivising performance in security, and it looks to be something at a company level, company-wide, working together, why was that deemed so controversial? Well, I think it's because of the genesis of security to begin with, right? As we look at the history of security in organisations, it was bits and bytes.
It's in the digital space. In the early days, it meant that the business ascribed a certain level of mysticism to it, right? You know, Hogwarts-level magic when it comes to technology. Technology is a faster tractor.
It is AI, quantum. Everything is just a tractor moving faster, ploughing more ground, producing more harvest. And if we treat it like anything other than a means of production, we end up creating this really interesting mythology around it, which was what we've done with security.
And I really liked your reference point, James, because it becomes a really interesting conversation and argument when you go, well, wait a minute, can you name me another function in any corporation, agency, or organisation that isn't incentivised based upon performance and outcomes? Salespeople can work really, really hard, right? As a salesperson, as a BDR, I can make 200 calls a week. And if I don't hit my numbers, I don't have a job, right? So why is it that we have given this kind of gilded holiness to security? And initially, it was because we treated it as something that was purely technical. And the reality is that it was never purely technical.
I think that the classic technology triad came out 30 years ago that said people, processes, and technology, right? And people continuously, both internally and externally, defeat all of our security controls. I always say Bob in accounting is the best hacker ever. If you do anything that impacts his job, he will figure out a way to get around it just simply because it's an inconvenience, right? People, process, and technology has always been where it's at.
But we've isolated the security function in technology. We've given it this gloss of "it's just the right thing to do". We haven't put any metrics, KPIs, or performance measures around it.
And then we sit around looking at each other and go, I don't know why it's not working, but it stopped working. So now we fire the CISO, right? Like, you know, talk about misplaced priorities, right? And that is probably the last point that I think is the most critical. When we look at what's been happening over the course of the last 18 months, relative to CISOs being targeted for security performance issues, I think of Joe Sullivan at Uber.
I think of Tim Brown at SolarWinds, and most recently Chris Krebs. When we look at these dynamics, how is it that there are very few, if any, other business functions where, if something goes wrong within an organisation, the CEO loses his job, right? Or the CFO loses his job. We're talking about a CFO having to, you know, have engaged in embezzlement, or violated international trade laws, in order for them to lose their job.
A breach and exploit, which is really a team effort across the entire company, from employees to contractors to the entire executive staff, happens. And now all of a sudden, you know, someone like a Joe Sullivan is being charged by the Department of Justice. What messages are we really sending about the value of security in our organisations when we isolate the consequences of total poor team performance on security to one person, right? And that's, you know, that is really troubling.
I think, I know that you guys have heard this. I hear it, you know, every week, right? Security professionals, and the CISO is only a part of that community, feel attacked. We feel threatened. We feel very concerned for our livelihoods and our jobs, because of this mythology that security belongs to the security people. And if 30% of the organisation is violating security controls, it doesn't matter.
That's the security people's problem. The landscape looks very, very difficult for us from a career and professional standpoint. Completely understandable.
And like you said, not the first time that we've heard this, but probably the first time we've heard it so eloquently put. So thank you for really explaining it, especially for people who are not familiar with what CISOs deal with day in, day out. Really, really strongly put.
Thank you. Moving on to something that you referenced there, and I think we can talk about it through this stat from Gartner as well. You quoted a Gartner finding that 75% of employees will work around security policies to use AI.
So we're bringing it into today's world, and perhaps today's hot takes on AI. What does that tell us about today's work environment and culture, the way you see it, especially if you've got the majority of the company working against security rather than for it? Well, I think first and foremost, the survey that came out of Gartner a few weeks ago that solidified those numbers may be the first intellectually honest survey, from a respondent standpoint, that I've seen in a long time. Probably directionally intellectually honest, right? Because as I've phrased it at a number of conferences I've gone to: 75% of people surveyed said they will actively work around security controls if those controls cause inconvenience in getting their job done.
Everybody has looked at me and said, that seems like a gross understatement. I think it's probably more like 90 to 95%. And certainly, those are security practitioners with experience who are saying, it's nice to see that three quarters of them finally admitted it.
But the reality is that almost everybody does that. And that is the huge contributor to the problems within our space as practitioners in delivering results. The thing that I really liked about the survey, especially coupled with AI, is that it confirms what we've known for years and years and years in the human factors and human behaviour space.
It's a great indicator of what we can expect relative to bad outcomes and consequences when we know that the super majority of people within every organisation have no qualms about accessing AI services, functionality and features, even outside of the policies and controls that they have had layered on top of those activities. And it's interesting because I don't think that there's been a technology, and this rarely gets talked about, I don't think that there's been a technology in the evolutionary cycle of building the faster tractor that has ever been more appealing and more tempting to the consumer, right, the end user. And it's understandable, right? It's cool.
You, I, we use AI every single day. I was a resistor in the first six months after, you know, the big ChatGPT and OpenAI announcements in 2022 and 2023. And then I started, you know, fiddling around with things.
And I have found a number of different things that make my life easier, but I'm a security professional. I also know what my acceptable and unacceptable compromises are even outside of policy. Human beings are not built that way.
Right. And I've said this so many times, written so many articles about it, that it's another contentious statement that gets people a little itchy and scratchy. But I always remind people that when we are talking about people inside of our organisations using technology, we have to remember that human beings are actually really bad at security and risk decisions at the personal level, right? I always use door locks as an example, right? Everybody knows that if you lock your doors, your house is safer.
You can go out and look at the surveys yourself. The percentages are all over the place, but the reality is that the vast majority of successful burglaries that turn into insurance claims happened because somebody didn't lock their door, right? Human beings don't need to be told that locking their door makes things safer. Human beings leave their doors unlocked because it's convenient, right? This is what we do.
And there's a reason that they're called the Darwin Awards, right? People, you know, and when it gets really personal and contentious in conversations with people, I'm like, do you have anybody in your family you don't really trust with their own personal decisions? And you see people's heads start to nod. And I'm like, yeah, that cousin, they work in that development shop, right? You have to acknowledge these weaknesses and these patterns and these behavioural realities with human beings. Something that we've done a horrible job of designing into security solutions, by the way, recognising and acknowledging that fact.
And the reason that we know that's true is because we all sit around in security organisations and say, the only thing that we need to do is make everybody smarter about security, right? And I'm like, no, that actually isn't going to happen. You can look at the outcomes from cybersecurity awareness training in most organisations, and you'll find the same 5 or 10 or 15% of people fail it every single year, over and over and over again, and their behaviours don't change. And every single one of those people represents a risk or an exploitable surface within your organisation. It just takes one.
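That repeat-failure cohort is itself measurable. A hedged sketch in Python, assuming you can export per-user, per-year phishing-simulation results; the data structure below is invented for illustration, not a real training platform's format.

```python
# Find employees who failed the phishing simulation in every year on
# record: the persistent cohort described above. The layout here is a
# hypothetical export, not any real platform's schema.
results = {
    2022: {"alice": True, "bob": False, "carol": False},  # True = failed
    2023: {"alice": True, "bob": True,  "carol": False},
    2024: {"alice": True, "bob": False, "carol": False},
}

repeat_failures = set.intersection(
    *({user for user, failed in year.items() if failed} for year in results.values())
)
print(repeat_failures)  # {'alice'}: a small, persistent, addressable risk cohort
```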
So when we look at AI, and we look at the convenience and the attractiveness and the sexiness and the appeal of it, it makes perfect sense that people are flocking to it. The problem that that creates for us in the security world is that they are coming in volumes that we've never seen before. It was bad enough in the early days of SaaS and cloud applications, how many people were rushing to go use those.
We have people that are rushing in numbers, 10, 50, 100x more to go use those tools within your organisations, and they have verified that they are willing to bypass your controls to do it. Yeah, it's that convenience over the wider, greater good. You know, we see those patterns again and again.
So leading straight into GRC and AI, especially talking about patterns, are we seeing old security patterns resurface with AI, or do you feel it's something completely new and different? Where do you run with this? Both. Both. And this is, I think, a very rare instance in my entire career where I've said both, something new and something old.
The vast majority of the time in my career, you look at a pattern and you go, ah, we've seen this before. I mean, by vast majority, I mean 99.x percent of the time. And we look at these things and we go, okay, like, how did we solve for this in the past? The only difference now is volume and speed.
Now we've got to adapt our controls and our governance around the fact that there's more of it. It's faster. It's more difficult to spot because it's, you know, hopping through 37 different endpoints, but the mechanics are still the same.
And when I look at AI, as I've been advising a number of organisations for the last two years around AI, I've always started the conversation the same way. They say, how can we accelerate AI adoption? And I'm like, before you do that, answer this pair of questions for me. How good are you at data security? And how good are you at identity security? And people start hemming and hawing and, you know, I'm not sure where the balloons came from.
They start hemming and hawing and they go, well, you know, we have our problems. And my response is, then don't do AI. Because if you look at the very core of functionality for AI, it is an entity.
It is an identity, right? Every AI that's developed has a UID. And it is being given access, always through an authentication layer, just like human beings, just like contractors, just like a machine account, right? And then you look at the data security component of it, where a number of AI security plays have risen up. And you go, well, how successful have we been with data security as the core of our security architecture for 30 years? Now we come back to the escalating hockey stick curve of cyber losses, right? There's problems there.
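The "every AI has a UID" observation maps directly onto ordinary identity practice. Here is a minimal sketch, in Python, of treating an agent as a first-class principal with its own identifier and an explicit, reviewable set of entitlements; all names and scopes are illustrative assumptions, not a real framework.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent treated like any other principal: a unique ID plus scoped access."""
    name: str
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))
    scopes: set = field(default_factory=set)

    def authorize(self, scope: str) -> bool:
        # Deny by default; the agent touches only what it was granted.
        return scope in self.scopes

support_bot = AgentIdentity("support-bot", scopes={"read:tickets"})
print(support_bot.uid)                        # the agent's UID, like any account
print(support_bot.authorize("read:tickets"))  # True: explicitly granted
print(support_bot.authorize("read:payroll"))  # False: out of scope
```

None of this is new machinery; it is the same identity discipline applied to a new kind of account.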
And those are patterns that we know. But there is something that is different. And that is that if we look at the way we've built security architectures for several decades, and we look at the reality of poor performance and controls in the identity space, poor performance and controls in the data space, poor configuration management and key management in the encryption space, all of the different core domain capabilities that are really critical to AI usage as well as AI exploit, we look at those things and we go, okay, all of them need to be exponentially better in order for us to be successful with AI.
It misses something really important, which is that all of those pieces have been built into an architecture that protects static assets. And this is where AI security, AI governance, and AI compliance are entirely different. AI is not static, by definition, right? The idea that an AI agent is static is immediately defeated by the retraining of the model, right? That is core to that agent, or core to that LLM.
And when you retrain it, it could make a choice. I don't know if I'd necessarily call it a decision, but it could make a choice to access different data inside of your organisation than what you authorised and approved it for. This creates a really interesting set of problems for governance, because governance, third-party risk management, and vendor risk management have been focused on a heavy dose of one-and-done.
Let me evaluate it. Let me see if it's safe. Let me see if it's meeting the policy requirements of my organisation and approved to use, right? Except approved to use, and then three months later, it retrains.
It's choosing to access different information at the fine grain layer. And now functionally, it is a different AI agent or a different AI model. Who looks at it then, right? Which brings into focus probably one of the weakest areas in governance in the last several decades, right? Which is true lifecycle management, cradle to grave, end to end, any changes precipitating need for re-evaluation.
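One way to make "any change precipitates re-evaluation" operational is to fingerprint the approved model and flag the agent the moment the deployed version differs. A hedged sketch in Python, assuming the vendor exposes some version string or artifact metadata you can hash; the version strings are placeholders.

```python
import hashlib

def fingerprint(model_version: str) -> str:
    """Hash whatever version metadata the vendor exposes (a placeholder here)."""
    return hashlib.sha256(model_version.encode()).hexdigest()

# Captured once, at the time the governance review approved the agent.
approved = fingerprint("vendor-model-2024-06-01")

def needs_rereview(deployed_version: str) -> bool:
    # A retrained or swapped model is functionally a different agent.
    return fingerprint(deployed_version) != approved

print(needs_rereview("vendor-model-2024-06-01"))  # False: still the reviewed model
print(needs_rereview("vendor-model-2024-09-15"))  # True: changed, so re-evaluate
```

This does not solve lifecycle management on its own, but it turns "who looks at it then?" into an automated trigger rather than a hope.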
This is going to be extremely difficult for companies because they've underfunded the compliance function. They've underfunded the governance function. They've certainly underfunded the security function, no matter how much money that they've put into it.
And when you look at the reality of no processes, no tools, no solutions in place for full, cradle-to-grave lifecycle management, and, most importantly, the other key difference, the vast majority of these aren't your assets. You don't own them, right? They are coming to you from outside. We can see where these problems are going to really start to manifest into catastrophic outcomes.
And the last point on this is we have built security architectures to keep things from getting in. The problem that we have now is our security architectures are not sufficient to keep things from getting out. And AI is all about what's getting out.
Wow. Yes. What's really interesting for me is how what we've set up in the last five years, or even 10 years, is instantly challenged, because AI is not static.
And I think that that is interesting. And it turns the tables very, very quickly too, if you're not set up for it. And if you had to pick the biggest gap, I mean, you've labelled a few there, and maybe you can go into a little bit more detail on the biggest gap when it comes to AI governance today.
If you were talking from one CISO to another, and they were asking you specifically about that biggest gap, what would it be for you right now when it comes to AI governance? Wow. This is something that is just in need of simplification, right? And that biggest gap is our structures around governance. And I've been head of GRC at a number of different stops, right? Our function in governance has been heavily focused on: how does it work, right? How does it work? How's it built? What's the policy base, the standards that it's associated with? Does it meet these requirements relative to data privacy standards in specific geographies? You can think of the entire checklist of things that we look at in governance.
And the gap is that we're focused on how does it work. Governance needs to move to: what is it doing, right? What is it doing is really critical. And what we've done is we've isolated the "what is it doing" predominantly to runtime protection, right? And there are a lot of technologies out there for that. The problem is that AI is operating at a very fine-grained level.
It is using authorisation. There are some really interesting studies and information that have come out recently about how poorly controlled the authorisation layer is, the layer that grants access to all of these specific data elements and specific services, and none of those solutions can see it after you've authenticated. You're down into the, you know, the labyrinth of things that an AI agent or service can get into.
And our control planes in those spaces are, you know, either very bad or non-existent, which is that next layer of gap.
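Moving governance from "how does it work" to "what is it doing" implies recording every fine-grained authorisation decision, not just the initial authentication. A minimal sketch of that idea in Python; the grants, resource names, and log shape are assumptions for illustration.

```python
import json
from datetime import datetime, timezone

# Illustrative grant table: which agent may perform which action.
grants = {"support-bot": {"tickets/read"}}
audit_log = []

def authorize(agent: str, action: str) -> bool:
    """Check a fine-grained action and record the decision either way."""
    allowed = action in grants.get(agent, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "allowed": allowed,
    })
    return allowed

authorize("support-bot", "tickets/read")   # permitted, and recorded
authorize("support-bot", "payroll/read")   # denied, and recorded
print(json.dumps(audit_log, indent=2))
```

The log, not the grant table, is what answers "what is it doing" after authentication, which is exactly the layer described above as very bad or non-existent today.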