In episode 07 of the State of Enterprise IT Security podcast, host Brad Bussie covers recent cybersecurity developments that hold profound implications for technology leaders across the globe.
The episode addresses three critical incidents: the discovery of a notable vulnerability in the newly released Apple Vision Pro by an MIT Ph.D. student, the costly cyberattack on Clorox which led to $50 million in expenses, and Cloudflare's successful mitigation of a significant security threat during Thanksgiving 2023. Through detailed analysis and expert commentary, Brad dissects these occurrences to extract valuable lessons and strategies, aiming to arm listeners with the knowledge needed to navigate the complex cybersecurity landscape effectively.
[00:00:00] So right now, this really affects the Apple Vision Pro. And I look at this actually as a good thing, as security experts and hackers are always engaged in a race to crack something new.
[00:00:34] Hey everybody, I'm Brad Bussie, Chief Information Security Officer here at e360. Thank you for joining me for the State of Enterprise IT Security Edition. This is the show that makes IT security approachable and actionable for technology leaders. I'm happy to bring you three topics this week. First, we're going to talk about an MIT PhD student who identified a vulnerability in the Apple Vision Pro just days after release.
[00:01:06] The second topic, we're going to talk about Clorox and the cyber attack that they suffered late in 2023, which cost right around $50 million in expenses for the organization. And then we'll talk about Cloudflare and how they're addressing their Thanksgiving 2023 security incident. So with that, let's get started.
[00:01:32] So, first topic: the MIT PhD student. I don't know if you would consider this a hack of the Apple Vision Pro, but it was just days after release. And what it did is reveal some potential jailbreak and malware threats. So the PhD student, he specializes in microarchitecture security.
[00:01:59] So if there's somebody that, if I were Apple, I would want looking at my stuff, it would probably be this guy. He identified a kernel vulnerability in visionOS, found that it was exploitable, and was then able to facilitate jailbreaking the device. And that just allows for the creation of malicious software to be consumed by the hardware.
[00:02:26] So right now, this really affects the Apple Vision Pro. And I look at this actually as a good thing, as security experts and hackers are always engaged in a race to crack something new, whether it's an operating system or an application. Granted, I like this stuff to be identified before the product or app hits the market, but this is honestly why Apple and other large companies have a bug bounty program.
[00:03:02] And if this turns out to be the way that it looks, I mean, this MIT student could be getting paid, or, what I think may happen, he might have a job offer once he's out of school, because finding this kind of stuff before a cracker or a hacker does, that's the game. Honestly, I'm not super worried about this one because Apple has a pretty solid track record of patching vulnerabilities and exploits fairly quickly.
[00:03:34] That could be debatable, but I think this one isn't going to be as big of a deal, because they caught it early. Second topic of today, we're talking about Clorox, and they suffered a cyber attack that caused $49 million in, they're calling it, expenses. So for those of you that don't remember, or if you just need a refresher, Clorox suffered a cyber incident back in August or September of 2023.
[00:04:07] And it's rumored that this attack was conducted by Scattered Spider. They specialize in social engineering attacks to breach a company's network. And I'm sure you've heard of them. They were linked to MGM, Caesars, DoorDash, and Reddit, all of which suffered security incidents or breaches, depending on how you look at it.
[00:04:33] So the attack impacted operations and it directly impacted Clorox's ability to produce some of their consumer products. And I'm sure you're asking, wait, wait, wait, I bet they had cyber insurance. And I bet that helped offset some of the costs. But that said, you know, you still end up having to pay for some things that the cyber insurance doesn't cover.
[00:05:04] So, parts of the incident response: third-party consulting services, IT recovery and forensics experts. And then there are incremental operating costs, and those are things that really stemmed from the disruption to the overall business. What's interesting is they're still working to recover from this attack.
[00:05:30] And I always like to give you something to think about. That's really the whole purpose of a show like this: hindsight is 20/20. So, being able to go back and look at something, and this isn't intended to throw shade on anybody. Really, this is just intended to help us prevent the next incident or breach.
[00:05:50] So what would I suggest for those that are out there in the manufacturing business, or honestly, any business? What would I suggest you do? I would say, first and foremost, conduct a risk assessment. Make sure that you evaluate the likelihood and potential impact of a risk to the organization. That's a good way of saying: look at the technical and administrative controls and make sure you're using your resources effectively.
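To make that a little more concrete, here is a rough sketch, in Python, of what a simple likelihood-times-impact scoring exercise might look like. The risks, scores, and thresholds below are invented for illustration; they aren't drawn from the episode or from any particular framework.

# A minimal likelihood x impact risk-scoring sketch. All risks, scores, and
# thresholds are illustrative placeholders, not values from a real assessment.

risks = [
    # (description, likelihood 1-5, impact 1-5)
    ("Phishing leads to credential theft", 5, 4),
    ("Ransomware halts a production line", 3, 5),
    ("Stale service account credentials abused", 4, 4),
    ("Unpatched kernel vulnerability exploited", 2, 4),
]

def score(likelihood: int, impact: int) -> int:
    """Simple risk score: likelihood multiplied by impact."""
    return likelihood * impact

# Rank the risks so controls and budget go to the biggest exposures first.
for name, likelihood, impact in sorted(risks, key=lambda r: score(r[1], r[2]), reverse=True):
    s = score(likelihood, impact)
    priority = "HIGH" if s >= 15 else "MEDIUM" if s >= 8 else "LOW"
    print(f"{priority:<6} {s:>2}  {name}")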
[00:06:24] Second, have an incident response plan. So if something does go down, make sure you understand the who, and when, and what, and why, and all of those important questions. But really, it's about establishing a security incident response team, documenting an incident response plan, and then making sure you do regular incident response tabletop exercises.
[00:06:58] So this is something I urge all of our listeners and clients to do. Because really, what are you doing? You're planning for real-world cyber incidents. I tell my sons this all the time, because in school, they have to do lockdown drills. They have to do tornado drills. They have to do all different kinds of drills.
[00:07:22] Somebody pulls the fire alarm, whatever it is. Why do we do that? Because we're preparing. We hope it doesn't happen, but we are preparing for a real-life incident or scenario. I would say another thing to consider is segmentation, network segmentation. You've heard me talk about zero trust a lot, and you're going to hear me talk about zero trust again because of what's going on at Cloudflare.
[00:07:49] And this is actually a good example of segmentation succeeding. So really, why? Well, we're trying to minimize the risk of a cyber attack spilling over into another area of the business. I always say we're trying to minimize that blast radius and the impact from an incident. And I think our users are still one of the most vulnerable areas of a business for cyber attack.
[00:08:27] Most of the attacks are still coming through email and the web and the user with the password, the account with the password. Those things are still the number one way organizations are getting impacted. So the third thing I want to talk about today is Cloudflare. They are addressing the Thanksgiving 2023 security incident.
[00:08:57] And this one is great, and I'm going to spoil the ending a little bit, because this is a great example of proper security controls that actually prevented something from becoming much larger than it could have been. But, and there's always a but, there is still a juicy bit we can learn from this, and there is a villain in this story, and I'll reveal a little more of that in a minute.
[00:09:28] But let me walk you through what happened to Cloudflare. So Cloudflare had a threat actor appear on one of their self-hosted servers. And this was one that had, think of it as like a wiki. If you know what JIRA is, it's for opening tickets, things like that. That's where the attacker initially got a foothold.
[00:10:00] And what they did is they tripped what you could consider an alert. So the security team at Cloudflare was able to shut them down pretty quickly. But I would say they did the right thing. Instead of saying, well, you know, we got them, all good, they actually brought in a CrowdStrike forensics team, who went through and spent a couple of days going through everything before giving them a report and, we'll call it, an all clear.
[00:10:35] So Cloudflare, they actually contained this incident, and they did it in a few key ways. There was no impact to customer data, and there weren't really any systems that I would consider impacted. And why is that? Well, as we've come to find, they have robust access controls. They have well-documented and enforced firewall rules.
[00:11:04] They have hard security keys and tokens and, as promised, zero trust tooling and an approach, and they follow the framework. So this, honestly, is the promise of zero trust architecture. It's like bulkheads in a ship, where if you compromise one system or cabin, it's limited from compromising the entire organization.
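To picture what those bulkheads look like in practice, here is a small, hypothetical sketch of a default-deny segmentation check in Python. The segment names and allowed flows are invented for this example; they are not Cloudflare's actual rules or tooling.

# A tiny, hypothetical illustration of default-deny segmentation ("bulkheads").
# The segments and allowed flows are invented for this example; they are not
# Cloudflare's actual network rules.

# Explicit allow-list of (source segment, destination segment, port).
ALLOWED_FLOWS = {
    ("corp-wiki", "ticketing", 443),
    ("build-servers", "artifact-store", 443),
    ("admin-jumpbox", "prod-console", 22),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: traffic passes only if the flow is explicitly allowed."""
    return (src, dst, port) in ALLOWED_FLOWS

# An attacker with a foothold on the wiki server tries to reach production.
print(is_allowed("corp-wiki", "ticketing", 443))    # True: legitimate flow
print(is_allowed("corp-wiki", "prod-console", 22))  # False: lateral movement blocked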
[00:11:36] So if I'm going to look at this attack, let's kind of dissect it a little bit. Ten days prior to the attacker gaining that foothold, they started to do recon. They accessed Confluence, which is the wiki site, as well as JIRA. And what they were able to get into was a bug database.
[00:12:00] They then were able to establish persistent access to the on-prem server. They grabbed some source code, and then they tried to gain console access. And this is when they were actually discovered. The attack at this point was unsuccessful. And one of the things to note is this was all done in a data center that was not yet ready for production.
[00:12:30] So it had not been rolled out. It was somewhere in Brazil. And, what's interesting is the attacker was just trying to get a foothold, and then they were going to move laterally. But because of the controls that were in place, there was nowhere else that they could go. Now, I'm sure you're asking, like, how did all of this happen in the first place?
[00:12:52] So let me break it down. And the origin story of this, as the origin story for several of the breaches lately, came from Okta. It came from the Okta compromise. So an access token and then three other service account credentials were taken from this Okta breach. And it's good that Cloudflare admitted this.
[00:13:23] They said, hey, these were impacted, and we failed to rotate the credentials after the Okta compromise in October of 2023. So what does that mean? There was an access token and three service account credentials that were known by this hacker group, and that is what they used to gain access to Cloudflare.
[00:13:50] So, if I'm looking backwards, what should have happened after the Okta breach is all service accounts and all user accounts, anything having to do with either an access token or a service account, all of those credentials should have been rotated. All passwords should have been changed. We should have reissued everything.
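As a rough illustration of what "rotate and reissue everything" can look like when it's scripted, here is a minimal Python sketch. The credential inventory and the update_credential helper are placeholders assumed for the example, not anyone's real systems or APIs.

# A minimal, hypothetical sketch of rotating service account credentials and
# access tokens after a third-party compromise. The inventory below and the
# update_credential() helper are placeholders, not real tooling or APIs.

import secrets

# In a real environment this inventory would come from a CMDB or secrets manager.
credentials_to_rotate = [
    {"name": "svc-jira-integration", "type": "service_account"},
    {"name": "svc-build-pipeline", "type": "service_account"},
    {"name": "wiki-admin-token", "type": "access_token"},
]

def update_credential(name: str, new_secret: str) -> None:
    """Placeholder: push the new secret wherever the account or token lives
    (identity provider, secrets manager, the dependent service itself)."""
    print(f"rotated {name}")

for cred in credentials_to_rotate:
    # Generate a new random secret and retire the old one everywhere it is used.
    new_secret = secrets.token_urlsafe(32)
    update_credential(cred["name"], new_secret)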
[00:14:17] Not all organizations did that. And this is just another example of, you know, not every organization has a really good grasp of their password problem. And service accounts are one of those things where, if you've ever been an administrator, the thought of cycling a service account password is kind of terrifying, because sometimes you're not even sure where that service account is actually being used.
[00:14:43] Maybe it's on a service itself, where you go in, you reset that password, you change the password in the service, and you try to restart it. Maybe it doesn't come back up, and then you find there was actually a hard-coded password somewhere else in an application. I mean, there's just a lot of crazy stuff that happens and continues to happen, but this is just another example of a password, once again, leading to these compromises. Now, what I want us all to take away from this is that Cloudflare is doing something that I think any organization that's been through an event like this is doing: they're calling it code red, which means remediation and hardening of their entire environment.
[00:15:31] This, everyone, is the right response. It's a solid after-action review, a solid plan, and they're updating their clients on what they're doing. So, a side note for Okta: this is something that they as an organization could learn from, really making sure that their clients, partners, and overall user base understand what's being done after
[00:16:04] the breach and after the security incident, and what is going on to prevent something like this from happening again. So this is a thank you from me to Cloudflare for the way that they're handling this. Not only are they going through the after action and leveraging code red, but they are publishing the indicators of compromise to help others identify if they've had
[00:16:35] a potential impact in a similar manner by the attacker. So they're showing: what did the attacker initially do when they were establishing their foothold? What did they access? What did they try? And what we've found in previous attacks is the attackers generally follow the same script. So, if we can pull those indicators of compromise into our early detection systems, we can prevent the next breach before it even gets started.
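To give a feel for what pulling those indicators into an early detection system might look like at its simplest, here is a hedged Python sketch that checks log entries against an IOC list. The indicators and log lines below are made up for illustration; in practice the IOC list would come from the published report and the logs from your SIEM or log pipeline.

# A simple, hypothetical sketch of checking logs against published indicators
# of compromise (IOCs). The indicators and log lines below are invented; real
# values would come from the vendor's published report and your own logs.

iocs = {
    "203.0.113.77",          # example attacker IP (documentation range)
    "evil-update.example",   # example malicious domain
}

log_lines = [
    "2023-11-23T14:02:11Z allow src=10.0.4.12 dst=198.51.100.9 host=api.example",
    "2023-11-23T14:05:42Z allow src=10.0.4.12 dst=203.0.113.77 host=evil-update.example",
]

for line in log_lines:
    hits = [ioc for ioc in iocs if ioc in line]
    if hits:
        print(f"possible IOC match {hits}: {line}")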
[00:17:04] So, something to think about. And thank you for tuning in. We will see you next time.