The State of Enterprise IT Security Podcast: Ep 28: AI is Moving Fast in Business—Security Teams Need to Move Faster

Rapid AI adoption often happens under the radar, without the organization’s explicit knowledge—creating what we call Shadow AI. Is your organization prepared to handle it?

Overview:

 

In this episode of the State of Enterprise IT Security podcast, we dive into the explosive growth of AI in business and the critical challenges it presents for security teams. As departments across organizations rapidly adopt AI tools, often without oversight, the risks of Shadow AI and data breaches are skyrocketing. Join Brad Bussie and Erin Carpenter as they explore the double-edged sword of AI: how it is transforming business operations, and why security leaders urgently need to stay ahead. With practical insights and real-world examples, this episode is a must-listen for anyone navigating the evolving intersection of AI innovation and data protection.

 

Key Topics:

  • The rapid adoption of AI in businesses and its implications for security: As AI becomes integral to business operations, the potential for security vulnerabilities grows exponentially.

  • What Shadow AI is and why it’s a growing concern: Shadow AI refers to unauthorized or unmonitored AI use within organizations, posing significant risks to data security.

  • How security teams can keep up with AI adoption: Security teams must adopt agile strategies and advanced tools to effectively manage and mitigate AI-related risks.

  • Practical tips for implementing AI governance and controls: Establishing clear AI governance frameworks and strict controls is essential for maintaining data integrity and security.

  • The changing dynamic between business users and security leaders: Collaboration between business users and security leaders is becoming increasingly vital as AI reshapes traditional roles and responsibilities.

 


Read the Transcript:

Ep. 28 AI is Moving Fast in Business—Security Teams Need to Move Faster

 

Brad Bussie: Hey, everybody. I'm Brad Bussie, Chief Information Security Officer here at e360. Thank you for joining me for the State of Enterprise IT Security Edition. This is the show that makes IT security approachable and actionable for technology leaders. Joining me again for our second episode is Erin Carpenter, our Senior Digital Marketing Director at e360.

 

Erin, welcome. Thank you for being here.

 

Erin Carpenter: Thank you, Brad. It is a pleasure to be back. I had so much fun last time, and I'm thrilled to be back.

 

Brad Bussie: Yeah, a lot of people liked the back and forth that we had because I think we've got some good perspectives across markets. And I think that's what’s interesting for a lot of our listeners and viewers.

 

So, I thought we would talk about a couple of different things today. This is pretty exciting because Erin has some live information that she's going to share. Now, it may not be as exciting if you are just listening, so I would urge you to go and take a look at this on YouTube or one of the other channels where you can actually watch, because we're going to pull up some live demos.

 

And we’re going to be talking about AI.

 

Brad Bussie: I was recently at Black Hat, and everybody was asking me, "Brad, where's the Black Hat update? What are all the things that you learned?" I thought we would take a little bit of a different approach because what we're going to talk about today ties back into some of the things that I learned at Black Hat.

 

One of the most important things I noticed is that no one has really got it figured out yet from a security standpoint. And a lot of the things that we've shared on the podcast over the past year are pretty much where the experts are as well. So today, I figured we would talk about a very real topic, which is Shadow AI.

 

How do we handle this within a company? Everyone is starting to leverage AI in one way or another. Some are pretty new to it, while others are far more advanced. As a cyber professional, that can be a little scary on both ends of the spectrum. We're going to go through an exercise, which I promised a minute ago, where Erin walks through something I found exciting when we were planning the show. We'll take some of the GenAIs that are out there and pit them against each other with some dummy data we created to show some of the power of these tools.

 

Then, I'm going to talk about how, from a security perspective, I would want to either govern that or control what is even allowed to be put up there in the first place.

 

And Erin was going to talk a little bit about how AI is being used in everyday applications now from a marketing perspective. She wants to make sure that the marketing data and the things being put into any of the AIs we use are following the right security protocols and standards. You know, before we do anything, it's typically, "Hey, Brad, can we do this?"—which I love. But there also needs to be a piece of it where if somebody tries to upload something, it just says, "No, that's against company policy."

 

It needs to be a blend. So we're going to talk a little bit about that. Erin, I thought maybe we could start by talking a little bit about Shadow AI in organizations because I think this is a topic that you've heard about pretty often as well. I feel like everybody's just trying to figure out what they can do with this great new, you know, two-letter word: AI.

 

And I thought maybe we could highlight a couple of things there. What do you think of that?

 

Erin Carpenter: Brad, I think that's a fantastic idea. Thank you.

 

Brad Bussie: Yeah. So go ahead.

 

Erin Carpenter: One of the things you said at the beginning pertained to Black Hat when you were there, and you mentioned how a lot of these security leaders are still trying to figure it out. That's fascinating because that mirrors a lot of what I'm hearing in the marketing community.

 

Even though I've been following this and using AI actively since November 2022, everyone agrees that we are still in its infancy. We are still in the early innings. That also precipitated the need for us, for you and me, to talk about this. One thing that I recognized is that the dynamics between security leaders, marketing leaders, and other functional department leaders are changing and evolving. They need to be tighter than ever before.

 

I can only speak from a marketing perspective at this moment, but the fact that you and I are connecting, and I'm checking with you, really shows me that we have to have a tight alliance. There also has to be a level of education for the leaders so that you don't become a bottleneck as well.

 

So we're empowered both in advance, but then also, after something has been implemented, to make sure that we're following the proper governance protocols.

 

Brad Bussie: Yeah, and I have been called, unfortunately, a bottleneck before—especially when it comes to introducing, not just like a GenAI into the business, but an artificial intelligence or augmented intelligence, if you will, that is going to do something pretty specific with data.

 

This was a place that I was actually a bit of a bottleneck for e360 because one of the things that I've noticed about organizations in general is that they’re typically not ready for an AI implementation that has to do with their data. The data isn’t properly classified or tagged, and when those two things aren't present, and I unleash this super-fast indexing mechanism that’s going to do correlation, causation, and all of these great things with the dataset, what's to stop any user—or really anyone at all—from asking that AI a question through a prompt? That's what we're all being trained to do now with all of these different AIs. Just talk to it like a person, ask it the things, you know, set the stage.

 

If I don't have any guardrails, how is it supposed to know that maybe it shouldn't give somebody information that is HR sensitive or PII, or it's just something about a user that's protected by HIPAA or protected by any number of privacy concerns? So these are the kinds of things that I'm seeing when we start talking about the implementations.

 

One of the things that I’m going back to is a term that we used to call "Shadow IT," where you would have people in an organization, generally in a business unit, who would either ask IT for something and not get it, so they would just go and do it themselves. Or they wouldn't even think that IT needed to be involved. It's just an application after all, and they can put it on their credit card. Next thing you know, you've got 500 applications that your user base is using that, as a security professional or in IT, you know nothing about.

 

We're experiencing something very similar when it comes to AI and GenAI. We can name a couple, whether it is Gemini, ChatGPT, or Claude—we're actually going to look at Claude today, which is pretty interesting, some of the things that Anthropic is doing. What ends up happening is, before you know it, your user base is using a wide range of tools.

 

I see this a lot too on personal devices. That's a different show. We're not going to go too crazy into that. We’ll just talk about some of the things that I would expect to see in a corporate environment with something like that, such as some controls around which tools are allowed to be used.

 

Back in the day when we were doing this for IT, we implemented a technology called CASB, a cloud access security broker. In a lot of cases, we were limiting which applications could be visited in the first place. You would have to go through IT to get an exception or to set up single sign-on, all those kinds of things.

 

I am looking for that now for all the different GenAIs, and it is possible through technology. It might be an extension that I add to a browser that watches the traffic and where I'm going, and then limits what kind of information I can upload. Because that's what everybody's concerned with.

 

I talked about data security a second ago—about information coming back. But could you imagine all of a sudden you're training one of these GenAIs with proprietary information and it’s training their model? And the next thing you know, one of your big competitors says, "Hey, what is e360's secret sauce?"

 

The GenAI is going to be happy to oblige and give all the information that it now has because it was just given freely and then used to train. Now, there are ways, depending on your licensing, to prevent those kinds of things from happening, but it has to be intentional. And I'm finding that users typically just go with what's free and easy, and those have the least amount of protections.

 

So, as a CISO, I often have to protect people from themselves, and they don't even realize it's a problem. But it is. So what I've implemented, and a lot of our clients do something similar, is either a browser extension that does the things I was just talking about or a secure access service edge (SASE) solution. If you've got a client on your endpoint, almost all the SASE providers are doing this now, where I can say, "I only want my organization to use Gemini," or "I only want my organization to be able to use ChatGPT."

 

So, when I try to go to Claude or somewhere else, I get a challenge back from the proxy that says, "You're not authorized. If you think that this was a mistake, click submit here and give us a reason." It goes to IT and security, and then we can make a judgment call.

 

So, those two gates, I think, are important for organizations. And that is helping to curb some of these things that are potentially happening with GenAIs in organizations.
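To make that first gate concrete, here is a minimal sketch of the kind of GenAI allow-list a browser extension or SASE proxy could enforce. The domain lists, messages, and exception workflow are hypothetical placeholders, not any specific vendor's product.

```python
# Minimal sketch of a GenAI allow-list gate (hypothetical domains and policy).
from urllib.parse import urlparse

# Hypothetical policy: only the GenAI tools IT has sanctioned.
ALLOWED_GENAI_DOMAINS = {"gemini.google.com", "chatgpt.com"}
KNOWN_GENAI_DOMAINS = ALLOWED_GENAI_DOMAINS | {"claude.ai", "perplexity.ai"}

def check_request(url: str) -> str:
    """Return a policy decision for an outbound web request."""
    host = urlparse(url).hostname or ""
    if host not in KNOWN_GENAI_DOMAINS:
        return "allow"  # not a GenAI tool; normal traffic
    if host in ALLOWED_GENAI_DOMAINS:
        return "allow"  # sanctioned tool
    # Unsanctioned GenAI: block and route the user to an exception process.
    return ("block: You're not authorized to use this AI tool. "
            "If you think this was a mistake, submit a request to IT/security.")

print(check_request("https://chatgpt.com/"))    # allow
print(check_request("https://claude.ai/chat"))  # block, with exception message
```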

 

Erin Carpenter: Yeah, good context. That also puts some onus on you to be up to speed on some of the latest models too because, let's say you block access to Claude, I would be bummed.

 

Brad Bussie: Having seen some of the stuff it can do in the demo we ran earlier, I was like, "Wow, that's pretty legit." The audience is going to see some of that today. Yeah, I agree.

 

Erin Carpenter: Yeah, no, that’s really helpful. Cool. Well, I think whether someone's an elementary user of AI or an advanced one, the long and the short of it is almost everyone is using it, and protections need to be in place.

 

Brad Bussie: Yeah. Well, and I see marketing using it more than a lot of other individuals. I see those who are having to crunch numbers or large pieces of data or compare and contrast. And I think some of the commercials that we're seeing on TV— and I know I sound like a super old dude when I say that because I'm sure a lot of people are like, "What's TV?" You know, I just look at stuff on my phone. But basically, we're seeing this from Microsoft, where with Copilot they're saying, "You can just take this 150-page document, ingest it into Copilot, and then say, 'I want you to create a presentation from this. It’s going to be 10 slides,' whatever, whatever, whatever, and then it spits out this beautiful PowerPoint."

 

But I think, Erin, what you’ve done showing me some of these data sets has really shown what is possible. But you do need a little bit of knowledge on how to prompt. You have to get your dataset primed. There are some upfront things that you have to do that aren't necessarily as easy or intuitive. But once you do it, there’s something to talk about.

 

So, I'd love to go through that use case. I think you had a pretty cool demo of some of these datasets and how you can do different things with the GenAIs that are available. I think we picked two at random and decided to show the art of the possible. And then, as a security leader, I’ll give a little bit of what I would insert or expect based on what you’re showing me.

 

Erin Carpenter: Yeah, let's do it. And before I go into that, there are two things I want to mention. As a business user actively using AI—and I got this from Paul Roetzer, who heads the Marketing AI Institute—if you think of your job as a collection of tasks, and you start to evaluate and compartmentalize them, and think about how AI can advance any of those, that's how you can make your job more efficient. You can advance faster rather than just grinding from start to finish.

 

There are even evaluations you can perform to see, "Hey, this usually would take me three hours. I could shorten it to one hour or 10 minutes," something like that.

 

So if you do that, I think you can think of AI as a way to get you to where you need to be faster. The other thing to keep in mind is it helps to know what good looks like. If you start with the end in mind, that helps you become better at prompting. I think that's where someone who is a little more experienced in whatever work they're doing will have a better time using it because they'll know how to guide the AI, just like they’re guiding an intern.

 

Brad Bussie: I love that. And that makes me think of so many people who are saying, "AI is going to take my job." Right? And it really makes me say, "I don't think we're there yet." I don't think we truly have artificial intelligence. We have augmented intelligence. And the only reason it's going to take your job is because somebody else knows how to use it better than you do.

 

Erin Carpenter: That's it. And honestly, that’s my point of view. Someone who is a little more experienced in their role—as long as they're willing to get in there, try and try—they actually might be more advantaged because they’re going to get to where they need to be faster. It’s not like someone just walking into the workforce who might be super tech-savvy is necessarily advantaged, because they're new at the role and don't even know where to start.

 

It's like a blank canvas for them. Right? So don't be afraid. Even though we're only a couple of years in since ChatGPT came on the scene, don't be afraid to tinker.

 

All right. Are we ready to set up this demo?

 

Erin Carpenter: Yes. Here we go. We are going to start with a dummy dataset that I obtained from a fabulous company called Refine Labs.

 

So this is example data. What you're seeing right here is something that someone in marketing might want to use. They're going to look at a set of data, perform an analysis, and perhaps create some charts that they want to report up to the C-level to demonstrate performance.

 

This sheet of data demonstrates the source of opportunities and the source of the pipeline, whether it's cold outbound, low-intent lead gen, or ABM intent. A marketing leader might want to demonstrate that low-intent lead gen, which is what a lot of people are forced to do, really isn't that impactful if you're actually looking at the pipeline.

 

While it might generate a lot of leads in volume, when it really comes down to pipeline, it's very inefficient. All right, getting deep into the marketing. Go ahead.

 

Brad Bussie: I was going to say, for those of us that aren't in marketing, what is ABM? I mean, I'm sure somebody is like, "ABM? What is ABM?"

 

Erin Carpenter: Account-based marketing. So yeah, at the end of the day, it's doing marketing well, focused on a big account with a buying committee, and attacking it from multiple angles. Cool. There are a couple of sheets of interest. This is a pipeline source report. Then, this is a report that breaks down the pipeline by a point of conversion.

 

When those leads came in, did they come in from gated content (that's content with a form), content syndication, or events? What was the point of conversion for those channels? If you want to analyze them and derive some insights, that could take a marketing analyst. It could also be assisted, or augmented, as you say, Brad, by AI.

 

So here's what we're going to do. We're going to test it with ChatGPT. I'm on the ChatGPT Team plan, so it is a more advanced version. Then, we're going to do an experiment with Claude. All right. You ready?

 

Brad Bussie: I'm ready. I'm ready.

 

Erin Carpenter: First thing, I'm going to give it a prompt.

 

I'm going to say, "You are a marketing leader analyzing a pipeline source report, and preparing the data for the C-level. Generate some reports along with a summary analysis and recommended action steps in a manner that the board would appreciate."

 

Okay, so I put my prompt in there. Now what I'm going to do is take a screenshot. I'll do one screenshot. Oh, hang on a second. There’s one more thing. There are two reports here. One reflects the overall pipeline source. ChatGPT doesn't really care if I misspell either.

 

Brad Bussie: That's good. It straightens it out. Yeah.

 

Erin Carpenter: The other reflects opportunities by point of conversion. All right, so I took one screenshot. Then, I'm going to go to the split funnel analysis. Honestly, the more context you give it, the better. I didn’t give it a lot of guidance, but what you're going to see are two screenshots side by side. Because this is going to bother me, I'm going to copy and paste that prompt.

 

Brad Bussie: I love the character recognition it can do, the OCR. Probably for most people watching and listening, they’re thinking, "Boy, I probably would have done a CSV file or something like that." I’ve done this before with CSV, but with a lot of these GenAIs now, you can just use a screenshot. It’s going to recognize where the columns are, where the rows are, and what the header is.

 

It’s getting pretty useful for just screenshots and other things.

 

Erin Carpenter: Oh yeah. I've leveraged that so much. Sometimes I need to put something in a spreadsheet that's in table format. I'll take a screenshot and say, "Put this in a table." Then, I’ll copy and paste the table, and it’s in there.

 

In the past, you'd have to manually copy, paste every single cell, or type it.
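As a sketch of that screenshot-to-table workflow done programmatically, here is what the same request might look like through the OpenAI Python SDK's vision-capable chat API. The model name and file path are assumptions; in the episode, Erin does all of this through the ChatGPT UI with no code at all.

```python
# Sketch: extract a table from a screenshot via a vision-capable model.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
import base64
from openai import OpenAI

client = OpenAI()

with open("pipeline_report.png", "rb") as f:  # hypothetical screenshot
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for whatever vision-capable model you license
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract this report as a CSV table."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # CSV text, ready to paste
```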

 

Brad Bussie: You know, it’s wild that it still hasn’t gotten very good at, like, if you’re trying to have it create a video or create a picture and there’s any text in there—even if you tell it what the text is supposed to say—it comes out jumbled, symbols. It almost looks like a different language, even though you said, "Hey, I want you to say this exact thing." It’s still not there yet.

 

Erin Carpenter: It's not there. It's practically creating hieroglyphics when it does it. Yes. Okay, so it created a nice analysis here. This is where I would want to review and make sure the data is correct. I would cherry-pick it a little bit and spot-check to make sure the data points are right. This might even be more wordy than I want, so I would ask it to synthesize it. The one piece that is missing here, and I'm going to push it a little further, is that I want it to create some visualizations. I'll tell it, "Now generate some visualizations of this data."

 

ChatGPT first branded this as Code Interpreter in July of 2023. Thankfully, they rebranded it as Advanced Data Analysis and rolled it up into the overall GPT-4o experience. So what it's doing now is analyzing that same data and is hopefully going to create some reports. It might be a pie chart or a line chart. Go ahead.

 

Brad Bussie: While it's doing this, if I were looking at this from a security professional's point of view, the first thing I would want to see is the version of ChatGPT that you're using, because it's a segmented-off piece of the OpenAI models that doesn't use any data you put there for training the model.

 

So there are different licensing structures. We don't have to get into that, but that's the first thing I would want to see. Second, if I were concerned about PII, like if any of this data in those screenshots contained sensitive information, what’s nice is a lot of the browser plugins, as well as some of the SASE technology, will actually do the exact same thing. But it’s going to look at that text before it lands in ChatGPT.

 

It would look and say, "There’s a social security number in here," or "There’s something that identifies the employee," or "Some of that marketing data contains sensitive information." It would actually prevent you from even hitting enter to submit it. It would pop a box that says you need to delete these because it’s detected some of those things.

 

Obviously, having a policy is a good first step, but we need some tooling in there as well to protect us from ourselves.
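As a rough illustration of that second gate, here is a small sketch of a client-side check that scans a prompt for obvious PII patterns before it can be submitted. The regexes are illustrative only; real DLP tooling in browser plugins and SASE clients is far more sophisticated.

```python
# Sketch of a pre-submission PII scan (illustrative patterns only).
import re

PII_PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_before_submit(text: str) -> list[str]:
    """Return the PII categories found; an empty list means OK to submit."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize Q3 pipeline. Contact jane.doe@example.com, SSN 123-45-6789."
hits = scan_before_submit(prompt)
if hits:
    # A real extension would pop a dialog and block the Enter key here.
    print(f"Blocked: remove {', '.join(hits)} before submitting.")
```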

 

Erin Carpenter: Yeah, that's a really good point. All I know is that ChatGPT Team says that all the data stays secure within here. Now, with that said, I've checked with you on it, Brad, and I still would not put sensitive data in it.

 

Brad Bussie: Yeah.

 

Erin Carpenter: So everything is still anonymized.

 

Brad Bussie: Thank you for that. I mean, I’ve read their privacy policies and their whole implementation. It looks good on paper. I’m just more of a "trust but verify" kind of person. If we can prevent some of those things from getting out in the first place, then we’re better off.

 

Unless you absolutely have to do that kind of thing for your organization, then I would just want the PII redacted. If I couldn't do that, then I would suggest a private version or your own GPT, where there is no way of cross-contaminating. It’s all in your own infrastructure and things like that.

 

Then, I would be okay with it. But there are a lot of ifs, thens, and elses that need to happen before that.

 

Erin Carpenter: No. Yeah. That would probably require an enterprise-level version at this point. It's a whole other ball of wax, but I understand why it's important. Okay, so here's what it came up with, which is interesting.

 

Brad Bussie: That is.

 

Erin Carpenter: Yeah. This is not what it showed last time. Of course, you can tell it to iterate if this is not what you want to see. I’d rather see a bar chart or a pie chart. If you look here, we tested this earlier. You can see some of the visualizations it created right here.

 

Brad Bussie: Yeah, and this all goes into how it’s prompted, I think. I think you and I have both seen this. I’ll say this, and then I’ll ask it the same thing tomorrow. It may interpret based on some of the other things that I’ve talked about between now and then, and it may do something different.

 

It really is like talking to a person a lot of the time. We’re fairly inconsistent.

 

Erin Carpenter: Oh, absolutely. You’ll see here, it actually turned that screenshot into a table before it turned it into a chart. Wow. All right. We are going to take this prompt, and now we’re going to hop over to our good friend, Claude.

 

Well, I've got to go back to the beginning because you want to see it from the beginning. All right, so I'm going to prompt it, copy and paste. The only thing different in this prompt from the one I tested earlier is that I asked it to create an interactive dashboard. Spoiler alert: ever since Anthropic released Claude 3.5 Sonnet about three and a half weeks ago, there's been a feature in it called Artifacts.

 

If you turn it on, it enables you to create this interactive dashboard. GPT-4o cannot do that. That's why I added it here.

 

Brad Bussie: I was blown away. Yeah. Oh, sorry. I’m stealing the thunder.

 

Erin Carpenter: There is no thunder to steal. It’s a whole thunderstorm here, so there’s nothing to steal.

 

All right, split the funnel. I’m lazy, so I’m going to do two screenshots right here. Go back to our friend, Claude. Again, Claude lets us add multiple screenshots. One thing I’ve noticed is if you're uploading CSV files or certain data, you may run into limitations in terms of the context window and what you can put in.

 

So those are some limiting factors. This screenshot’s fine. I’m going to go ahead and hit submit.

 

This is so freaking cool. Look at this. Wow. I am not even a programmer. Not even close.

 

Brad Bussie: Your friend Claude looks pretty proficient.

 

Erin Carpenter: That’s it. I mean, it looks good. We’ll have to talk to some of our engineers to make sure this is right, but...

 

Brad Bussie: Yeah, I’d have to dust off some of my old programming knowledge to read this.

 

Erin Carpenter: Right. All right. Here we go—total revenue, $17 million, 9,175 opportunities, win rate 8.58%. And remember, I wanted it to create an interactive dashboard.

 

Look at that. Yes, it’s interactive. I can click some buttons and toggle between views. It’s done a really great job of creating a helpful visualization. Here’s the analysis, recommended action steps, and I could go through those. I can actually even publish this if I want. I can download it to a file, and off we go.

 

I can even tell it to change the data around, eliminate something, or create a different visualization. You get the idea.

 

Brad Bussie: Yeah. It makes me wonder how long they’re going to let data live interactively like this. If there’s going to be a limit—because I think some of this could get pretty big. Depending on how an organization uses it, if they’re having others manipulate the data or visit the data, they almost become a hosting company at that point. I wonder if there’s going to be a rule like, "This is only going to live for a certain amount of time," and then it’ll have to regenerate. It’ll be interesting to see how these Artifacts are used.

 

As you know, as a security professional, I’m looking at some of this stuff and thinking, "This is great, but what controls do we have to make sure the data stays where it’s supposed to?"

 

Erin Carpenter: Oh yeah. I don't know if it's in the actual terms, but Claude says that sensitive data is not used to train the model. That's where I go, "Hey Brad, is this right?" Again, I keep coming back to you. Hey, quick caveat with this particular feature: I'm using the $20-a-month version. Claude is still excellent at the free level, but to my knowledge, the free tier doesn't do these Artifacts.

 

But I mean, the amount of time saved—as a marketing leader, I can now validate exactly what I need to show the board. Low-intent lead gen: this is why we're not paying companies to do lead gen with white papers. Those leads don't convert at a strong win rate. It's a waste of money.

 

But that can be repurposed for any department, really. There are so many incredible insights. Look how fast I was able to get to that conclusion, perhaps take a screenshot of this, and put it into my monthly marketing report.

 

Brad Bussie: Yeah, and just imagine if you had to depend on somebody to do this for you. The savings in time—it’s forcing people to stretch a little bit in their career and helping them to grow in ways we didn’t expect.

 

I know a lot of marketing professionals now who are quasi-IT in a lot of what they’re doing, especially with the applications. Having a good understanding of integrations—like integrating something between HubSpot and another tool, and how to use Zapier, with that being middleware that does all the translations—that’s generally IT-level stuff. But yeah, it’s interesting having conversations with marketing professionals. They’re like, "Oh yeah, this is how you set up the connector. There’s an API, and you do this, this, and this." It’s like, do you want a job? Because we’re a little shorthanded in cyber and IT in general, and I think we could use some help.

 

Erin Carpenter: You know what? That's funny to hear you say that. That's actually a really good segue, as you're talking about Zapier and middleware. I've actually done that to some extent, where I've connected data from Google Sheets to something else and created actions. It actually does plug into the API of ChatGPT. It can create actions based on a programmed prompt. And that's just one application. AI is being infused into almost every single application now. If an application wants to survive in the next year, it's going to have to have that, if it's not natively AI.

 

So what do we do? Can I use it? Can I not? What do I prompt it with? I mean, HubSpot has AI built right in. All of them do. So what do we think about as business users? And as security leaders, how do we guide people with these applications?

 

Brad Bussie: I think middleware is crucial. Generally, what they’re doing is becoming the broker between everything. If you insert API security into that, that’s where I would want to see it as a security professional. I want to see that APIs are secure, that you’re using the right type of authentication and authorization, and that you’re using zero-trust principles. A lot of these middlewares, like Zapier, are doing that to a certain extent, but that’s where you have to plug in the whole application security stack.

 

So it’s still tool-intensive. There are a lot of different kinds. If you’re a listener or someone who’s watching, and you’re interested in what we’re talking about, if you go and look at Apigee or Noname Security, you’ll start to get a sense of API security and why it’s so important. That really is where all of these connectors are going and how they’re talking to things.
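As a toy example of that deny-by-default posture, here is a sketch of the scope check a middleware broker might apply to each connector. The tokens and scope names are hypothetical; a real deployment would sit behind a gateway such as Apigee rather than a hand-rolled check like this.

```python
# Toy zero-trust scope check for middleware connectors (hypothetical names).
import hmac

# Hypothetical registry: connector token -> scopes it is granted.
CONNECTOR_SCOPES = {
    "zapier-hubspot-sync": {"contacts:read"},               # read-only connector
    "internal-reporting":  {"contacts:read", "deals:read"},
}

def authorize(token: str, required_scope: str) -> bool:
    """Deny by default; allow only if the token exists and carries the scope."""
    for known_token, scopes in CONNECTOR_SCOPES.items():
        # Constant-time comparison avoids leaking the token via timing.
        if hmac.compare_digest(token, known_token) and required_scope in scopes:
            return True
    return False

print(authorize("zapier-hubspot-sync", "contacts:read"))   # True
print(authorize("zapier-hubspot-sync", "contacts:write"))  # False: least privilege
```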

 

Now, when I’m looking at an application—let’s say it’s HubSpot, and it’s got its own GPT interface or artificial intelligence interface—generally, what they’re doing is similar to what we’ve talked about, but they’re doing it with the data they have access to. So things that you’ve already uploaded and that exist inside HubSpot—that’s where you can say, "Create that cool dashboard for me based on the customers I have inside." It’s indexing data, doing all the cool correlation things, and then displaying it.

 

Some of them do go out and allow a connection with an external data source. Those are the ones you need to look out for, and those are the ones I’m not a big fan of. They’ll try to augment the data they have with something on the internet or an external data source of some kind. They’re trying to enrich the information. That’s where data security comes into play for me.

 

So read the agreements with your applications pretty closely because all of them now have an AI clause where they talk about what they’re using, how it’s using your information, and all that kind of stuff.

 

Like you said, Erin, pretty much everybody’s doing this. When I walked the floor at Black Hat, I couldn’t walk two feet without bumping into AI. Even if you weren’t an AI company, you were a security platform powered by AI or able to do something new and exciting with fewer people. Because like you just demonstrated, creating those dashboards and things like that—maybe that used to be a person, and you don’t need that person anymore.

 

That’s not the case most of the time, because that person who was building those is probably now your prompt engineer—the person who knows more about these GenAIs than anyone else. I think I’m seeing more of that than complete displacement. But you’re right, most organizations and applications that have an AI component are mainly living inside the data they already have and just making your interface with them better and faster. I see that in a security operations center—a good security use case where instead of building a playbook by hand, now I can go into my SOAR solution inside that operations center and say, "I’m trying to do the following things with this data to prevent the following." You give it a big, long prompt, and next thing you know, you have a playbook that you can then ingest into the organization.
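For readers who want to try the SOAR idea Brad describes, here is a minimal sketch that asks a model for a draft playbook via the OpenAI Python SDK. The model name and the YAML shape are assumptions, not any particular SOAR vendor's format, and any draft should be reviewed by the SOC before it is ingested.

```python
# Sketch: prompt a GenAI for a draft SOAR playbook instead of building it by hand.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft an incident-response playbook as YAML with named steps. "
    "Goal: when a phishing email is reported, quarantine the message, "
    "reset the affected user's sessions, and open a ticket for review."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you license
    messages=[{"role": "user", "content": prompt}],
)

draft_playbook = response.choices[0].message.content
print(draft_playbook)  # review with the SOC before ingesting into the SOAR
```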

 

That, to me, is exciting. All of these different applications are doing fun, exciting, and interesting things with GenAI. We just need to be careful how they’re using the data and make sure that, as consumers, we understand how that data is being used.

 

Erin Carpenter: Wow. Sounds like we have a lot of exciting upcoming episodes around this as it evolves.

 

Brad Bussie: Yeah. Yeah.

 

Erin Carpenter: Never let go of one of the models or one of the applications. Keep testing them because they keep changing.

 

Brad Bussie: They do. They definitely do.

 

Erin Carpenter: Yeah.

 

Brad Bussie: So I think that’s pretty much all we wanted to cover for today. Anything else?

 

Erin Carpenter: Yeah, absolutely. And I want to throw this out to anyone who’s listening—if they are… I’m just kidding. No, honestly, if anyone who’s listening has any questions for us—I know I have a lot of folks in my marketing community who have these questions and are curious—shoot me a question. Shoot Brad a question. I’ll be happy to run it by him, at least from that angle or any angle.

 

Brad Bussie: Yeah, and we’re definitely open to questions and answers across cyber too. If there’s something you’re thinking about and you’d like us to go more in-depth on—stories you come across, things that are happening in your own organization—just feel free to drop that as well, and we can maybe add it to a future episode.

 

Erin Carpenter: Love it.

 

Brad Bussie: All right. Well, thanks, everybody, for joining us for the State of Enterprise IT Security Edition. We’ll see you next time.

 

Erin Carpenter: See you later. Thanks, Brad.

 

Brad Bussie: Thanks.

 

Written By: Brad Bussie