Welcome to Cash in the Cyber Sheets. I'm your host, James Bowers, and together we'll work with business leaders and industry experts to dive into the misunderstood business of cybersecurity and compliance to learn how to start making money from being secure and compliant. Welcome to Cash in the Cyber Sheets.
Hey everybody, welcome to Cash in the Cyber Sheets. I'm your host, James Bowers, Chief Security and Compliance Architect here at Input Output. Very happy to have you back with us this week, and we are continuing our discussion on the Dirty 13, the 13 most common audit issues that we see when auditing CPAs and really just about any other firm out there. This week, we're going to get into incident response management, basically how companies are managing incidents, how they're documenting that, how they're getting ready for it.
And we've got some other podcasts, a lot of documentation on our site about incident response management, how to create one, how to review it, the structures. There's a lot of information there. This week, we're just going to be going over really the highlights and basically the big things that we see that companies are tripping up on.
So really good information in there. Before we get into that, please click that subscribe, click that follow wherever you're listening to us, Spotify, Apple Podcasts, YouTube, wherever it's at, go ahead and follow, send us some comments, love to hear from you. So jumping into incident response management.
The biggest thing that we see is no policy at all, just nothing. Or if there is one, it's very, very light, meaning they've got a page or two in their policies saying they need to have an incident response process, or that they've got to plan and make sure they're ready for incidents. But that's really it.
And if that sounds like not a whole lot of information, it's because it isn't. It's really nothing. It doesn't give any direction.
And what happens is when an incident occurs, everything just goes sideways. The company doesn't know how to respond. They don't know which foot should go in front of the other.
It's just a whole house of cards coming down. Tied to that lack of policy, the next big thing that we see, and sometimes we see it even when there is a policy, is not having an incident response team. Now, these can be called whatever you want to call them.
In a lot of our policies, we call it the ICERT, the Information Security Incident Response Team. There are computer security incident response teams; potato, potahto. Whatever you call it, it's the group of people or organizations that, when you have an incident, are the ones you're going to call and pull in to save your bacon. Some really good ones to have on there: you definitely want an ICERT leader.
Who's going to manage all of this? Who's going to keep everybody moving and make sure everybody's doing what they're supposed to do? Other good ones to have on there are your technology leads, so typically your head of IT or some of your top-tier technicians will be part of the ICERT.
You also typically have your insurance companies. You want all of that listed out. Some things that aren't always on there are the local non-emergency police department and your local FBI field office, so that you can report a cyber incident.
You also want to list who's going to handle PR and discussions with the public, and your legal counsel. All of the different parties and groups you may need for support, you want them all listed in one spot.
And that doesn't mean that when an incident occurs you're instantly pulling everybody in, but you do have everybody's name listed there, with all the contacts, so at different stages of the incident you know who you need to reach out to.
So that's very, very important. You want to have that incident response team identified, documented, and available. That's a big thing.
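If it helps to make that concrete, here's a minimal sketch of an ICERT roster kept as structured data rather than buried in a binder. Every role, name, and number below is a hypothetical placeholder:

```python
# A minimal sketch of an ICERT contact roster as structured data.
# All names and numbers are hypothetical placeholders.
ICERT_ROSTER = [
    {"role": "ICERT leader",                 "name": "J. Doe",         "phone": "+1-555-0100"},
    {"role": "Head of IT / technology lead", "name": "A. Smith",       "phone": "+1-555-0101"},
    {"role": "Cyber insurance carrier",      "name": "Acme Insurance", "phone": "+1-555-0102"},
    {"role": "PR / public communications",   "name": "B. Lee",         "phone": "+1-555-0103"},
    {"role": "Legal counsel",                "name": "C. Ray",         "phone": "+1-555-0104"},
    {"role": "Local FBI field office",       "name": "Duty line",      "phone": "+1-555-0105"},
    {"role": "Non-emergency police",         "name": "City PD",        "phone": "+1-555-0106"},
]

def contacts_for(role_keyword: str) -> list[dict]:
    """Find who to call at a given stage of the incident."""
    return [c for c in ICERT_ROSTER if role_keyword.lower() in c["role"].lower()]

# Example: who handles public statements?
print(contacts_for("PR"))
```

The format isn't the point; a spreadsheet works just as well, as long as every stage of an incident maps to a person you can actually reach.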
You don't want it just sitting in a drawer somewhere that nobody knows about. Make sure it's available to all the people who need it. Another big thing that we see, even with pretty well-developed incident response plans, is no communication plan.
This can be a complete absence of communication directives or structure, or it can be missing pieces of how the incident response team communicates: how is the team going to document what they're seeing, retain evidence, and let each other know where they are in each of the steps, so the team moves together as a unified team rather than a whole sack full of kittens all going their own way? You want a good communication structure, and it doesn't need to be crazy.
It can even just be a Teams channel, a dedicated incident response channel. We're going to put everything in here, all of the evidence; just throw it all in, and we'll clean it up later.
But here's where we communicate. You can also just use cell phone numbers, or do it by email.
Honestly, something like Teams or Slack is really good, because that way all of the communication is contained in one spot. You also have all of the documentation, which is really valuable post-incident for lessons learned, and for regulatory requirements.
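To picture what that looks like in practice, here's a rough sketch using Slack's Python SDK (slack_sdk); the token handling, channel ID, and file name are hypothetical placeholders:

```python
# A rough sketch of posting status updates and evidence into a dedicated
# incident channel with Slack's Python SDK. Token, channel ID, and file
# name are hypothetical placeholders.
import os
from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
INCIDENT_CHANNEL = "C0123456789"  # channel ID for, say, #ir-2024-001

# Status update: timestamped, in one place, ready for lessons learned
# and regulatory documentation later.
client.chat_postMessage(
    channel=INCIDENT_CHANNEL,
    text="Containment step 2 complete: affected server isolated from the network.",
)

# Evidence: throw it all in now, clean it up later.
client.files_upload_v2(
    channel=INCIDENT_CHANNEL,
    file="suspicious_process_list.txt",
    title="Process list from affected server",
)
```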
You also want to cover in your communication plan how you're going to talk with your staff, with your associates. Say there's a hurricane coming. How are we going to notify all of our associates that they don't need to come into the office? How are we going to notify them when it's safe to come back in? Or, God forbid, we get hit like in North Carolina, God bless all of those people out there dealing with it. How are we, as an organization, going to reach out, make sure our associates are safe, and do what we can to help them? It's very, very important to have this information. That also goes back to the employees: make sure they keep their contact information updated.
So we want to have all of that in there. This doesn't need to be a crazy system either. I would caution against just doing email, because if we're in an incident, email may not work.
Other things work really well too. You could use a platform like Twilio where, I believe, you're only paying for the messages you send out, a very, very low amount. So it can basically sit there on standby for you, and when you need it, you spend a little bit of money to communicate, but not that much.
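As a sketch of that standby setup, here's roughly what a staff SMS broadcast could look like with Twilio's Python library; the credentials, phone numbers, and message are all placeholders:

```python
# A minimal sketch of a standby staff-notification broadcast via Twilio.
# Credentials, numbers, and roster are hypothetical placeholders.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def notify_staff(message: str, staff_numbers: list[str]) -> None:
    """Send one short status SMS to every associate on the roster."""
    for number in staff_numbers:
        client.messages.create(
            body=message,
            from_="+15555550100",  # your provisioned Twilio number (placeholder)
            to=number,
        )

# Example: the "don't come in" notice from the hurricane scenario.
notify_staff(
    "Office closed tomorrow due to the storm. Do not come in. "
    "Watch this number for the all-clear.",
    ["+15555550123", "+15555550124"],
)
```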
So make sure you have a good communication plan with your staff. The final piece of the communication plan is your client communications.
How are we going to communicate with our clients? This is very important, and it's critical that legal and your compliance officer are involved, because there are regulatory requirements for how you notify your clients or the affected data subjects: in what time frames, what you say, and what you're required to say. These are things you don't want to be trying to figure out during an incident. You want this all planned out beforehand. What I recommend is having a few different templates for the different stages of an incident.
We just noticed something, but we don't really know what's going on yet, so we're investigating. Here's a letter for that, ready to send to everybody: hey, we're investigating. We have no idea yet if you were impacted. I wouldn't say it quite like that, but: we're investigating, and we'll let you know if we see anything. Right now, we haven't seen any indication that your data was compromised. That's the better way to say it: as of yet, we haven't seen your data compromised.
Then, as you continue through the investigation: here's what we've discovered, here's what we believe, if you want to include that, and here's what we're continuing to do. Also make sure the letter says how they can contact you with questions, and what recourse is available to them.
And then finally, the template for once you know what happened and what the resolution is going to be. Again, just have the structure there so that when the time comes, you can pull it out of the drawer, fill in the blanks, and send it out.
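As a sketch of that fill-in-the-blanks structure, here's one hypothetical way to keep the stage templates; the stages and wording are illustrative only, and the real language needs sign-off from legal and your compliance officer:

```python
# Hypothetical client-notification templates, one per incident stage.
# Illustrative wording only -- not legal language.
TEMPLATES = {
    "investigating": (
        "We recently identified unusual activity and are investigating. "
        "At this time we have not seen any indication that your data was "
        "compromised. We will update you as we learn more. Questions: {contact}."
    ),
    "update": (
        "Update on our ongoing investigation: {findings}. "
        "Here is what we are continuing to do: {next_steps}. Questions: {contact}."
    ),
    "resolution": (
        "Our investigation is complete. What happened: {summary}. "
        "Resolution: {resolution}. Recourse available to you: {recourse}. "
        "Questions: {contact}."
    ),
}

# Fill in the blanks and send -- no drafting from scratch mid-incident.
letter = TEMPLATES["investigating"].format(contact="response@example.com")
```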
The next big thing that we see, and this is in practically every incident response plan, even at some pretty major companies, is no action thresholds. What I mean by that is we haven't defined at what point we're going to take which actions. Let me explain it through the way I see it become a problem. Without these defined limits, these defined action points, here's what happens a lot in an incident: obviously we don't want to reach out to our clients until we know what happened, or best case, we don't have to reach out to them at all because they weren't impacted.
We would really prefer that. And because that's the company's end goal, their preference, they'll take baby steps during an incident. We see that this system or this server may have been compromised.
Let's give it another few hours to see what the team can identify, and then we'll do a client communication. A few hours go by. It looks like maybe it was compromised, but we've just got to investigate a little further. It may not have been. Okay, let's wait a little longer, see if they can clarify that so we can give definitive information. And this just continues and continues, these little baby steps to get a little more information so we can speak intelligently.
And what ends up happening is, when you look back, you should have sent out a notification days ago, in some cases weeks ago, or God forbid, months ago. Now we're well past our regulatory required notification time, whether that's 30, 45, or 60 days for the type of incident and our jurisdiction. So now we have a regulatory violation, and that starts to compound things; that makes things a lot worse.
We also see that companies, understandably not wanting to spend money they don't have to, hold back on pulling in the forensic or recovery teams. Let's just see a little bit more. If we really need to pull in those teams, we will, but are you sure we need to? Maybe we can check this over here first, and now let's check this again. It turns into all these baby steps, and typically hours and days have gone by when you should have already pulled in the teams that could have better contained the incident and better identified exactly what happened. But because you waited, it's become a bigger issue, or you've lost logs that could have shown you what did or didn't happen.
And we'll get into it in another episode, but logs are very important because they can actually show you what didn't happen, which can give you the legal basis to not have to send out notifications at all. That can save you a lot of money and a lot of embarrassment. So holding off, trying to save that money, and I won't say a little bit of money, because forensic and response teams are expensive; they're good at what they do. But saving that bit of money can turn into having to spend a lot more, or taking a pretty significant reputation hit.
So those are the biggest issues: the lack of policy, no identified incident response team, and the lack of a communication plan for the incident response team itself, for the staff, and for the clients. And finally, what we see as the most pervasive issue: no action thresholds.
To counteract all of these, make sure you have a good policy in place. Here's what we do. Here's how we plan for incidents.
Here's how we structure our team. Here's what we do when an incident happens. Here's who we're pulling in, how we communicate, how we resolve, and then how we do a lessons learned.
Have good communication plans in place. These can be as simple as a communication matrix, or again, set everybody up through email, Slack or Teams, and cell phones, so that if one system's down, you can fall back on the others.
And most importantly, have your action thresholds. When we notice that this server or these servers look like there's an intrusion, pull in the forensics team. The money is already authorized.
Management does not need to be consulted. Pull them in. When we have gone 12 hours into an incident, at that point, whatever it is we know, we're going to send out a notification.
Significant points like that, either timeframes or specific events, a particular system or some sort of trigger, are what you want structured, so you don't take baby steps during an incident that end up putting you miles from where you need to be.
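One way to capture those thresholds is a simple pre-authorized action table; every trigger, time mark, and action in this sketch is a hypothetical placeholder to adapt:

```python
# A hypothetical action-threshold table: each trigger maps to a
# pre-authorized action, so nobody debates budget or next steps mid-incident.
ACTION_THRESHOLDS = [
    {
        "trigger": "suspected intrusion on a production server",
        "action": "pull in the forensics team immediately",
        "pre_authorized": True,  # money approved; no management sign-off needed
    },
    {
        "trigger": "12 hours elapsed since incident declared",
        "action": "send client notification with whatever is known",
        "pre_authorized": True,
    },
    {
        "trigger": "confirmed exposure of client data",
        "action": "start the regulatory notification clock; engage legal counsel",
        "pre_authorized": True,
    },
]

def actions_due(trigger: str) -> list[str]:
    """Look up the pre-authorized actions when a trigger fires."""
    return [t["action"] for t in ACTION_THRESHOLDS if t["trigger"] == trigger]
```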
That is all I have today on our Dirty 13 incident response management issues. Again, this was a supersonic fly-through, a high-level overview. Like we talked about before, we're trying to keep these a little more condensed and palatable, but we do have a lot more information on our website and in other podcasts that go into more detail. And don't worry, there's going to be a lot more on incident response. It's a huge, huge issue.
So we'll continue to talk about it, probably even have a whole season just on incident response. But in any case, thank you for listening today. Please click that like and that follow, send us some comments, and we can't wait to see you back next week.
Thanks for joining us today. Don't forget, click that subscribe button, leave us a review, and share it with your network. Remember, security and compliance aren't just about avoiding risk. They're about unlocking your business's full potential. So stay secure, stay compliant, and we'll catch you next week on Cash in the Cyber Sheets. Goodbye for now.