Transcript
Erica Smith
Good afternoon, everyone, and welcome. I’m Erica Smith, Director of the Security Operations Center here at CyberMax, and I’ll be moderating today’s discussion on one of the most critical aspects of cybersecurity: response actions. In security, detection gets a lot of attention, but it’s really our responses that define how well we’re truly protecting our customers. Today, we’ll be diving into some real-world examples of how our team is containing threats, minimizing impact, and ensuring our customers stay secure. I’m joined by Ryan Bratton and Stephanie Camacho. Both have been in the SOC for five years. Ryan is our auditor and Stephanie is one of our shift leads. Together, we’ll walk through some real examples, the thought process behind each, and the lessons we’ve learned along the way. We’ll go ahead and get started. I will be hopping off camera. I’m having some technical difficulties right now, but I’ll be here and we’ll do our best. So just to run through the webinar agenda: we’re going to share some real stories from our SOC that highlight the power of our proactive, real-time response. Hopefully, that will give you a better understanding of day-to-day life at the CyberMax SOC. We’ll explain why these stories matter and what the Big R really means for CyberMax. And then hopefully, at the end, we’ll be able to answer some questions. So for our first story, I’m gonna hand it over to Ryan. Ryan, take it away.
Ryan Bratton
Thank you very much for that introduction, Erica, and for laying out the agenda for today. And as you mentioned, we’re gonna run through a couple of stories here with you guys just to give you an understanding of what it’s like in the CyberMax SOC, right? What it’s like in a lot of SOCs, for that matter, and what we look at on a daily basis. What are some of the things that we deal with day in and day out, right? So just to set the stage for this story for y’all: the title of this story is The Call That Protected Four Clients. As cybersecurity professionals, we place a significant focus on hard evidence, right? Tangible facts that lead to conclusions. But what about when there is no evidence? What do we do then? That sets the stage for this story. I hope you enjoy. So our security operations center received a phone call from one of our customers informing us that one of their providers was recently compromised. Given that there are connection points between the compromised vendor and our own customer, our customer was very concerned about this, right? They’re concerned that the vendor compromise is going to impact their own organization. So what’s the SOC going to do here? We’re going to spring into action. We’re going to gather the necessary information, identify any possible indicators of compromise, and review the potentially impacted customer to identify and investigate any signs of intrusion. As we began our investigation, we actually identified that three other CyberMax customers may have been affected because they work very closely with the same provider that was originally impacted. So what do we do from here? We’re going to expand our search, broaden our viewpoint, take a step back, and understand what might be happening across these four customers of ours. So we’re going to look at those customers’ environments. We’re going to identify any signs of intrusion, evidence of compromise, or even just suspicious activity that might lead to something else in the future. As a blanket statement, we did not identify any signs of compromise in any of our customer environments, but our response doesn’t stop there, right? We’re going to work diligently with each of these four customers to provide them with actionable recommendations to limit their exposure to this compromised vendor. As we performed our review and investigated those sign-in logs for any IOCs or indicators of attack, we also identified certain security points that may need to be improved in those customer environments. We didn’t just stop there; we provided them with security recommendations to improve their posture overall. And again, just to clarify: this was a phone call from our customer, right? This was not an alert on our dashboard. This was not a specific event in the SIEM. This was a notification from our customer letting us know about a compromised vendor of theirs that may have led to their own organization becoming affected. Fortunately, their organizations were not impacted by this. But just to round out this story here, notifications of malice can come from any source. It can be an alert on our dashboard, but it can also be a notification or even just a hunch from an analyst, right?
Something that they suspect is a little bit off from the norm. But it’s our duty to spring into action and protect our customers at all costs, no matter the origin of that reporting.
Erica Smith
Thanks, Ryan. When the SOC receives a notification like this, what are you looking for in the environment to determine if they’ve been impacted?
Ryan Bratton
Yeah, so that’s a good question. And, you know, we’re not going off of that specific alert, right? Like I mentioned, this is something that we didn’t receive as an endpoint detection and response. you know, notification or alert in that regard, right? This requires our team to understand normal. We need to know normal and be able to identify what malicious activity really looks like. So just to kind of break down the investigation for you a little bit, we’re going to break it down into certain segments, right? A few different segments. You know, initially, we’re going to identify those connection points, those linking factors. They may be systems or accounts, right, between the customer of ours and the compromised vendor. We’re going to identify those connection points and really dig into the login events there, network traffic, any suspicious programs that might be on those systems that link the organizations together. And after we perform that comprehensive review of the environment, for any of those signs of intrusion, we’re going to take that tangible information that we’ve gained from that, the IOCs that we may have uncovered. if we did uncover any of them, or even if they’re IOCs that were provided to us from that vendor, we’re going to generate detection rules for any of those known IOCs.
Erica Smith
Okay. What would you have done differently if we had identified that any of those customers were impacted?
Ryan Bratton
Yeah, so in the event that we saw that one of these four customers was somehow impacted by this incident, we’d be engaging our threat response team. The threat response team is an in-house group of highly skilled and experienced analysts and incident responders. They would work with the potentially impacted client to understand the full scope of compromise. What was the sprawl of this information? How did this incident spread from one system to another? They’re also going to provide initial remediation actions and engage the client’s incident response team to further protect their environment. So what do we have to do for that? We have to engage that organization’s IT department to unpack the logistics of their environment, and we’re also going to engage the client’s incident response team to provide a full-scope analysis and remediate that threat from the environment.
Erica Smith
Okay. So what can organizations like the one in this story do to limit their risk of third-party compromise?
Ryan Bratton
Yeah, so the first thing that comes to mind for me with this question is implementing those security best practices, right? So what are some of those? I’ll give you a few examples. Use strong authentication methods: strong, unique passwords, as well as multi-factor authentication wherever applicable. You also want to enable logging and monitoring on all accounts and systems in your environment, which allows something to be detected in the event that activity looks a little off or unusual. I’d also recommend restricting login times and login locations to only those necessary for business purposes, of course. Primarily, though, and this is the main point I want to make: implement least-privilege policies, or principles, for any vendor accounts that you may have. Vendor compromise, the kind that leads to your own organization being compromised, typically begins with insecure, over-privileged, or even forgotten-about vendor accounts. Think of the scenario where you make an account for a vendor that might be on site next week, you have to give them certain permissions, and you give them an over-privileged account. Unfortunately, that account never gets its permissions removed, and it remains forgotten in your environment. That’s a high-risk point as well. Something else you can do: make sure the systems that link the two organizations together are appropriately locked down, and any operating systems in use there are kept constantly updated.
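As a concrete illustration of the least-privilege and “forgotten vendor account” point, here is a minimal sketch that audits a directory export for vendor accounts that look stale, over-privileged, or missing MFA. The CSV export, its column names, the role names, and the thresholds are assumptions made for the example, not a reference to any specific product.

```python
# Illustrative sketch only: review a directory export for risky vendor
# accounts. Column names, role names, and thresholds are assumptions.
import csv
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=45)                      # no sign-in in 45 days
PRIVILEGED_ROLES = {"Global Administrator", "Domain Admin"}

def audit_vendor_accounts(export_csv):
    """Return (account, reason) findings. last_sign_in is assumed to be an
    ISO 8601 timestamp with a UTC offset."""
    now = datetime.now(timezone.utc)
    findings = []
    with open(export_csv, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if row.get("account_type") != "vendor":
                continue
            if now - datetime.fromisoformat(row["last_sign_in"]) > STALE_AFTER:
                findings.append((row["account"], "stale: no recent sign-in"))
            if row.get("role") in PRIVILEGED_ROLES:
                findings.append((row["account"], f"over-privileged role: {row['role']}"))
            if row.get("mfa_enabled", "").lower() != "true":
                findings.append((row["account"], "MFA not enforced"))
    return findings

for account, reason in audit_vendor_accounts("vendor_accounts.csv"):
    print(f"{account}: {reason}")
```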
Erica Smith
Great call-outs. So why are vendor compromises so impactful to organizations today?
Ryan Bratton
Yeah, so think about what we do on a daily basis, not just in security environments, but as a business. As a business, you have to interact with third-party vendors in some capacity. I don’t know of any organization that doesn’t have some kind of third-party vendor; it’d be pretty impressive if you did everything in-house. So if you imagine this, you’re going to have multiple vendors in and out of your environment, and that obviously brings in additional risk. Something you need to do is hold your vendors accountable, right? You need to hold them accountable for their own security posture. You need to maintain a high level of visibility into those systems and accounts in order to reduce the risk of compromise for your own organization’s environment if your vendor does become breached. Just a final point I’d like to make: in our very interconnected world, unfortunately, the mistake of one of your vendors’ employees, or of a vendor in general, could lead to a compromise of your own organization. So maintaining that high level of vigilance, as well as training your own employees to recognize any signs of compromise or malice, is incredibly important here.
Erica Smith
Great call out. Okay, I’m going to hand it over to Stephanie to give us our scenario number two. Steph, it’s all yours.
Stephanie Camacho
Awesome. Thank you so much, Erica. So our second story is titled One IP Address, Two Organizations Saved. Effectively, an analyst in the CyberMax SOC saw a detection for a suspicious IP. In this case, the IOC wasn’t all that reliable; on the surface, it looked like something that could just be brushed off. But our analyst didn’t brush it off. This analyst got curious. They decided to proactively search for this IP in other environments as another way of getting a better idea of what was actually going on. Upon investigating another client environment, they were able to determine that the same IP had signed into Microsoft Outlook Mobile for a second client. So our analyst decided to do a bit of a deep dive into that mailbox. There wasn’t anything overtly malicious there, but their research showed that the threat actor had deleted some emails related to PTO. That isn’t something that would ever create an alert in the SOC on its own, because people delete emails all the time. But still, by diving deeper on an IOC that could have been brushed off and then pivoting into another client environment, the analyst was able to determine that the activity was not consistent with what should have been happening from that user and was not, in fact, legitimate activity. So through their cross-client pattern matching and their willingness to dive a little deeper outside of just the environment where the original alert triggered, they were able to protect two clients in the process, and they caught a mailbox compromise before further damage was done.
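The pivot Stephanie describes can be pictured as a simple per-tenant search: take the one suspicious IP and look for it in each client’s own sign-in export, keeping the data sets separate. The per-tenant JSONL files, their naming convention, the field names, and the example IP are assumptions for illustration; this is a sketch of the idea, not the SOC’s actual workflow.

```python
# Illustrative sketch only: search each client's own sign-in export for a
# single IP IOC, one tenant at a time. Paths and field names are assumptions.
import json
from pathlib import Path

def pivot_ioc_across_clients(ioc_ip, log_dir="exports"):
    """Return {client_name: [matching sign-in events]} for one IP IOC."""
    matches = {}
    for log_file in Path(log_dir).glob("*_signins.jsonl"):
        client = log_file.stem.replace("_signins", "")
        hits = []
        with log_file.open(encoding="utf-8") as fh:
            for line in fh:
                if not line.strip():
                    continue
                event = json.loads(line)
                if event.get("source_ip") == ioc_ip:
                    hits.append(event)
        if hits:
            matches[client] = hits
    return matches

# Example usage with a hypothetical "easy to brush off" IP
for client, events in pivot_ioc_across_clients("198.51.100.23").items():
    apps = {e.get("client_app", "unknown") for e in events}
    print(f"{client}: {len(events)} sign-ins from the IOC, apps={apps}")
```

Keeping each tenant’s export separate mirrors the privacy point Stephanie makes a little later: the same indicator is checked in every environment without mixing client data.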
Erica Smith
Yeah. Steph, what steps should an analyst take when they encounter an IP or an indicator that might seem minor but keeps reappearing?
Stephanie Camacho
So from a SOC perspective, really any alert we’re investigating should fall into one of two buckets: either it’s benign and we can exclude it going forward so we don’t have to look at it again, or we’re not comfortable with it and we need to dig deeper and see what’s going on. Ideally, everything that comes across our desk falls into one of those two categories. And this analyst really followed that philosophy: I don’t feel comfortable getting rid of this and never looking at it again, so let me dive deeper and see what else is going on. And it really ended up paying off and protected that second client.
Erica Smith
Yeah. Do you think that this case highlights a gap in current SIEM or SOAR capabilities, or is this more of a human-driven detection strength?
Stephanie Camacho
I think this is really a human-driven detection strength. In regard to SIEM and SOAR capabilities, the ability to do pattern matching across different client environments can be immensely helpful, but it needs to be done in a way that still protects client privacy and client data and makes sure there’s no cross-pollination of information. And especially in our SOC, because we have a pretty extensive tech stack with so many different tools that we’re monitoring, the analyst taking the initiative to look beyond just the single client really showed some extra work and extra effort. It really highlighted their ability to identify those trends, and that’s a human-driven detection strength.
Erica Smith
Yeah. How can we, in the SOC, encourage our team members’ curiosity and get them to continue digging deeper instead of, you know, just resolving alerts and moving on?
Stephanie Camacho
Yeah, SOC work can be really monotonous just by the nature of it. A couple of things that I know we’re doing in our SOC that really help with that: first, good detection engineering. Make sure you’re getting rid of any noise, any repeat activity, anything that is not high fidelity, so that analysts have more bandwidth to focus on the actual malicious stuff. That’s very important. Also, diversify tasks. In our SOC, we’re running multiple different areas of responsibility, or AORs, and letting the analysts work in each of those AORs so that they don’t have to do the exact same thing every single shift, every single day. That diversity in their tasks keeps them a little more engaged. Something else we try to do is let them focus on the quality of their work. If they come across something that’s a little unusual, a little tough to get to the bottom of, giving them the bandwidth and the time to really dig into it and dive deep enhances that sense of curiosity and gives them the freedom to do that digging if they need to.
Erica Smith
Yeah, great call-outs. Okay, so we’re going to move on to scenario number three. Steph is going to start us off.
Stephanie Camacho
Yeah, so effectively, this can almost be an extension of story number two: what if the mailbox had stayed compromised? This was a different client in this instance, though. The story is called A Malicious Inbox Rule and 300-Plus Shares. People use inbox rules all the time just to manage the relentless flood of emails, or even just to get rid of spam. But threat actors commonly use inbox rules to evade detection. They can maintain persistence, maybe stay in the inbox a bit longer, steal information, and exfiltrate data from their victims. In this scenario, the SOC was able to identify a malicious inbox rule in a compromised user mailbox for a client. It would have been really easy for the SOC to just say, hey, client, one of your users has a malicious inbox rule; take a look and remove it. But the analyst didn’t drop it there. The analyst dove deeper. They took a look at the user account, and they were able to tell that the mailbox compromised by the threat actor had already forwarded that same credential-harvesting email to 300 additional email addresses.
Ryan Bratton
Yeah, it’s an interesting point, right? Because we see this a lot, unfortunately, where threat actors are compromising user accounts through maybe phishing emails or social engineering attacks, where once they’ve compromised their mailbox, then they go and send out all these mass emails. And the purpose of that is, of course, to try and spread their malware. They may have attached a file there, like Steph mentioned, where they’re trying to spread this file around to try and get people to click on it, type in their credentials, so through prompt notification, you know, identification of this issue and notification of our customer, the SOC was able to inform the impacted client about this issue at hand, right? So not only do we inform them about this, you know, conducted a review as well, but we also provided some actionable recommendations for remediation. So what are some of those recommendations, you know, that might be you know, forcing password resets, right? We may have done that initially, right? That’s something that the SOC can take care of right on hand, you know, and we’re gonna identify those points where maybe user accounts were compromised and take that immediate action. So through retroactive investigation here, the SOC was able to identify that, you know, the initial access point was, like I mentioned, that phishing e-mail. So certainly, you know, this phishing e-mail was spread across a number of users. and in order to compromise those credentials, right? So once the credentials were compromised, this inbox rule was created, right, by the threat actor, and then the mass delivery of these emails was initiated. So a thorough review of the environment identified eight additional accounts that were impacted and promptly remediated by the Cybermax SOC. So as a result, CyberMax was able to work directly with our customer to provide those additional remediation actions or recommendations, right, to further improve the overall security posture of their environment in order to thwart future attacks of this nature.
Erica Smith
Steph, what makes an inbox rule malicious in nature?
Stephanie Camacho
There’s a couple of different indicators we might look for to try and determine that. Marking items as red can be unusual, or marking them as red and deleting them, especially if they’re coming from what looks like it’s a legitimate source. Like if I see a user deleting emails from their boss, that’s a pretty good indicator that the inbox is probably compromised. Forwarding emails out of the domain, like sending emails off to a Gmail or something like that is a good indicator. Rules that are filtering for sensitive information, such as the word payroll, is a little bit suspicious. Moving things to folders that don’t make sense, or sometimes the name of the inbox rule itself also doesn’t make sense. It doesn’t match what it’s doing. Those are all kinds of indicators that the SOC can look at to determine if an inbox rule seems like it might be legitimate for the user or it does seem suspicious.
Ryan Bratton
Gotcha.
Erica Smith
Ryan, how is the SOC able to identify initial access and then determine the full scope of compromise?
Ryan Bratton
Yeah, so an incident of this nature would be escalated to our threat response team, which, like I mentioned before, is a team inside the SOC of highly skilled analysts who work through detections of this nature all the time. This is every day for them, right? So this team would execute a thorough investigation, looking at the sign-in logs for any anomalous user login events. Additionally, they would run what’s called a message trace. A message trace is going to look for certain emails that were sent to and from specific inboxes, right? So we’re going to try and identify that initial email. Effectively, who was the first user that got compromised here? And where were emails sent from there? Then we’re going to investigate those specific user accounts to see if they interacted with the email, maybe clicked on the file and input credentials. After we’ve done that initial review, dug through those user authentication logs, and reviewed the message trace telemetry, our team would perform any necessary remediation actions, which, like I said before, might be locking out accounts, forcing password changes, revoking active session tokens, or forcing a sign-out for those user accounts, basically to eradicate the threat actor from the environment. Once we’ve done that, the SOC would put together a detailed report to escalate this to the customer and inform them of the incident in their environment and the actions we took accordingly.
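The scoping step Ryan outlines, a message trace plus a sign-in review, can be sketched as a small cross-reference: find everyone who received the lure, then see which of those recipients later signed in from a known-bad address. The exported JSONL files, the field names, the lure subject line, and the IOC address are all hypothetical; this illustrates the logic rather than the threat response team’s actual tooling.

```python
# Illustrative sketch only: cross-reference message-trace records with
# sign-in events to scope a phishing-driven compromise. All field names,
# the lure subject, and the IOC address are hypothetical.
import json

PHISH_SUBJECT = "Updated payroll document"       # hypothetical lure subject
KNOWN_BAD_IPS = {"203.0.113.99"}                 # IOC from the investigation

def load_jsonl(path):
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

def scope_compromise(trace_path, signin_path):
    # 1. Who received the lure? (message trace export)
    recipients = {
        rec["recipient"].lower()
        for rec in load_jsonl(trace_path)
        if rec.get("subject") == PHISH_SUBJECT
    }
    # 2. Which recipients then signed in from an IOC address? (sign-in logs)
    likely_compromised = set()
    for ev in load_jsonl(signin_path):
        user = ev.get("user", "").lower()
        if user in recipients and ev.get("source_ip") in KNOWN_BAD_IPS:
            likely_compromised.add(user)
    return recipients, likely_compromised

received, compromised = scope_compromise("message_trace.jsonl", "signins.jsonl")
print(f"{len(received)} users received the lure; "
      f"{len(compromised)} show sign-ins from IOC addresses: {sorted(compromised)}")
```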
Erica Smith
Steph, how might this have been different if we did not initially detect the malicious inbox rule?
Stephanie Camacho
Oh, way worse. So effectively we had the initial user that was compromised and then eight additional users, so nine total compromised users in this environment. If we hadn’t been able to detect it at the inbox rule and jump on it as quickly as we did, this could have been far more than just nine users. This could have potentially been an environment-wide compromise, or, depending on which users were compromised, it could have been even worse. If an administrator gets compromised, maybe the threat actor is able to escalate privileges, gain administrator rights in the environment themselves, and start establishing persistence and a foothold. Or if an executive’s account got compromised, maybe the threat actor would have been able to exfiltrate proprietary data or something like that. The fact that we were able to detect it at the inbox rule and therefore respond quickly really prevented this from being a far worse situation than it ended up being, where we effectively just had to reset passwords for nine accounts.
Erica Smith
Ryan, how can organizations prevent these types of attacks going forward?
Ryan Bratton
Yes, I think that’s kind of the big question, right? This happens in organizations every day, but what can you really do about it? This is something that’s obviously at the top of every executive’s mind, as well as those IT managers, trying to limit the risk of this kind of attack happening in their environment. The first thing that comes to mind is obviously user awareness training and education. Unfortunately, our greatest asset, our people, might also be the weakest link. Threat actors are always trying to perform these social engineering attacks against end users because they know they work, right? And just for a little context on what a social engineering attack is: it’s an opportunity where a threat actor takes advantage of an end user, tries to convince them of legitimacy, you know, “hey, I need you to do this for me because of this.” The user puts in their credentials, and all of a sudden they’ve been socially engineered. That’s obviously a big concern, and unfortunately, social engineering attacks continue to be the leading source of compromise in organizations today. So stopping attacks like this starts right with your end users. Everybody in the organization is responsible for security, not just the security team.
Erica Smith
Yeah.
Stephanie Camacho
Yeah, I want to support what Ryan’s saying here: you can implement technical controls all day long, but people make mistakes, that’s just human nature, and they’ll click on things that they shouldn’t click on. That really does highlight the need for end-user training, but also that 24/7 visibility component, so that if somebody clicks on an email they shouldn’t at 8 p.m. before they go to bed, somebody’s there watching and responding to it.
Erica Smith
Right. Okay, so let’s talk about why this matters. I think we’ve done a good job of explaining what everything means, so let’s talk about why it matters. For us at the CyberMax SOC, Big R is really a mindset that we put into action every day. It drives every action that we take within our security operations center. We’re not just flagging an alert and pitching it over the fence to our clients, right? We’re taking real-time response actions that minimize the time from detection to containment to eradication. We’re proactively investigating and validating threats before the alerts confirm them. What we’re doing is acting on suspicion, and that means that if we get a phone call or a notification, like Ryan and Steph were saying earlier, or even an analyst’s hunch, we’re going to go ahead and investigate and follow up on it. This is our mission and our purpose, and that’s what we do 24/7, 365. This highlights, I think, the difference between the CyberMax SOC and other MSSPs or other MDR providers, right? They focus on the little R, which means they’re focused on the little things. They’re going to pitch those alerts over the fence to you and create more work for you. There really is no sense of urgency. They don’t really explain what’s actionable for you to do. There’s a lack of follow-through, and they’re often inefficient and unreliable. A notification without any kind of response is absolutely just noise, right? And that’s what we aim to prevent in our SOC. For us, Big R means that response comes from within the SOC. We take those immediate actions and investigations to minimize the time from incident to eradication without ever having to hand it off to another team, and that provides a cohesive response to threats. So we’re minimizing the noise and not creating extra work for our clients. I think our stories did a great job of highlighting that we’re proactive and not reactive, with our response actions taken right away and our analysts providing follow-through on the investigations and the alerts they’re working.
Stephanie Camacho
Yeah, absolutely. On that human-led threat detection point: in the second story that I highlighted, where the analyst took an IP and looked for it in a different client set, that really shows human-led threat detection. They didn’t wait for an alert to hit in our SOAR platform before they acted. They said, nope, this is weird, this doesn’t look right, and they took action based on that. They didn’t wait for an alert to come behind them and confirm, oh yeah, there is a compromise. That proactive work, that human-led threat detection, ended up protecting the client before it became more than, you know, the threat actor just deleting a couple of emails. Right.
Ryan Bratton
Yeah, and that “beyond alerts” point on the last part of the slide does relate back to the first story. If you remember back to the first story that was presented, the SOC received a phone call, right? It wasn’t an alert in the queue; it wasn’t an alert from an endpoint detection and response tool. This was a phone call from one of our customers about one of their vendors being compromised. There was no evidence of compromise in our customer environments, but we don’t stop there, right? We go beyond the alerts; we go beyond the first step. We’re going to dive in and help protect your organization, regardless of whether we receive an alert.
Erica Smith
Yeah. So we’ve covered a lot today, but if there’s one takeaway I’d like everyone to get from this, it’s that response actions define your defenses. Every incident is an opportunity to refine, improve, and build resilience. So for next steps, you can visit the talesfromthesoc.com to download the e-book. You can watch the replay and you can get e-book #2. You can download our quarterly ransomware research report. We have a new one coming out soon, and you can get the latest CyberMax MDR Buyer’s Guide. And with that, we’ll go ahead and wrap things up. Big thanks to our panel for their time and expertise, and to everyone who joined us live. Let’s keep the conversation going, connect with us on LinkedIn, and share how your teams are strengthening their response playbooks. Until next time, stay prepared and stay resilient. Thanks, guys.