The Intersection of Cybersecurity and AI
Kitecast - Joan Ross
AI seems to be a “dime a dozen” if you trust the cybersecurity vendors. What does meaningful cybersecurity AI look like? What is on the horizon when it comes to the potential of cybersecurity and AI? And what does this mean when it comes to risk management? Chief Intelligence and Security Officer and Cybersecurity Professor Joan Ross discusses these and other cyber-related topics in this episode of Kitecast. She explains how AI can be used to detect and stop cyberattacks as well as rapidly respond to breaches when they happen. The conversation also touches on how cybercriminals and rogue nation-states are leveraging AI to create attacks that are more complex and harder to detect and stop.
Transcript
Patrick Spencer 0:00
Hey everybody, welcome to another Kitecast show. I’m here with my co-host Tim Freestone, the CMO over at Kiteworks. Make sure I say Kiteworks, not Kitecast; that’s a tongue twister, right, Tim?
Tim Freestone 0:36
Yeah. How you doing Patrick?
Patrick Spencer 0:38
I’m fine. We have a special guest today, Joan Ross, who has a plethora of experience in the cybersecurity space. She’s been a CISO. She’s a professor. She started her own company. She’s worked in Fortune 500, multibillion-dollar companies. She’s worked in various associations. You name it, Joan has done it. So, this is going to be a fascinating conversation. We’re actually going to hone in on some of the latest work that she’s been doing around cybersecurity and artificial intelligence, which I know is of interest to many in our audience and certainly of interest to Tim and me. But before we start with Joan, I have a quick question for Tim. So, who is the band that played “Mr. Roboto”?
Tim Freestone 1:26
Oh, it’s Styx.
Patrick Spencer 1:27
I was going to say, I won’t sell you down the river.
Tim Freestone 1:31
No, that’s showing my age. But I do remember that one. I also think that was at one point on my go-to karaoke list.
Patrick Spencer 1:44
The question then is, you know, what does that Japanese phrase in the song really mean? We’ll quiz Joan on that. So, Joan, cybersecurity AI, fascinating subject. You are in the final throes of finishing your PhD work at Baylor. You actually went from the purple and gold of the University of Washington to the green and gold over at Baylor. What prompted you to go and embark on another academic venture?
Joan Ross 2:18
Well, I am still tied to the purple and gold of the University of Washington for not only my bachelor’s and master’s degrees, but I figured three degrees at the same university was probably pushing the inbreeding too far, and I decided to stretch my wings a little bit and go to a school that was, you know, a contender in the NCAA and other sports as well. Yeah, I’m thrilled with everything that I’ve been learning. As you know, Dr. Patrick Spencer, to be in cybersecurity is kind of a lifelong learning effort. And, Tim, I’m so thrilled at everything that you do and support in the industry. Thank you.
Tim Freestone 3:12
Yeah, we do our best or at least try.
Patrick Spencer 3:15
So, you’re working on this cybersecurity AI degree. I think you started in 2020, and you’re getting ready to write your dissertation. Can you tell us what the subject is? And I’m going to ask you, what is real cybersecurity AI? Because everybody and their dog, whether you’re in marketing software, finance software, or cybersecurity software, claims to do AI, and I guarantee that’s not really the case.
Joan Ross 3:43
I was intrigued, after the movie The Imitation Game came out, as to why, you know, machine learning, which has been around since then, just really hadn’t evolved into the cybersecurity realm, and certainly AI had been on the forefront for some time, decades actually. And, you know, AI is so effective in so many areas: the medical area, in neuro-linguistic processing and natural language processing. It’s very effective for deep learning. And so, you know, machine learning and AI have excelled in all of these other areas. Why weren’t we seeing it be more effective in helping us get ahead in cybersecurity? That piqued my interest and started my research. The research design that I’ve been working on is working with over 60 organizations, to make it statistically significant, on whether, once they started using AI tools, they were more effective in getting ahead of the bad guys and detecting attacks in real time. And so, for that, I had to go back and read a lot of the dissertations and research that others had published on machine learning and AI and its applications in these other areas, as well as how it’s been used to date in cybersecurity. So, when we...
Patrick Spencer 5:25
That’s fascinating. You’re going to get it published, we’re going to buy the book in a couple of years, and then we’ll have a whole different podcast show with you talking about the book. But, you know, how do you decide? Everyone says they do AI, and CISOs out there are bombarded. I mean, there are 1,000-plus cybersecurity vendors in the marketplace today, and they want to have an AI capability in terms of proactive detection of threats as they come in. And they also probably need to have AI for cyber incidents after they actually happen. How do you vet vendors to determine who really has the right AI capabilities for you? And who’s saying they have AI but really doesn’t, or takes a very simplistic, pedestrian approach when it comes to the application of AI?
Joan Ross 6:10
I’m glad you asked that question. Because I’m still collecting datasets, just for your audience out there, if anyone is interested. It’s actually been an uphill battle getting organizations to participate in the research, because they’re so burnt out hearing about AI. And there still are some fundamental misunderstandings about what artificial intelligence applications are when it comes to being effective, especially for cybersecurity. And so, you know, the questions that any organization should ask are: one, do we have a vision for how artificial intelligence will help us detect cyberattacks? Because the fact of the matter is, you’re just not going to be effective doing cybersecurity now, much less in the future, without the use of artificial intelligence. And then, what vendors should I choose that are going to be effective for that application of artificial intelligence, and have it be an investment that’s really going to have a return, as well as integrate well with all of my other significant investments in tools that I’ve already spent tens of millions of dollars on, let’s face it. And so that should be the very first consideration. A lot of people are already way down the road with that, but they really haven’t brought everything together. And so, you know, your AI choice should ingest logs from every tool that you have now, through API technology. That helps you then on the road to sunsetting the other tools where you’re finding you’re just not getting the information that you need from them, and that investment isn’t paying off for you. And so, you’re going to move more towards your evolving vision for your industry on the tool selection that is working for you.
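As a rough, non-authoritative sketch of what "ingest logs from every tool through API technology" might look like in practice, here is a minimal Python example that pulls events from two hypothetical tool APIs and normalizes them into one schema an AI analytics layer could consume. The tool names, endpoints, tokens, and field names are assumptions for illustration, not any specific vendor's API.

```python
import requests

# Hypothetical endpoints; real tools expose their own log/export APIs and auth schemes.
TOOL_ENDPOINTS = {
    "siem": "https://siem.example.com/api/v1/events",
    "edr": "https://edr.example.com/api/v1/detections",
}

def fetch_events(url: str, token: str) -> list[dict]:
    """Pull recent events from one tool's REST API as JSON."""
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("events", [])

def normalize(source: str, raw: dict) -> dict:
    """Map vendor-specific fields onto a common schema for downstream AI analytics."""
    return {
        "source": source,
        "timestamp": raw.get("timestamp") or raw.get("event_time"),
        "host": raw.get("host") or raw.get("device_name"),
        "user": raw.get("user") or raw.get("account"),
        "action": raw.get("action") or raw.get("event_type"),
        "raw": raw,
    }

def collect_all(tokens: dict[str, str]) -> list[dict]:
    """Gather and unify events from every configured tool."""
    events = []
    for name, url in TOOL_ENDPOINTS.items():
        for raw in fetch_events(url, tokens[name]):
            events.append(normalize(name, raw))
    return events  # hand this unified stream to the behavioral/AI layer
```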
Tim Freestone 8:20
So, you mentioned AI in the context of detection, and then you just talked about logs. I imagine much of the innovation is happening in the SIEM space. What other four-letter acronyms are taking AI head on?
Joan Ross 8:43
I consider SIEM old technology. But that’s just me; I’ve been in the industry for 30 years now. And, you know, SIEM proved useful, but it’s like antivirus and anti-malware technology. You don’t know what you’re not getting in terms of alerting, or who’s reading the logs and being able to correlate, you know, whether your SIEM is correlating those logs appropriately and alerting appropriately. And the only way that you’re going to distinguish that is to break through whatever might be hindering you on your team, or in your executive team, or, you know, in your budget strategy, and at least do proof of concepts with vendors, comparing what you’re getting from your SIEM versus what you’re getting from new AI tool selections, and seeing how that alerting differs.
Tim Freestone 9:43
So AI, then, is its own blossoming industry. It’s not an advancement on any given technology that’s been around; it’s its own market, if you will. And it’s just about trying it out. I can imagine that it plays heavily in attack surface management sorts of technologies. I know I have a tendency to get into acronyms, but it’s where I feel safe.
Patrick Spencer 10:13
Now it’s my job to invent an AI cybersecurity acronym. Yeah, exactly.
Joan Ross 10:19
Tim, it scares people when you say SIEM is old technology, because they just invested so much in it. And that’s coming from yourself, who, you know, knows that acronym so well and what it does. So, the way that I try and ease people into the AI technology is ingesting the SIEM logs into the AI technology, so then you can see, because there’s still a place for the SIEM then. But eventually, what AI really can do that differentiates it from the SIEM, quite frankly, is behavioral analysis of devices. And this is important, if the AI has been incubated correctly. That’s another question for your team to ask: can it detect every device? And think of this, think of IoT, think of OT, think of IT, you know, think of just all of the different computing environments and the plethora of devices that we have. One of the answers to date as to why AI hasn’t made those inroads into cybersecurity has been a scalability issue with so many devices. So, unless the vendor has solved that problem, the AI tool isn’t going to be effective. And that’s why you need to ask the right questions and choose effectively.
Patrick Spencer 11:52
I have a question, Joan. As you’re talking about devices, can we talk about content? Because you get all that, and then it sort of spills over. I suspect this is where Tim was probably headed: it spills over into content, right? How do you handle all that content, since the data that you have is really the lifeblood of the organization and, in most instances, what the bad actors are after? How do you take, like, syslog data in regard to information around who’s loading content, who’s downloading content, who’s changing content? Are they sending it outside the network? Are they sending it within the network? All that governance stuff. Is there an AI play there that factors into cybersecurity? There are probably some AI compliance angles as well that need to be explored.
Joan Ross 12:38
Another great question. And I would say behavioral analysis does play into this, at the user level and at the device level. So, when you think about all of these different components, you know, what you want to detect is anomalous activity: a user doing something different than they typically do, a device behaving in a way that it typically doesn’t. This is how you find rogue devices. This is how you find malware, persistent malware that’s resident and hasn’t been detected to date. You know, this is where you find misconfigurations, even, you know, a script running badly before it takes down the entire network. This is where AI does have that play, Patrick.
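A minimal sketch of what per-entity behavioral baselining like this could look like, assuming you already have a daily activity metric per user or device (for example bytes sent, distinct destinations, or login count); the week of history and the three-sigma threshold are arbitrary choices for illustration, not taken from the conversation.

```python
from statistics import mean, stdev

def build_baselines(history: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Per-entity baseline: mean and standard deviation of a daily activity metric."""
    baselines = {}
    for entity, values in history.items():
        if len(values) >= 7:  # require at least a week of history to baseline
            baselines[entity] = (mean(values), stdev(values))
    return baselines

def is_anomalous(entity: str, today: float,
                 baselines: dict[str, tuple[float, float]],
                 threshold: float = 3.0) -> bool:
    """Flag when today's behavior deviates more than `threshold` standard deviations
    from the entity's own learned baseline."""
    if entity not in baselines:
        return True  # never-seen entity: treat as suspect (possible rogue device)
    mu, sigma = baselines[entity]
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold
```

Real products go much further (sequence models, peer-group comparison, device fingerprinting), but the core idea is the same: learn what normal looks like per user and per device, then alert on deviation.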
Tim Freestone 13:34
Interesting. Yeah, that was the route I was going down. But even if we just stick with devices, the challenge with the scalability of devices, that scalability you talk about, is just in the infrastructure that’s known and unknown in the first party, right? You can’t even begin to think about the ecosystem outside of your organization and the scalability problem there. It just seems mind-boggling to me, because data moves outside of your device ecosystem, right? Even if it’s, you know, your IoT ecosystem, it’s still yours. But once the data gets outside of that, well, then you’ve got that times 10, times 1,000, times 100,000, right?
Joan Ross 14:26
I call it killing the kill chain, Tim. You need to detect when people first try to gain a foothold into the network, whether through an application, whether through a service account that’s been mismanaged or gone stagnant, or a user account that’s gone stagnant. So, killing the kill chain means that you detect it in the first or second stage of the kill chain, before they can exfiltrate that data, and that’s where behavioral analysis plays in. So, let’s say you have Patrick, who’s going to Mykonos for some lovely vacation time. But all of a sudden, you’re detecting that he’s trying to access email from Rome. You know, unless Patrick is vaulting about the globe, he should be accessing email from Mykonos, not from Rome. And so, there’s an example of how you can tie it to a user. But it’s still at the device level as well.
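To make the Mykonos-versus-Rome example concrete, here is a small, assumed-for-illustration Python sketch of an "impossible travel" check on consecutive logins geolocated from their IP addresses. The 900 km/h cap is an arbitrary airliner-speed assumption, and a real detection would also weigh VPNs, known travel, and device identity.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    time: datetime
    lat: float
    lon: float  # coordinates from IP geolocation

def distance_km(a: Login, b: Login) -> float:
    """Great-circle (haversine) distance between two login locations."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    """Flag consecutive logins whose implied travel speed is not humanly possible."""
    hours = (curr.time - prev.time).total_seconds() / 3600
    if hours <= 0:
        return True
    return distance_km(prev, curr) / hours > max_kmh

# Example: Patrick logs in from Mykonos, then "from Rome" 30 minutes later.
mykonos = Login("patrick", datetime(2023, 6, 1, 9, 0), 37.45, 25.33)
rome = Login("patrick", datetime(2023, 6, 1, 9, 30), 41.90, 12.50)
print(impossible_travel(mykonos, rome))  # True: roughly 1,200 km in half an hour
```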
Joan Ross 15:42
Yes. And now, when everybody wants to start down this road, they’re going to want customizations, because every business has some very unique components. And so, you also want to select a vendor that can adapt and either work with your data sets to deliver what you need for the AI to function in such a way, or let you, you know, set it up yourself if you have a team that wants to go down the AI path.
Patrick Spencer 16:16
Let’s shift gears. Your background, Joan, includes serving as an advisor for various companies and sitting on various boards. Can you talk a bit about that? You sit on a board for, I think, Seattle College and a couple of others as well. CISOs are interested in this. We’ve talked to a couple, right? Tim and I recently spoke with Andreas Wuchner over in Switzerland, and he sits on a number of different advisory boards. There are a lot of CISOs I’ve spoken to who are interested in that additional activity beyond their day-to-day job. How do you find those opportunities, engage in them, and vet them to determine if that’s the right one for you?
Joan Ross 16:57
Well, I’ve been fortunate in that they’ve come to me, people that I know in the industry, some of the early professors of information security here who, you know, might have started out as an engineer at Boeing, and then went to the University of Washington and started up their security program, and reached out to CISOs, for example, that they knew, in order to provide some realistic education. Because one of the things that people criticize about college and university education is that it’s not real-world enough. And so, you’ll find good professors will reach out to contacts that they have in the industry and invite them in and put them on a board, for example. It’s like starting a task force, right, in a police department or something. It’s a think tank within a lovely university or college setting, to think how we can best prepare the students for day one of starting in the real world at a job where much is going to be expected of them. So, I’ve been fortunate in that way. But, as you know, it’s the six degrees of Kevin Bacon thing. We all know someone in the industry now. And when you see us getting on these boards and you’re interested in that, you should reach out to that person too. Because odds are, even where I’m at, you know, we could use more people, some really good people. We have everyone from, you know, military, to industry, to grant writers, just a whole plethora of different talents comprising our board, and we all work together. You know, one of the things that we did here in Washington state is we took the University of Washington, we took a telecommunications vendor in this area, and we took a scenario, and we did an incident response exercise as to how we would all interoperate together in responding to an incident. And this is actually under a presidential executive order from 2013 that every state has to do this exercise. And so, you know, when you bring a university cybersecurity board into it, we can help think of the scenario, we can help write up the report, we can do a lot of things that maybe, you know, the state and the other commercial industry isn’t quite set up for or doesn’t have the cycles for, and make it fun. And still, you know, you’re playing, but at the same time, the lessons learned from it are considerable and really help out a lot.
Patrick Spencer 19:58
Interesting. Now, you’re part of the Holistic Information Security Practitioner program. That’s really interesting. You know, I was looking at your bio beforehand, and you and I have known each other for a number of years, but I didn’t actually know that aspect of your bio. Can you comment on what that entailed? And what does it mean to be a holistic information security practitioner? Some in our audience probably are interested and may want to become holistic information security practitioners, you never know.
Tim Freestone 20:29
I’d settle for holistic anything at some point. To be good at anything comprehensively.
Patrick Spencer 20:39
This is working on that search.
Joan Ross 20:43
Now that I know this, I’m going to give your names to Ty Lambeau, who created the credential and got it accredited. You know, we look at these cybersecurity governance frameworks. We look at the risk management frameworks, ISO 31000, for example. We look at the information security frameworks; we look at SANS; we look at ISO 27002; we look at HIPAA and PCI DSS, and all of the frameworks, you know, the Cloud Security Alliance, if you’ve seen how many controls are in that one. And imagine a holistic framework that brings all of those controls and oversight efforts together. That’s what the credential is about. And how do you approach that? How do you cut through that and make sure, because we know that otherwise you’re always just throwing stuff against the wall about what could happen today. It’s not just about following control frameworks; it’s adopting a governance framework about risk, about what could happen, what aren’t we prepared for, what aren’t we thinking about. You know, so those are the kinds of approaches that we want a holistic security practitioner to think about and bring into their organization. So, you’re not just, you know, robotically following control frameworks, but you’re thinking in between them, like, oh, you know, the frameworks don’t really address how many people have command-line access to databases. I don’t really see that control in here, for example. But, you know, that’s going to be something that you’re going to want to look at. You’re going to want to look at what could fall between the cracks.
Tim Freestone 22:54
It’s a good segue, because I’m particularly interested in risk management and risk, the word risk. You know, when you look at a lot of the materials Gartner puts out, they tend to speak about it in two contexts: cybersecurity and risk management. And in my mind, sort of above both of those is just risk. And everybody’s trying to orchestrate cybersecurity technologies and orchestrate, you know, what’s called risk management: frameworks and processes and people. But they’re so intertwined into just one statement, which is cyber risk. So, would you say that’s ultimately what you’re trying to do, elevate it to uber cyber risk, where frameworks and technologies come together to be able to see the entire potential attack surface? And how do you control and manage both technologies and frameworks together?
Joan Ross 24:07
Tim, you’ve hit on one of my favorite topics, which is how do security people talk to executives, especially about risk, when they talk business risk? Yes, right. And that’s the uber risk that I think you are getting at: everything else. And you have to speak that language to be successful. But, you know, a lot of times the executives say you have to speak our language, without trying to understand how to speak the security risk language. So, it’s been a little bit fraught over the 30 years that I’ve been in it. I was lucky enough, when I was VP of information security at Washington Mutual, to be visited by about 30 of the feds who did risk management under the President of the United States. The USS Cole had just been bombed, and risk management at that time meant financial risk and critical financial infrastructure. And a lot of these feds were people that I had worked with in the 90s, because my start in cybersecurity came through cryptography. We had developed a client-side, commercial off-the-shelf cryptography software product using Kerberos 5 and MD5 and DES; Triple DES hadn’t even come out yet. And this was to encrypt the data stream at the application level, to protect IP. This was before firewalls or even VPNs had come out. So, you know, this is going back quite a way. And, you know, it was just a niche product between Unix and Linux systems. But that was my start in the security field, from technical support. And so, working with these feds, what I learned about approaching risk at all was: here’s what I know, here’s what I don’t know, and here are my priorities based on A and B. And they taught me a lot of the language that they use. And so, I still teach that to this day, because I think it cuts through exactly what you’re talking about, which is, you know, how do you address that to the executives? How do you communicate it across your team? And how do you balance this word risk, which also means something completely different on Wall Street?
Patrick Spencer 26:55
How do you measure risk? That’s a very open-ended question. That’s another dissertation for you.
Joan Ross 27:02
But not for me, because I’m certified. I’m NSA IAM and IEM certified; those came out in 2003 with a quantitative measurement of risk. So, I’m actually trained in that. And I’m very passionate, actually, Patrick, about measuring risk.
Patrick Spencer 27:23
Can you actually get to it? I mean, it’s a debated subject as to how you evaluate risk. You know, the insurance companies are trying to figure out how to price their insurance based on the risk of an organization. So, before you present to the board, you need to present to the C-suite, to the CEO, and tell them exactly: this is what our risk is, we need this much insurance. Or, if you’re an insurance company, you rely on folks like you, Joan, to say, this is what the risk is, so they can price it accordingly and don’t lose their shirt in the event of a huge cyber event.
Joan Ross 27:57
Six years ago, with three other people, we approached VCs on both the East and the West Coast. We had developed technology that would actually check and see if the controls that companies stated were in place were actually in place. And we went to all the insurance companies to get their backing, because it would help lower insurance premiums by giving this empirical way to ensure that controls were in place and working effectively. And all the VCs and even the big insurance companies said, well, people will just go ahead and get attacked, and, you know, we’ll make the decision whether to cover it or not, and then they’ll buy the technology they need. And it was really discouraging, quite frankly. And so, I know the insurance companies have been trying to come up with similar technology to do that since. And so, the question of insurance premiums and cyberattacks really has to come down to never needing the insurance, in my book. You really just have to get ahead of the bad guys. And you have to be a technologist, and you have to have a great team with heart and soul, and you have to have great full-time backups, live monitoring, all these important oversight controls. If you can’t afford these, you need to look at, you know, going to a managed provider that can do that for you. Because your customers deserve at a minimum that much. I remember when my niece was two years old and got the notification that her medical information had already been compromised. You know, it’s a terrible way to start your life, knowing that because of someone else’s lack of control, your personal information, your personal health information, was released.
Patrick Spencer 30:07
Can you use AI to determine your risk? If you’re using AI, does your risk get mitigated? And will the insurance companies or boards of directors begin to look through that lens: you have this cybersecurity strategy, this is what we have in place, and, oh, you need to have certain pieces in place to proactively detect as well as to address after the fact, and that includes AI capabilities?
Joan Ross 30:34
That’s my work at Insite Cyber. I took the NSA-recommended algorithm, the quantitative one, with permission, for the percentages in measuring risk, so that the AI could also give a calculation and a confidence factor around that. You want to ask, who’s your authoritative source for your security metric, your risk metric? Because a lot of vendors just pull this out of thin air. And, you know, that is what CISOs and security teams are relying on as we present this information to executives and boards of directors and shareholders. And if you don’t know the authoritative source of these metrics, you know, what confidence can you really have in them? So, you need tried and tested algorithms.
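The NSA-derived percentages Joan mentions aren’t spelled out here, so as a purely illustrative, non-authoritative sketch of pairing a quantitative risk score with a confidence factor, here is a small Python example; the severity weights, categories, and coverage heuristic are invented for the illustration.

```python
# Invented severity weights for illustration only; these are NOT the NSA IAM/IEM percentages.
WEIGHTS = {"critical": 1.0, "high": 0.7, "medium": 0.4, "low": 0.1}

def risk_score(findings: list[dict]) -> tuple[float, float]:
    """Return (score 0-100, confidence 0-1).

    Score: weighted average of finding severities.
    Confidence: grows with how much of the environment the assessment actually covered,
    so a number backed by thin evidence is presented as such to executives.
    """
    if not findings:
        return 0.0, 0.0
    weighted = sum(WEIGHTS[f["severity"]] for f in findings)
    score = 100 * weighted / len(findings)
    assets_checked = sum(f.get("assets_checked", 0) for f in findings)
    confidence = min(1.0, assets_checked / 1000)  # assumption: ~1,000 assets = full coverage
    return round(score, 1), round(confidence, 2)

# Example usage with two hypothetical findings:
findings = [
    {"severity": "high", "assets_checked": 400},
    {"severity": "medium", "assets_checked": 250},
]
print(risk_score(findings))  # (55.0, 0.65)
```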
Patrick Spencer 31:31
I was reading, and Tim probably saw this as well, and you may have, that there was a school district, I’m not naming the school district, that was recently hit with a ransomware attack. And they went ahead and said, we’re not going to pay it. We have backup systems, we’re backed up; all you have is student names and maybe addresses or something along those lines. And then, of course, they fulfilled their promise and didn’t pay the ransom, and the bad actors released the data on the dark web. And it had student health and mental health records. So, when you have an actual attack, can you use those AI capabilities? How do you determine, you know, what your risk is? In this case, they didn’t understand the breadth of what the breach looked like. How do you work in those situations?
Joan Ross 32:15
That’s where you need to pull in somebody right away, before you make a statement like that or a decision like this. And if it’s the one that I’m thinking of, they spent two weeks haggling over the amount, and it was just a bizarre story in the first place, you know, around that negotiation. So, this is where, unfortunately, school districts need people like us. This is where university boards, you know, can actually help out. Because the nice thing about us is we’re kind of at the top of our careers already; we’ve kind of been set up well for retirement, hopefully. And so, this is where we can actually help others get ahead. And so, whether that is stepping in and evaluating the risk and what actually is there, or helping before that happens, I think it’s going to be very important. And I think this is something that hasn’t really happened within the industry yet, that ability to form, almost like they do with, what is it, the Vidocq Society that does this forensic help, all these different scientists coming together to help solve cold cases, something similar in cybersecurity, where you go and you help out public institutions that may not be able to secure or understand what technology would be best for protecting our most vulnerable populations, which includes students.
Tim Freestone 34:06
Kind of to take this in another direction, I don’t know if you saw this, and I’m not sure if it’s the first, but the CISO, or CSO, at Uber, Joe Sullivan, was convicted. And he’s going to do jail time, as far as I can tell. Do you think this has any... Well, first of all, how much do you think, in this case, if you know it to any more degree than I do, someone like Joe was just exercising a CYA process, which, you know, any human being is wont to do in situations like that? And will it affect, moving forward, how CISOs view their job and the risk threshold they’re willing to take and the information they communicate or don’t communicate? There’s always this sort of, there’s got to be a threshold of what gets disclosed or doesn’t get disclosed in cases of breaches. But I just found it very interesting that, you know, he’s essentially going to jail for this breach, not for the breach itself, but for what he didn’t disclose, I guess. Any thoughts on that?
Joan Ross 35:24
So, something I always discuss in any position I go into is that there are two people who can be sued regarding a cyber breach, and that’s the CEO and the CSO. And so, you have to know that when you accept the position, and your external counsel or your internal counsel... let’s start with external counsel. Now, I have a brother who’s been an attorney general for 23 years and is now a federal judge, I have a sister who’s been career Homeland Security, and my sister’s brother-in-law is FBI cybersecurity. So, I’ve always known black from white. And as a female in cybersecurity, I’ve certainly had companies try to get me to do many things favorable to them that cross the line. And as a CISO, you have to be willing to walk away from the job. And that is very difficult for a lot of people to do, especially if they like the executive team, especially if they love the job that they’re doing. It’s very difficult, but you have to do that. And you have to have your own counsel around you that says, this isn’t good, you need to get out of there. Second, if external counsel is telling the CEO, the chief legal officer, and the CSO that you don’t have to report this breach, or it’s okay to pay the ransom, then the CISO needs to remove themselves from any part of communication with the outside world. You need your team; you’re going to be put in that position. You know, again, the CEO and CSO can be sued. So, you either have to have external counsel handle it for the company, or your marketing officer handle it for the company. You can’t have the CEO or CFO handle it, because we’ve seen what happens, right? But this has been known for some time. This isn’t new; this is actually law, and this has been known. This is nothing new, where they’ve tried to make the CSO the person to do these things. You have to have the fortitude, the integrity, the character to say, I won’t do this. And it’s a very hard conversation to have. You can read the book Crucial Conversations, you can listen to motivational tapes, but when you have that conversation, be prepared to walk out the door or be fired. And that’s actionable, but that’s another conversation. Secondly, if it’s internal counsel telling you that, then the chief legal officer, along with the chief marketing officer, handles any communications. Again, the CISO does not get involved in those communications. The chief security officer’s responsibility is to protect executives, and so our counsel to them is: this is against the law. If they decide to act anyway, you have to remove yourself, one way or another, from it. Now, I don’t know Joe. I know a lot of people who do know him. This is not good for our industry. And, you know, my heart aches for Joe, because he’s going to do some time for it. And, you know, this didn’t have to happen, and whatever forces were brought to bear just weren’t good overall. And that’s all I’m going to say on it. I think there’s probably a lot more we’ll learn over time about it.
Tim Freestone 39:25
Yeah, it’s just one of those things where I can imagine, to your point there, that the microscope gets more powerful on CISO decisions moving forward, across the board, as if the job wasn’t hard enough already.
Joan Ross 39:42
There’s another risk for you too. That’s the fourth element of risk for you, you know, the legal risk as well. That’s right.
Patrick Spencer 39:53
Before we wrap up, I have two questions for you. One, for CISOs out there, anyone who’s in the cybersecurity space: what are one or two books on your bedside table that you’ve read, or that you’re in the process of reading, new ones? And then two, you’ve got to tell us what you’re doing over at St. Martin’s, the new role teaching as a professor over there. What are some of the classes that you’ll be tackling in the next few quarters?
Joan Ross 40:19
I’ll answer that one first, because that’s where I’m headed next. So, I am teaching intelligence this next semester, and then the following semester, incident response, which I did my master’s on, where I worked with the US Coast Guard here at their joint harbor operations center. There are five, or there were five at the time when I did my master’s, and I was lucky enough to have one of them here in the Seattle region, and I worked with their planning commander on incident response and the first 50 things he did in the first two hours of managing a huge regional incident, like an oil spill, or if Mount Rainier exploded, something of that nature. And I converted that to cyber risk, the same concepts, the same planning, because they have two hours to get their first initial plan out. And so, I adapted that to the corporate cyber world, and I’ve used it ever since. And so, I’m going to have the good fortune of teaching some military, former military, reserve military who want to enter cybersecurity. And so, the Coast Guard planning piece is right down their alley, and that conversion, so I’m really, really excited about this. And of course, intelligence: everything we’ve talked about this hour is intelligence, right? What you come in with, what you mentor your team with. You’re mentoring executives in cybersecurity, you’re mentoring your board, you’re mentoring your organization. You know, everyone thinks they know security, but you are really bringing a culture in when you’re the chief security officer, both physical security and information security. And, you know, you’re detecting both internal risk as well as external risk. And so, the idea then, in terms of teaching, is also about bringing that culture in. On books, I’m reading Go Ship right now. I don’t know if you know that story, but I do recommend that you look it up. It’s not specific to information security, but to kind of the approach to personal security, and nefarious acts on the high sea, and what you would do in confined quarters and detecting certain things. But what I am voracious on is researching every night; unfortunately, I’m on my phone, just looking up: what are the attacks, what are the vectors? And what we’re seeing are polymorphic attacks, we’re seeing diversion attacks, we’re seeing more sophisticated attacks that are combining CVEs. So, you know, these are the things that are very difficult for humans to detect, but simpler for AI to be successful in detecting. And where humans excel is where the AI annotates that it’s detecting this anomalous activity, and the human excels in validating: yes, we are under attack. Let’s contain this before it gets to the third kill stage, contain it before it gets to this network. And how are we going to do that, team? This is how we’re going to do it. So, I voraciously read up, and you have to always be a lifetime learner when you’re in cybersecurity, because the attacks morph and evolve. That’s why we’re in it. It keeps us young, keeps us interested.
Patrick Spencer 44:32
So, I have one more question for you, because you piqued my curiosity. For CISOs and cybersecurity professionals out there trying to keep on top of all those attacks and the permutations that are taking place, in terms of the sophistication that’s being used, the employment of AI, the use of multiple CVEs like you said: what do you read? Because there’s so much stuff out there, it’s impossible to read it all. What do you look at that gives you a sufficient amount of information to really stay on top of the game?
Joan Ross 45:00
I go to wired.com. That’s a great place to start. They have some great articles; they have some great technical follow-up. And I have so many things just coming to me automatically now, because, as you know, these algorithms in the applications, via LinkedIn, or, I’m trying to think of what other good applications I use, even Twitter, will surface a lot of the technical articles. But, you know, one of the things that I do is just go to MITRE and check the recent ones. I mean, this is so important to do. It’s the first thing I do when I start any job. You’re talking about going back and forth between, you know, Fortune 500 companies and smaller startup companies: for any vendor selection that you do, go to MITRE and see what type of vulnerabilities that technology has, and see if they’re addressing it. And if there’s a technology you’re considering and they’re not releasing fixes for anything that’s been, you know, found as a vulnerability, you’ve got an issue with that technology, because that means you’re going to have to isolate and protect that technology if you choose it.
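As a small, non-authoritative illustration of the habit of checking recent CVEs for any technology you’re considering, here is a Python sketch that queries the public NVD CVE API (which carries MITRE-assigned CVE IDs) by keyword. The endpoint and field names follow NVD’s 2.0 JSON API as commonly documented; verify them against the current NVD documentation, and note the product name in the example is hypothetical.

```python
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(product: str, limit: int = 20) -> list[dict]:
    """Keyword search of the NVD CVE feed for a candidate technology."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": product, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        summary = next(
            (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"), ""
        )
        results.append({"id": cve.get("id"), "published": cve.get("published"), "summary": summary})
    return results

if __name__ == "__main__":
    for v in recent_cves("ExampleVendor Widget"):  # hypothetical product name
        print(v["id"], v["published"], v["summary"][:80])
```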
Patrick Spencer 46:31
That’s great. Well, I think Tim and I could probably spend another two or three hours with you, and our audience would probably stay on the line, but I think we should save that for our next podcast conversation with you. This has been fascinating, Joan. We really appreciate your time. Best of luck in the new role teaching over at the university, and good luck finishing up the dissertation. We’re all going to want to take a read once you have it published.
Joan Ross 46:59
I have a couple more years left on it; I need some more data sets. Dr. Spencer, Tim, I can’t tell you how much I’ve enjoyed this hour.
Patrick Spencer 47:11
For listeners who want to watch other Kitecast episodes, check us out at Kiteworks.com/kitecast.
Thanks for listening.