McAfee CTO: How AI is changing both cybersecurity and cyberattacks

Artificial intelligence is sweeping through almost every industry, layering new intelligence onto the software used for tasks like delivering better cybersecurity. McAfee, one of the big players in the industry, is adding AI capabilities to its own suite of tools that protect users from increasingly automated attacks.

A whole wave of startups — like Israel’s Deep Instinct — has received funding in the past few years to incorporate the latest AI into security solutions for enterprises and consumers. But there isn’t yet a holy grail for defenders working to use AI to stop cyberattacks, according to McAfee chief technology officer Steve Grobman.

Grobman has spoken at length about the pros and cons of AI in cybersecurity, where a human element is still necessary to uncover the latest attacks.

One of the challenges of using AI to improve cybersecurity is that it’s a two-way street, a game of cat and mouse. If security researchers use AI to catch hackers or prevent cyberattacks, the attackers can also use AI to hide or come up with more effective automated attacks.

Grobman is particularly concerned about the ability to use improved computing power and AI to create better deepfakes, which make real people appear to say and do things they haven’t. I interviewed Grobman about his views for our AI in security special issue.

VentureBeat: I did a call with Nvidia about their tracking of AI. They said they’re aware of between 12,000 and 15,000 AI startups right now. Unfortunately, they didn’t have a list of security-focused AI startups. But it seems like a crowded field. I wanted to probe a bit more into that from your point of view. What’s important, and how do we separate some of the reality from the hype that has created and funded so many AI security startups?

Steve Grobman: The barrier to entry for using sophisticated AI has come way down, meaning that almost every cybersecurity company working with data is going to consider and likely use AI in one form or another. That said, I think the hype and buzz around AI make it one of the areas companies will generally call out, especially if they’re a startup or new company that doesn’t yet have other elements to base their technology or reputation on. It’s a very easy thing to do in 2019 and 2020, to say, “We’re using sophisticated AI capabilities for cybersecurity defense.”

If you look at McAfee as an example, we’re using AI across our product line. We’re using it for classification on the back end. We’re using it for detection of unknown malicious activity and unknown malicious software on endpoints. We’re using a combination we call human-machine teaming: security operators working with AI to do investigations and understand threats. We have to be ready for AI to be used by everyone, including the adversaries.

VentureBeat: We’ve always talked about the cat-and-mouse game that happens when either side, attackers or defenders, turns up the pressure. You have that technology race: If you use AI, they’ll use AI. As a reality check on that front, have you seen that happen, where attackers are using AI?

Grobman: We can speculate that they are. It’s a bit difficult to know definitively whether certain types of attacks have been guided with AI. We see the results of what comes out of an event, as opposed to seeing the way it was put together. For example, one of the ways an adversary can use AI is to optimize which victims they focus on. If you think about AI as being good for classification problems, having a bad actor identify the most vulnerable victims, or the victims that will yield the highest return on investment — that’s a problem that AI is well-suited for.
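To make the point concrete, here is a minimal sketch of the generic classify-then-rank pattern Grobman is alluding to: train a model on labeled historical examples, then sort unseen candidates by predicted probability. The model choice, features, and data below are synthetic stand-ins for illustration only, not anything McAfee or an attacker has actually described.

```python
# Generic classify-and-rank sketch: fit a model on labeled historical
# examples, then order unseen candidates by predicted probability.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic history: 8 features per example, binary outcome labels.
X_train = rng.normal(size=(1000, 8))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score unseen candidates and rank them, the "highest expected return
# first" ordering described in the interview.
X_candidates = rng.normal(size=(50, 8))
scores = model.predict_proba(X_candidates)[:, 1]
ranking = np.argsort(scores)[::-1]
print("Top five candidates by predicted probability:", ranking[:5])
```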

Part of the challenge is we don’t necessarily see how they select the victims. We just see those victims being targeted. We can surmise that because they chose wisely, they likely did some of that analysis with AI. But it’s difficult to assert that definitively.

Grobman: The other area where AI is emerging is … the creation of content. One thing we’ve worried about in security is AI being used to automate customized phishing emails, so you basically have spear phishing at scale. You have a customized note, crafted using AI, with a much higher probability that a victim will fall for it. Again, it’s difficult to look at a phishing email and know definitively whether it was generated by a human or with help from AI-based algorithms. We clearly see lots going on in the research space here. There’s lots of work going on in autogenerating text and audio. Clearly, deepfakes are something we see a lot of interest in from an information warfare perspective.
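One research-style heuristic for the detection side of this problem (not a McAfee product feature) is to score text with a public language model: text the model finds unusually predictable is weak evidence of machine generation, in the spirit of tools like GLTR. A minimal sketch using the open source GPT-2 model via Hugging Face’s transformers library, with an illustrative, uncalibrated threshold:

```python
# Perplexity heuristic for possibly machine-generated text.
# Low perplexity under a public language model is only weak evidence;
# the cutoff below is illustrative, not calibrated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc["input_ids"], labels=enc["input_ids"])
    return torch.exp(out.loss).item()

email = "Dear colleague, please review the attached invoice before Friday."
score = perplexity(email)
print(f"perplexity: {score:.1f}")
if score < 20:  # illustrative cutoff only
    print("unusually predictable text; worth a closer look")
```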

VentureBeat: That’s related to things like facial recognition security, right?

Grobman: There are elements related to facial recognition. For example, we’ve done some research where we look at: could you generate an image of somebody that’s very different from what a facial recognition system was trained on, and fool the system into thinking it’s the actual person the system is looking for? But I also think there’s the information warfare side of it, which is more about convincing people that something happened: that somebody said or did something that didn’t actually happen. Especially as we move closer to the 2020 election cycle, deepfakes for the purpose of information warfare are one of the things we need to be concerned about.
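The image-spoofing research Grobman describes is closely related to adversarial examples. The classic demonstration from the literature is the fast gradient sign method (FGSM), sketched below in PyTorch; the untrained placeholder model and random input stand in for a real face recognition pipeline, whose details McAfee has not published.

```python
# FGSM sketch: perturb an input so a classifier changes its prediction
# while the image still looks essentially unchanged to a human.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10)  # placeholder; a real test loads trained weights
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a face photo
label = torch.tensor([3])  # the identity the system should recognize

# Compute the gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a small step in the direction that raises the loss.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```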

VentureBeat: Is that something for Facebook or Twitter to work on, or is there a reason for McAfee to pay attention to that kind of fraud?

Grobman: There are a few reasons McAfee is looking at it. Number one, we’re trying to understand the state of the art in detection technology, so that if a video does emerge, we have the ability to provide the best assessment of whether we believe it’s been tampered with, generated through a deepfake process, or has other issues. There’s potential for other types of organizations beyond social media, such as the news media, to need forensic capability. If someone gives you a video, you would want to be able to understand the likelihood that it’s authentic, manipulated, or fake.
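A common shape for the kind of forensic triage Grobman mentions is frame-level scoring: sample frames from a video, score each with a binary real/fake classifier, and aggregate. In the sketch below, score_frame is a hypothetical stand-in for whatever trained detector an organization actually deploys; the frame sampling uses real OpenCV calls.

```python
# Frame-level deepfake triage sketch. `score_frame` is hypothetical;
# a real implementation would run a trained detector model there.
import cv2
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    """Hypothetical detector: probability the frame is manipulated."""
    return 0.5  # placeholder score

def triage_video(path: str, every_n: int = 30) -> float:
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:  # sample roughly one frame per second
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# print(triage_video("clip.mp4"))  # e.g., escalate videos scoring above 0.8
```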

We see this all the time with fake accounts. Someone will create an account called “AP Newsx,” or slightly modify a Twitter handle and steal images from the real account. Most people, at a glance, think that’s the AP posting a video. An organization’s reputation is one thing that lends credibility to a piece of content, and that’s why reputable organizations need tools and technology to help determine what they should treat as ground truth versus what they should be more suspicious of.
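The lookalike-handle problem Grobman describes can be approximated with simple string similarity: flag handles that are close to a verified handle without being identical. A minimal sketch using Python’s standard library, with an illustrative handle list and threshold:

```python
# Flag handles suspiciously similar to verified ones. The verified
# list and the 0.8 threshold are illustrative assumptions.
from difflib import SequenceMatcher

VERIFIED = {"AP", "APNews", "Reuters"}

def looks_like(handle: str, known: str, threshold: float = 0.8) -> bool:
    ratio = SequenceMatcher(None, handle.lower(), known.lower()).ratio()
    return handle.lower() != known.lower() and ratio >= threshold

def suspicious(handle: str) -> list[str]:
    return [k for k in VERIFIED if looks_like(handle, k)]

print(suspicious("AP_Newsx"))  # ['APNews']: close to a verified handle, but not it
```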
