Feature

The Hidden War of Cybersecurity

SentinelOne’s Gregor Stewart on AI’s role in the cyberwar, and how enterprises can leverage the technology to bolster their defenses.

Written by Jess Swanson | 5 min read | May 30, 2025

Gregor Stewart was pursuing a master’s degree in artificial intelligence at the University of Edinburgh when Google unveiled its trillion-word language model in 2007. His background was in computational linguistics, but this was unlike any of the language trees and probability tables he’d encountered before. 

“You’d give this trillion-word language model a prompt, and it would give you a completion,” he says, “and that completion started to look an awful lot more relevant to what you wrote and much closer to real language.” A user could type in something at random, such as, “The color of Walden Pond is so beautifully—” and the machine would predict that the next word was likely “blue.” 
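To make “completion” concrete, here is a toy sketch of next-word prediction in Python: a bigram counter trained on a four-sentence corpus invented for this illustration. It is nothing like Google’s actual trillion-word model, but it shows the basic mechanic of predicting the most likely next word.

```python
from collections import Counter, defaultdict

# A four-sentence toy corpus standing in for the trillion words
# the real model was trained on.
corpus = (
    "the pond is so beautifully blue . "
    "the sky is so beautifully blue . "
    "the lake was so beautifully calm . "
    "the water is so beautifully blue ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt: str) -> str:
    """Predict the most likely next word, given the prompt's last word."""
    candidates = follows.get(prompt.split()[-1])
    return candidates.most_common(1)[0][0] if candidates else "?"

print(complete("the color of walden pond is so beautifully"))  # -> blue
```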

It may not seem so impressive today, but nearly 20 years ago that technology was a breakthrough, altering the trajectory not only of Stewart’s career but of artificial intelligence itself. 

As an academic in the mid- to late-aughts, Stewart was one of the lucky few with access to Google’s language model. In the following years, he witnessed the sudden seismic shift from grammar-based language models to statistical language models to the artificial neural networks that ChatGPT is based on today. 

“None of what is happening right now—except for the real cutting-edge stuff—seems all that surprising,” Stewart says about the recent hype around AI. “I just never thought we’d get here this fast.” 

In the 18 years since the debut of that first large language model (LLM), Stewart has become one of the industry’s most trusted voices in AI. He’s carved out a crucial niche in machine learning, particularly as it relates to natural language processing and generative AI, having directed and scaled research and engineering teams across the globe. Today, as vice president of AI and machine learning at SentinelOne, an American cybersecurity company that automates threat detection and response using artificial intelligence, Stewart finds himself at the forefront of what he refers to as “the hidden war” of cybersecurity. 

“Remember, in the War on Terror, people were saying things like, ‘You have no idea how many [terrorist] cells there are and how many people are trying to blow things up’—and thankfully, we don't need to know,” Stewart explains. Similarly, in cybersecurity, he adds, “there are many, many more things that are foiled before they ever turn into a successful attack.”

Cybersecurity’s AI Problem—and AI Solution

Phishing, spam and malware attacks have long plagued IT departments. But in the wake of the COVID-19 pandemic, company leaders who sent employees home to avoid one kind of virus unwittingly increased their exposure to another, as remote workers began logging in from less secure personal networks and devices. 

Since 2020, there’s been a spike specifically in ransomware attacks, in which hackers encrypt a company’s computer files and refuse to release them until receiving a cryptocurrency payment. In May 2021, a ransomware attack by the hacker group DarkSide made international headlines after holding Colonial Pipeline’s network digitally hostage. The company operates the main fuel pipeline serving the East Coast, and the shutdown led to fuel shortages, panic-buying and flight delays until the company paid the group 75 Bitcoin (roughly $4.4 million). 

In 2024, the average ransomware payout was $2 million—up fivefold from $400,000 the year before, according to Sophos’ State of Ransomware 2024 report. The same year, an unnamed Fortune 500 company paid the largest known ransom ever recorded: $75 million to the hacker group Dark Angels.

Earlier this year, a DDoS (distributed denial-of-service) attack overwhelmed X’s systems, repeatedly taking the social media platform offline as a vast network of hijacked devices—called a botnet—flooded it with junk web requests. It’s now estimated that a cyberattack occurs every 39 seconds.
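To give a sense of what “flooding” looks like from the defender’s side, here is a minimal, hypothetical sliding-window rate check in Python. Real DDoS mitigation is far more involved, since a botnet spreads its traffic across thousands of hijacked addresses, but the basic accounting is the same: count recent requests per source and flag anything that exceeds a threshold.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # how far back we look
MAX_REQUESTS = 100    # requests allowed per source within the window

recent: dict[str, deque] = defaultdict(deque)

def is_flooding(ip: str, now: float | None = None) -> bool:
    """Record one request from `ip` and report whether it exceeds the limit."""
    now = time.time() if now is None else now
    q = recent[ip]
    q.append(now)
    # Discard timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS

# Simulate one hijacked device firing 150 requests in 1.5 seconds.
hits = [is_flooding("203.0.113.7", now=1000.0 + i * 0.01) for i in range(150)]
print(hits.index(True))  # the threshold trips at request index 100 (the 101st)
```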

Stewart compares this phenomenon to the slasher film Scream. “So you know how the calls were coming from inside the house? It’s true that in any sufficiently large network or corporate system, there are dozens if not hundreds of actors all wandering around with a knife,” he explains. “The presence of threat is ubiquitous.”

Unsurprisingly, burnout is at an all-time high in the industry, as the burden of protecting companies from these virtual attacks often falls squarely on chief information security officers (CISOs) and other IT security decision-makers. A study by BlackFog found that almost all cybersecurity professionals work an extra nine hours per week beyond their contracted hours, citing stress and job demands as the main reasons for wanting to leave their positions. 

After all, countless threat alerts pop up every day, and it’s up to cybersecurity analysts to determine whether each is benign or an attack on the network. The workload is so extreme, Stewart says, that most analysts can’t even review half of the alerts that are generated. 

“Can you imagine the panic people feel in this field?” he stresses. “It's like, ‘Oh my god! If we get hacked, our whole company could go down, and we might end up paying out millions in fines.’”

With the advent of AI, cybersecurity attacks are likely only going to become more sophisticated and frequent. In the BlackFog study, 42 percent of respondents reported that they worry about the looming threat of an AI-powered attack on their company. Bad actors are already leveraging artificial intelligence in myriad ways, from deceiving voice recognition software to individualizing automated phishing attacks to poisoning the data sets that train AI models.

But for Stewart, AI isn’t so much the problem as it is the solution. In fact, SentinelOne CEO Tomer Weingarten has said publicly that the company uses these cutting-edge technologies as “a force for good” for its roughly 10,000 customers, which include governments, healthcare providers and educational institutions. Stewart oversees this effort to make artificial intelligence and machine learning work for cybersecurity, not against it. The goal is to limit the onslaught of false-positive alerts, assist in mitigating threats faster and make cybersecurity professionals’ workload manageable. 

“The number one determinant of maturity within any [cybersecurity] team is longevity: People work together, they achieve outcomes, and they stay together for a long period and learn more and more about the way things can go wrong,” Stewart explains. “We can't help that by telling people to be more mature and that you should stay longer, even though it sucks. The only way we're going to get true maturity in security teams is to make the job something that people can stand.”

In 2023, SentinelOne unveiled a new threat-hunting platform that uses AI and large language models to run operational commands across the network on security teams’ behalf. Essentially, a user types in a threat-hunting question—such as, “Have there been any unusual network connections to external IP addresses in the last 90 days?”—and within seconds the platform produces a report with those insights. 
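SentinelOne has not published the internals sketched here, but the general pattern such platforms follow can be illustrated with a short, hypothetical Python example: translate the natural-language question into structured query parameters (the step a large language model would perform), run the query over event logs, and return the hits. Every name and mapping below is invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Connection:
    timestamp: datetime
    dest_ip: str
    internal: bool  # True if the destination is inside the corporate network

def translate(question: str) -> dict:
    """Stand-in for the LLM step that maps a question to query parameters."""
    # A real system would prompt a language model; we hard-code one mapping.
    if "external IP" in question and "90 days" in question:
        return {"internal": False, "since_days": 90}
    raise ValueError("question not understood")

def hunt(question: str, log: list[Connection]) -> list[Connection]:
    """Translate the question, then filter the event log accordingly."""
    params = translate(question)
    cutoff = datetime.now() - timedelta(days=params["since_days"])
    return [c for c in log
            if c.internal == params["internal"] and c.timestamp >= cutoff]

log = [
    Connection(datetime.now() - timedelta(days=3), "198.51.100.9", internal=False),
    Connection(datetime.now() - timedelta(days=120), "198.51.100.9", internal=False),
    Connection(datetime.now() - timedelta(days=1), "10.0.0.5", internal=True),
]

question = ("Have there been any unusual network connections to "
            "external IP addresses in the last 90 days?")
print(len(hunt(question, log)))  # -> 1 (only the 3-day-old external connection)
```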

“AI is really great at seeing things people don't see,” Stewart explains. “As a means of defense, it can see over different timescales, in different dimensions, and it can take all of those indications and compress them and pick the best ones to show people.”

It’s not about replacing humans with machines, Stewart clarifies, but playing to the strengths of both the machine and the human. “The idea isn’t if only these people were better, they’d be able to do this,” he explains. “People are really good at thinking like people and they’re not really good at thinking like machines. So let’s make the machines do that part and give people a job but also a life.” 

That’s a sentiment that carries particular weight for Stewart, who manages a global team from Palo Alto and is often up at 5 a.m. to respond to questions and concerns that have trickled in overnight on Slack. No two workdays are the same, but he spends much of each one in Zoom meetings “making sure the conditions are right and the goals are clear” for his staff and colleagues. To unwind, Stewart exercises on the rowing machine in his garage and 3D-prints new gadgets with his 8-year-old son. The most recent: a counting mechanism that displays a number and increments by one each time the plunger is pressed.

Stewart’s title might suggest he’s nose-deep in machine learning algorithms and AI data sets for hours at a time. But most of his work actually involves making sense of human behavior, whether it’s understanding how customers naturally phrase their commands or learning staff members’ individual skills and interests to keep them focused and fulfilled.

“My empathy is always with the person who is doing the work and has the responsibility for the task—it's not with the technology,” Stewart explains. “This is the most satisfying work I’ve ever done in my life because you actually get to know people.”

The Future’s Alright

When Stewart was 6 or 7, he got his first computer. He was immediately hooked, and it didn’t take long for him to start building text-based adventure games. At 9, he spent an entire summer learning how to create elaborate color-cycling animations of spinning planets and rain patterns. 

It was Edinburgh, Scotland, in the early ’80s, but even then he sensed the potential within these machines to build intelligence. Of course, it didn’t cross his mind that this technology could be used for nefarious reasons—at least not until much later. 

Stewart’s mother was a nurse, and to this day he credits her dedication to nurturing and caring for others as what motivates him to stay optimistic as he leads the charge in leveraging artificial intelligence for good. “It’s a deep-seated thing from my mother, but when I was younger I’d always hoped that we’d be able to build machines that looked after people, and I believe that we will,” Stewart says. “To be an effective executive in a field that changes so rapidly, you have to use these larger values as an anchor.” 

As more and more people fiddle around with ChatGPT and test other AI applications for research or scheduling, recent polls suggest that they remain wary of how this technology will shape society. A national Gallup poll found that a majority of Americans believe AI will have a “somewhat” or “very” negative impact on the spread of false information, job opportunities and national security in the next five years.

“The hyped potential is insane,” Stewart says of the perception of AI and machine learning, “and nothing short of essentially replacing most of what we consider work.” 

The irony is that even the most skeptical AI users don’t realize that they’re not only using this technology every day but relying on it. Weather forecasting apps and digital assistants like Apple’s Siri or Amazon’s Alexa have become so common in daily life that they lack the novelty of, say, an autonomous weapons system. And while high-profile incidents like the Colonial Pipeline attack may have shaped perceptions negatively, they also brought the threat mainstream, spurring major reforms in the cybersecurity industry. Now most insurers require business customers to carry some form of data breach or cyber liability coverage. 

“In the past, we used to have to explain that you all are vulnerable to these kinds of signature-based attacks,” Stewart recalls. “Now people want to invest in the most up-to-date methods to get their insurance premiums reduced.” 

However, as companies continue to invest in artificial intelligence and machine learning as a form of cybersecurity, these systems will ultimately amass a new repository of sensitive information built from network signatures and proprietary business practices. One of the main concerns plaguing cybersecurity today is “exfiltration,” Stewart says: the sensitive information a cybersecurity provider retains to teach its AI models can itself be leaked or breached, creating another vulnerability. 

To assuage those fears, Stewart recommends negotiating what’s known as a zero data retention (ZDR) agreement, under which the cybersecurity provider ensures that query data, profile details and other internal and external interactions are never stored and are deleted as soon as they’re processed. “AI has super powerful elements,” he says. “You need to guardrail to make sure they are constrained and company secrets aren’t divulged.” 
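ZDR is a contractual guarantee rather than a specific API, but the engineering discipline behind it can be sketched: process each query entirely in memory, write nothing sensitive to disk, and log only non-sensitive metadata. The function below is a minimal, invented illustration of that posture, not any vendor’s actual implementation.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zdr")

def handle_query(query: str) -> str:
    """Process a customer query entirely in memory.

    Under a ZDR posture the payload is never written to disk or to logs;
    only non-sensitive metadata (here, payload sizes) is recorded.
    """
    report = f"report for a {len(query)}-character query"  # stand-in analysis
    log.info("query processed: %d chars in, %d chars out", len(query), len(report))
    return report

print(handle_query("have any hosts contacted known-bad IPs in the last week?"))
```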

Nevertheless, Stewart considers himself an optimist. It’ll be years, he estimates, before all of the potential applications of artificial intelligence are uncovered. It’s human nature to fear the technology as if it were something out of a science fiction novel. But Stewart is constantly reassured that it is ultimately being leveraged by humans to advance civilization, not destroy it. 

The future he envisions is one where robotic companions and AI-powered assistants aren’t being used to replace humans but to help care for them, especially as we age. Think household chores, medication reminders and daily communication.

“Many people just don't have anyone, and the demographics would suggest that it's a fairly lonely world,” Stewart says. “Twenty-five years from now, I feel like we'll be able to care more as a society for one another through this technology.”


Jess Swanson

Contributor

Jess Swanson is a Miami-based freelance writer and journalist drawn to interesting people and unusual experiences. You can find her bylines in Vox, Business Traveler, Time Out Miami, and The Village Voice.