ZDNET’s key takeaways
- AI-powered cybercrime poses a growing risk to businesses.
- Most of these organizations feel unprotected against the threat.
- EY highlights some key steps for building up cyber defenses.
AI-driven cyberattacks are almost universally considered a grave threat to businesses today. Yet for both financial and logistical reasons, most organizations feel inadequately protected and lack a clear roadmap to shore up their internal defenses.
That gap between awareness and readiness is the big takeaway from a report published Thursday by consulting firm EY. Based on a December survey of more than 500 senior cybersecurity officials across industries, the report found that 96% of respondents believe “AI-enabled cybersecurity attacks are a significant threat to their organization,” while less than half (46%) say they feel “strongly confident” that their organizations have adequate cybersecurity mechanisms in place to keep the threat at bay.
Also: 5 security tactics your business can’t get wrong in the age of AI – and why they’re critical
The majority of respondents (67%), furthermore, said they’re still “in pilot mode” when it comes to ironing out their strategy for keeping their organizations protected from this new wave of cyberattacks.
But pilot mode isn’t enough in a world where AI is continually providing cybercriminals with new means of attack, according to Ganesh Devarajan, cyber risk lead at EY Americas.
“We are navigating a unique landscape where AI is weaponizing the digital environment just as it fortifies our defenses,” he told ZDNET. “If I were sitting across from a [chief information security officer] today, my advice would be simple: the time for ‘wait and see’ is over. Protecting a business now means building a holistic strategy where AI and employees aren’t just working side-by-side, but are also amplifying each other’s strengths.”
Also: Will AI make cybersecurity obsolete or is Silicon Valley confabulating again?
A cross-industry plateau
Cybersecurity isn’t the only domain in which businesses experimenting with AI have been failing to launch in a robust, meaningful way. Despite a high degree of interest in using the technology internally, many businesses are struggling to do so in a way that generates real returns. Organizations are stuck on a kind of plateau as they try to turn internal AI initiatives into sustained growth; the willpower is there, but the way is often unclear.
An oft-cited MIT study published in August, for example, reported that 95% of enterprises’ internal AI initiatives had failed to deliver any substantial ROI. It was a wake-up call for AI developers and their business customers. In short, something about the current approach to deploying AI within organizations wasn’t working.
Also: Why enterprise AI agents could become the ultimate insider threat
A couple of months later, a survey of thousands of business leaders across 21 countries found that the vast majority (87%) said that AI would “completely transform” how their organization gets work done over the next year, yet a paltry 29% said their teams had the skills and training in place to make that outcome happen.
Hurdles for cybersecurity
Both of those themes were echoed in EY’s new report.
Also: AI threats will get worse: 6 ways to match the tenacity of your digital adversaries
In broad strokes, the consulting firm found that while most high-level cybersecurity pros are all too aware of the fact that AI is rapidly equipping their adversaries with new and more sophisticated modes of attack (such as phishing and deepfake scams), they’re hindered by lack of a clear plan for building up their internal security.
Financial constraints were found to be one significant issue: 85% of the respondents to EY’s survey said their employer’s “current cybersecurity budget is insufficient to meet AI-enabled threats,” according to the report. On the upside, EY also found that the number of organizations committing at least 25% of their cybersecurity budget to building AI-powered solutions specifically is expected to grow from 9% today to 48% over the next two years.
The consensus, in other words, seems to be that the best way to combat new AI-driven cyberthreats is with AI-driven defenses — a trend that’s already begun to play out in the financial sector.
Specifically, EY’s survey found that AI will be given more control in six key areas of cybersecurity:
- Advanced persistent threat detection
- Real-time fraud detection
- Identity and access management
- Third-party risk management
- Data privacy and compliance
- Defense against deepfakes and other uses of AI to impersonate real people
Also: AI is making cybercriminal workflows more efficient too, OpenAI finds
Governance was also a major constraint: 97% of respondents said a robust security framework for internal AI use was “essential” to generating ROI, yet only 20% said they’d fully built out that framework.
Four tips
OK, but what can cybersecurity experts actually do right now to meet the new wave of AI-powered threats? EY highlighted four key areas they should focus on.
- Budgets must be reworked “to prioritize AI-driven cybersecurity.”
- Instead of deploying a patchwork of AI tools to automate individual tasks (a habit EY suggested is a key bottleneck keeping businesses locked in the pilot phase), organizations should switch to an “orchestrated, agent-driven” approach. In other words, implement a top-down control model for internal AI use so cybersecurity leaders can easily visualize AI agents’ actions and, if necessary, correct them.
- Teams need to “invest aggressively” in training their existing employees to safely and effectively collaborate with AI agents.
- Adopt an arms-race mentality to maintain internal guardrails, because as AI-assisted cyberdefenses improve, so too will the tactics deployed by AI-assisted cybercriminals. As the report puts it: “Organizations that treat governance as a living system — continuously improving and integrating into culture and operations — are best positioned to build trust, manage emerging risks and translate AI innovation into durable competitive advantage.”