Why Do 65% of Companies Say Their Current Security Can’t Stop AI-Based Attacks?
Imagine this: You're the IT head at a bustling mid-sized firm, juggling emails, meetings, and the constant hum of servers. One morning, an alert pings: another phishing attempt blocked. You sigh in relief, thinking your firewalls and antivirus are holding the line. But what if the next attack isn't a clumsy email scam? What if it's powered by artificial intelligence, morphing in real time to slip past your defenses like a chameleon in the shadows? This isn't sci-fi; it's the reality facing businesses today.

According to a recent Lenovo survey of 600 IT leaders across the globe, a staggering 65% admit their current cybersecurity setups are outdated and simply can't stand up to AI-driven threats. That's not just a statistic; it's a wake-up call. With AI tools exploding in popularity, from chatbots to predictive analytics, cybercriminals are harnessing the same tech to launch smarter, faster attacks. But why are so many companies feeling outmatched? Is it a lack of tools, skills, or something deeper?

In this post, we'll dive into the heart of this issue. Drawing from Lenovo's eye-opening report and broader industry insights, we'll explore what AI-based attacks look like, why traditional security is faltering, and what steps you can take, whether you're a small business owner or a corporate exec. By the end, you'll have a clearer picture of the challenges and, more importantly, practical ways to bolster your defenses. Let's unpack why 65% of companies are waving the white flag and how to join the resilient 35%.

Table of Contents
- The Lenovo Report: Key Findings and What They Mean
- Understanding AI-Based Attacks: The New Threat Landscape
- Why Traditional Security Falls Short Against AI
- Main Challenges: External Threats, Insider Risks, and Protecting AI Itself
- Common AI Threats vs. Traditional Defenses: A Comparison
- Closing the Gap: Steps Companies Can Take
- Future Trends: AI as a Defender and the Road Ahead
- Conclusion
The Lenovo Report: Key Findings and What They Mean
Let's start with the source of that eye-popping 65% figure. In late 2024, Lenovo surveyed 600 IT leaders from companies with at least 1,000 employees across 12 countries, including the US, UK, India, and Brazil. The goal? To gauge how prepared businesses are for the AI cybersecurity revolution. The results, released in September 2025, paint a sobering picture.
At the core: 65% of respondents said their defenses are outdated and unable to handle AI-powered attacks. Only 31% felt confident in their ability to defend against them. That's a huge gap: nearly two-thirds waving a red flag. But what does "AI-based attacks" even mean here? The report highlights how generative AI is supercharging cybercriminals, making threats more adaptive and sneaky.
For context, this isn't Lenovo's first rodeo. As a tech giant, they're knee-deep in AI solutions, so their insights come from real-world data. Rakshit Ghura, Lenovo's VP for Digital Workplace Solutions, nailed it: "AI has changed the balance of power in cybersecurity. To keep up, organizations need intelligence that adapts as fast as the threats." The survey spanned sectors like finance, healthcare, and manufacturing, showing this isn't niche; it's universal.
Why does this matter? In simple terms, if most companies are unprepared, we're all at risk. Supply chains link us; a hack at one firm can cascade. Think of it like a neighborhood watch: if 65% of houses have flimsy locks, burglars have a field day. The report urges a shift: fight fire with fire, or in this case, AI with AI. We'll explore that later, but first, let's break down what these attacks look like.
Digging deeper, the survey was conducted in October and November 2024, capturing post-pandemic shifts in which remote work and cloud adoption expanded the attack surface. It's a snapshot of a world where AI isn't a buzzword; it's the battlefield.
Understanding AI-Based Attacks: The New Threat Landscape
AI-based attacks sound futuristic, but they're here now. At their heart, these are cyber threats enhanced by artificial intelligence, making them smarter than old-school hacks. Traditional attacks might spam phishing emails; AI ones craft personalized lures that mimic your boss's writing style, using data scraped from social media.
Key types explained simply:
- Polymorphic Malware: Like a virus that changes its code every time it's run, dodging antivirus signatures. AI helps it evolve on the fly.
- AI-Driven Phishing: Hackers use generative AI (think ChatGPT-like tools) to create convincing emails or deepfake videos, tricking you into clicking bad links.
- Deepfake Impersonation: Fake audio or video calls from "executives" demanding wire transfers. We've seen losses in the millions from these.
The Lenovo report notes these attacks are "faster, more convincing, and harder to detect." Why? AI analyzes vast data sets to spot weaknesses, automating what once took human hackers weeks. For beginners: Imagine a robber who studies your routine via cameras, then picks the perfect moment to strike—AI does that digitally.
Real-world examples abound. In 2024, a Hong Kong firm lost $25 million to a deepfake CFO call. Closer to home, Indian banks reported a 50% spike in AI phishing in 2025. Globally, Gartner predicts that by 2027, 90% of successful AI use in cybersecurity will focus on task automation, but for attackers, it's already weaponized.
This landscape shifts daily. As companies adopt AI for efficiency, hackers follow suit. The 65% unpreparedness? It's because defenses haven't caught up—yet.
To illustrate, consider polymorphic malware: Traditional antivirus scans for known patterns; AI malware mutates, like a flu virus evading vaccines. Result? Detection rates plummet.
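To see why signatures fail, here's a minimal, illustrative Python sketch; a toy model, not any vendor's actual engine. One trivial mutation (a one-byte XOR re-encoding) produces a brand-new hash, so a signature built from the original sample no longer matches:

```python
import hashlib
import os

def signature(payload: bytes) -> str:
    """Signature-based AV reduces to matching hashes of known-bad files."""
    return hashlib.sha256(payload).hexdigest()

# A toy "known malware" sample and the signature a vendor would ship.
original = b"MALICIOUS_PAYLOAD_v1"
known_bad = {signature(original)}

# Simulate one polymorphic mutation: XOR-encode the payload with a random key.
# The behavior is unchanged once decoded, but every byte on disk differs.
key = (os.urandom(1)[0] % 255) + 1  # nonzero key so the bytes actually change
mutated = bytes(b ^ key for b in original)

print(signature(original) in known_bad)  # True  -> caught
print(signature(mutated) in known_bad)   # False -> same malware, new signature
```

Real antivirus engines layer on heuristics and sandboxing, but the core lesson holds: any defense keyed to a static fingerprint loses to code that rewrites itself on every run.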
Why Traditional Security Falls Short Against AI
Traditional security, such as firewalls, antivirus, and password policies, worked fine for yesterday's threats. But against AI? It's like bringing a knife to a gunfight. Here's why, broken down simply.
First, speed mismatch. AI attacks evolve in seconds; legacy systems rely on periodic updates. If your antivirus definitions are from last week, you're toast.
Second, complexity overload. AI threats are adaptive, learning from failed attempts. Static, rules-based defenses can't keep pace; they're predictable, while AI is probabilistic, guessing your next move.
Third, data deluge. Modern firms generate petabytes of data; sifting threats manually is impossible. AI attackers exploit this noise, hiding in plain sight.
The Lenovo findings echo this: 65% see their tools as outdated because they can't detect the "nearly undetectable" AI tricks like deepfakes or polymorphic code. McKinsey warns enterprises leaning on old defenses will fall short as AI reshapes the game.
Analogy time: Traditional security is a castle wall—solid against battering rams. AI attacks? Drones dropping bombs from above. You need anti-air tech, not thicker stones.
Budget plays a part: many firms skimp on upgrades due to cost, sticking with what "works." But as threats take on AI, that complacency bites. Talent gaps hurt too; few experts specialize in AI security.
In essence, traditional tools are reactive; AI demands proactive, intelligent countermeasures. That's the crux of the 65% confession.
Main Challenges: External Threats, Insider Risks, and Protecting AI Itself
The report spotlights three big hurdles, each amplifying why 65% feel outgunned.
External Threats: These are the obvious ones, such as hackers using AI for phishing or malware. 65% say defenses can't cope because attacks are too swift and sly. Example: AI crafts emails that pass spam filters, boosting success rates by 30%, per some studies.
Insider Risks: Not spies, but employees misusing AI. 70% of IT leaders worry about this; over 60% say AI agents (like chatbots) create unmanageable new threats. Think of a worker pasting sensitive data into an AI tool's prompt and exposing it accidentally.
Protecting AI Systems: AI itself is a target. Models and data can be poisoned: hackers tweak training sets to skew outputs. The report calls these "high-value targets," with legacy systems ill-equipped to defend them.
Overarching issues: Legacy tech, skill shortages, tight budgets. Adoption slows as firms grapple with integration.
For beginners: External is the wolf at the door; insider, the fox in the henhouse; protecting AI, guarding your own weapons. Tackle all, or the 65% becomes 100%.
Real impact: A 2025 healthcare breach saw AI-manipulated data lead to wrong diagnoses. Prevention starts with awareness.
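To make "vet your data sources" concrete, here's a minimal Python sketch of one sensible precaution: fingerprint training data once it's been vetted, then refuse to train if anything changes. The file names and manifest path are hypothetical, and this only catches tampering after the snapshot, not poisoning baked in beforehand.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("training_manifest.json")  # hypothetical manifest location

def fingerprint(path: Path) -> str:
    """SHA-256 of a dataset file; any tampering changes this value."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot(files: list[Path]) -> None:
    """Record trusted hashes once, when the data is first vetted."""
    MANIFEST.write_text(json.dumps({str(f): fingerprint(f) for f in files}))

def verify(files: list[Path]) -> list[str]:
    """Return the files whose contents changed since the snapshot."""
    trusted = json.loads(MANIFEST.read_text())
    return [str(f) for f in files if trusted.get(str(f)) != fingerprint(f)]

# Usage: snapshot once after manual vetting, then gate every training run.
datasets = [Path("train_labels.csv"), Path("train_images.bin")]  # hypothetical files
# snapshot(datasets)            # run once after the data is vetted
# tampered = verify(datasets)   # run before each training job
# if tampered: raise RuntimeError(f"Possible data poisoning: {tampered}")
```

Integrity checks like this won't stop every poisoning attack, but they turn "silent" dataset tampering into a loud, pre-training failure.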
Common AI Threats vs. Traditional Defenses: A Comparison
To visualize the mismatch, here's a table comparing key AI threats to how traditional security handles them—and why it often fails.
| AI Threat | How It Works | Traditional Defense | Why It Fails |
|---|---|---|---|
| Polymorphic Malware | Changes code to evade detection | Signature-based antivirus | Can't match evolving signatures |
| AI Phishing | Personalized, convincing lures | Email filters, training | Too sophisticated for rule-based filters |
| Deepfakes | Fake media for impersonation | Manual verification | Humans can't spot advanced fakes reliably |
| Data Poisoning | Corrupts AI training data | Access controls | Subtle changes slip past static checks |
| AI Insider Threats | Misuse by employees/tools | Monitoring logs | Overwhelms with data volume |
This table shows the core issue: AI threats are dynamic; old defenses, static. Time for an upgrade.
Closing the Gap: Steps Companies Can Take
Good news: The 65% isn't destiny. Lenovo and experts offer a roadmap to resilience. Start with mindset: View AI as ally, not enemy.
Step one: Adopt AI-powered defenses. Embed intelligence in tools for real-time adaptation, spotting anomalies before they strike. Tools like Lenovo's ThinkShield do this at the device level.
- Enhance detection: Use AI for behavior analysis, flagging odd patterns (see the sketch after this list).
- Train teams: Bridge talent gaps with upskilling; 2025 programs abound.
- Secure AI assets: Encrypt models, vet data sources.
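To ground "behavior analysis" in something runnable, here's a minimal sketch using scikit-learn's IsolationForest on synthetic login telemetry. The features, numbers, and contamination rate are illustrative assumptions, not a production configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic telemetry: [login hour, MB downloaded] for normal workdays.
normal = np.column_stack([
    rng.normal(10, 1.5, 500),   # logins cluster around 10:00
    rng.normal(50, 10, 500),    # roughly 50 MB downloaded per session
])

# Fit an unsupervised model of "normal" behavior.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score new events: a 3 a.m. login pulling 900 MB should stand out.
events = np.array([[10.2, 48.0],    # ordinary session
                   [3.0, 900.0]])   # suspicious session
print(model.predict(events))  # [ 1 -1 ] -> 1 = normal, -1 = flagged anomaly
```

Production systems apply the same idea to far richer signals, but the principle is identical: model normal behavior, then flag what doesn't fit.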
Budget-wise: Prioritize. Lenovo's CRaaS offers managed services with 99.5% detection rates, under-30-minute response times, and 20%+ savings. For small firms: start with free, open-source AI scanners.
Policy: Update insider rules for AI use; monitor without spying.
Success stories: Firms using AI defenses cut breaches by 50%. Join the 31%: act now.
Future Trends: AI as a Defender and the Road Ahead
Looking to 2030, AI flips the script from threat to shield. Gartner says that by 2027, AI in cyber will automate tasks, not replace roles. Trends to watch: predictive analytics that foresee attacks, and quantum-resistant encryption.
India's scene: With AI adoption booming, policies like the DPDP Act are pushing security standards. Globally, Lenovo's servers top uptime rankings, setting benchmarks.
Challenges remain, starting with ethics: could AI bias creep into security decisions? Innovation has to be balanced with privacy.
Optimism: As Ghura says, secure AI workplaces are "growth engines." The 65% shrinks as adoption grows.
Conclusion
To wrap, Lenovo's report reveals why 65% of companies feel their security can't halt AI attacks: Threats are faster, smarter, with external hits, insider risks, and AI vulnerabilities outpacing old defenses. From polymorphic malware to deepfakes, the landscape demands change.
Yet, hope abounds: Fight AI with AI, invest in adaptive tools, train teams. By closing gaps, companies turn risks into resilience. Don't be the 65%—evolve today for a safer tomorrow.
Frequently Asked Questions
What Is the Source of the 65% Statistic?
The 65% comes from a Lenovo survey of 600 IT leaders in 2024, where they admitted defenses are outdated against AI threats.
What Are AI-Based Attacks?
These are cyber threats enhanced by AI, like adaptive malware or personalized phishing, making them harder to detect.
Why Can't Traditional Security Handle Them?
Traditional tools are static and rule-based; AI attacks evolve quickly, slipping past signatures and filters.
What Is Polymorphic Malware?
Malware that changes its code to avoid detection, powered by AI for rapid mutations.
What Percentage Feel Confident Against AI Attacks?
Only 31% of IT leaders feel prepared, per the Lenovo report.
What Are Insider Risks with AI?
Employees misusing AI tools, creating accidental leaks; 70% of leaders are concerned.
How Can Companies Protect AI Systems?
Encrypt models, secure data, and use AI for anomaly detection in training processes.
What Is Deepfake Impersonation?
Fake audio/video used to mimic people, tricking others into actions like fund transfers.
Why Are Budgets a Challenge?
Upgrading to AI defenses costs money; many firms prioritize other areas amid tight finances.
What Does "Fight AI with AI" Mean?
Use AI-powered security tools to match the speed and adaptability of AI threats.
What Is CRaaS?
Cyber Resiliency as a Service, Lenovo's managed offering for threat monitoring and response.
How Many IT Leaders Were Surveyed?
600 from global companies with 1,000+ employees.
What Are Talent Gaps?
Lack of experts skilled in AI cybersecurity, slowing adoption of new defenses.
Can Small Businesses Apply These Lessons?
Yes—start with basics like AI-enabled antivirus and free training resources.
What Role Do Deepfakes Play?
They enable social engineering attacks, like fake calls, bypassing tech defenses via human trickery.
What's the Future of AI in Security?
By 2027, AI will automate cyber tasks, per Gartner, making defenses more efficient.
How Does AI Make Phishing Better?
Generates tailored, error-free messages that mimic real communication.
What Savings Does CRaaS Offer?
Over 20% cost savings in year one, with 99.5% detection rates.
Why Protect Training Data?
Poisoned data leads to flawed AI outputs, compromising decisions or security.
Is India Included in the Survey?
Yes, along with 11 other countries, reflecting global concerns.