In 2025, artificial intelligence is no longer just a tool for innovation—it’s also a weapon used to scale and automate cyberattacks. Two of the most urgent AI-driven threats facing businesses today are software supply chain attacks and deepfake-enabled social engineering. These tactics are reshaping how cybercriminals breach organizations, and they’re targeting the very systems and communications companies trust most.
The Modern Supply Chain Is Digital—and Vulnerable
Most businesses today rely on a wide range of software, cloud platforms, and digital vendors. This “digital supply chain” includes:
- Business applications (e.g., CRMs, VoIP systems, file-sharing tools)
- Third-party software libraries used by developers
- Cloud-based platforms with API access
- Vendors and partners with data or network privileges
This interconnectedness creates convenience—but also risk. If a vendor's system is compromised, that threat can travel downstream into your business. And as attackers become more sophisticated, they’re no longer trying to brute-force their way into your network—they’re slipping in through the back door of the software you already trust.
In 2023, for example, a popular communications platform was compromised through its own software update process. Malware was embedded directly into a trusted update—users installed it without question. This tactic, known as a supply chain attack, is increasingly being automated and enhanced by AI.
AI allows attackers to:
- Mimic real code styles and naming conventions, making malicious code hard to distinguish from the real thing.
- Scrape and map the third-party tools your business uses, then target the most vulnerable one.
- Scale this method across industries with little cost or effort.
And these attacks are hard to detect because they appear legitimate—until it’s too late.
Deepfakes: The Human Side of the Threat
While technical attacks target systems, AI is also being used to target people—through fake voices, fake videos, and fake identities.
Known as deepfakes, these AI-generated audio and video clips replicate someone’s appearance or voice with uncanny accuracy, and they are already being used in real-world scams.
One of the most alarming cases occurred in early 2024: a finance employee in Hong Kong joined a video call with what appeared to be the company’s CFO. The caller looked and sounded exactly right and requested a confidential wire transfer. It turned out to be an AI-generated deepfake, and the company lost over $25 million.
Why was it effective? Because it bypassed the traditional warning signs:
- There were no suspicious links or fake emails.
- The caller was who they expected—at least visually and audibly.
- The request wasn’t entirely unusual, just urgent.
Deepfakes are effective because they exploit trust—trust in coworkers, trust in executives, and trust in communication platforms like Zoom or Teams.
They also scale. Cybercriminals can now:
- Use public video/audio (from LinkedIn, social media, YouTube) to replicate targets.
- Launch multiple deepfake scams using AI-generated scripts and voice cloning.
- Combine them with leaked credentials or previous breaches for even greater authenticity.
These tactics are hard to spot. Employees are trained to be skeptical of emails, but not of familiar faces on a call.
Why These Threats Are a Bigger Risk to SMBs
Large corporations have threat-hunting teams, internal auditors, and millions invested in cybersecurity tools. Small and mid-sized businesses (SMBs) often don’t—and that’s exactly who attackers are now targeting.
SMBs:
- Often have less visibility into their third-party software and cloud tools.
- Trust communication platforms by default (especially during remote or hybrid work).
- May not have internal controls to detect fake but believable requests.
- Rarely ask vendors for an SBOM (Software Bill of Materials), a document that lists every third-party component within a piece of software.
In short, SMBs have more exposure and fewer defenses. Cybercriminals know this and are now using automation and AI to exploit these gaps faster and more effectively than ever.
What Can Businesses Do About These Threats?
1. Understand What Software You Rely On
Start by mapping out all the software and tools your company uses—including cloud platforms, browser plugins, and open-source packages if you have internal developers.
Then ask:
- Who maintains this software?
- Has it had any recent security advisories?
- Do they publish an SBOM or use code scanning for third-party components?
If the answer is “we don’t know,” that’s a risk.
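If a vendor does provide an SBOM, even a small script can turn it into an actionable check. Below is a minimal Python sketch, assuming the vendor supplied an SBOM in the common CycloneDX JSON format (the file name vendor-sbom.json is a placeholder), that looks each listed component up in the public OSV.dev vulnerability database. Treat it as an illustration of the "recent security advisories" question above, not a finished audit tool.

```python
import json
import urllib.request

# Hypothetical file name; assumes the vendor supplied a CycloneDX-format SBOM (JSON).
SBOM_PATH = "vendor-sbom.json"

# Map common package-URL prefixes to the ecosystem names used by the OSV.dev API.
ECOSYSTEMS = {"pkg:pypi/": "PyPI", "pkg:npm/": "npm", "pkg:maven/": "Maven"}


def known_advisories(name: str, version: str, ecosystem: str) -> list[str]:
    """Ask the public OSV.dev database for advisories affecting one component version."""
    query = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return [vuln["id"] for vuln in result.get("vulns", [])]


def main() -> None:
    with open(SBOM_PATH) as f:
        sbom = json.load(f)

    # CycloneDX SBOMs list every third-party component under "components".
    for component in sbom.get("components", []):
        name, version = component.get("name"), component.get("version")
        purl = component.get("purl", "")
        ecosystem = next(
            (eco for prefix, eco in ECOSYSTEMS.items() if purl.startswith(prefix)), None
        )
        if not (name and version and ecosystem):
            continue  # skip components we can't classify
        advisories = known_advisories(name, version, ecosystem)
        status = ", ".join(advisories) if advisories else "no known advisories"
        print(f"{name} {version}: {status}")


if __name__ == "__main__":
    main()
```

Even a simple report like this, run when a vendor is onboarded or updated, turns "we don't know" into a list of specific components and advisories you can act on.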
2. Create a Human Verification Protocol
The best defense against deepfakes is not technical—it’s procedural.
If someone requests a wire transfer, a password, or access to internal data:
- Always require a second form of verification (e.g., a Slack message and a phone call).
- Never rely solely on video or voice, even if it seems legitimate.
- Train your team on how deepfakes work—and how to spot common tells like unusual lighting, audio sync issues, or pressure to act fast.
Make this a part of your internal policy and onboarding.
3. Evaluate Vendor and Partner Risk
Not all software vendors provide the same level of security. Before onboarding or renewing a vendor:
- Ask how they handle third-party code.
- Require them to provide documentation (e.g., SBOM, incident history, audit results).
- Consider segmenting their access within your network—don’t give broad permissions (see the sketch below).
If your business lacks a dedicated security team, working with a trusted partner can ensure these reviews happen consistently.
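To make the least-privilege point from the list above concrete, here is a small, hedged Python sketch that compares the permissions actually granted to each vendor integration against an approved baseline and flags anything broader. Every name in it (the vendors, the permission strings, the two inventories) is a hypothetical placeholder; in practice the granted permissions would come from your identity provider or API gateway rather than a hard-coded dictionary.

```python
# Hypothetical inventory of what each vendor integration is currently allowed to do.
VENDOR_GRANTS = {
    "voip-provider": {"contacts:read", "calls:write"},
    "file-sharing-tool": {"files:read", "files:write", "admin:all"},
}

# The least-privilege baseline your team approved for each vendor.
APPROVED_BASELINE = {
    "voip-provider": {"contacts:read", "calls:write"},
    "file-sharing-tool": {"files:read", "files:write"},
}


def find_overbroad_grants() -> dict[str, set[str]]:
    """Return, per vendor, any permissions granted beyond the approved baseline."""
    findings = {}
    for vendor, granted in VENDOR_GRANTS.items():
        excess = granted - APPROVED_BASELINE.get(vendor, set())
        if excess:
            findings[vendor] = excess
    return findings


if __name__ == "__main__":
    for vendor, excess in find_overbroad_grants().items():
        print(f"Review {vendor}: unapproved permissions {sorted(excess)}")
```

Run as part of onboarding or renewal reviews, a check like this makes "don't give broad permissions" something you can verify rather than assume.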
Final Thought: AI Works Both Ways
AI has incredible potential to streamline operations, automate workflows, and reduce costs—but that same technology is also enabling smarter, faster, and more personalized cyberattacks.
The supply chain and deepfake threats aren't just “big company problems.” They’re targeting anyone who relies on software or human trust—which is virtually every business today.
Being proactive isn’t about paranoia—it’s about preparation. The businesses that understand these risks, build in checkpoints, and educate their teams will be far better positioned to navigate this evolving landscape safely.