If AI Can Build Your Software, It Can Break It Too

Something uncomfortable that nobody in security wants to talk about: the same AI that writes your features can write your exploits. And the people targeting your systems don’t have budget approval meetings or compliance reviews slowing them down.

The asymmetry problem

Here’s how software development used to work: smart humans write code, other smart humans review it, bugs get caught (some of them), software ships. Security was a cat-and-mouse game, but both sides were playing at human speed.

AI broke that equilibrium.

Now one side has a machine that can:

  • Scan millions of lines of code for vulnerabilities in seconds. Not the obvious ones — the subtle logic flaws, the race conditions, the authentication bypasses that take a human auditor hours to find.
  • Generate exploit chains automatically. Not just individual exploits, but chained attacks that combine multiple low-severity issues into something catastrophic.
  • Adapt in real-time. Exploit doesn’t work? The AI tries another angle. And another. At machine speed. Your monitoring lights up and by the time a human looks at it, the attacker has already moved on to the next approach.
  • Write polymorphic malware. Code that rewrites itself to evade signature detection. Every instance is unique. Traditional AV is useless.

The other side — your security team — is still mostly operating at human speed. Reading alerts. Writing runbooks. Manually investigating incidents. Maybe they have some automation. Maybe they even have AI tools. But in most organizations I’ve worked with, security is years behind development in AI adoption.

This is not a fair fight.

What AI-powered attacks actually look like

This isn’t theoretical. Having built AI systems myself, I’ve seen what’s possible, and the offensive applications are straightforward:

Automated vulnerability discovery. Feed a codebase to a model with security training and it’ll flag issues faster and more thoroughly than most manual audits. Now imagine this in the hands of someone who found your repo on GitHub — or who got access to your proprietary code through a compromised dependency.
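To make the workflow concrete, here's a minimal sketch of what automated scanning looks like. A handful of regex rules stand in for the model; a real AI auditor finds the subtle logic flaws these toy rules never could, but the shape of the pipeline — feed in source, get back ranked findings — is the same. All rule names and the `scan` API are illustrative, not any real tool.

```python
import re

# Toy stand-in for a model-backed scanner. Each rule flags a pattern
# an AI auditor would catch far more thoroughly and with fewer false
# positives than regex matching can.
RULES = [
    ("hardcoded-secret", re.compile(r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I)),
    ("weak-hash", re.compile(r"hashlib\.(md5|sha1)\(")),
]

def scan(source: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for each finding, in line order."""
    findings = []
    for name, pattern in RULES:
        for match in pattern.finditer(source):
            line = source.count("\n", 0, match.start()) + 1
            findings.append((name, line))
    return sorted(findings, key=lambda f: f[1])

snippet = '''import hashlib
api_key = "sk-live-123456"
digest = hashlib.md5(data).hexdigest()
'''
print(scan(snippet))  # [('hardcoded-secret', 2), ('weak-hash', 3)]
```

The point isn't the rules — it's that this runs on every commit, tirelessly, which is exactly what an attacker's version does against your public repos.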

Intelligent fuzzing. Traditional fuzzing throws random inputs at your API and hopes something breaks. AI-guided fuzzing understands the structure of your application and intelligently generates inputs that target likely failure points. It’s the difference between randomly pressing buttons and knowing exactly which button to press.
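The difference is easy to show in code. Below is a sketch: random bytes almost never survive the parser, while structure-aware cases are always valid for the target's schema but loaded with hostile edge values, so every single one reaches the logic behind the parser. The schema and edge values are illustrative assumptions, not any real API.

```python
import json
import random

# Edge values chosen to stress string handling, integer bounds, and
# injection-prone code paths. A model picks these per-target; this
# hard-coded list is just a stand-in.
EDGE_STRINGS = ["", "A" * 10_000, "'; DROP TABLE users;--"]
EDGE_NUMBERS = [0, -1, 2**31, 2**63 - 1, -(2**63)]

def random_fuzz_case(rng: random.Random) -> bytes:
    """Baseline fuzzing: random bytes, almost never a valid request body."""
    return bytes(rng.randrange(256) for _ in range(32))

def structured_fuzz_case(rng: random.Random) -> bytes:
    """Valid JSON for a hypothetical /users endpoint, with hostile values."""
    payload = {
        "username": rng.choice(EDGE_STRINGS),
        "age": rng.choice(EDGE_NUMBERS),
        "admin": rng.choice([True, False, None]),
    }
    return json.dumps(payload).encode()

rng = random.Random(0)
cases = [structured_fuzz_case(rng) for _ in range(100)]
# Every structured case parses, so all 100 exercise code past the parser.
print(all(isinstance(json.loads(c), dict) for c in cases))  # True
```

An AI-guided fuzzer goes further: it reads your handlers and generates the specific values most likely to break *your* validation, not a generic edge-case list.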

Social engineering at scale. AI that writes convincing phishing emails tailored to your company, your tech stack, your recent press releases. Not the Nigerian prince garbage. Emails that sound like they came from your CTO.

Supply chain attacks. AI that identifies which of your dependencies are maintained by single developers, haven’t been updated recently, or have known patterns of vulnerability. Then generates targeted malicious PRs that look legitimate.
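The signals an attacker's model ranks are the same ones you can score yourself. Here's a defensive sketch of that ranking; the fields and thresholds are illustrative, and in practice they'd come from registry metadata (npm, PyPI) and repo history.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    maintainers: int
    days_since_last_release: int
    has_known_cves: bool

def risk_score(dep: Dependency) -> int:
    """Higher score = more attractive supply-chain target (illustrative weights)."""
    score = 0
    if dep.maintainers <= 1:
        score += 2  # single point of failure, prime takeover target
    if dep.days_since_last_release > 365:
        score += 2  # likely unmaintained; a malicious PR faces less scrutiny
    if dep.has_known_cves:
        score += 3  # already a demonstrated weak link
    return score

deps = [
    Dependency("left-pad-ish", maintainers=1, days_since_last_release=900, has_known_cves=False),
    Dependency("well-kept-lib", maintainers=8, days_since_last_release=20, has_known_cves=False),
]
risky = [d.name for d in deps if risk_score(d) >= 4]
print(risky)  # ['left-pad-ish']
```

Run this over your own lockfile before someone else's model does.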

The uncomfortable truth about your security team

Most security teams I encounter are doing good work — but they’re doing it with yesterday’s tools and yesterday’s workflows. They’re proud of their manual code reviews, their custom rule sets, their playbooks built over years of experience.

That experience is valuable. But it doesn’t scale against a machine.

A human security engineer can audit maybe a few thousand lines of code per day. An AI can audit an entire codebase in the time it takes to get coffee. A human incident responder can investigate maybe 5-10 alerts per shift. An AI system can triage hundreds, escalating only the ones that need human judgment.

The security teams that are keeping up are the ones using AI to:

  • Continuously scan code and infrastructure for vulnerabilities, not just at audit time
  • Automate triage and initial investigation of security alerts
  • Generate and run security tests as part of the CI/CD pipeline
  • Monitor for anomalous behavior using models trained on their actual traffic patterns
  • Draft incident response plans for scenarios they haven’t encountered yet

If your security team isn’t doing at least some of this, they’re not slow — they’re blind.
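The triage item above is the easiest place to start. A minimal sketch, assuming alerts arrive as dicts from a SIEM: score each one, escalate only what clears the bar, auto-close the rest. The scoring heuristic here stands in for a model, and all field names are illustrative.

```python
def triage(alerts: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split alerts into (escalate_to_human, auto_closed)."""
    escalate, auto_closed = [], []
    for alert in alerts:
        score = alert.get("severity", 0)
        if alert.get("asset_is_production"):
            score += 3  # production assets get human eyes sooner
        if alert.get("seen_before"):
            score -= 2  # known-benign pattern, lower priority
        (escalate if score >= 5 else auto_closed).append(alert)
    return escalate, auto_closed

alerts = [
    {"id": 1, "severity": 3, "asset_is_production": True, "seen_before": False},
    {"id": 2, "severity": 2, "asset_is_production": False, "seen_before": True},
    {"id": 3, "severity": 6, "asset_is_production": True, "seen_before": True},
]
escalate, closed = triage(alerts)
print([a["id"] for a in escalate])  # [1, 3]
```

The human judgment stays in the loop — it's just reserved for the alerts that actually need it, which is how one responder handles hundreds per shift instead of ten.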

The double standard

Here’s what gets me. The same executives who approved the $200/month AI coding tools for engineering won’t approve the same spend for security. Engineering gets the force multiplier. Security gets another dashboard they don’t have time to look at.

Development teams use AI to ship faster. Security teams use spreadsheets to track vulnerabilities. One side is accelerating. The other is treading water.

And the attackers? They’re not waiting for budget approval. They’re not filling out procurement forms. They’re using the best available tools right now, today, against your systems.

What to actually do about it

  1. Give your security team the same AI tools as your engineering team. Not different, lesser tools. The same ones. If your engineers have top-tier AI coding assistants, your security engineers should too.
  2. Automate vulnerability scanning in CI/CD. Every commit, every PR. AI-powered scanning that catches issues before they merge, not after they’re in production.
  3. Run AI-powered red team exercises. Use AI to simulate realistic attacks against your own systems. See what a machine-speed attacker would find. Fix it before a real one does.
  4. Invest in AI-driven detection. Signature-based detection is dead. Behavioral analysis powered by models that understand your normal patterns is the baseline now.
  5. Train your security team to use AI tools. Not a workshop. Not a webinar. Real, hands-on, integrated-into-their-daily-work adoption. Same commitment you made for engineering.
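To ground point 4: behavioral detection means learning a baseline from your own traffic and flagging deviations, instead of matching known-bad signatures. A z-score over request rates stands in for the model in this sketch; real systems use far richer features, and the traffic numbers below are hypothetical.

```python
import statistics

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Learn (mean, stdev) of normal behavior from observed traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value: float, baseline: tuple[float, float], z: float = 3.0) -> bool:
    """Flag anything more than z standard deviations from the baseline."""
    mean, stdev = baseline
    return abs(value - mean) > z * stdev

# Hypothetical requests-per-minute from one service account.
normal_traffic = [118, 120, 119, 121, 122, 117, 120, 119]
baseline = build_baseline(normal_traffic)

print(is_anomalous(120, baseline))  # False: a typical minute
print(is_anomalous(950, baseline))  # True: a credential-stuffing burst
```

Note what this catches that signatures can't: the burst is anomalous even if every individual request is perfectly well-formed and matches no known attack pattern.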

The real risk isn’t AI. It’s complacency.

I’m not trying to be alarmist. The sky isn’t falling. But the ground has shifted, and a lot of security teams are standing on ground that moved without them noticing.

AI doesn’t change the fundamentals of security — understand your attack surface, reduce your vulnerabilities, detect and respond quickly. But it changes the speed and scale at which both offense and defense operate.

If only one side is using the new speed and scale, that side wins. Right now, in too many organizations, that side isn’t yours.

The teams not using AI aren’t just less productive. They’re vulnerable. And the clock is already running.