On April 29, NHS England issued directive SDLC-8: 850 public GitHub repositories across four NHS organizations must go private by May 11. The trigger was an AI model called Mythos, which generates functional exploit code at a 72.4% success rate. Among the repositories being locked down is the NHS.UK frontend library, which powers a site used by 43 million people a month.
The NHS won't be the last organization to make this call. As AI-assisted exploit development moves from research labs into active use, more businesses are going to look at their public-facing code and reach the same decision. Understanding why starts with understanding what Mythos actually does.
What Mythos actually does
Mythos isn't a scanner. It doesn't just match signatures against a CVE database. It reads code, reasons about it, and produces working payloads. The 72.4% success rate on functional exploit generation is the headline number, but the more revealing result is what it found along the way: thousands of previously unknown zero-day vulnerabilities, including a 17-year-old remote code execution flaw in FreeBSD that conventional tooling had missed entirely.
That detail changes the threat calculation. AI-assisted vulnerability research doesn't just find new bugs faster. It finds old ones that went undetected for years because no human analyst had the time or the specific angle to look. A codebase that was considered clean last year may not be clean to Mythos.
The compression of the timeline from discovery to weaponized exploit — from weeks to hours — is the actual shift. That's what's driving organizations to pull their code off public repositories.
Why businesses are responding this way
The logic is straightforward: if an AI can read your source code and produce a working exploit in hours, public repositories become a liability. Closing them doesn't eliminate the vulnerability — the code still runs in production — but it removes the most direct input to the analysis pipeline. It buys time and reduces the lowest-friction attack path.
For organizations managing large legacy codebases with years of accumulated technical debt, the calculus tips further. The NHS operates code across hundreds of systems, some of it old, some of it under-documented, none of it exhaustively audited for the kind of edge cases Mythos specializes in finding. The potential downside of a working exploit against NHS infrastructure is severe enough that locking down repositories looks like a reasonable risk response, even if it isn't a complete one.
The 2017 WannaCry attack cost the NHS an estimated £35 million and cancelled 13,500 appointments. That outcome happened because of unpatched systems, not public code. But the fear driving the current directive is the same fear: a fast-moving threat that outpaces the organization's ability to respond.
What this means for security posture
The NHS decision is a leading indicator. As Mythos-class capabilities move downstream — into red team tooling, into offensive actor workflows, into commoditized security platforms — the pressure on organizations to limit code exposure will grow. Regulated industries with large legacy codebases and slow patch cycles are particularly exposed: healthcare, financial services, government, energy, and critical infrastructure.
The immediate question for any security team is: what does your public exposure look like to an AI that can read and reason about code at this level? The answer for most organizations is that they don't fully know. The repositories are public. The audit history is incomplete. The legacy components predate modern threat modeling.
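Answering that question starts with an inventory. As a minimal sketch of what that triage might look like, the snippet below flags repositories whose metadata suggests elevated exposure. The `Repo` fields, the two-year audit threshold, and the ten-year "legacy" cutoff are all illustrative assumptions, not anything the NHS directive specifies.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Repo:
    name: str
    public: bool
    last_audit_year: Optional[int]  # None means never audited
    first_commit_year: int

def exposure_flags(repo: Repo, current_year: int = 2025) -> list[str]:
    """Flag conditions that make a repo attractive to AI-assisted analysis.

    Thresholds here (2-year audit staleness, 10-year legacy cutoff) are
    hypothetical defaults for illustration.
    """
    flags = []
    if repo.public:
        flags.append("public")
    if repo.last_audit_year is None:
        flags.append("never-audited")
    elif current_year - repo.last_audit_year > 2:
        flags.append("stale-audit")
    if current_year - repo.first_commit_year > 10:
        flags.append("legacy")
    return flags

# A public repo that has never been audited raises two flags:
# exposure_flags(Repo("frontend-lib", True, None, 2018))
# -> ["public", "never-audited"]
```

Even a crude pass like this over an organization's repository list turns "we don't fully know" into a ranked starting point.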
Mythos changes the cost of that uncertainty. Businesses are starting to price it in.
What to watch
The NHS has until May 11 to take the 850 repositories private. A petition at keepthingsopen.com has 633 signatories arguing against the move. That debate will continue. But the broader trend — AI-assisted exploit generation driving organizations toward code obscurity — is likely to accelerate regardless of how this specific incident resolves.
The organizations that get ahead of it won't just be the ones that hide their code. They'll be the ones that understand their exposure well enough to know which code actually needs protecting, patch the things that Mythos would find, and build detection into their pipelines before the tooling is widely available to adversaries. The window to do that proactively is open. It won't stay open indefinitely.
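Knowing "which code actually needs protecting" is ultimately a prioritization exercise. One way to sketch it: assign each exposure condition a weight and rank repositories by total score. The flag names and weights below are hypothetical, chosen only to show the shape of the ranking, not a vetted scoring model.

```python
# Hypothetical weights: how much each exposure condition contributes
# to triage priority. These values are illustrative, not calibrated.
FLAG_WEIGHTS = {
    "public": 3,         # source directly readable by AI-assisted tooling
    "never-audited": 2,  # no human review to have caught old bugs
    "stale-audit": 1,
    "legacy": 2,         # predates modern threat modeling
}

def triage_order(repos: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Rank repos by summed flag weight, highest exposure first.

    `repos` maps repo name -> list of exposure flag names.
    Ties break alphabetically so the ordering is deterministic.
    """
    scored = [
        (name, sum(FLAG_WEIGHTS.get(flag, 0) for flag in flags))
        for name, flags in repos.items()
    ]
    return sorted(scored, key=lambda item: (-item[1], item[0]))

# triage_order({"frontend": ["public", "legacy"], "batch-job": ["stale-audit"]})
# -> [("frontend", 5), ("batch-job", 1)]
```

The point isn't the specific numbers; it's that a ranked list tells a security team where to spend its limited patching and detection effort before Mythos-class tooling reaches adversaries.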