Many argue that open-source AI is essential for democracy. Many believe it is terrifying. Both are true, and the tension between them is one of the defining challenges of our time. The argument for open-source AI is compelling: transparency, accountability, distributed innovation, and preventing monopolistic control over the most powerful technology ever created. The argument against it is equally compelling: once you release a powerful AI model into the wild, you can’t take it back. And some of the things people do with unrestricted AI are genuinely horrifying. Let’s talk about the WiFi router case.
The Case That Changes Everything
Researchers recently demonstrated that AI models can be trained to track human movement through walls using nothing but WiFi signals. Not specialized equipment. Not expensive sensors. Just the WiFi router already sitting in your home. The technology analyzes the way WiFi signals bounce off human bodies—the subtle distortions in the electromagnetic field as you move through your living room, your bedroom, your bathroom. With enough training data and a sufficiently capable AI model, these patterns can be mapped to precise human positions. Through walls. Without any cameras. Without your knowledge or consent.
Now imagine that capability in an open-source model that anyone can download. A stalker doesn’t need to install hidden cameras anymore. They just need access to your WiFi network—or a network close enough to reach you. A burglar can case houses to determine exactly when they’re empty. An abusive ex can track your movements inside your own home. This isn’t theoretical. The underlying research exists. The models exist. The only question is how accessible they become.
Why Open-Source Still Matters
Here’s where I’m supposed to pivot and argue for banning open-source AI. I can’t do that. Because the alternative—AI controlled exclusively by a handful of corporations and governments—is arguably worse.
The Anthropic Case Study
We’re watching the dangers of concentrated AI control play out in real-time. Anthropic, the AI safety company behind Claude, recently clashed with the Pentagon over the military’s demand to remove ethical guardrails from its AI. CEO Dario Amodei refused to cross two “red lines”:
- No fully autonomous weapons without human oversight
- No mass domestic surveillance on American citizens
The government’s response? They designated Anthropic a “supply-chain risk to national security”—a designation typically reserved for foreign adversaries like Huawei. They threatened to invoke the Defense Production Act to seize Anthropic’s source code. This is what happens when AI is controlled by closed organizations. The government can pressure, coerce, or simply take what it wants. And if the AI developers resist? They get crushed.
The Democratic Argument
Open-source AI distributes power. It prevents any single entity—corporate or governmental—from monopolizing the technology while enabling researchers worldwide to study AI systems, identify flaws, and build improvements. It lets smaller companies and developing nations participate in the AI revolution rather than being locked out by licensing fees and export controls. Security researcher Bruce Schneier has argued that open systems are ultimately more secure because vulnerabilities can be identified and fixed by the broader community. Closed systems hide vulnerabilities that adversaries eventually discover anyway—except then there’s no community to help patch them. The experts tracking AI in elections note that open-source detection tools are essential for identifying deepfakes and synthetic media. If only closed AI companies can build detectors, we’re entirely dependent on their willingness to deploy them.
The Terror of Unrestricted Access
And yet. Every argument for open-source AI must contend with what happens when genuinely dangerous capabilities become universally accessible.
Beyond WiFi Tracking
The WiFi tracking case is just one example. Consider what open-source AI models can already do:
- Generate convincing deepfakes of anyone with enough training photos
- Clone voices from just a few seconds of audio
- Write persuasive disinformation at scale
- Identify vulnerabilities in software systems
- Generate synthetic biology sequences with unknown properties
Each of these capabilities has legitimate uses. Each can also cause catastrophic harm in the wrong hands. The deepfake threat to elections is well-documented. In the 2025 Irish presidential election, AI-generated fake news bulletins nearly disrupted the vote. Romania saw deepfake videos of presidential candidates promoting financial scams. Germany faced fake MI6 announcements about bomb threats at polling stations. Open-source AI models power these attacks. And once released, they can’t be recalled.
The Asymmetry Problem
Here’s what makes open-source AI different from open-source software: the asymmetry between attack and defense. If someone releases an open-source exploit for a software vulnerability, defenders can patch it. The same fix works for everyone running that software. Defense scales as effectively as attack. AI doesn’t work that way. A deepfake video takes seconds to create and hours to debunk. A disinformation campaign can reach millions before fact-checkers even identify it. Detection tools require constant updating as generation techniques evolve—and the generators have structural advantages because they only need to fool humans, while detectors need to be perfect. Open-source access means attackers and defenders get the same tools. But attackers need to succeed once; defenders need to succeed every time.
> THIS IS SCARY!!
> Someone just open-sourced software that can track you through walls using only WiFi.
> It shows your exact body position in real time.
> No cameras. No devices. No sensors.
> Just your home router.
> It’s completely open source.

— CG (@cgtwts) March 8, 2026

> Gemini 3 Deep Think generated a real-time 3D WiFi radar that maps every network around you as glowing nodes in a Matrix-style space — in one shot. It used Pearson correlation to infer which APs are physically close, since RSSI alone isn't enough.

— BijanBowen (@Ominousind) February 13, 2026
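The correlation trick mentioned in the second post is easy to illustrate: as a receiver moves around, its signal strength to two physically nearby access points tends to rise and fall together, while a distant one varies independently. A toy version with simulated RSSI logs (all values invented):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 200  # RSSI samples collected as a device moves around

# Simulated RSSI traces (dBm) for three access points. ap_a and ap_b
# share a common distance-driven component (physically close together);
# ap_c follows its own independent path (elsewhere in the building).
walk = np.cumsum(rng.normal(size=t))            # random walk ~ movement
ap_a = -50 + 2.0 * walk + rng.normal(size=t)
ap_b = -55 + 2.0 * walk + rng.normal(size=t)
ap_c = -60 + 2.0 * np.cumsum(rng.normal(size=t)) + rng.normal(size=t)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

print(f"A-B: {pearson(ap_a, ap_b):+.2f}")  # near +1: likely co-located
print(f"A-C: {pearson(ap_a, ap_c):+.2f}")  # lower: probably not
```

Nothing here is exotic: it is a one-line statistic over logs any laptop can passively collect, which is exactly why "the model is the hard part" no longer holds once the model is public.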
The False Binary
The debate is usually framed as open-source versus closed-source. This is the wrong framing. The real question is: what capabilities should be freely accessible, and what guardrails can we build that don’t require centralized control?
Structured Transparency
Some researchers advocate for “structured transparency”—releasing AI models with certain capabilities restricted or removed, while keeping the core architecture open for study and improvement. This is imperfect. Determined actors can often restore removed capabilities. But it raises the barrier to misuse, which matters. Not every threat actor is a nation-state with unlimited resources. Many are opportunists who’ll move on if a tool isn’t immediately weaponizable.
Compute Governance
Another approach focuses on compute rather than code. Training powerful AI models requires massive computational resources. Even if the final model is open-source, the ability to train modified versions remains limited to well-resourced actors. Governments could regulate compute access—requiring registration, monitoring, or licensing for clusters above certain thresholds. This preserves the benefits of open models for research and deployment while limiting who can create new capabilities.
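To make "thresholds" concrete: training compute for dense transformers is commonly approximated as about 6 × parameters × training tokens FLOPs, and proposed rules key off that number (the 2023 U.S. executive order on AI, for instance, set a 10^26-FLOP reporting threshold). A back-of-envelope check, with purely illustrative model sizes:

```python
# Back-of-envelope training-compute check against a regulatory threshold.
# Uses the standard ~6 * N * D FLOPs approximation for dense transformers;
# the model/token counts below are illustrative, not any vendor's specs.
THRESHOLD_FLOPS = 1e26   # reporting threshold from the 2023 U.S. executive order

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

models = {
    "7B on 2T tokens":    training_flops(7e9,    2e12),
    "70B on 15T tokens":  training_flops(70e9,  15e12),
    "1.8T on 30T tokens": training_flops(1.8e12, 30e12),
}

for name, flops in models.items():
    flag = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{name}: {flops:.1e} FLOPs ({flag} threshold)")
```

Note what the arithmetic shows: the first two runs land well under the threshold, so compute governance at this level only constrains frontier-scale training, leaving open research and fine-tuning untouched.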
International Coordination
The hardest problem is coordination. AI development is global. If the U.S. restricts open-source releases, development simply moves elsewhere. China, Russia, and other nations face no such constraints—and may actively weaponize the technology. International cooperation on AI knowledge sharing is emerging as a proposed solution. Electoral regulators worldwide are beginning to share examples of AI-generated disinformation. The same infrastructure could support broader coordination on AI risks and norms. But let’s be honest: international AI governance is nascent at best. We can’t count on it to solve problems that exist today.
Living with the Tension
So where does this leave us? I keep coming back to the WiFi tracking example because it crystallizes the dilemma. The research is genuinely valuable—understanding how AI can extract information from ambient signals has applications in healthcare, accessibility, and security. Publishing it advances human knowledge. But the same research enables surveillance capabilities that would make Orwell’s telescreen look quaint. And once it’s out there, it’s out there.

Maybe the answer is that we have to accept some amount of danger as the price of openness. Closed AI controlled by corporations and governments is its own dystopia—one where power concentrates, accountability vanishes, and the public has no insight into systems that increasingly shape their lives. Openness means some bad actors get access to powerful tools. Closedness means all power flows to whoever controls the closed systems. Neither outcome is good. We’re choosing between different failure modes.
No Turning Back
I don’t have a clean answer. Nobody does.
Open-source AI is good because it distributes power and enables scrutiny of increasingly consequential systems. Open-source AI is terrifying because it hands capabilities to anyone with an internet connection and sufficient motivation. The Anthropic case shows what happens when AI stays closed: governments eventually demand access on their terms, ethics be damned. The WiFi tracking case shows what happens when AI goes fully open: your home becomes transparent to anyone with the right model and enough determination.
Maybe the path forward is embracing the tension rather than resolving it. Supporting open research while advocating for compute governance. Celebrating transparent AI development while building robust detection infrastructure. Accepting that we’re navigating between bad options rather than toward a good one. Because the alternative—pretending either extreme is safe—is the most dangerous position of all.

The open-source AI dilemma doesn’t have a solution. It has trade-offs. The question is whether we make those trade-offs consciously, democratically, and with clear eyes about what we’re choosing to accept. Right now, we’re not. And that’s what should terrify us most of all.


