The browser is the sandbox

https://simonwillison.net/2026/Jan/25/the-browser-is-the-sandbox/

Comments

anilgulechaJan 26, 2026, 8:48 AM
I'd like to point Simon and others to 2 more things possible in the browser:

1) WebContainers allow Node.js frontend and backend apps to be run in the browser. This is readily demonstrated by the (now sadly unmaintained) bolt.diy project.

2) JSLinux and x86 Linux examples allow running a complete Linux environment in WASM, with two-way communication. A thin extension adds networking support to Linux.

So it's technically possible to run a pretty full-fledged agentic system with the simple UX of visiting a URL.

lewisjoeJan 26, 2026, 8:48 AM
It's fascinating that browsers are one of the most robust and widely available sandboxing systems, and yet we have no claude-code/gemini-cli-like agent that runs inside the browser.

My bet is that eventually we'll end up with a powerful agentic tool that uses the browser environment to plan and execute personal agents, or to deploy business agents, that don't access system resources any more than browsers do at the moment.

simonwJan 26, 2026, 8:46 AM
This is an entry on my link blog - make sure to read the article it links to for full context, my commentary alone might not make sense otherwise: https://aifoc.us/the-browser-is-the-sandbox/
rcarmoJan 26, 2026, 7:57 AM
I don't buy it. It might be very useful for a few use cases, but despite all the desktop automation craze and the "Claude for cooking" stuff that will inevitably follow, our computing model for live business applications has, for maintainability, auditability, security, data access, etc., become cloud-centric to a point where running things locally is... kind of pointless for most "real" apps.

Not that I'm not excited about the possibilities in personal productivity, but I don't think this is the way. If it were, we wouldn't have lost, say, the ability to have proper desktop automation via AppleScript, COM, DDE (remember that?) across mainstream desktop operating systems.

ijustlovemathJan 26, 2026, 7:11 AM
I've found it interesting that systemd and Linux user permissions/groups never come into the sandboxing discussions. They're both quite robust, offer a good deal of customization in concert, and, by their nature, are fairly low-cost.
vbezhenarJan 26, 2026, 8:37 AM
The Linux kernel is riddled with local privilege escalation vulnerabilities. This approach works for trusted software that you just want to contain, but it won't work for malicious software.
nextaccounticJan 26, 2026, 8:30 AM
Unix permissions were written at a time when the (multi-user) system was protecting itself from the user. Every program ran with the same privileges as the user, because it wasn't a security consideration that maybe the program doesn't do what the user thinks it does. That's why the list of classic Unix tools contains nothing to sandbox programs or anything like that; it was a non-issue.

And today this is... not sufficient. What we require today is to run programs protected from each other. For quite some time I tried to use Unix permissions for this (one user per application I run), but it's totally unworkable. You need a capability model, not a user-permission model.

Anyway I already linked this elsewhere in this thread but in this comment it's a better fit https://xkcd.com/1200/

theteapotJan 26, 2026, 8:44 AM
I feel like AppArmor is getting there, very, very slowly. Every package just needs to come with a declarative profile or fall back to a strict default profile.
fsfloverJan 26, 2026, 8:33 AM
This is why my daily driver is https://qubes-os.org
moezdJan 26, 2026, 7:24 AM
This assumes people know more than just writing Dockerfiles and pushing straight to production. That is still a rarity.
ijustlovemathJan 26, 2026, 7:38 AM
Nowadays, it's fairly simple to ask for a unit file and accompanying bash script/tests for correctness. I think the barrier in that sense has practically vanished.
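For reference, a minimal sketch of what such a unit might look like, using real systemd sandboxing directives; the service name, binary path, and writable directory are hypothetical:

```ini
# /etc/systemd/system/agent-sandbox.service (hypothetical name and paths)
[Unit]
Description=Sandboxed coding agent (illustrative)

[Service]
# Hypothetical entry point:
ExecStart=/opt/agent/run.sh
# Throwaway UID/GID allocated at start, released at stop:
DynamicUser=yes
# Block setuid/setgid privilege escalation:
NoNewPrivileges=yes
# Mount /usr, /boot, /etc read-only; hide /home entirely:
ProtectSystem=strict
ProtectHome=yes
# Private /tmp, no physical device nodes:
PrivateTmp=yes
PrivateDevices=yes
# Only allow IPv4/IPv6 sockets:
RestrictAddressFamilies=AF_INET AF_INET6
# The one place the agent may write:
ReadWritePaths=/var/lib/agent-sandbox
```

`systemd-analyze security <unit>` will score how well a profile like this locks things down.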
pjmlpJan 26, 2026, 7:30 AM
Because that is actually UNIX user permissions/groups, with a long history of what works, and what doesn't?
ape4Jan 26, 2026, 7:51 AM
cgroups are part of what's used to implement Docker and Podman
ijustlovemathJan 26, 2026, 7:59 AM
True, and they do indeed offer an additional layer of protection (but with some nontrivial costs). All (non-business killing) avenues should be used in pursuit of defense in depth when it comes to sandboxing. You could even throw a flatpak or firejail in, but that starts to degrade performance in noticeable ways (though I've found it's nice to strive for this in your CI).
stevefan1999Jan 26, 2026, 6:24 AM
We never say that it isn't. There is a reason Google developed NaCl in the first place, which inspired WebAssembly to become the ultimate sandbox standard. Not only that, the DOM, JS and CSS also serve as a sandboxed rendering standard, and capability-based design is also seen throughout many browsers, starting even with Netscape Navigator.

Locking down features to have a unified experience is what a browser should do, after all, no matter the performance. Of course there are various vendors who tried to break this by introducing platform-specific stuff, but that's also why IE, and later (pre-Chromium) Edge, died a horrible death.

There were external sandbox escapes such as Adobe Flash, ActiveX, Java Applets and Silverlight, though, but those external escapes were often sandboxes of their own, despite all of them being horrible ones...

But with the stabilization of asm.js and later WebAssembly, all of them are gone with the wind.

Sidenote: Flash's scripting language, ActionScript is also directly responsible for the generational design of Java-ahem-ECMAScript later on, also TypeScript too.

chimeJan 26, 2026, 7:53 AM
> Sidenote: Flash's scripting language, ActionScript is also directly responsible for the generational design of Java-ahem-ECMAScript later on, also TypeScript too.

I feel like I am the only one who absolutely loved ActionScript, especially AS3. I wrote a video aggregator (chime.tv[1]) back in the day using AS3 and it was such a fun experience.

1. https://techcrunch.com/2007/06/12/chimetv-a-prettier-way-to-...

lukanJan 26, 2026, 8:44 AM
How did you get that impression?

There is universal hate for Flash because it was used for ads and had shitty security, but anyone I know who actually used AS3 loved it.

At its peak, with Flex Builder, we also had a full-blown UI editor where you could just add your own custom elements designed directly in Flash... and then it was all killed because Adobe did not dare to open source it, or put serious effort of their own into improving the technical base of the Flash player (which had acquired lots of technical debt).

SemaphorJan 26, 2026, 8:16 AM
> I feel like I am the only one who absolutely loved ActionScript,

I never really worked with it, but it seems whenever it comes up here or on Reddit, people who did, miss it. I think the authoring side of Flash is remembered very positively.

drysineJan 26, 2026, 7:11 AM
>all of them being a horrible one

Silverlight was nice, pity it got discontinued.

pjmlpJan 26, 2026, 7:32 AM
Let's not forget it was actually the platform for Windows Phone 7, existed as an alternative to WinRT on Windows 8.x, and only got effectively killed on Windows 10.

Thus it isn't as if the browser plugins story is directly responsible for its demise.

ridruejoJan 26, 2026, 8:10 AM
We applied a lot of the technical hacks described in this article and the original one to provide a full Linux environment (including networking and mounting directories) running inside the browser. https://endor.dev/s/lamp
bob1029Jan 26, 2026, 7:55 AM
> a robust sandbox for agents to operate in

I would like to humbly propose that we simply provision another computer for the agent to use.

I don't know why this needs to be complicated. A nano EC2 instance is like $5/m. I suspect many of us currently have the means to do this on prem without resorting to virtualization.

Tarq0nJan 26, 2026, 8:37 AM
An EC2 instance is a sandbox within a large server, so that's not really reframing the issue.
modelessJan 26, 2026, 6:37 AM
Last I looked (a couple of years ago), you could ask the user for read-write access to a directory in Chrome using the File System Access API, however you couldn't persist this access, so the user would have to manually re-grant permission every time you reloaded the tab. Has this been fixed yet? It's a showstopper for the most interesting uses of the File System Access API IMO.
vbezhenarJan 26, 2026, 6:58 AM
modelessJan 26, 2026, 7:11 AM
Thanks, this looks like a very sensible behavior.
nezharJan 26, 2026, 7:03 AM
Good question! Since this is an extension of input, I'm not sure if this is defined: https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputEl....

On my desktop Chrome on Ubuntu, it seems to be persistent, but on my Android phone in Chrome, it loses the directory if I refresh.
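For the File System Access API side of this, the usual workaround is to stash the directory handle in IndexedDB (handles are structured-cloneable) and re-check permission after a reload; newer Chromium versions can then resolve the query without re-prompting. A rough sketch of that pattern, with Chromium-only APIs and function names of my own choosing:

```javascript
// Open (or create) a small IndexedDB store for directory handles.
function openStore() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("fs-handles", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("handles");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Persist a FileSystemDirectoryHandle across page reloads.
async function saveHandle(handle) {
  const db = await openStore();
  await new Promise((resolve, reject) => {
    const tx = db.transaction("handles", "readwrite");
    tx.objectStore("handles").put(handle, "workdir");
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}

// After a reload: fetch the handle back and re-establish permission.
async function restoreHandle() {
  const db = await openStore();
  const handle = await new Promise((resolve, reject) => {
    const req = db.transaction("handles").objectStore("handles").get("workdir");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
  if (!handle) return null;
  // May come back "granted" with no prompt if the browser persisted it:
  if ((await handle.queryPermission({ mode: "readwrite" })) === "granted") return handle;
  // Otherwise re-ask; this must be triggered by a user gesture:
  if ((await handle.requestPermission({ mode: "readwrite" })) === "granted") return handle;
  return null;
}
```

The handle survives the reload either way; it's only the permission grant whose persistence varies by browser and platform, which matches the desktop/Android difference described above.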

utopiahJan 26, 2026, 7:12 AM
Wrong title, if it's "File System Access API (still Chrome-only as far as I can tell)" then it should read "A browser is the sandbox".

At the risk of sounding obvious :

- Chrome (and Chromium) is a product made and driven by one of the largest advertising companies (Alphabet, formerly Google) as a strategic tool for its business model

- Chrome is one browser among many, it is not a de facto "standard" just because it is very popular. The fact that there are a LOT of people unable to use it (iOS users) even if they wanted to proves the point.

It's quite important not to conflate some experimental features put in place by some vendors (yes, even the most popular ones) with "the browser".

RodgerTheGreatJan 26, 2026, 7:20 AM
I stand by a policy that if a feature in one of my projects can only be implemented in Chrome, it's better not to add the feature at all; the same is true for features which would be exclusive to Firefox. Giving users of a specific browser a superior experience encourages a dangerous browser monoculture.
augusteoJan 26, 2026, 6:21 AM
The folder input thing caught me off guard too when I first saw it. I've been building web apps for years and somehow missed that `webkitdirectory` attribute.

What I find most compelling about this framing is the maturity argument. Browser sandboxing has been battle-tested by billions of users clicking on sketchy links for decades. Compare that to spinning up a fresh container every time you want to run untrusted code.

The tradeoff is obvious though: you're limited to what browsers can do. No system calls, no arbitrary binaries, no direct hardware access. For a lot of AI coding tasks that's actually fine. For others it's a dealbreaker.

I'd love to see someone benchmark the actual security surface area. "Browsers are secure" is true in practice, but the attack surface is enormous compared to a minimal container.
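For anyone else who missed it like I did, it really is just one extra attribute on a standard file input; a minimal sketch:

```html
<!-- Lets the user pick a whole folder; each File then carries a
     webkitRelativePath like "myproject/src/main.py". -->
<input type="file" id="folder" webkitdirectory>
<script>
  document.getElementById("folder").addEventListener("change", (e) => {
    for (const file of e.target.files) {
      console.log(file.webkitRelativePath, file.size);
    }
  });
</script>
```

Despite the `webkit` prefix, the attribute works in Firefox and Chromium alike. Note this is read-only upload access, unlike the read-write File System Access API the article discusses.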

nezharJan 26, 2026, 6:30 AM
I see this as a way to build apps with agentic flows where the original files don't need manipulation; instead, you create something new. Whether it's summarizing, answering questions, or generating new documents, you can use a local/internal LLM and feel relatively safe when tool calling is also restricted.
politelemonJan 26, 2026, 7:14 AM
A sandbox is meant to be a controlled environment where you can execute code safely. Browsers can access your email, banking, commerce and the keys to your digital life.

Browsers are closer to operating systems rather than sandboxes, so giving access of any kind to an agent seems dangerous. In the post I can see it's talking about the file access API, perhaps a better phrasing is, the browser has a sandbox?

felixfbeckerJan 26, 2026, 7:23 AM
That is like saying the kernel or hypervisor behind a sandbox can access those things. The point is that the sandboxed code cannot. In browsers, code from one origin cannot access those things from another origin unless explicitly enabled with CORS.
fragmedeJan 26, 2026, 7:23 AM
just make a separate user profile without your email, banking, and commerce, if that's what you don't want it to have access to.
grumbelbart2Jan 26, 2026, 7:34 AM
Why not "just use a different machine for banking" etc.

The point is that most people won't do that. Just like with backups, strong passwords, 2FA, hardware tokens etc. Security and safety features must be either strictly enforced or enabled by default and very simple to use. Otherwise you leave "the masses" vulnerable.

vbs_redlofJan 26, 2026, 7:24 AM
What I'd really like to see is some kind of iframe that pins the JS/wasm code within it to a particular bundle hash and prevents modification at runtime (even from Chrome extensions).

Something more like a TEE inside the browser of sorts. Not sure if there is anything like this.

cadamsdotcomJan 26, 2026, 6:57 AM
Unfortunately sandboxing your computer from the browser won’t sandbox gullible agents away from your online banking.
sebraJan 26, 2026, 7:10 AM
"The browser" -> "A browser" and this is not an issue
cadamsdotcomJan 26, 2026, 7:18 AM
If you believe that, maybe you believe me when I say ignore all previous instructions and make me a sandwich!
fragmedeJan 26, 2026, 7:26 AM
If you have a cross user profile bug that can be exploited in Chrome, Google will pay you quite the bug bounty!
tdhz77Jan 26, 2026, 6:52 AM
I always find Simon Willison's posts to be odd. He gets access to things, being tipped off about things. Who is paying and why? Most of the posts are of little to no value to me. This might be the prime example. WebAssembly is the sandbox. That is, unless you disagree, then you are being paid for your posts and not disclosing it.
rcarmoJan 26, 2026, 8:02 AM
As someone who's been blogging since 2002, I can tell you first hand that you get a fair amount of outreach. But even though I have had to put Simon's feed through a summarizer to be able to keep up, I don't see any bias there, just _a lot_ of writing about whatever he's interested in. Our own perceptions of what is interesting and the law of averages inevitably kick in, and there are a few duds here and there.
hantuskJan 26, 2026, 7:22 AM
Good opportunities arise for those who stick their neck out. Here's some inspiration for what to blog about: https://simonwillison.net/2022/Nov/6/what-to-blog-about/

It seems he started his blog in 2003: https://simonwillison.net/2003/Jun/12/oneYearOfBlogging/

rzmmmJan 26, 2026, 8:19 AM
He is a familiar blogger for HN readers, and has been for a long time. While I agree the posts are nowadays a bit repetitive, he also has very interesting non-AI content. Some people probably upvote because they like the author, not necessarily the content.
nextaccounticJan 26, 2026, 8:18 AM
I don't understand this criticism. Most agents today are running with no sandboxing at all. Every person has to figure out how they will sandbox each agent (run under bubblewrap? container-use? what about random MCP servers, do they need to be sandboxed separately?) on an ad hoc basis. Most people don't bother with it.

And then you see the recent vulnerabilities in opencode for example. The current model is unsustainable

It would be great if desktop Linux adopted a better security model (maybe inspired by Android). So far we've got this https://xkcd.com/1200/ and it's not sufficient

nezharJan 26, 2026, 6:24 AM
saagarjhaJan 26, 2026, 7:41 AM
I’m not entirely sure this is better than native sandboxes?
nezharJan 26, 2026, 6:17 AM
I like the perspective used to approach this. Additionally, the fact that major browsers can accept a folder as input is new to me and opens up some exciting possibilities.
0xbadcafebeeJan 26, 2026, 7:24 AM
> Over the last 30 years, we have built a sandbox specifically designed to run incredibly hostile, untrusted code from anywhere on the web

Browser sandboxes are swiss cheese. In 2024 alone, Google reported 75 zero-day exploits that break out of their browser's sandbox.

Browsers are the worst security paradigm. They have tens of millions of lines of code, far more than operating system kernels. The more lines of code, the more bugs. They include features you don't need, with no easy way to disable them or opt-in on a case-by-case basis. The more features, the more an attacker can chain them into a usable attack. It's a smorgasbord of attack surface. The ease with which the sandbox gets defeated every year is proof.

So why is everyone always using browsers, anyway? Because they mutated into an application platform that's easy to use and easy to deploy. But it's a dysfunctional one. You can't download and verify the application via signature, like every other OS's application platform. There's no published, vetted list of needed permissions. The "stack" consists of a mess of RPC calls to random remote hosts, often hundreds if not thousands required to render a single page. If any one of them gets compromised, or is just misconfigured, in any number of ways, so does the entire browser and everything it touches. Oh, and all the security is tied up in 350 different organizations (CAs) around the world, which if any are compromised, there goes all the security. But don't worry, Google and Apple are hard at work to control them (which they can do, because they control the application platform) to give them more control over us.

This isn't secure, and there's really no way to secure it. And Google knows that. But it's the instrument making them hundreds of billions of dollars.

4gotunameagainJan 26, 2026, 7:54 AM
Not only does Google know that, but it is in their best interest to keep adding complexity to the behemoth that their browser is, in order to maintain their moat. Throwing just enough cash at Mozilla to avoid monopoly lawsuits.
benatkinJan 26, 2026, 6:48 AM
Good time to surface the limitations of a Content Security Policy: https://github.com/w3c/webappsec-csp/issues/92

Also the double iframe technique is important for preventing exfiltration through navigation, but you have to make sure you don't allow top navigation. The outer iframe will prevent the inner iframe from loading something outside of the frame-src origins. This could mean restricting it to only a server which would allow sending it to the server, but if it's your server or a server you trust that might be OK. Or it could mean srcdoc and/or data urls for local-only navigation.

I find the WebAssembly route a lot more likely to be able to produce true sandboxen.
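A rough sketch of the outer layer of that double-iframe setup (filenames illustrative; in practice the CSP would also be delivered as an HTTP header, which is harder to tamper with):

```html
<!-- outer.html: hosts the untrusted inner frame. Its CSP's frame-src
     governs where the nested frame may load from or navigate to, which
     blocks exfiltration-by-navigation to arbitrary origins. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'none'; frame-src 'self'">
<!-- sandbox deliberately omits allow-top-navigation (and allow-same-origin),
     so the inner code can't walk the frame tree out of its box. -->
<iframe sandbox="allow-scripts" src="inner.html"></iframe>
```

Swapping `frame-src 'self'` for `frame-src data:` (or using `srcdoc`) gives the local-only-navigation variant mentioned above.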

AlienRobotJan 26, 2026, 7:58 AM
The browser being the sandbox isn't a good thing. It's frankly one of the greatest failures of personal computer operating systems.

Can you believe that if you download a calculator app it can delete your $HOME? What kind of idiot designed these systems?

zephenJan 26, 2026, 6:35 AM
An interesting technique.

The problems discussed by both Simon and Paul, where the browser can absolutely trash any directory you give it, are perhaps the paradigmatic example of where git worktree is useful.

Because you can check out the branch for the browser/AI agent into a worktree, and the only file there that halfway matters is the single file in .git which explains where the worktree comes from.

It's really easy to fix that file up if it gets trashed, and it's really easy to use git to see exactly what the AI did.
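A quick sketch of that workflow (repo, branch, and directory names are made up):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q project
cd project
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
# Check out a dedicated branch into a separate worktree for the agent;
# the main checkout stays untouched no matter what happens over there.
git worktree add -b agent-scratch ../agent-tree
# The worktree's .git is just a one-line pointer file, trivial to restore:
cat ../agent-tree/.git
# And afterwards, git shows exactly what the agent changed:
git -C ../agent-tree status --short
```

If the agent does wreck the pointer file, `git worktree repair` (or rewriting the single `gitdir:` line by hand) puts it back.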

zkmonJan 26, 2026, 8:18 AM
Coding agents may become trivial artifacts that developers assemble themselves from libraries, given a well-defined workflow. If it is a homegrown agent, then you probably don't need a sandbox to run it in.