It lets you define DNS-based rules that a local daemon resolves to IPs, which are then pushed to the eBPF filter to allow traffic. This way we can still support DNS-defined rules while preventing the workload from contacting arbitrary IPs.
There's also no network performance penalty, since it's just DNS lookups and eBPF filters referencing memory.
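A hypothetical sketch of the daemon side, with the eBPF allowlist map stood in for by a plain Python set (`resolve_allowlist` and `egress_allowed` are illustrative names, not the project's API):

```python
import socket

def resolve_allowlist(domains, resolver=None):
    """Resolve each DNS-based rule to its current set of IPs.
    `resolver` is injectable for testing; defaults to a real DNS lookup.
    In the real system the result would be written into a BPF map."""
    if resolver is None:
        resolver = lambda d: [info[4][0] for info in socket.getaddrinfo(d, None)]
    allowed = set()
    for domain in domains:
        allowed.update(resolver(domain))
    return allowed

def egress_allowed(dst_ip, allowed_ips):
    # What the eBPF program effectively does per packet: an O(1) map lookup.
    return dst_ip in allowed_ips
```

The point is that DNS resolution happens once in the daemon, not per packet; the per-packet cost is only the map lookup.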
It also means you don't have to tamper with the base image, which the agent could otherwise manipulate to remove rules (unless you drop root privileges, perhaps).
It automatically manages the lifecycle of eBPF filters on cgroups and interfaces, so it works well for both containers and micro VMs (like Firecracker).
You implement a control plane, much like Envoy xDS, through which you can manage the rules of each cgroup/interface. You can even manage DNS through the control plane to dynamically resolve records (which is helpful because a normal DNS server doesn't know which interface/cgroup a request is coming from).
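A minimal sketch of what such a control plane might look like, assuming rules are keyed by cgroup and pushed to subscribers the way xDS pushes config (the class and method names are assumptions for illustration):

```python
class ControlPlane:
    """Hypothetical control plane: holds per-cgroup allowlists and
    pushes updates down to the data plane, roughly like Envoy xDS."""

    def __init__(self):
        self.rules = {}        # cgroup id -> set of allowed domains
        self.subscribers = []  # callbacks standing in for the push channel

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def set_rules(self, cgroup, domains):
        self.rules[cgroup] = set(domains)
        for notify in self.subscribers:
            notify(cgroup, self.rules[cgroup])

# Usage: the data plane subscribes, then receives every rule update.
cp = ControlPlane()
updates = []
cp.subscribe(lambda cgroup, domains: updates.append((cgroup, sorted(domains))))
cp.set_rules("agent-42", ["pypi.org", "registry.npmjs.org"])
```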
We specifically use this to allow our agents to only contact S3, pip, apt, and npm.
Maybe some of you will find it useful.
Happy to answer any questions.
Fence wraps any command in a sandbox that blocks network by default and restricts filesystem writes. Useful for running semi-trusted code (package installs, build scripts, unfamiliar repos) with controlled side effects, or even just blocking tools that phone home.
> fence curl https://example.com # -> blocked
> fence -t code -- npm install # -> template with registries allowed
> fence -m -- npm install # -> monitor mode: see what gets blocked
One use case is running AI coding agents with fewer interactive permission prompts while reducing the risk of doing so:
> fence -t code -- claude --dangerously-skip-permissions
You can import existing Claude Code permissions with `fence import --claude`.
Fence uses OS-native sandboxing (macOS sandbox-exec, Linux bubblewrap) + local HTTP/SOCKS proxies for domain filtering.
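For illustration, the per-request check a filtering proxy might apply could look like this; the exact matching semantics (suffix matching for subdomains) are an assumption on my part, not necessarily Fence's actual rules:

```python
def domain_allowed(host, allowlist):
    """Return True if `host` matches an allowlist entry exactly or is a
    subdomain of one. Hypothetical sketch, not Fence's real matcher."""
    host = host.lower().rstrip(".")
    for pattern in allowlist:
        if host == pattern or host.endswith("." + pattern):
            return True
    return False
```

Note that suffix matching on `"." + pattern` avoids the classic bug where `evil-npmjs.org` matches an `npmjs.org` rule.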
Why I built this: I work on Tusk Drift, a system to record and replay real traffic as API tests (https://github.com/Use-Tusk/tusk-drift-cli). I needed a way to sandbox the service under test during replays, blocking localhost outbound connections (Postgres, Redis) so the app is forced to use mocks instead of real services. I quickly realized this could be a general-purpose tool that would also be useful as a permission manager across CLI agents.
Limitations: Not strong containment against malware. Proxy-based filtering requires programs to respect `HTTP_PROXY`.
Curious if others have run into similar needs, and happy to answer any questions!
The idea started from a pretty simple observation: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just “type, wait, reply”.
Beni is basically:
A Live2D avatar that animates during the call (expressions + motion driven by the conversation)
Real-time voice conversation (streaming response, not “wait 10 seconds then speak”)
Long-term memory so the character can keep context across sessions
The hardest part wasn’t generating text; it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up, or it feels broken immediately. I ended up spending more time on state management, latency, and buffering than on prompts.
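One way to picture the synchronization problem: gate animation events on the audio playback position rather than on wall-clock time. A minimal sketch under that assumption (the class and event names are made up for illustration):

```python
import heapq

class SyncedPlayback:
    """Release animation events only once the audio clock reaches their
    timestamp, so mouth movement lines up with the sound actually played."""

    def __init__(self):
        self.audio_pos = 0.0  # seconds of audio actually played so far
        self.pending = []     # (timestamp, event) min-heap

    def schedule(self, timestamp, event):
        heapq.heappush(self.pending, (timestamp, event))

    def on_audio_played(self, seconds):
        """Advance the clock as an audio chunk finishes playing and return
        every animation event whose timestamp has now been reached."""
        self.audio_pos += seconds
        released = []
        while self.pending and self.pending[0][0] <= self.audio_pos:
            released.append(heapq.heappop(self.pending)[1])
        return released
```

Driving animation off the audio clock means that if playback stalls (network jitter, buffering), the avatar pauses with it instead of drifting ahead.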
Some implementation details (happy to share more if anyone’s curious):
Browser-based real-time calling, with audio streaming and client-side playback control
Live2D rendering on the front end, with animation hooks tied to speech / state
A memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity
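The memory layer described above could be sketched roughly like this; a minimal sketch with assumed names, not the actual implementation:

```python
class MemoryStore:
    """Hypothetical memory layer: lightweight user facts plus per-session
    summaries, merged into the context for the next conversation."""

    def __init__(self):
        self.facts = {}      # key -> value, e.g. "name" -> "Sam"
        self.summaries = []  # one short summary per past session

    def remember_fact(self, key, value):
        self.facts[key] = value

    def end_session(self, summary):
        self.summaries.append(summary)

    def context(self, max_summaries=3):
        # Keep the prompt small: all facts, but only recent summaries.
        lines = [f"{k}: {v}" for k, v in self.facts.items()]
        lines += self.summaries[-max_summaries:]
        return "\n".join(lines)
```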
Current limitation: sign-in is required today (to persist memory and prevent abuse). I’m adding a guest mode soon for faster try-out and working on mobile view now.
What I’d love feedback on:
Does the “real-time call” loop feel responsive enough, or still too laggy?
Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser?
Thanks, and I’ll be around in the comments.