Most LLMs are "Black Boxes". I wanted to build a "Glass Box" where every output traces back to a certified source.
The Architecture:
The Kernel: Instead of vectors, state is represented as Integers. We use the Fundamental Theorem of Arithmetic (Unique Prime Factorization) to encode concepts. Example: If Time = 3 and Space = 5, then Spacetime = 15. This allows for reversible "time-travel debugging" by just dividing integers.

The Hypervisor (Bija): It doesn't just run cycles; it runs "Frequency Modulations". The kernel itself is coded in specific Prime Frequencies (Hz). Execution is effectively a "Resonance" state between the Instruction and the Data, rather than a fetch-decode-execute pipeline.

The Data (< 2MB): We compressed the entire Constitution of India, the IPC, and an Indo-European Etymology Dictionary into a <2MB bundle using a custom schema-based compression (Pingala).

Why?

Sovereignty: The logic and data are locally owned. No API calls.

Green AI: It runs on my MacBook's CPU with negligible heat/power.

Vedic Logic: It implements Panini's Grammar rules (Ashtadhyayi) as a graph traversal algorithm rather than just statistical attention.

It's definitely experimental, but it questions the "Scale is All You Need" dogma. Would love feedback on the reversible prime state machine logic.
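To make the reversible prime state machine concrete, here's a toy Python sketch of just the arithmetic (not the actual kernel code; the concept-to-prime table below is a placeholder):

```python
from math import prod

# Toy concept-to-prime table; the real Meru OS encoding lives in the repo,
# these three entries are only placeholders for illustration.
PRIMES = {"time": 3, "space": 5, "mass": 7}

def compose(*concepts):
    """Combine concepts into a single integer state by multiplying their primes."""
    return prod(PRIMES[c] for c in concepts)

def remove(state, concept):
    """Reversible step: divide a factor back out to 'rewind' the state."""
    p = PRIMES[concept]
    assert state % p == 0, f"{concept} is not present in this state"
    return state // p

def contains(state):
    """Read the state back: unique factorization means no ambiguity."""
    return [c for c, p in PRIMES.items() if state % p == 0]

spacetime = compose("time", "space")     # 3 * 5 = 15
print(spacetime, contains(spacetime))    # 15 ['time', 'space']
print(remove(spacetime, "time"))         # 5 -> only 'space' remains
```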
Links:
Code: https://github.com/akulasairohit/meru-os
Live Demo: https://huggingface.co/spaces/akulasairohit/panini-demo
The Manifesto: https://www.linkedin.com/pulse/introducing-meru-os-worlds-fi...
Thanks, Rohit
The idea came from my own struggle with phone addiction. I wanted to read Quran daily but kept getting distracted. So I built this for myself, then shared it.
Some stats after 2 months:
- 123K+ users
- 64.9% returning user rate
- 31M events tracked
Tech stack:
- React Native
- Firebase (Auth, Firestore, Analytics, Cloud Messaging)
- RevenueCat for subscriptions
- iOS Screen Time API + Android UsageStats
App Store: https://apps.apple.com/app/quran-unlock/id6754449406
Play Store: https://play.google.com/store/apps/details?id=com.app.quranu...
Would love feedback from the HN community!
DDD made program execution feel visible: stacks, data, and control flow were all there at once. You could really “see” what the program was doing.
At the same time, it’s clearly a product of a different era:
– single-process
– mostly synchronous code
– no real notion of concurrency or async
– dated UI and interaction model
Today we debug very different systems: multithreaded code, async runtimes, long-running services, distributed components.
Yet most debuggers still feel conceptually close to GDB + stepping, just wrapped in a nicer UI.
I’m curious how others think about this:
– what ideas from DDD (or similar old tools) are still valuable?
– what would a “modern DDD” need to handle today’s software?
– do you think interactive debugging is still the right abstraction at all?
I’m asking mostly from a design perspective — I’ve been experimenting with some debugger ideas myself, but I’m much more interested in hearing how experienced engineers see this problem today.
The problem: AI tools can generate functional UI, but it often lacks the polish of professionally designed systems. Instead of generating a modal from patterns, why not fetch the actual UntitledUI modal?
How it works:
- AI says "add a settings modal"
- MCP fetches the real component + all base dependencies (buttons, inputs, etc.)
- Files are placed in your project with correct imports

Tools available:
- search_components: Find by name/description
- get_component_with_deps: Fetch with all dependencies
- get_example: Fetch complete page templates (dashboards, landing pages)
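For a feel of the flow, here's a minimal client sketch using the MCP Python SDK; the server command comes from the npm listing below, the tool names from the list above, and the result shapes are assumptions rather than the server's documented schema:

```python
# Minimal sketch with the MCP Python SDK (pip install mcp). Tool names come
# from the list above; result shapes are assumptions -- check the repo for
# the real schema.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(command="npx", args=["untitledui-mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect the three tools above
            result = await session.call_tool(
                "search_components", {"query": "settings modal"}
            )
            print(result.content)

asyncio.run(main())
```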
Requires UntitledUI Pro license for premium components. Base components are free.
GitHub: https://github.com/sbilde/untitledui-mcp
npm: npx untitledui-mcp
Would love feedback on the DX and any edge cases I've missed.
Is this something a recruiter might be able to do, as some kind of additional background check, before, during, or after the hiring process?
My gut feeling is that it wouldn't be legal in most places, but do some of them do it anyway?
Have any of you been part of hiring decisions at those types of companies and seen it happen, or done it yourselves?
More details: It uses 200+ signals to calculate your actual odds of winning a deal, helping you spot risks before you walk into the room.
Why use it?
- Data-Driven: Analyzes your deal across 10 dimensions.
- 100% Private: Data lives in your browser, not my servers.
- 100% Free: No paywall.
Perfect for prep, in-session management, and debriefing.
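To give a flavor of what dimension-based scoring can look like under the hood, here's a purely illustrative Python sketch; zopamind's actual signals, dimensions, and weights aren't public, so every name and number below is a made-up placeholder:

```python
# Purely illustrative dimension-weighted deal scoring. The real tool uses
# 200+ signals across 10 dimensions; these five names and weights are
# placeholders, not its actual model.
DIMENSION_WEIGHTS = {
    "champion_strength": 0.20,
    "budget_confirmed": 0.25,
    "decision_process_mapped": 0.15,
    "competition_position": 0.20,
    "timeline_pressure": 0.20,
}

def win_probability(scores: dict) -> float:
    """Collapse per-dimension scores (0..1) into a single win-odds estimate."""
    total = sum(DIMENSION_WEIGHTS[d] * scores.get(d, 0.0) for d in DIMENSION_WEIGHTS)
    return round(total, 2)

print(win_probability({
    "champion_strength": 0.8,
    "budget_confirmed": 0.5,
    "decision_process_mapped": 0.3,
    "competition_position": 0.7,
    "timeline_pressure": 0.6,
}))  # 0.59
```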
Try it here: https://zopamind.com
Launching an AI coding assistant built specifically for VR/AR/XR development: https://vr.dev
The problem: Generic LLMs give confidently wrong answers about XR development. The APIs change constantly (Meta ships SDK updates monthly), training data is stale, and XR knowledge is sparse in general-purpose models. Developers end up with deprecated method signatures and patterns that broke two versions ago.
The solution: RAG over actual XR documentation, updated regularly.
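For anyone unfamiliar with the pattern, here's a bare-bones sketch of the retrieval step; vr.dev's real chunking, embedding, and ranking pipeline isn't described here, so keyword overlap stands in for a proper similarity search and the doc snippets are placeholders:

```python
# Bare-bones RAG retrieval sketch. The doc chunks, scoring, and prompt
# format are placeholders -- not vr.dev's actual pipeline.
DOC_CHUNKS = [
    {"source": "Meta XR SDK changelog", "text": "placeholder entry about a renamed API in the latest SDK"},
    {"source": "OpenXR specification", "text": "placeholder section on session creation requirements"},
]

def retrieve(question: str, k: int = 2):
    """Rank chunks by naive keyword overlap (stand-in for embedding similarity)."""
    q_terms = set(question.lower().split())
    return sorted(
        DOC_CHUNKS,
        key=lambda c: len(q_terms & set(c["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str) -> str:
    """Ground the model in retrieved, current docs instead of stale training data."""
    context = "\n".join(f"[{c['source']}] {c['text']}" for c in retrieve(question))
    return f"Answer using only the documentation below.\n{context}\n\nQ: {question}"

print(build_prompt("How do I create an OpenXR session?"))
```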
Free to use during beta. Looking for feedback from anyone building XR applications.
What's the XR development question that AI has never gotten right for you? Drop it in the comments or email chat@vr.dev and I’ll post some bake-off comparison shots.
What it doesn't do (yet): index your own codebase/docs, IDE plugin, MCP tool set. Those are on the roadmap.
Feedback welcome here or at chat@vr.dev — especially on answer quality, missing doc sources, or features that would make this useful for XR workflows.
"Zoom into this area smoothly" "Speed up the next 5 seconds by 2x" "Highlight this button for 3 seconds" "Remove the zoom at 10 seconds"
You can render videos locally for free (single-threaded on your CPU), or use cloud rendering for faster processing. I'm trying to figure out if this approach actually saves time for people or if it's just a novelty. My main questions:
- For folks who make tutorial/demo videos: what editing tasks eat up most of your time?
- What features would actually make you switch tools?
- Would you trust AI to correctly interpret your editing commands, or does that feel too unpredictable?
- What types of effects or edits would be most handy to control with natural language?
Curious to hear what would make this genuinely useful for your workflow vs just an interesting demo.