Hypura – A storage-tier-aware LLM inference scheduler for Apple Silicon

https://github.com/t8/hypura

Comments

simonwMar 24, 2026, 8:09 PM
Suggestion for the maintainers: the comparison table currently lists some pretty old models: Qwen 2.5 14B, Mixtral 8x7B, and Llama 3.3 70B.

A lot of people are reporting incredible results with the Qwen 3.5 MoE models on Apple hardware right now (streaming experts - see https://simonwillison.net/2026/Mar/24/streaming-experts/) - it would be great to get some of those models into that table.

Maybe the 1T-parameter Kimi K2.5 too, if you can get that to work; see https://twitter.com/seikixtc/status/2036246162936910322 and https://twitter.com/danpacary/status/2036480556045836603

a7om_comMar 26, 2026, 3:46 PM
The Qwen 3.5 MoE local performance numbers are striking, but the cloud pricing picture for the same models is equally interesting. On inference platforms right now, Qwen-class models run significantly cheaper than closed equivalents; open source has roughly an 81% pricing advantage on comparable platforms. The local-vs-cloud crossover math gets genuinely interesting for MoE models, because the sparse expert loading that makes Hypura useful locally is the same property that makes cloud inference cheaper per token. Worth knowing both sides of that equation when deciding where to run.
ImustaskforhelpMar 24, 2026, 8:29 PM
Simon, a little off-topic, but it seems that your website isn't working.

> An error occurred in the application and your page could not be served. If you are the application owner, check your logs for details. You can do this from the Heroku CLI with the command

I get this error when I go to simonwillison.net

Any random blog/link works for example though: https://simonwillison.net/2026/Mar/19/openai-acquiring-astra...

(I checked your website because I wanted to see if you had written something about trivy/litellm as well, I highly recommend checking out what has happened within litellm space if possible as I would love to read your thoughts on it)

Have a nice day simon!

Edit: now the website works, but I am not sure what had gone wrong previously (an issue on Heroku's end, maybe?).

Edit 2: now that the website is working, I can see that you have already made a post about it.

tatefMar 24, 2026, 9:29 PM
Thanks for sharing this! If you'd be interested in running the benchmark yourself with Hypura, I'd happily merge the results into our stats. Otherwise I'll add it to my todo list :)
abtinfMar 24, 2026, 9:18 PM
The lack of a token rate metric for the Kimi example is disappointing.
zozbot234Mar 24, 2026, 10:11 PM
The latter link says they get ~1.7 tok/s which is quite impressive for a near-SOTA local model running on ordinary hardware.
vanyalandMar 24, 2026, 6:41 PM
For a lot of local workloads, sub-1 tok/s is useless in foreground and perfectly acceptable in background. If the choice is “this crashes” vs “this finishes overnight,” that’s still a meaningful capability jump.
joelthelionMar 25, 2026, 11:52 AM
How much are you going to spend on electricity though? Is this really going to be more cost-effective than just using openrouter?
austinthetacoMar 25, 2026, 12:54 PM
There are many other reasons someone might want to run a model locally beyond cost savings: ownership of the data flow and use in locations without internet, to name a couple.
hadlockMar 26, 2026, 5:18 PM
If my options are run Opus 4.6 in the cloud for $200/mo or run Opus 4.6 locally for $275, I am absolutely going to self-host 100% of the time. Sending all that data to the cloud presents tremendous legal risk for companies. There are currently no retention rules about privately hosted AI.
vicchenaiMar 24, 2026, 5:34 PM
The practical question is whether the read pattern is sequential enough to actually saturate NVMe bandwidth, or if the attention-layer access pattern ends up being random enough to kill throughput. Sequential reads on a decent NVMe get you 5-7 GB/s; random reads drop to maybe 500 MB/s depending on queue depth.

For a 1T model you'd need to stream something like 2 TB of weights per forward pass at fp16. Even at peak sequential, that's 300+ seconds per token, which is... not great for interactive use, but maybe fine for batch inference where you don't care about latency.

Still a cool proof of concept, though. The gap between 'can run' and 'runs usefully' is where things get interesting.
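A quick sanity check of that arithmetic (all figures are the assumptions quoted above, not measurements):

```python
# All figures are the assumptions quoted above, not measurements.
params = 1e12            # 1T-parameter dense model
bytes_per_param = 2      # fp16
seq_bw = 6e9             # ~6 GB/s peak sequential NVMe read
rand_bw = 0.5e9          # ~500 MB/s random read

weights = params * bytes_per_param                     # ~2 TB streamed per token
print(f"sequential: {weights / seq_bw:.0f} s/token")   # ~333 s/token
print(f"random:     {weights / rand_bw:.0f} s/token")  # 4000 s/token
```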

p_ingMar 24, 2026, 6:27 PM
4K random read with a queue depth of 1 on an M1 Max is about 65MB/s.
tatefMar 24, 2026, 6:25 PM
Yes, definitely agree. It's more of a POC than a functional use case. However, for many smaller MoE models this method can actually be useful and capable of achieving multiple tokens/sec.
zozbot234Mar 24, 2026, 5:46 PM
> for a 1T model youd need to stream something like 2TB of weights per forward pass

Isn't this missing the point of MoE models completely? MoE inference is sparse: you only read a small fraction of the weights per layer. You still have the problem of each individual expert layer being quite small (a few MiB each, give or take), but those reads are large enough for the NVMe.
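Rough illustration of what sparsity buys; the active fraction here is a made-up round number, not any specific model's:

```python
# Hypothetical figures: 1T params at fp16, ~3% of params routed per token.
total_bytes = 2e12          # 1T params at fp16
active_fraction = 0.03      # e.g. ~30B active params routed per token
nvme_bw = 6e9               # ~6 GB/s peak sequential read

dense_read = total_bytes                    # stream everything: 2 TB/token
moe_read = total_bytes * active_fraction    # only routed experts: ~60 GB/token
print(moe_read / nvme_bw)                   # 10.0 s/token vs ~333 for dense
```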

visargaMar 24, 2026, 5:52 PM
But across a sequence you still have to load most of them.
marksullyMar 24, 2026, 4:42 PM
Where does "1T parameter model" come from? I can only see models with 70B params or less mentioned in the repo.
tatefMar 24, 2026, 6:27 PM
I'm referencing it as being possible; however, I didn't share benchmarks because, candidly, the performance would be so slow it would only be useful for very specific tasks over long time horizons. The more practical use cases are less flashy but capable of achieving multiple tokens/sec (i.e. smaller MoE models where not all experts need to be loaded in memory simultaneously).
causalMar 24, 2026, 4:49 PM
Yeah, the title comes from nowhere in the link. No doubt it's possible, but all that matters is speed, and we learn nothing about that here...
baqMar 24, 2026, 5:09 PM
Intel Optane rolling in its grave.
aitchnyuMar 24, 2026, 6:30 PM
Memristors are also missing from this AI hype, even though they were supposedly just around the corner 10 years back.
moffkalastMar 24, 2026, 5:36 PM
Wouldn't be Intel if they didn't quit halfway through on a good thing.

Still, couldn't one get a RAID 0 card with four drives to saturate an x16 link? That's already the max one could push through PCIe anyhow.

liuliuMar 24, 2026, 5:13 PM
Still have 4 brand new ones in my storage unit, just in case of moments like these.

Joke aside (I do have them, though!), I don't think Optane is that much use (not to mention mine is only 256 GiB). It's a useful crutch if you have legacy software that isn't designed to issue multiple reads/writes in parallel; if your software does, Optane really isn't faster than NVMe, especially these modern ones.

zozbot234Mar 24, 2026, 5:23 PM
It's not about being faster (except for small reads where latency dominates, which is actually relevant when reading a handful of expert layers immediately after routing); it's the wear-out resistance, which opens up the possibility of storing the KV cache (including the "linear" KV cache of recent Qwen models, which is not append-only as it was with the pure-attention model) and maybe even per-layer activations, though these have the least use given how ephemeral they are.
speedgooseMar 24, 2026, 5:29 PM
Is it too late for Intel to bring them back to life?
c0baltMar 24, 2026, 5:34 PM
Yes, their NAND division has been sold; it is now mostly under Solidigm. Maybe Solidigm could bring it back, but it seems unlikely given the previous commercial failure.
walterbellMar 24, 2026, 6:58 PM
Nvidia and SK Hynix are bringing HBF to market for $$.
0ptan3Mar 24, 2026, 5:31 PM
pmem
shubhamintechMar 24, 2026, 7:50 PM
The MoE point matters here, i.e. sparse activation means you're not reading all 2 TB per forward pass, but the access pattern flips from sequential to random, which is exactly the worst case for NVMe. I've been thinking about this a lot for agent inference workloads where you want consistent latency more than peak throughput.
InsanityMar 24, 2026, 4:51 PM
This is a pretty cool project! Essentially this is like using Swap memory to extend your RAM, but in a 'smart' way so you don't overload the NVMe unnecessarily.

I do wonder how the 'smarts' pan out in practice, because putting a ton of stress on your NVMe during generation is probably not the best choice for its longevity.

zozbot234Mar 24, 2026, 4:57 PM
This is not putting any stress or wear on the NVMe, it's a pure read workload.
tatefMar 24, 2026, 6:29 PM
Yes, exactly this.
embedding-shapeMar 24, 2026, 4:59 PM
> but in a 'smart' way so you don't overload the NVMe unnecessarily

"overloading NVMe"? What is that about? First time I've heard anything about it.

> because putting a ton of stress on your NVMe during generation

It really shouldn't "stress your NVMe"; something is severely wrong if that's happening. I've been hammering my SSDs forever, and while write operations "hurt" the longevity of the flash cells themselves, the controller interface really shouldn't be affected by this at all, unless I'm missing something here.

tatefMar 24, 2026, 6:30 PM
Hypura reads tensor weights from the GGUF file on NVMe into RAM/GPU memory pools, then compute happens entirely in RAM/GPU.

There is no writing to SSDs on inference with this architecture.

embedding-shapeMar 24, 2026, 7:29 PM
Even if there were a ton of writing, I'm not sure where NVMe even comes into the picture. Write durability is about the flash cells on SSDs, nothing to do with the interface; someone correct me if I'm wrong.
InsanityMar 24, 2026, 5:05 PM
I had assumed heat generation on the controller if it's continuously reading. But maybe it's not actually bad.
throwway120385Mar 24, 2026, 5:45 PM
Just pop a heatsink on it and call it good.
hrmtst93837Mar 24, 2026, 7:32 PM
People talk about "SSD endurance", but enough parallel I/O on M1/M2 can make the NVMe controller choke, with very weird latency spikes.
msbhogaviMar 24, 2026, 11:51 PM
"As much memory as possible" is right for model capacity but misses bandwidth. Apple Silicon has distinct tiers: M4 Pro at 273 GB/s, M4 Max at 546 GB/s, M4 Ultra at 819 GB/s. Bandwidth determines tok/s once the model fits in memory. An M4 Max gives you 2x the decode speed of an M4 Pro on the same model.

For what Hypura does, the Max is the sweet spot. 64GB loads a 70B at Q4 with room to spare, and double the bandwidth of the Pro means generation is actually usable instead of just technically possible.
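That bandwidth-to-decode-speed relationship can be sanity-checked with rough numbers (the Q4 bytes-per-param figure is an approximation):

```python
# tok/s ceiling ≈ memory bandwidth / bytes read per generated token.
# Bandwidth figures are the ones quoted above; Q4 size is approximate.
model_params = 70e9
bytes_per_token = model_params * 0.5    # ~0.5 bytes/param at Q4 ≈ 35 GB

for name, bw in [("M4 Pro", 273e9), ("M4 Max", 546e9), ("M4 Ultra", 819e9)]:
    print(f"{name}: ~{bw / bytes_per_token:.1f} tok/s ceiling")
```

Real decode speeds land below this ceiling, but the 2x ratio between tiers holds.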

dev_tools_labMar 26, 2026, 11:21 AM
Thanks for this project. Prioritizing MoE models and adding an intelligent NVMe cache could improve efficiency, especially on the M4 Max where bandwidth makes usage more realistic.
astrangeMar 24, 2026, 9:41 PM
> Consumer hardware (MacBook Pro, Mac Studio) ships with fast unified memory and NVMe storage, but limited capacity. A 32 GB M1 Max cannot naively load a 40 GB model — the OS will swap-thrash until the OOM killer intervenes.

macOS doesn't have an "OOM killer" in that sense. (It has an out of swap space killer but it's pretty weak.)

So what will happen is, either your memory wiring will fail, or else it will get really slow and panic.

zozbot234Mar 24, 2026, 4:47 PM
It will be interesting to compare this to https://news.ycombinator.com/item?id=47476422 and https://news.ycombinator.com/item?id=47490070 . Very similar design except that this is apparently using mmap, which according to the earlier experiment incurs significant overhead.
salynchnewMar 24, 2026, 5:09 PM
It was written by an LLM, so... yeah.
jeffybefffy519Mar 24, 2026, 5:30 PM
Except this isn't using heavily quantised versions of the model, thus reducing quality.
dev_tools_labMar 25, 2026, 10:06 AM
Nice work on the scheduler. Have you benchmarked parallel inference across multiple models? Running GPT, Claude and Gemini simultaneously on the same input is where latency becomes a real constraint.
zozbot234Mar 25, 2026, 11:41 AM
GPT-OSS exists but Claude and Gemini aren't available locally, lol.
dev_tools_labMar 25, 2026, 1:34 PM
True, Claude and Gemini aren’t local yet — I mostly meant running all available local models in parallel.

Even with just open-source LLMs, you can see interesting differences in flagged issues when cross-validating outputs.

dangoodmanUTMar 25, 2026, 3:02 PM
With unified memory and such strong OS/hardware integration, one would hope that swap could handle this task.
EnPissantMar 24, 2026, 5:28 PM
You do not provide any comparison to llama.cpp with mmap.

You do not explain how any kind of predictor can work for MoE experts.

You do not explain how prediction can even be useful. I can predict the layers used in a dense model (all of them are used in order), but that doesn't help me much. It's still bottlenecked on bandwidth (hint: MoE doesn't change this).

root_axisMar 24, 2026, 5:50 PM
Are there any 1T parameter open source models?
zozbot234Mar 24, 2026, 5:52 PM
Kimi 2.5?
ai-inquisitorMar 24, 2026, 6:22 PM
That model is "open weight", not open source. We have no idea what data Moonshot trained on.
airspressoMar 25, 2026, 8:22 AM
I think we lost that terminology war. Open source models mean open weight. There are only a couple examples of fully open source models with open data and code, and the labs are not incentivized to go that far.
root_axisMar 24, 2026, 6:00 PM
Thanks, TIL.
nullbyteMar 24, 2026, 5:18 PM
I am curious how the TPS compares vs default OS virtual memory paging
speedgooseMar 24, 2026, 5:34 PM
I wonder how many minutes per token on GLM 5.
ameliusMar 24, 2026, 5:32 PM
This is <1 tok/s for the 40GB model.

Come on, "Run" is not the right word. "Crawl" is.

Headlines like that are misleading.

feznyngMar 24, 2026, 6:33 PM
Could still be useful; maybe for overnight async workloads? Tell your agent research xyz at night and wake up to a report.
maleldilMar 24, 2026, 6:50 PM
Assuming 1 token per second and "overnight" being 12 hours, that's 43,200 tokens. I'm not sure what you can meaningfully achieve with that.
zozbot234Mar 24, 2026, 9:02 PM
Sure, but if long-term throughput is a real limitation there's plenty of ways to address that while still not needing to keep anywhere close to all model weights in RAM (which is still the conventional approach with MoE). So the gain of a smaller memory footprint is quite real.
smlacyMar 24, 2026, 5:59 PM
Yes, and with virtually zero context, which makes an enormous difference for TTFT on the MoE models.
monksyMar 24, 2026, 5:04 PM
There needs to be something like this from Ollama. At the moment Ollama has a lot of flaws that prevent it from getting great performance. (My understanding is better GPU/CPU splits, etc). But Ollama is the only way to host an LLM and have it switch out on demand. Sigh.
zozbot234Mar 24, 2026, 5:16 PM
Ollama has very substandard support for mmap at present, which hurts inference with larger models. There are some recent pull requests in flight that should help address this to at least some extent https://github.com/ollama/ollama/pull/14525 https://github.com/ollama/ollama/pull/14134 https://github.com/ollama/ollama/pull/14864 but progress seems to be stalling. Their support for recent Qwen models seems to also have some bespoke incompatibilities with llama.cpp, which doesn't help matters; it's difficult to test the same model with both.
rubiquityMar 24, 2026, 5:10 PM
llama.cpp and llama-swap do this better than Ollama and with far more control.
circularfoyersMar 24, 2026, 6:51 PM
Don't even need to use llama-swap anymore now that llama-server supports the same functionality.
rubiquityMar 25, 2026, 12:13 AM
I did not know that. Thanks for sharing!
anshulbasia27Mar 24, 2026, 5:25 PM
OS paging would be significantly worse here. The kernel's page fault handler is reactive — it doesn't know you're about to read layer 47's FFN weights, so it can't prefetch. You stall on every fault, wait for the 4KB/16KB page to load, then resume. With 80 layers of dense FFN streaming, that's thousands of cold faults per token.

What makes this approach faster is that the model's access pattern is completely deterministic during inference. You know exactly which tensors are needed next because transformer layers execute sequentially. So you can issue large sequential reads and prefetch the next layer while the current one is computing on Metal. The OS page cache can't do that — it has no concept of "layer N+1 comes after layer N."

For MoE it's even more stark. The OS would page in all 8 experts on the first token that routes to each one, then evict them under memory pressure with LRU, which has no idea that expert 3 fires 10x more often than expert 7. The neuron cache here is basically a domain-specific replacement policy.
zozbot234Mar 24, 2026, 5:26 PM
> The kernel's page fault handler is reactive — it doesn't know you're about to read layer 47's FFN weights, so it can't prefetch.

man 2 madvise
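For the curious, Python's mmap module exposes this directly (3.8+ on Linux/macOS); the file here is a small stand-in for the real weights:

```python
import mmap

# Create a stand-in file; in practice this would be the GGUF weights file.
with open("weights.bin", "wb") as f:
    f.write(b"\0" * mmap.PAGESIZE * 8)          # 8 pages of fake weights

with open("weights.bin", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    # Hint: we'll need this range soon, so start readahead now.
    mm.madvise(mmap.MADV_WILLNEED, 0, mmap.PAGESIZE * 4)
    # For genuinely random expert reads, disable readahead instead.
    mm.madvise(mmap.MADV_RANDOM)
    mm.close()
```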

astrangeMar 24, 2026, 9:40 PM
That works for readahead but it's not good for random access. readv, aio, dispatch_io are better there.
zozbot234Mar 24, 2026, 9:54 PM
This claim is a bit apples-and-oranges (no pun intended!). madvise is all about providing hints to the kernel to tune the page cache and readahead (including possibly disabling readahead altogether); it's not about performing reads into private memory buffers, which is actually where the options you mentioned fit in.
astrangeMar 25, 2026, 3:08 PM
Triggering reads is also how you get pages into the page cache, so it helps to know how to do it.
EnPissantMar 24, 2026, 5:30 PM
That assumes you have significant work to do between fetches (so you can prefetch while using the current data). With LLM decode you don't.
tatefMar 24, 2026, 4:04 PM
[flagged]
password4321Mar 24, 2026, 4:50 PM
Don't post generated/AI-edited comments. HN is for conversation between humans

https://news.ycombinator.com/item?id=47340079

tatefMar 24, 2026, 6:36 PM
Noted, thanks. I had LLM help with positioning this message, but I did the initial draft and the edits myself. Will keep that in mind for the future.
DennisPMar 24, 2026, 5:15 PM
That doesn't read like an AI-generated comment to me. He did mention he vibe-coded the project but that's not against the guidelines.
Retr0idMar 24, 2026, 5:26 PM
It's either written by an LLM, or written by someone who learned to write by reading LLM output
password4321Mar 24, 2026, 5:27 PM
Vibe-coded project is fine.

At least prompt your LLM to dodge the obvious tells when commenting!

Forgeties79Mar 24, 2026, 5:23 PM
gptzero says 99% chance it’s AI-generated

It certainly has a lot of telltale signs

Izikiel43Mar 24, 2026, 5:17 PM
> The core insight:

That's a telltale sign of ai written text.

causalMar 24, 2026, 4:50 PM
You need to change the title or actually include 1T parameter model content.
frikkMar 24, 2026, 4:46 PM
This is interesting work, thank you for sharing. What hardware would you buy today for experimenting? Seems like the new gen of macbook pros are pretty powerful?
tatefMar 24, 2026, 6:38 PM
Yes, definitely. I use an M1 Max with 32 GB of RAM daily, and from a performance standpoint it's about on par with the new base M5 Pro with 24 GB. You can check the benchmarks in the repo if you're interested in specific performance metrics, but investing in Apple hardware with as much memory as possible will generally get you furthest in this game.
WithinReasonMar 24, 2026, 4:55 PM
Have you ever generated access frequency statistics for the experts in these models, something like a histogram?
GracanaMar 24, 2026, 5:18 PM
ktransformers can do dynamic placement of experts and could presumably produce such a histogram, though currently its activation statistics are just a ".pt" file. https://github.com/kvcache-ai/ktransformers/blob/main/doc/en...

FWIW I never got it to work and did not dig into it much.
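If your runtime exposes per-token routing scores, the histogram is straightforward to tally. Everything below (the score format, the top-k of 2, the toy numbers) is illustrative:

```python
from collections import Counter

def expert_histogram(router_scores, top_k=2):
    """router_scores: per-token lists of routing scores, one score per expert."""
    counts = Counter()
    for scores in router_scores:
        ranked = sorted(range(len(scores)), key=scores.__getitem__)
        for e in ranked[-top_k:]:               # top-k experts for this token
            counts[e] += 1
    return counts

# Toy example: 4 tokens, 4 experts; expert 3 always scores highest.
tokens = [[0.1, 0.2, 0.0, 0.9], [0.0, 0.5, 0.1, 0.8],
          [0.3, 0.1, 0.2, 0.7], [0.2, 0.0, 0.4, 0.6]]
hist = expert_histogram(tokens)
print(hist)   # expert 3 fires for every token; the others split the rest
```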

lostmsuMar 24, 2026, 4:47 PM
Why would llama with --mmap crash?
zozbot234Mar 24, 2026, 4:58 PM
This doesn't surprise me all that much; mmap support gets little attention in general and interacts poorly with GPU-side inference. (And that's with it being the default; you don't even really need to specify it as a CLI option.) OP has raised a discussion with the llama.cpp folks (https://github.com/ggml-org/llama.cpp/discussions/20852) but there's been little interest so far.
lostmsuMar 25, 2026, 11:54 AM
But if mmap already works, why would there be any interest?

Besides, discussions are for users; he didn't open PRs or issues.

erikcwMar 24, 2026, 5:37 PM
Simon Willison wrote a good post about Dan Woods’ work on “Autoresearching Apple's "LLM in a Flash" to run Qwen 397B locally”.

[0] https://simonwillison.net/2026/Mar/18/llm-in-a-flash/