Three types of LLM workloads and how to serve them

https://modal.com/llm-almanac/workloads

Comments

rippeltippel · Jan 21, 2026, 10:26 PM
> Gallia est omnis divisor in partes tres.

OCD-driven fix: The correct Latin quote is "Gallia est omnis divisa in partes tres".

charles_irl · Jan 22, 2026, 3:14 AM
oof ty, willfix
ZsoltT · Jan 22, 2026, 2:30 AM
> we recommend using SGLang with excess tensor parallelism and EAGLE-3 speculative decoding on live edge Hopper/Blackwell GPUs accessed via low-overhead, prefix-aware HTTP proxies

lord

charles_irl · Jan 22, 2026, 3:46 AM
Sorry to lead with a bunch of jargon! Wanted to make it obvious that we'd give concrete recommendations instead of palaver.

The technical terms there are explained and diagrammed later, and the recommendations are derived from something close to first principles (e.g. roofline analysis).
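
For readers who haven't seen the roofline framing before, here's a rough back-of-the-envelope sketch (not from the article; the hardware numbers are approximate published H100 SXM specs, and KV-cache traffic and attention FLOPs are ignored). The point it illustrates: single-request decode streams every weight per token, so it sits far below the GPU's ridge point and is memory-bandwidth-bound until the batch gets large.

```python
# Rough roofline check for LLM decode, assuming ~989 TFLOP/s dense BF16
# compute and ~3.35 TB/s HBM bandwidth (approximate H100 SXM figures).
peak_flops = 989e12        # FLOP/s (assumed)
peak_bandwidth = 3.35e12   # bytes/s (assumed)
ridge_point = peak_flops / peak_bandwidth  # FLOP/byte needed to be compute-bound

def decode_intensity(batch_size: int, bytes_per_param: float = 2.0) -> float:
    # Each decode step reads every weight once (~2 bytes in BF16) regardless of
    # batch size, and does ~2 FLOPs per weight per request (one multiply-add),
    # so arithmetic intensity grows linearly with batch size.
    flops_per_param = 2.0 * batch_size
    return flops_per_param / bytes_per_param

for batch in (1, 8, 64, 512):
    intensity = decode_intensity(batch)
    bound = "compute-bound" if intensity >= ridge_point else "memory-bound"
    print(f"batch={batch:4d}  intensity={intensity:6.1f} FLOP/byte  "
          f"({bound}, ridge ~ {ridge_point:.0f})")
```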

omneityJan 22, 2026, 6:47 PM
Very cool insights, thanks for sharing!

Do you have benchmarks for the SGLang vs vLLM latency and throughput question? Not to challenge your point, but I’d like to reproduce these results and fiddle with the configs a bit, also on different models & hardware combos.

(happy modal user btw)
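
For anyone wanting to start reproducing that comparison themselves: this isn't the article's harness, just a minimal sketch assuming both servers are already running and expose the standard OpenAI-compatible /v1/completions route with usage stats in the response (both SGLang and vLLM do). The endpoint URLs and model name below are placeholders.

```python
# Minimal latency/throughput probe against two OpenAI-compatible servers.
# URLs, ports, and model name are placeholders for whatever you launched.
import time
import requests

ENDPOINTS = {
    "sglang": "http://localhost:30000/v1/completions",
    "vllm": "http://localhost:8000/v1/completions",
}
PROMPT = "Explain tensor parallelism in one paragraph."

def measure(name: str, url: str, n_requests: int = 8) -> None:
    latencies, tokens = [], 0
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        resp = requests.post(url, json={
            "model": "placeholder-model",  # assumed: whatever the server serves
            "prompt": PROMPT,
            "max_tokens": 128,
        })
        resp.raise_for_status()
        latencies.append(time.perf_counter() - t0)
        tokens += resp.json()["usage"]["completion_tokens"]
    elapsed = time.perf_counter() - start
    print(f"{name}: mean latency {sum(latencies) / len(latencies):.2f}s, "
          f"throughput {tokens / elapsed:.1f} tok/s")

for name, url in ENDPOINTS.items():
    measure(name, url)
```

Sequential requests like this only capture per-request latency; to see throughput under load you'd fire many concurrent clients (and stream responses if you care about time-to-first-token), then sweep batch size, model, and hardware from there.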