Review date: 2026-05-16
Reviewer: Zhongzhu Zhou
Paper: An Interpretable Latency Model for Speculative Decoding in LLM Serving
Authors: Linghao Kong, Megan Flynn, Michael Peng, Nir Shavit, Mark Kurtz, Alexandre Marques (MIT and Red Hat AI)
arXiv: 2605.15051v1, 2026-05-14
Status: Preprint. Experiments use GuideLLM 0.5.2 driving vLLM 0.13.0.
Short answer
This paper is the first one I have read that tries to answer a question I keep hitting in production speculative decoding (SD) post-mortems: why does my measured speedup look great at batch size one and then quietly disappear once the request rate climbs?
The literature so far has been algorithmic. Leviathan/Chen showed that SD works in isolation; EAGLE-1/2/3 and PARD made drafters stronger; Medusa and Lookahead Decoding traded the draft model for extra heads or no draft at all. All of these papers measure speedup at fixed batch size, usually one. As soon as you put a draft-verify cycle inside vLLM, the picture changes because batch size is no longer something you choose. It is something the scheduler produces from the arrival stream, and it interacts with KV-cache pressure, continuous batching, and chunked prefill in non-obvious ways.
Kong et al. propose a tiny analytical model that captures this. They observe that, in the stable pre-saturation regime, average request latency obeys a roofline-style decomposition T = T_0 + T_B · B, where T_0 is load-independent and T_B is the marginal cost per concurrent in-flight request B. Little's Law lets them eliminate the unobservable batch size by substituting B = λ · T, giving the closed form

T(λ) = T_0 / (1 - λ · T_B).    (1)
This is the only equation you actually need to internalize for the paper. The rest is bookkeeping: extend it to SD by writing T_0 and T_B as sums of prefill, verify, and draft costs weighted by the expected accepted-token count τ(α, γ), then add an MoE-aware correction that captures load-dependent expert coverage.
The empirical payoff is large. They sweep RPS, prefill length, decode length, acceptance rate α, draft length γ, and verifier/drafter sizes across the Qwen3 family, Llama-3.1-{8B,70B}, and gpt-oss-20b, on A100 and H100, and show:
- All non-MoE measurements collapse onto a single universal curve when latency is normalized by T_0 and load is rescaled by T_B.
- SD-induced speedup obeys Speedup(λ) = (1/r_0) · (1 - λ · T_B^SD) / (1 - λ · T_B^base), where r_0 = T_0^SD / T_0^base and r_B = T_B^SD / T_B^base. The sign of the speedup's slope with respect to load is determined entirely by whether r_B < 1 or r_B > 1.
- In real configurations r_B usually exceeds 1, which mechanistically explains the well-known "SD speedup erodes under load" phenomenon.
- The draft length γ that minimizes T_0 is consistently larger than the γ that minimizes T_B. A single γ tuned at batch size one is therefore generally wrong for high-throughput serving.
- MoE serving deviates from dense scaling at low load because few experts are activated, and the deviation closes as load grows. The expert-coverage correction lifts R² from 0.83–0.91 to 0.97–0.99 across MoE models.
What I like about this paper is that it is a modeling contribution, not a benchmark contribution. The model fits in one screen, runs cheaply (sweep nine RPS points per config, fit with SciPy), and gives operators a knob: estimate α and λ at deploy time, then read off the best γ from eqs. (2)–(3). That is much more useful than yet another "we measured X tok/s on Y hardware" table.
What I want to push back on is treated below in §6: the model is a mean-latency steady-state description. It deliberately drops everything that operators usually care about most—tail latency, preemption, bursty arrivals, queueing variance—and the framing risks giving readers a false sense of completeness. I think the right way to read this work is as a first-order serving model in the same sense that the roofline model is a first-order hardware model. It is correct in its regime and wrong outside it, and the boundary should be made louder.
1. Prerequisites
This section is for readers who have shipped LLM inference but have not internalized the queueing/throughput side of speculative decoding. I cover autoregressive decoding, prefill vs decode, continuous batching, the roofline decomposition, Little's Law, and the EAGLE-style draft-verify cycle.
1.1 Autoregressive decoding and prefill/decode asymmetry
A transformer decodes tokens one at a time: at step t the model conditions on tokens 1..t-1 and produces a distribution over token t, which is then sampled. Two execution phases differ markedly:
- Prefill: the full prompt of length P is processed in parallel in a single forward pass. The attention computation is O(P²), but the workload is compute-bound and GPU-friendly. Prefill cost roughly scales linearly with P per request at these prompt lengths, since the quadratic attention term is not yet dominant.
- Decode: each output token is produced sequentially. Each decode step costs roughly two FLOPs per model parameter but is memory-bandwidth-bound, because the full set of weights must be streamed from HBM to compute one token. This is the underutilization that motivates speculative decoding in the first place.
The asymmetry matters: prefill has high arithmetic intensity, decode has very low. Batching helps decode (it amortizes weight streaming over multiple sequences) but interacts with KV-cache memory pressure.
1.2 Continuous batching and chunked prefill
Naive batching waits for the slowest request to finish before processing the next batch. Modern servers do continuous batching (introduced by Orca, popularized by vLLM): each iteration the scheduler picks whatever decode steps are ready, mixes them with new arrivals, and runs one forward pass. The "batch size" you see in metrics is the effective batch, a random variable that the scheduler produces from the workload.
Chunked prefill splits a long prefill into chunks so a long prompt does not block the decode queue. Combined with continuous batching, it means batch size is no longer a control variable you tune; it emerges from the request stream. The paper's whole point is to deal with this fact head-on instead of pretending you can set batch size manually.
1.3 Roofline-style decomposition
Williams et al.'s roofline model decomposes kernel time into a fixed cost (kernel launch, weight loading) and a load-dependent cost (FLOPs or memory traffic per element). For inference servers this generalizes: per-request latency contains terms that do not depend on the concurrent batch size (e.g., loading weights once and reusing them across the batch becomes a load-independent overhead per request in the limit) and terms that scale with B (per-token compute and memory pressure). The paper writes this as

T = T_0 + T_B · B,

and asks SciPy to fit (T_0, T_B) from data.
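For concreteness, here is roughly what that two-parameter fit looks like in code. The RPS points and latencies below are invented placeholders, not the paper's data; only the functional form comes from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def latency_model(lam, t0, tb):
    # Closed-form mean latency T(lambda) = T0 / (1 - lambda * TB),
    # valid only below saturation (lambda * TB < 1).
    return t0 / (1.0 - lam * tb)

# Hypothetical sweep: nine RPS points and their measured mean latencies in seconds.
rps = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
latency = np.array([20.8, 21.6, 23.4, 25.7, 28.3, 31.6, 35.7, 41.0, 48.2])

(t0_hat, tb_hat), _ = curve_fit(latency_model, rps, latency, p0=[latency[0], 0.01])
print(f"T0 ~ {t0_hat:.1f} s, TB ~ {tb_hat:.4f} s/req, saturation near {1.0 / tb_hat:.1f} RPS")
```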
1.4 Little's Law
Little's Law is a queueing identity from Little (1961): in any stable system the average number of in-flight items N equals the arrival rate λ times the average residence time T:

N = λ · T.
It is non-parametric: it does not assume Poisson arrivals, exponential service, or any other distribution. For LLM serving the units are convenient: λ = request rate (requests per second), T = average per-request latency (seconds), N = average number of in-flight requests. Substituting B = N = λ · T into T = T_0 + T_B · B gives eq. (1) above. The unobservable B vanishes.
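A tiny numeric illustration of why this matters here (values are mine, not the paper's): the substitution turns the implicit relation into the closed form without ever observing B.

```python
# B = lambda * T, so T = T0 + TB * (lambda * T) can be solved without observing B.
t0, tb, lam = 20.0, 0.07, 5.0            # illustrative values, not from the paper
t_closed_form = t0 / (1.0 - lam * tb)    # eq. (1)

# Fixed-point check: iterating T <- T0 + TB * (lambda * T) converges to the same value.
t = t0
for _ in range(200):
    t = t0 + tb * (lam * t)
print(round(t_closed_form, 3), round(t, 3))   # both ~30.769 s
```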
1.5 Speculative decoding in one paragraph
Speculative decoding (Leviathan et al. 2023; Chen et al. 2023) runs a small draft model to generate candidate tokens autoregressively, then runs the large verifier model once on all candidates in parallel. A rejection sampler accepts the longest prefix consistent with the verifier's distribution and commits those tokens; the rest are discarded and the cycle restarts. EAGLE-1/2/3 (Li et al. 2024–2025) replaces the standalone drafter with a lightweight feature-level adapter trained on the verifier's hidden states, which is what the paper uses for most experiments.
Two parameters matter: the acceptance rate α (probability the verifier accepts any given drafted token, modeled as i.i.d.) and the draft length γ (how many tokens the drafter proposes per cycle). The expected number of accepted tokens per cycle (counting the verifier's own committed token) is

τ(α, γ) = 1 + α + α² + ... + α^γ = (1 - α^(γ+1)) / (1 - α).

Speedup is bounded by τ(α, γ) in the ideal case, but the verifier still has to run, and the drafter is not free.
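As code, under the same i.i.d. assumption (the function name and printed values are mine):

```python
def expected_committed_tokens(alpha: float, gamma: int) -> float:
    """Expected tokens committed per draft-verify cycle under i.i.d. acceptance.

    alpha: per-token acceptance probability, gamma: draft length.
    Equals 1 + alpha + alpha**2 + ... + alpha**gamma; the leading 1 is the
    verifier's own committed token, produced even if every draft is rejected.
    """
    if alpha == 1.0:
        return float(gamma + 1)
    return (1.0 - alpha ** (gamma + 1)) / (1.0 - alpha)

# Diminishing returns: at alpha = 0.8, going from gamma = 3 to gamma = 7 adds barely one token.
for gamma in (1, 3, 5, 7):
    print(gamma, round(expected_committed_tokens(0.8, gamma), 2))   # 1.8, 2.95, 3.69, 4.16
```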
1.6 What the paper calls "load"
Throughout the paper, load = request rate = RPS, not GPU utilization or queue length. The model is fit between two endpoints per configuration: synchronous baseline (RPS small enough that every request runs alone) and the throughput ceiling just before vLLM starts preempting. Nine evenly spaced RPS values are swept. Saturation/preemption is explicitly excluded as a regime where the model does not apply.
2. Method
2.1 The base latency model
Start from the additive decomposition T = T_0 + T_B · B. Little's Law gives B = λ · T. Solving for T recovers eq. (1):

T(λ) = T_0 / (1 - λ · T_B).
Properties worth noting:
- At λ → 0, T → T_0. The intercept on the y-axis is the synchronous (batch size 1) latency.
- As λ → 1/T_B, T → ∞. The denominator going to zero defines the saturation boundary; the paper fits below this asymptote.
- Normalize: define the normalized latency T/T_0 and the normalized load λ·T_B. Every configuration collapses to T/T_0 = 1/(1 - λ·T_B) regardless of model size, prefill/decode lengths, or hardware.
Figure 1 of the paper shows that collapse beautifully for dense models. It is the strongest validation in the work.
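To see what the normalization does, here is a small sketch. The (T_0, T_B) pairs are invented stand-ins for fitted configurations; on synthetic data the collapse is exact by construction, and the paper's point is that measured latencies land on the same curve.

```python
import numpy as np

def normalized(rps, t0, tb):
    """Map one configuration's (rps, latency) points onto the universal curve."""
    latency = t0 / (1.0 - rps * tb)     # eq. (1) for this configuration
    return rps * tb, latency / t0       # (normalized load, normalized latency)

# Three hypothetical configurations with very different scales.
for t0, tb in [(2.1, 0.004), (20.0, 0.072), (95.0, 0.31)]:
    rps = np.linspace(0.0, 0.8 / tb, 5)                   # stay safely below saturation
    x, y = normalized(rps, t0, tb)
    print(np.round(y, 3), np.round(1.0 / (1.0 - x), 3))   # identical columns: the collapse
```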
2.2 Adding speculative decoding to T_0 and T_B
The same closed form fits SD configurations one at a time (Figure 2a), which means SD does not change the shape of the latency curve, only its parameters. But a single (T_0, T_B) pair does not work across all (α, γ) (Figure 4a). The natural fix is to make T_0 and T_B explicit functions of α and γ.
Cycle anatomy: each SD cycle does (i) one verifier forward pass over γ + 1 tokens in parallel, (ii) γ drafter steps. Prefill happens once per request. Define per-stage fixed and load-dependent costs (t_0^prefill, t_B^prefill), (t_0^verify, t_B^verify), and (t_0^draft, t_B^draft). With D = decode length and τ(α, γ) = expected accepted tokens per cycle, the number of SD cycles per request is D / τ(α, γ). Therefore

T_0(α, γ) = t_0^prefill + (D / τ) · (t_0^verify + γ · t_0^draft),
T_B(α, γ) = t_B^prefill + (D / τ) · (t_B^verify + γ · t_B^draft).
Note the asymmetry: in T_B the verifier term does not scale with γ; only the drafter term does. Empirically the verifier's per-step load-dependent cost stays roughly constant as γ grows, presumably because verification is dominated by memory traffic of the weights rather than the token count under continuous batching. The paper highlights this and uses the simpler form.
Plugging in τ(α, γ) closes the loop. Now T_0 and T_B are explicit functions of (α, γ) through six small coefficients per system configuration.
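A sketch of that parameterization, following my reconstruction above (the coefficient names are mine; the six-coefficient structure and the γ-independent verifier term are the paper's; expected_committed_tokens is the helper from §1.5):

```python
from dataclasses import dataclass

@dataclass
class CycleCoefficients:
    # Six per-configuration coefficients: fixed (t0_*) and load-dependent (tb_*)
    # costs for prefill, one verification pass, and one drafter step.
    t0_prefill: float
    tb_prefill: float
    t0_verify: float
    tb_verify: float
    t0_draft: float
    tb_draft: float

def sd_parameters(c: CycleCoefficients, alpha: float, gamma: int, decode_len: int):
    """Return (T0, TB) for one SD configuration.

    Cycles per request = decode_len / tau; each cycle pays one verifier pass
    (taken as gamma-independent, per the paper's observation) plus gamma drafter steps.
    """
    tau = expected_committed_tokens(alpha, gamma)
    cycles = decode_len / tau
    t0 = c.t0_prefill + cycles * (c.t0_verify + gamma * c.t0_draft)
    tb = c.tb_prefill + cycles * (c.tb_verify + gamma * c.tb_draft)
    return t0, tb
```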
2.3 Speedup form and the r_B test
Define the ratios r_0 = T_0^SD / T_0^base and r_B = T_B^SD / T_B^base. Then

Speedup(λ) = T_base(λ) / T_SD(λ) = (1/r_0) · (1 - λ · T_B^SD) / (1 - λ · T_B^base).    (2)
This is the workhorse formula. Three immediate consequences:
- Zero-load speedup equals 1/r_0. SD essentially always achieves r_0 < 1 (it pays fewer verifier steps), so synchronous speedup is real.
- The slope of speedup with respect to load is controlled solely by the sign of r_B - 1:
  - r_B < 1: speedup grows with load. Achieved only at very high α.
  - r_B > 1: speedup shrinks with load. The common case.
- Hyperbolic saturation: the factor 1/(1 - λ · T_B^base) explodes near λ → 1/T_B^base, so whichever sign the load effect has, it dominates near saturation.
This is the cleanest single explanation I have seen for why SD often "wins on the bench, loses in production." It is not a measurement artifact and it is not a drafter bug. It is a structural property of where speculation puts its cost.
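To make the sign test concrete, a small sketch with invented parameters (r_0 ≈ 0.67, r_B = 1.2, the common case):

```python
def speedup(lam, t0_base, tb_base, t0_sd, tb_sd):
    """SD speedup over the dense baseline at arrival rate lam, eq. (2)."""
    r0 = t0_sd / t0_base
    return (1.0 / r0) * (1.0 - lam * tb_sd) / (1.0 - lam * tb_base)

t0_base, tb_base = 30.0, 0.05    # invented dense baseline
t0_sd, tb_sd = 20.0, 0.06        # invented SD configuration: r0 ~ 0.67, rB = 1.2
for lam in (0.0, 5.0, 8.0, 12.5):
    print(lam, round(speedup(lam, t0_base, tb_base, t0_sd, tb_sd), 2))
# prints 1.5, 1.4, 1.3, 1.0: the erosion is structural, not noise.
```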
2.4 The MoE correction
MoE models systematically beat the dense model's predicted synchronous latency at low load, because not all experts are touched in a single decode step. The paper writes the expected expert coverage under i.i.d. routing as

C(n) = 1 - (1 - k/E)^n,

where k is the number of experts routed per token, E is the total number of experts, and n is the effective routed-token count. For non-speculative inference n tracks the number of concurrently decoded tokens; for SD verification it is roughly γ + 1 times larger, since each verification step processes γ + 1 tokens against the expert MoE. Coefficients then split into a "low-coverage" part and a "saturation increment" weighted by C(n), so each coefficient interpolates between its sparse-activation value and its full-coverage value as load grows.
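As a sketch of the coverage term as I have reconstructed it (the i.i.d.-routing formula is the standard one; the paper's exact definition of n may differ, and the 4-of-32 expert configuration is assumed here for illustration):

```python
def expert_coverage(n_tokens: int, experts_per_token: int, total_experts: int) -> float:
    """Expected fraction of experts touched when n_tokens are routed i.i.d. uniformly,
    each to experts_per_token out of total_experts experts."""
    miss = 1.0 - experts_per_token / total_experts   # P(one token misses a given expert)
    return 1.0 - miss ** n_tokens

# Coverage saturates quickly with the number of routed tokens (assumed 4-of-32 routing).
for n in (1, 4, 16, 64):
    print(n, round(expert_coverage(n, 4, 32), 3))    # 0.125, 0.414, 0.882, 1.0
```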
The lift in fit quality (Figure 8, Section 4.6) is non-trivial: the R² on the lower-load half of the sweep climbs from 0.902 → 0.997 (gpt-oss-20b), 0.830 → 0.976 (Qwen3-30B-A3B), 0.906 → 0.989 (Qwen3-235B-A22B). That is a real improvement, especially at low load where the sparse activation effect is largest.
2.5 What's actually being fit
For a fixed model and prefill/decode pair, the dense-only fit estimates two parameters (T_0, T_B). The speculation-aware fit estimates six cycle-level coefficients (fixed and load-dependent costs for prefill, verify, and draft), pooled across all (α, γ). The MoE-aware fit adds the coverage splits (low-coverage part vs saturation increment) and an effective routed-token count n. Curve fitting is non-linear but well-conditioned because τ(α, γ) varies enough across the sweep to constrain the cycle-level coefficients.
3. Experimental setup
- Driver: GuideLLM 0.5.2 sweeping RPS in nine even steps between synchronous and the throughput ceiling.
- Server: vLLM 0.13.0 with continuous batching and chunked prefill.
- Verifiers: Llama-3.1-8B-Instruct, Llama-3.1-70B-Instruct, gpt-oss-20b, Qwen3-{0.6B, 1.7B, 8B, 14B, 30B-A3B, 32B, 235B-A22B}.
- Drafters: EAGLE-3-style lightweight drafters for most setups; vanilla SD with Llama-3.1-8B drafting for Llama-3.1-70B and Qwen3-1.7B drafting for Qwen3-14B.
- Prompt/decode sweep: prefill ∈ {256, 512, 768, 1024}, decode ∈ {256, 512, 768, 1024}; 16 combinations. Inputs are simulated token sequences from Pride and Prejudice.
- SD sweep: acceptance rate α (rejection sampling overridden to fix α at each swept value) and draft length γ, each swept over a small grid.
- Hardware: A100 SXM (single GPU for ≤32B dense), 4× A100 for Llama-3.1-70B, 8× A100 for Qwen3-235B-A22B; H100 cross-check for dense Qwen3.
- Fitter: SciPy curve_fit.
A practical note: by overriding rejection sampling and imposing an acceptance rate, the authors decouple measurement from drafter quality. This is the right move methodologically: it lets them isolate the effect of α from the effect of drafter architecture. But it also means that real-world deployments, where α is endogenous to the drafter, need a separate measurement of effective α at runtime.
4. Key results, figure by figure
4.1 Figure 1 — universal latency collapse
For all dense configurations, normalized latency collapses onto the single curve T/T_0 = 1/(1 - λ·T_B) across Qwen3-{0.6B, 1.7B, 8B, 14B, 32B} and Llama-3.1-{8B, 70B}. The collapse is striking because nothing in the underlying physics says it should work this cleanly: KV-cache pressure, attention's quadratic blow-up, and scheduler quirks could all bend the curve. The clean fit is the empirical justification for using Little's Law in the first place.
4.2 Figure 2 — SD preserves the shape, changes the parameters
(a) The Little's-Law curve still fits each (α, γ) configuration separately. (b) For Qwen3-8B at 1024/1024, r_0 < 1 uniformly across settings (SD reduces fixed cost), but r_B usually exceeds 1, with the exception of the highest-α settings. The plot also shows that the γ that minimizes T_0 is much larger than the γ that minimizes T_B, foreshadowing the load/no-load tradeoff.
4.3 Figure 3 — minimum r_0 and r_B across prefill/decode
For each prefill/decode pair, take the γ that is best for r_0 and the γ that is best for r_B. The minimum r_0 stays under 1 everywhere: SD's synchronous benefit is universal. The minimum r_B is still above 1 in most configurations except the very-high-α corner. The takeaway: synchronous speedup does not generalize to high RPS unless α is unusually large or γ is reduced.
4.4 Figure 4 — single-vs-speculation-aware fits
(a) A single (T_0, T_B) pair fitted across all (α, γ) misses badly; configurations diverge. (b) Fitted per-config values vary smoothly in α and γ. (c) With the speculation-aware parameterization, the configurations collapse back onto one curve. (d) (T_0, T_B) values from the unified fit track the per-config fits closely, validating the parameterization.
4.5 Figure 5 — scaling of cycle coefficients with verifier/drafter size
Across Qwen3-{0.6B, 1.7B, 8B, 14B, 32B} the cycle-level coefficients scale approximately linearly with verifier parameter count for the verify terms and with drafter parameter count for the draft terms. The cleanest scaling is the verifier's fixed cost: nearly perfectly linear in verifier size, as expected from weight loading dominating per-request overhead.
4.6 Figure 6 — length dependence
T_0 is roughly linear in prefill length with model-specific slopes. T_B scales with an effective token count of roughly the prefill length plus half the decode length, which is exactly the time-averaged KV-cache size during decode. This is a nice mechanistic detail.
4.7 Figure 7 — leave-one-out generalization
Train on a subset of the prefill/decode combinations, predict on the held-out remainder. Held-out R² stays near the full-data fit for the bigger Qwen3 models (≥8B), for both T_0 and T_B. The scaling trends are predictive, not just descriptive.
4.8 Figure 8 — MoE-aware lift
For gpt-oss-20b, Qwen3-30B-A3B, and Qwen3-235B-A22B, the expert-coverage correction reduces residuals at low RPS where sparse activation has the biggest leverage. The lift is largest when the effective routed-token count n is small, because small n means few routed tokens, which means few experts covered, which means the most deviation from dense scaling.
5. What I learned that I will actually use
Three things I am taking away into my own work.
Use Little's Law to escape the batch-size trap. The reason every SD paper I have read disagrees on speedup numbers is that they each pick different fixed batch sizes. There is no "right" batch size in continuous-batching servers. Sweeping RPS and fitting (T_0, T_B) removes the dependence on this ill-defined knob. I will steal this for any future serving evaluation.
Tune γ as a function of load. The model gives an explicit prediction: at high load, smaller γ wins because the verifier cost dominates and large γ inflates T_B. This is the opposite of the conventional advice from batch-size-one SD papers, which prefer large γ to amortize verifier overhead. A serving controller could in principle estimate α and λ continuously and pick γ from a small lookup table. The paper hints at this; I think it is the most actionable deployment finding.
r_B is the operational red flag. If you profile a draft-verify configuration and the fitted r_B exceeds 1, you have a synchronous-only optimization. It will lose to vanilla decoding past some load threshold, and that threshold is the break-even RPS λ* derived by setting speedup to 1 in eq. (2). I would compute this number for every SD deployment going forward.
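The computation itself is two lines; a sketch with invented numbers (derivation in A.3 below):

```python
def break_even_rps(t0_base, tb_base, t0_sd, tb_sd):
    """RPS at which the SD speedup of eq. (2) crosses 1.

    Returns None when the speedup never crosses 1 below saturation
    (i.e., rB <= r0, so SD keeps winning at every stable load).
    """
    r0 = t0_sd / t0_base
    denom = tb_sd - r0 * tb_base
    if denom <= 0.0:
        return None
    return (1.0 - r0) / denom

print(break_even_rps(30.0, 0.05, 20.0, 0.06))   # ~12.5 RPS for the invented example above
```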
6. Limitations and where I'd push back
6.1 Mean latency only
The model targets the average T(λ). It does not say anything direct about p95 or p99, both of which matter more in production. Section A.5 (referenced in the body but not in the part I quoted) reports that the model remains a good approximation for p95 latency and a looser one for p99, especially on larger models, which I read as: it sort of works but it is not the right tool. Tail latency is dominated by scheduling variance, which Little's Law explicitly averages away. I would not deploy an SD configuration based on this model alone if my SLO is on p99.
6.2 Pre-saturation regime only
The fit explicitly excludes the throughput ceiling. In production the most interesting regime is exactly the saturation regime—that is where SLOs break. The paper is honest about this ("we exclude the saturation boundary, where preemption makes latency unstable") but the limitation should be in the abstract, not the appendix. A reader could plausibly believe the model covers their target operating point when it does not.
6.3 i.i.d. acceptance assumption
Eq. (3) uses τ(α, γ) = (1 - α^(γ+1)) / (1 - α), which assumes per-token acceptance is i.i.d. with constant α. Real drafters have position-dependent acceptance (acceptance drops as tokens get further from the verifier's last hidden state), correlated rejections, and sequence-conditioned variability. The paper enforces a constant α by overriding rejection sampling, so the model is internally consistent, but its extrapolation to "what will my real drafter do in production" requires an independent measurement of effective α.
6.4 Single-GPU bias
Most experiments run on a single A100 or H100. Llama-3.1-70B uses 4× A100 and Qwen3-235B-A22B uses 8× A100, but the multi-GPU regime is otherwise unexplored. Tensor parallelism, pipeline parallelism, and especially expert parallelism in MoE change the load-dependent cost structure substantively. I would want to see the MoE-aware extension tested under expert-parallel routing (Tutti, DisagMoE) before trusting eq. (5) on a frontier MoE deployment.
6.5 Adaptive drafting and tree verification not covered
EAGLE-2 and dynamic lookahead change γ at runtime; tree verification (Medusa, SpecInfer) replaces the flat draft with a tree of candidate continuations. The paper's framework absorbs these "through additional cost terms" but does not test them. The most-deployed SD variants in 2026 use adaptive drafting, so this is a real gap.
6.6 No closed-loop control
The model gives a static mapping from (α, λ) to the best γ but does not propose a controller. In a real server, α and λ are estimated with noise and lag, and switching γ has cost (it changes kernel selection and KV-cache budget). I would want to see a stability analysis of a simple controller that picks γ from a sliding-window estimate of α and λ. This is the natural follow-on paper.
6.7 What I'm convinced of anyway
Despite the above, I think the core contribution is robust. The universal latency collapse in Figure 1 is the kind of empirical regularity that is hard to fake. The speedup formula in eq. (2) gives a mechanistic explanation of a phenomenon (SD speedup erosion under load) that the field had been hand-waving. And the MoE coverage correction is a nice piece of small-physics modeling. The paper is short and useful in a way that benchmark-heavy SD papers are not.
7. Reproducibility
- Code: The paper does not link a public repository, which is a minor disappointment given that the analytical model and SciPy fitting code would fit in one notebook. GuideLLM (Neural Magic) and vLLM are public.
- Hardware: A100 SXM and H100 are commercially accessible; the experiment is GPU-hour-bounded rather than algorithmically complex.
- Method: All equations, sweep ranges, and fit procedures are explicit in the body. The SciPy curve_fit recipe is standard. A diligent reader could reproduce the main fits in a weekend on a single A100.
- Caveats: The exact vLLM version (0.13.0) matters because scheduler heuristics affect T_B; fitting on a newer vLLM will give different numerical coefficients but the same model structure.
8. Verdict
This is a good, sober paper that does one thing well: it gives a tiny analytical model for SD latency under realistic serving load. The math is light, the empirics are broad, and the operational implications are clear. The main weakness is the scope (mean latency, pre-saturation, i.i.d. acceptance), which the paper acknowledges but, in my view, undersells. Read it if you operate SD in production, if you evaluate SD methods, or if you write serving systems. Skip the cycle-level coefficient details if you only want the deployment guidance: it amounts to "small γ at high load, large γ at low load, check r_B in your config." I expect this paper to influence how the next round of EAGLE/PARD/Medusa benchmarks are reported. It deserves to.
Appendix A. Derivation notes
A.1 From T = T_0 + T_B · B to eq. (1)
Substitute B = λ · T: T = T_0 + T_B · λ · T, so T · (1 - λ · T_B) = T_0 and T(λ) = T_0 / (1 - λ · T_B). Valid only when λ · T_B < 1, i.e., below saturation.
A.2 Speedup limit at zero load
Speedup(0) = T_0^base / T_0^SD = 1/r_0. SD's "best case" is this zero-load ratio; achieving it requires synchronous execution.
A.3 Break-even load
Setting Speedup = 1 in eq. (2): λ* = (1 - r_0) / (T_B^SD - r_0 · T_B^base) = (1 - r_0) / (T_B^base · (r_B - r_0)). If r_B > 1 (with r_0 < 1) then λ* is finite and positive, and the SD speedup crosses unity at λ = λ*. Above λ*, vanilla decoding wins.
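A quick symbolic check of this derivation (my own verification with sympy, not from the paper):

```python
import sympy as sp

lam, t0b, tbb, t0s, tbs = sp.symbols("lam T0_base TB_base T0_sd TB_sd", positive=True)
speedup = (t0b / (1 - lam * tbb)) / (t0s / (1 - lam * tbs))   # T_base(lam) / T_SD(lam)

lam_star = sp.solve(sp.Eq(speedup, 1), lam)[0]
r0 = t0s / t0b
print(sp.simplify(lam_star - (1 - r0) / (tbs - r0 * tbb)))    # 0: the closed form checks out
```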
A.4 Expected accepted tokens with i.i.d. acceptance
τ(α, γ) = 1 + α + α² + ... + α^γ = (1 - α^(γ+1)) / (1 - α). The leading 1 reflects the verifier's own committed token regardless of drafter acceptance. Per-cycle decoded tokens are τ(α, γ) and the number of cycles per request is D / τ(α, γ) for decode length D.
A.5 Why the verifier cost is treated as γ-independent in T_B
If verification scaled strictly with token count in its load-dependent cost, t_B^verify would scale with γ + 1. Empirically the verifier's load-dependent term is roughly flat in γ because the dominant cost is streaming weights and the KV cache, not per-token compute. The paper absorbs the small residual γ-dependence into the other coefficients instead, which fits better.
9. Worked example: a concrete deployment walkthrough
Equations are easier to trust once you push numbers through them. Here is a plausible production scenario consistent with the paper's measurements on Qwen3-8B at 1024 prefill / 1024 decode tokens on a single A100. I am inventing the exact numerics for illustration; the qualitative shape matches Figures 2–4.
9.1 Setup
Suppose profiling on the deployment gives, for the dense baseline (no SD):
- T_0 = 20 s, T_B = 0.0725 s/req.
Sanity check: at λ = 2 RPS, T ≈ 23.4 s. At λ = 8, T ≈ 47.6 s. At λ = 12, T ≈ 153.8 s. The latency wall is steep near λ → 1/T_B ≈ 13.8 RPS, exactly as Little's Law predicts.
9.2 SD configuration A: aggressive draft
An EAGLE-3 drafter at γ = 7 with measured α = 0.8 gives τ ≈ 4.16 tokens per cycle. The verifier therefore runs ≈ 246 cycles per request instead of 1024. Suppose the speculation-aware fit returns:
- T_0^SD ≈ 13.0 s, T_B^SD ≈ 0.0865 s/req.
Then r_0 ≈ 0.65 and r_B ≈ 1.19. Eq. (2) becomes:
- Speedup(0) = 1/r_0 ≈ 1.54. Synchronous: SD beats dense by 54%.
- Speedup(8) = 1.54 × (1 - 8 × 0.0865) / (1 - 8 × 0.0725) ≈ 1.13. At RPS 8 the speedup has eroded from 1.54 to 1.13.
- Break-even: λ* = (1 - r_0) / (T_B^SD - r_0 · T_B^dense) ≈ 8.9, i.e., RPS ≈ 8.9.
Above RPS 8.9, configuration A is slower than dense decoding. If the deployment routinely sees RPS in the 9–12 range, this configuration is actively harmful at peak.
9.3 SD configuration B: conservative draft
Reduce γ to 3 at the same α = 0.8: τ ≈ 2.95. The verifier does ≈ 347 cycles instead of 246. The smaller draft length raises the number of verifier passes per request but reduces the drafter contribution per cycle. Suppose:
- T_0^SD ≈ 14.8 s, T_B^SD ≈ 0.0745 s/req.
Then r_0 ≈ 0.74, r_B ≈ 1.03.
- Speedup(0) ≈ 1.35. Lower than A.
- Speedup(8) ≈ 1.30 and Speedup(11) ≈ 1.20; the curve is much flatter than A's.
- Break-even: λ* ≈ 12.5, i.e., RPS ≈ 12.5, close to the saturation boundary itself.
Configuration B has a lower synchronous ceiling but its break-even is much higher. Crossover between A and B happens where their speedups are equal. Setting the two expressions equal and solving for λ in the simple linear approximation gives λ ≈ 10, i.e., roughly RPS 10. Hmm, that's beyond A's break-even (the linear approximation overstates the crossover; the full hyperbolic curves cross earlier), so in practice the cleaner statement is: below RPS 8.9 use A, above 8.9 use B (or fall back to dense). A more refined search would pick A at low RPS, B at mid-RPS, and dense at peak.
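The whole walkthrough fits in a few lines of code, using the invented numbers above:

```python
import math

def latency(lam, t0, tb):
    """Eq. (1); returns inf past this configuration's saturation point."""
    return t0 / (1.0 - lam * tb) if lam * tb < 1.0 else math.inf

dense = (20.0, 0.0725)   # (T0 seconds, TB seconds/request), invented for illustration
cfg_a = (13.0, 0.0865)   # gamma = 7: large synchronous win, fragile under load
cfg_b = (14.8, 0.0745)   # gamma = 3: smaller win, much flatter speedup curve

for lam in (0.1, 4.0, 8.0, 8.9, 11.0, 12.5):
    t_d, t_a, t_b = (latency(lam, *cfg) for cfg in (dense, cfg_a, cfg_b))
    best = min((t_d, "dense"), (t_a, "A"), (t_b, "B"))[1]
    print(f"RPS {lam:4.1f}  speedup A {t_d / t_a:4.2f}  B {t_d / t_b:4.2f}  -> serve with {best}")
```

The printed table reproduces the pattern above: A at low RPS, B at mid-RPS, dense at peak.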
9.4 The deployment lesson
The same drafter, same verifier, same α, two values of γ, two completely different speedup curves. A controller that picks γ dynamically from (α, λ) can keep speedup above 1 across the whole operating range, while a single fixed-γ deployment would either underperform at low load (B) or actively lose at high load (A). This is the actionable consequence of the paper.
10. Comparison with prior LLM serving and SD analyses
It is worth situating this work against three nearby threads.
TurboSpec / SD goodput optimization (Liu et al., 2024): Goodput-style work treats SD configuration as an empirical search problem. Given a workload trace, find the γ and the drafter that maximize tokens/sec under an SLO. This is operationally useful but provides no insight into why one configuration beats another. The Kong et al. model is complementary: it can be used as a fast inner loop for goodput search because it eliminates the need to actually measure every combination on the deployment cluster.
Sarathi-Serve / chunked prefill scheduling (Agrawal et al., 2024): Sarathi-Serve is a scheduling contribution: it changes how prefill and decode are interleaved. The Kong et al. model is descriptive and treats scheduling as a black box that produces an effective (T_0, T_B). The two are orthogonal but interact: a better scheduler changes T_B, which changes the break-even RPS for SD. A natural follow-on would be to fit the model on both vanilla vLLM and Sarathi-Serve and report the delta in the fitted coefficients.
DistServe / Splitwise (disaggregated serving): Disaggregated prefill/decode changes the cost structure substantially: prefill on one pool of GPUs, decode on another. Eq. (1) still applies per pool, but T_0 and T_B for the decode pool would be measured at the decode-pool level. SD lives in the decode pool, so the speedup analysis transfers directly, but with new numerical values. This is one of the cleanest extensions left on the table.
Roofline / arithmetic intensity literature: The Kong et al. decomposition is a natural lift of the roofline model from kernel-level to request-level granularity. Roofline says "fixed cost + per-byte cost"; Kong et al. says "fixed cost + per-concurrent-request cost." The aesthetic is consistent: keep the model linear in the cost decomposition, then let Little's Law translate concurrency to load.
11. What I would write as the next paper
If I were following this up, here are three directions ranked by what I think is most valuable.
- Closed-loop controller with stability guarantee. Estimate (α, λ) from a sliding window, pick γ from a lookup table generated by the speculation-aware fit, and prove (or empirically demonstrate) that the controller does not chatter. Switching costs should be modeled, since changing γ changes kernel launches and KV-cache allocation. This is the single most useful follow-up for operators.
- Tail-latency extension. Apply the model under M/G/1-with-batch queueing assumptions and predict p95/p99 from (T_0, T_B). Even a coarse upper bound would be more useful than the current "p99 sort of works" appendix note.
- MoE expert-parallel correction. The current coverage term treats expert routing as i.i.d. uniform. Expert-parallel deployments add cross-node communication whose cost depends on which experts get hit by which requests. A version of C(n) that accounts for non-uniform expert hot-spotting would generalize the MoE-aware fit to multi-node serving.
11.5 Misconceptions the paper corrected for me
A handful of intuitions I held going in turned out to be wrong, listed here for engineers reading fast.
Misconception 1: "Batch-size-one SD speedups extrapolate to production." This is the paper's most direct target. Zero-load speedup is just the intercept of a hyperbola; what determines the production experience is the sign and magnitude of r_B - 1.
Misconception 2: "Larger draft length is always better." At batch size one, larger γ amortizes verifier overhead; this is one of the central claims of the original Leviathan paper. But Figure 2(b) shows that the γ minimizing T_B is far smaller than the γ minimizing T_0. In production the two need to be picked separately.
Misconception 3: "EAGLE-3 solved the load problem for SD." EAGLE-3 gives a stronger drafter with higher α and a smaller drafter cost, but it does not change the basic fact that r_B must drop below 1 for the speedup to grow with load. In the paper's experiments r_B > 1 is still the common case, EAGLE-3 or not.
Misconception 4: "MoE models don't benefit from SD." At low load MoE benefits from sparse activation, and SD layers on top: the two speedups compose. The caveat is that the cost rebounds toward dense scaling as γ grows, since longer verification rounds cover more experts and erode the sparse-activation savings. The takeaway again is "pick small γ at high load."
Misconception 5: "Saturation latency can be papered over by adding GPUs." Adding GPUs typically reduces both T_0 and T_B, with the fixed cost dropping faster. The curve shifts down and the saturation threshold moves right, but a configuration with r_B > 1 does not become r_B < 1 just because you doubled the hardware. Buying GPUs cannot rescue a bad SD configuration.
12. Personal verdict
I will keep the speedup formula on a sticky note. It is the one piece of analysis I was missing every time I tuned an SD configuration in vLLM, and it explains things that operators had been complaining about for a year without a satisfying answer. The paper is short, narrowly scoped, and useful. I would rather have ten of these than one more 80-page systems benchmark.
End of review.