
Delivering AI at scale requires multiple things working together: open-weight models, powerful hardware, and inference optimizations. DeepInfra was among the first to deploy production workloads on the NVIDIA Blackwell platform — and we're seeing up to 20x cost reductions when combining Mixture of Experts (MoE) architectures with Blackwell optimizations. Here's what we've built and learned.
The NVIDIA Blackwell platform delivers a step change in AI inference performance. DeepInfra has built a comprehensive optimization stack to take full advantage of these capabilities.
The gains come from three layers working together: hardware acceleration from the NVIDIA Blackwell architecture, efficiency from open-weight MoE models, and DeepInfra's inference optimizations built on NVIDIA TensorRT-LLM, including speculative decoding and advanced memory management.
Layers of inference optimizations.
The real gains come from combining multiple optimizations. Here's what that looks like comparing a dense 405B model to an MoE model with 400B total parameters but only 17B active per token:
| | Dense 405B Model | MoE 400B (FP8) Model | MoE 400B (FP8) Model | MoE 400B (NVFP4) Model |
|---|---|---|---|---|
| Platform | NVIDIA H200 | NVIDIA H200 | NVIDIA HGX™ B200 | NVIDIA HGX™ B200 |
| Cost/1M tokens | $1.00 | $0.20 | $0.10 | $0.05 |
| Cost vs Dense | baseline | 5x lower cost | 10x lower cost | 20x lower cost |
From $1.00/1M tokens (Dense 405B) to $0.05/1M tokens (MoE NVFP4 on NVIDIA Blackwell) — 20x more cost effective.
20x cheaper than dense. Similar parameter scale, fraction of the cost.
Each layer compounds: the MoE architecture reduces active compute per token, the Blackwell platform accelerates throughput, and NVFP4 quantization cuts memory and compute further.
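To make the compounding concrete, here is a quick back-of-the-envelope check using only the figures cited above (405B dense vs. roughly 17B active parameters per token, and the per-token prices from the table). The 2-FLOPs-per-active-parameter rule of thumb is a common approximation, not a measurement of DeepInfra's stack:

```python
# Back-of-the-envelope: active-compute and cost multipliers from the figures above.

# Decode-side FLOPs per token ~ 2 * active parameters (rough rule of thumb).
dense_active = 405e9   # dense 405B: every parameter is active for every token
moe_active = 17e9      # MoE 400B total, but only ~17B active per token
print(f"Active-compute reduction: ~{dense_active / moe_active:.0f}x")

# Cost per 1M tokens from the table; baseline is the dense model on H200.
costs = {
    "Dense 405B (H200)":         1.00,
    "MoE 400B FP8 (H200)":       0.20,
    "MoE 400B FP8 (HGX B200)":   0.10,
    "MoE 400B NVFP4 (HGX B200)": 0.05,
}
baseline = costs["Dense 405B (H200)"]
for name, price in costs.items():
    print(f"{name}: ${price:.2f}/1M tokens -> {baseline / price:.0f}x lower cost than dense")
```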
Traditional autoregressive generation produces one token at a time. DeepInfra leverages Multi-Token Prediction (MTP) and Eagle speculative decoding to accelerate generation. By predicting multiple tokens simultaneously and verifying them in parallel, these techniques improve throughput on supported models.
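The accept/verify idea behind speculative decoding can be sketched in a few lines. The snippet below is an illustrative greedy-verification toy, not DeepInfra's TensorRT-LLM implementation; `target_model` is a hypothetical HuggingFace-style callable returning `.logits`, and in MTP and Eagle the draft tokens come from the model's own prediction heads rather than a separate draft model:

```python
import torch

def speculative_step(target_model, prefix_ids, draft_tokens):
    """Verify a block of drafted tokens with a single parallel forward pass.

    Drafted tokens are accepted greedily until the first mismatch with the
    target model's argmax prediction, so several tokens can be committed
    per target-model forward pass instead of one.
    """
    # One forward pass over [prefix + drafts] instead of len(drafts) sequential passes.
    input_ids = torch.cat([prefix_ids, draft_tokens], dim=-1).unsqueeze(0)
    logits = target_model(input_ids).logits[0]

    accepted = []
    prefix_len = prefix_ids.shape[-1]
    for i, draft_tok in enumerate(draft_tokens.tolist()):
        # The target's prediction for draft slot i comes from the previous position.
        pred = logits[prefix_len - 1 + i].argmax().item()
        if pred != draft_tok:
            accepted.append(pred)      # replace the first mismatch with the target's token
            break
        accepted.append(draft_tok)     # draft token verified; keep going
    return accepted
```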
Beyond the core optimizations, DeepInfra implements additional techniques throughout its serving stack.
What does this look like in practice? For Latitude, AI is the experience.
AI Dungeon serves 1.5 million monthly active users generating dynamic, AI-powered game narratives, and Latitude is expanding with Voyage, an upcoming AI RPG platform where players can create or play worlds with full freedom of action. Model responses aren't just part of the gameplay loop — they're the centerpiece of everything Latitude builds. Every improvement in model performance directly translates to better player experiences, higher engagement and retention, and ultimately drives revenue and growth.
AI Dungeon generates both narrative text and imagery in real-time as players explore dynamic stories.
Running large open-source MoE models on DeepInfra's Blackwell-based platform allows Latitude to deliver fast, reliable responses at a cost that scales with their player base.
Players can choose from multiple AI-generated continuations, each requiring real-time inference.
"For AI Dungeon, the model response is the game. Every millisecond of latency, every quality improvement matters — it directly impacts how players feel, how long they stay, and whether they come back. DeepInfra on NVIDIA Blackwell gives us the performance we need at a cost that actually works at scale."
— Nick Walton, CEO, Latitude
A key factor in Latitude's success is the flexibility that open-weight models provide. Rather than being locked into a single model, they can evaluate and deploy the best model for each specific use case — whether optimizing for creative storytelling, fast responses, or cost efficiency.
DeepInfra's platform supports this experimentation by offering a broad catalog of optimized open-source models, all benefiting from the NVIDIA Blackwell platform performance gains. Customers like Latitude can test different models, compare performance characteristics, and deploy the optimal configuration for their workload — without infrastructure overhead.
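In practice, switching between catalog models is a one-line change. The sketch below assumes DeepInfra's OpenAI-compatible endpoint and uses placeholder model IDs rather than a specific recommendation:

```python
import os
from openai import OpenAI

# DeepInfra exposes an OpenAI-compatible API, so the standard client works as-is.
client = OpenAI(
    api_key=os.environ["DEEPINFRA_API_KEY"],          # your DeepInfra token
    base_url="https://api.deepinfra.com/v1/openai",
)

# Model IDs below are placeholders -- swap in any model from the DeepInfra catalog.
for model in ["some-org/creative-storyteller", "another-org/fast-responder"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Continue the story: the dragon turns and..."}],
        max_tokens=128,
    )
    print(f"{model}: {resp.choices[0].message.content[:80]}")
```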
The combination of the NVIDIA Blackwell platform, DeepInfra's inference optimizations, and the flexibility of open-weight models creates a powerful platform for AI-native applications. As models continue to grow in capability and efficiency, this infrastructure foundation will enable the next generation of AI experiences.
For developers and companies looking to reduce inference costs, the path forward is clear: modern hardware, purpose-built optimizations, and the freedom to choose the right model for the job.
To learn more about DeepInfra's Blackwell deployment or discuss your inference needs, visit deepinfra.com.