NVIDIA Nemotron 3 Super - blazing-fast agentic AI, ready to deploy today!

We are excited to announce that DeepInfra is an official launch partner for NVIDIA Nemotron 3 Super, the latest open model in the Nemotron family.
Nemotron 3 Super is purpose-built for complex multi-agent applications, delivering high reasoning accuracy, fast inference, and a 1M token context window—all while running efficiently on a single NVIDIA GPU. On DeepInfra, the model is available from day one with zero setup, low latency, and no operational overhead.
With its hybrid MoE architecture and 120B total / 12B active parameters, Super is designed to run multiple collaborating agents per application efficiently. Paired with DeepInfra's high-efficiency inference platform and usage-based pricing, you can build and scale agentic workflows using only a few lines of code.
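As a quick back-of-the-envelope illustration (our own arithmetic, not an NVIDIA benchmark, and assuming per-token compute scales roughly with the active parameter count), the MoE design means each token touches only a small fraction of the model:

# Rough illustration: per-token compute in an MoE model scales with
# active parameters, not total parameters (simplifying assumption).
total_params = 120e9   # 120B total parameters
active_params = 12e9   # 12B activated per token

active_fraction = active_params / total_params
print(f"Active fraction per token: {active_fraction:.0%}")          # 10%
print(f"~{total_params / active_params:.0f}x less per-token compute than a dense 120B model")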
Nemotron 3 Super uses a hybrid architecture that combines Mixture of Experts (MoE) with a Mamba-Transformer backbone, the same architectural foundation as Nemotron 3 Nano, scaled and optimized for heavier agentic workloads.
Most layers rely on Mamba for high-throughput sequence processing, while transformer layers handle complex reasoning.
On top of this architecture, Latent MoE activates 4 experts for the inference cost of one, and Multi-Token Prediction (MTP) accelerates long-form generation by predicting multiple tokens per forward pass.
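To build intuition for how MTP speeds up generation, here is a simplified decoding-loop sketch, similar in spirit to speculative decoding. It is not Nemotron's actual implementation; propose_tokens and verify_prefix are hypothetical stand-ins for the model's MTP heads and verification pass:

# Simplified sketch of multi-token prediction (MTP) decoding.
# propose_tokens() and verify_prefix() are hypothetical helpers,
# not a real Nemotron or DeepInfra API.
def mtp_decode(prompt_ids, propose_tokens, verify_prefix, max_new=256, k=4):
    out = list(prompt_ids)
    while len(out) - len(prompt_ids) < max_new:
        draft = propose_tokens(out, k)        # k candidate tokens from one forward pass
        accepted = verify_prefix(out, draft)  # longest prefix the model confirms
        out.extend(accepted if accepted else draft[:1])  # always advance at least 1 token
    return out

When most draft tokens are accepted, each forward pass yields several tokens instead of one, which is where the speedup on long-form generation comes from.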
Together, these design choices enable fast, cost-efficient generation across long, multi-step agentic workloads.
The 1M-token context window is a core part of the model's design. For agentic systems, this means holding full conversation history, tool call traces, and plan state across long workflows — without losing coherence or truncating critical context mid-task.
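Even with a 1M-token window, agent loops benefit from tracking how much context they consume. A minimal budgeting sketch, using the common (but rough) heuristic of about 4 characters per token for English text:

# Rough context budgeting for an agent loop (heuristic: ~4 chars per token).
CONTEXT_WINDOW = 1_000_000

def estimate_tokens(messages):
    return sum(len(m["content"]) // 4 for m in messages)

def remaining_budget(messages, reserve_for_output=8_192):
    # Leave headroom for the model's reply before appending more history.
    return CONTEXT_WINDOW - estimate_tokens(messages) - reserve_for_output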
Nemotron 3 Super achieves leading scores on reasoning and agentic benchmarks including AIME 2025, IFBench, TerminalBench, and RULER. For more details on the model, check out the technical blog from NVIDIA.
Like the rest of the Nemotron family, Nemotron 3 Super is fully open: weights, training datasets, and development recipes are all publicly available. NVIDIA trained the model on high-quality synthetic data generated from frontier open reasoning models, giving teams full transparency to inspect, customize, and fine-tune the model for their specific needs.
Nemotron 3 Super is accessible via DeepInfra's OpenAI-compatible API. Here's how to get started in a few lines of code.
Install the client:
pip install openai
Run your first inference:
from openai import OpenAI

client = OpenAI(
    api_key="<your-deepinfra-api-key>",
    base_url="https://api.deepinfra.com/v1/openai",
)

response = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key findings from this medical billing report."},
    ],
)

print(response.choices[0].message.content)
Streaming is supported out of the box:
stream = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B",
    messages=[
        {"role": "user", "content": "Walk me through a multi-step plan to optimize revenue cycle management."}
    ],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
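Because the model is built for agentic workflows, you can also hand it tool definitions through the same interface. A minimal sketch, assuming DeepInfra's OpenAI-compatible endpoint accepts the standard tools parameter; get_invoice_status is a hypothetical tool for illustration:

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",  # hypothetical tool, for illustration only
        "description": "Look up the status of an invoice by ID.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B",
    messages=[{"role": "user", "content": "What's the status of invoice INV-1042?"}],
    tools=tools,
)

# If the model decides to call the tool, the structured call appears here:
print(response.choices[0].message.tool_calls)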
DeepInfra operates with a zero-retention policy. Inputs, outputs, and user data are not stored. The platform is SOC 2 and ISO 27001 certified, following industry best practices for security and privacy. More information is available in our DeepInfra Trust Center.
Visit the Nemotron 3 Super model page on DeepInfra to explore pricing and start inference instantly. You can check out our documentation to learn more about the broader model ecosystem and developer resources.
Have questions or need help?
Reach out to us at feedback@deepinfra.com, join our Discord, or connect with us on X (@DeepInfra) — we're happy to help.