

Introducing NVIDIA Nemotron 3 Super on DeepInfra
Published on 2026.03.11 by Aray Sultanbekova

We are excited to announce that DeepInfra is an official launch partner for NVIDIA Nemotron 3 Super, the latest open model in the Nemotron family.

Nemotron 3 Super is purpose-built for complex multi-agent applications, delivering high reasoning accuracy, fast inference, and a 1M token context window—all while running efficiently on a single NVIDIA GPU. On DeepInfra, the model is available from day one with zero setup, low latency, and no operational overhead.

With its hybrid MoE architecture and 120B total / 12B active parameters, Super is designed to run multiple collaborating agents per application efficiently. Paired with DeepInfra's high-efficiency inference platform and usage-based pricing, you can build and scale agentic workflows using only a few lines of code.

What makes Nemotron 3 Super different

Nemotron 3 Super uses a hybrid architecture that combines Mixture of Experts (MoE) with a Mamba-Transformer design — the same architectural foundation as Nemotron 3 Nano, scaled and optimized for heavier agentic workloads.

Most layers rely on Mamba for high-throughput sequence processing, while transformer layers handle complex reasoning.

On top of this architecture, Latent MoE activates 4 experts for the inference cost of one, and Multi-Token Prediction (MTP) accelerates long-form generation by predicting multiple tokens per forward pass.
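To see why MTP speeds up long-form generation, note that the number of forward passes scales inversely with the tokens emitted per pass. A minimal back-of-the-envelope sketch (the 2-tokens-per-pass figure is purely illustrative, not a published Nemotron number; real speedups depend on draft acceptance rates):

```python
import math

def forward_passes(total_tokens: int, tokens_per_pass: int) -> int:
    """Forward passes needed to generate total_tokens when the model
    emits tokens_per_pass tokens per pass (1 = standard decoding)."""
    return math.ceil(total_tokens / tokens_per_pass)

# Standard next-token decoding: one pass per generated token.
baseline = forward_passes(1000, 1)   # 1000 passes
# MTP emitting 2 tokens per pass (illustrative) halves the pass count.
with_mtp = forward_passes(1000, 2)   # 500 passes
```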

These design choices enable:

  • Up to 5x higher throughput than the previous Nemotron Super model
  • Up to 2x higher accuracy than the previous Nemotron Super model
  • Consistent performance across variable, multi-step workloads

The 1M-token context window is a core part of the model's design. For agentic systems, this means holding full conversation history, tool call traces, and plan state across long workflows — without losing coherence or truncating critical context mid-task.
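To make this concrete, here is a minimal sketch of the kind of message history a long-running agent accumulates (plan state, tool-call traces, and tool results) in the OpenAI-style message format; the tool name, arguments, and IDs are hypothetical, not part of any DeepInfra or NVIDIA API:

```python
# An agent's growing context: system plan, user task, a tool call the
# model issued, and the tool's result. With a 1M-token window, histories
# like this can span hundreds of steps and still be sent whole as the
# `messages` parameter on every request.
history = [
    {"role": "system",
     "content": "You are a billing-operations agent. Plan: 1) fetch report 2) summarize."},
    {"role": "user", "content": "Audit last quarter's invoices."},
    {"role": "assistant",
     "content": None,
     "tool_calls": [{
         "id": "call_1",
         "type": "function",
         "function": {"name": "fetch_report", "arguments": '{"quarter": "Q4"}'},
     }]},
    {"role": "tool", "tool_call_id": "call_1",
     "content": '{"invoices": 1423, "flagged": 7}'},
]
```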

Nemotron 3 Super achieves leading scores on reasoning and agentic benchmarks including AIME 2025, IFBench, TerminalBench, and RULER. For more details on the model, check out the technical blog from NVIDIA.

Like the rest of the Nemotron family, Nemotron 3 Super is fully open: weights, training datasets, and development recipes are all publicly available. NVIDIA trained the model on high-quality synthetic data generated from frontier open reasoning models, giving teams full transparency to inspect, customize, and fine-tune the model for their specific needs.


Getting started on DeepInfra

Nemotron 3 Super is accessible via DeepInfra's OpenAI-compatible API. Here's how to get started in a few lines of code.

Install the client:

pip install openai

Run your first inference:

from openai import OpenAI

client = OpenAI(
    api_key="<your-deepinfra-api-key>",
    base_url="https://api.deepinfra.com/v1/openai",
)

response = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key findings from this medical billing report."},
    ],
)

print(response.choices[0].message.content)

Streaming is supported out of the box:

stream = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B",
    messages=[
        {"role": "user", "content": "Walk me through a multi-step plan to optimize revenue cycle management."}
    ],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
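Agentic workflows typically pair these calls with tool calling via the OpenAI-compatible `tools` parameter. Here is a hedged sketch of a local tool plus a dispatcher for the model's tool calls; `get_invoice_status` and its schema are hypothetical examples for illustration, not part of any DeepInfra or NVIDIA API:

```python
import json

def get_invoice_status(invoice_id: str) -> dict:
    """Hypothetical local tool; a real agent would query your billing system."""
    return {"invoice_id": invoice_id, "status": "paid"}

# OpenAI-style tool schema, passed to the API via `tools=TOOLS`.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": "Look up the payment status of an invoice.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

def dispatch(name: str, arguments: str) -> str:
    """Execute one tool call the model requested and serialize the result."""
    if name == "get_invoice_status":
        return json.dumps(get_invoice_status(**json.loads(arguments)))
    raise ValueError(f"unknown tool: {name}")
```

In a full loop you would pass `tools=TOOLS` to `client.chat.completions.create`, call `dispatch(tc.function.name, tc.function.arguments)` for each entry in `response.choices[0].message.tool_calls`, and append each result back to the conversation as a `role: "tool"` message before the next request.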

Enterprise-Grade Security and Privacy

DeepInfra operates with a zero-retention policy. Inputs, outputs, and user data are not stored. The platform is SOC 2 and ISO 27001 certified, following industry best practices for security and privacy. More information is available in our DeepInfra Trust Center.

Start Building

Visit the Nemotron 3 Super model page on DeepInfra to explore pricing and start inference instantly. You can check out our documentation to learn more about the broader model ecosystem and developer resources.

Have questions or need help?

Reach out to us at feedback@deepinfra.com, join our Discord, or connect with us on X (@DeepInfra) — we're happy to help.
