DeepInfra is now a supported Hugging Face Inference Provider
Published on April 29, 2026 by Aray Sultanbekova

DeepInfra is officially live as an Inference Provider on the Hugging Face Hub. You can now call DeepInfra-hosted models directly from Hugging Face model pages, through our OpenAI-compatible router (use it with any OpenAI SDK), or via the Hugging Face SDKs in Python and JavaScript.

What's new

Hugging Face's Inference Providers system lets developers run inference against partner platforms without leaving the Hub. As of today, DeepInfra is one of those partners.

At launch, we support chat completion and text generation tasks. That covers most open-weight LLMs people deploy in production — DeepSeek V4, Kimi-K2.6, GLM-5.1, Llama, Qwen, Mistral, and many more. Support for our other model categories (text-to-image, text-to-video, embeddings, speech) will roll out next.

You can browse every DeepInfra-supported model here: 👉 huggingface.co/models?inference_provider=deepinfra

How to use it

You have two ways to authenticate, and both work with the same code.

Option 1 — Use your DeepInfra API key. Add it to your Hugging Face provider settings. Requests go directly to DeepInfra and are billed to your DeepInfra account at standard rates.

Option 2 — Use your Hugging Face token. Hugging Face will route your request to DeepInfra and bill it to your HF account. PRO users get $2 of inference credits each month; free users get a small monthly quota.

Python

from huggingface_hub import InferenceClient

# With no arguments, the client reads your token from the HF_TOKEN
# environment variable (or your cached `huggingface-cli login` credentials).
client = InferenceClient()

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[
        {"role": "user", "content": "Write a Fibonacci function with memoization."}
    ],
)

print(completion.choices[0].message)

JavaScript

import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const completion = await client.chatCompletion({
  model: "deepseek-ai/DeepSeek-V4-Pro:deepinfra",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message);

Using the OpenAI SDK

The Hugging Face router is OpenAI-compatible, so existing OpenAI code works with one line changed — point base_url at the HF router:

import os

from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[{"role": "user", "content": "Hello!"}],
)

The only thing that changes is the :deepinfra suffix on the model id.

What this means for our users

If you already use DeepInfra, nothing changes — your existing API and account work exactly as they always have. What's new is reach.

  • Discoverability. Every Hugging Face model page that runs on DeepInfra now shows us as a supported provider, with one-click code snippets in Python, JavaScript, and cURL.
  • Same pricing, no markup. Hugging Face passes through DeepInfra's per-token rates without any added fees. You pay the same whether you call us directly or via the HF router.
  • Drop-in for HF-based workflows. If your team already uses Hugging Face for model search, evaluation, or agent tooling (Pi, OpenCode, Hermes Agents, VS Code with Copilot, and more), DeepInfra is now a one-line provider swap.
  • Try before you buy. Use the Inference Playground to test any DeepInfra-supported model in the browser before wiring it into your stack.