
DeepInfra is officially live as an Inference Provider on the Hugging Face Hub. You can now call DeepInfra-hosted models directly from Hugging Face model pages, through Hugging Face's OpenAI-compatible router (use it with any OpenAI SDK), or via the Hugging Face SDKs in Python and JavaScript.
Hugging Face's Inference Providers system lets developers run inference against partner platforms without leaving the Hub. As of today, DeepInfra is one of those partners.
At launch, we support chat completion and text generation tasks. That covers most open-weight LLMs people deploy in production — DeepSeek V4, Kimi-K2.6, GLM-5.1, Llama, Qwen, Mistral, and many more. Support for our other model categories (text-to-image, text-to-video, embeddings, speech) will roll out next.
You can browse every DeepInfra-supported model here: 👉 huggingface.co/models?inference_provider=deepinfra
You have two ways to authenticate, and both work with the same code.
Option 1 — Use your DeepInfra API key. Add it to your Hugging Face provider settings. Requests go directly to DeepInfra and are billed to your DeepInfra account at standard rates.
Option 2 — Use your Hugging Face token. Hugging Face will route your request to DeepInfra and bill it to your HF account. PRO users get $2 of inference credits each month; free users get a small monthly quota.
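Here is a minimal chat completion using the huggingface_hub Python client. The same code works with either authentication option: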
from huggingface_hub import InferenceClient

# With no arguments, the client picks up your token from the HF_TOKEN
# environment variable (or a cached `huggingface-cli login`).
client = InferenceClient()

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[
        {"role": "user", "content": "Write a Fibonacci function with memoization."}
    ],
)

print(completion.choices[0].message)
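The same call in JavaScript, with @huggingface/inference: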
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const completion = await client.chatCompletion({
  model: "deepseek-ai/DeepSeek-V4-Pro:deepinfra",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message);
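If you'd rather not encode the provider in the model ID, the Python client also accepts it as a parameter. A minimal sketch, assuming your DeepInfra key lives in a DEEPINFRA_API_KEY environment variable (that variable name is our choice here, not something the client requires):

import os
from huggingface_hub import InferenceClient

# Select the provider explicitly instead of using the ":deepinfra" suffix.
# Passing your own DeepInfra key bills requests to your DeepInfra account
# (Option 1 above); omit api_key to route and bill through your HF account
# (Option 2).
client = InferenceClient(
    provider="deepinfra",
    api_key=os.environ["DEEPINFRA_API_KEY"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message)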
The Hugging Face router is OpenAI-compatible, so existing OpenAI code works with one line changed — point base_url at the HF router:
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[{"role": "user", "content": "Hello!"}],
)
The only thing that changes is the :deepinfra suffix on the model id.
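Streaming works through the router too, using the standard OpenAI pattern. A quick sketch reusing the client from the snippet above:

stream = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    # Chunks arrive as OpenAI-style deltas; some carry no content.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)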
If you already use DeepInfra, nothing changes — your existing API and account work exactly as they always have. What's new is reach.