FLUX.2 is live! High-fidelity image generation made simple.

Many users requested longer context models to help them summarize bigger chunks of text or write novels with ease.
We're proud to announce our long-context model selection, which will keep growing in the coming weeks.
Mistral-based models have a context size of 32k, and Amazon recently released a model fine-tuned specifically for longer contexts.
We also recently released the highly praised Yi models. Keep in mind they don't support chat, only old-school text completion (new models are in the works).
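Since the Yi models accept text completion rather than chat, a request to them carries a raw `prompt` string instead of a `messages` array. Here is a minimal sketch of building such a payload for an OpenAI-compatible completions endpoint; the model name and parameter set are illustrative assumptions, so check the API docs for the exact values.

```python
import json

def build_completion_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Serialize a text-completion payload (note: no `messages` array).

    Model name below is illustrative; endpoint specifics are assumptions.
    """
    payload = {
        "model": model,        # e.g. "01-ai/Yi-34B" (illustrative)
        "prompt": prompt,      # raw prompt string, old-school completion style
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

req = build_completion_request("01-ai/Yi-34B", "Once upon a time")
```

The key difference from a chat request is structural: a completion payload has a `prompt` field and no `messages` list.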
Introducing GPU Instances: On-Demand GPU Compute for AI Workloads
Launch dedicated GPU containers in minutes with our new GPU Instances feature, designed for machine learning training, inference, and compute-intensive workloads.
Best API for Kimi K2.5: Why DeepInfra Leads in Speed, TTFT, and Scalability
Kimi K2.5 is positioned as Moonshot AI's "do-it-all" model for modern product workflows: native multimodality (text + vision/video), Instant vs. Thinking modes, and support for agentic / multi-agent ("swarm") execution patterns. In real applications, though, model capability is only half the story. The provider's inference stack determines the things your users actually feel: time-to-first-token (TTFT), […]
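TTFT can be measured client-side by timing how long a streaming response takes to yield its first chunk. A minimal sketch follows; the stream here is simulated with a generator, and real measurements would also include network and provider-side queueing latency.

```python
import time
from typing import Iterable, Tuple

def measure_ttft(stream: Iterable[str]) -> Tuple[float, str]:
    """Return (seconds until the first chunk arrived, full concatenated text)."""
    start = time.monotonic()
    ttft = None
    parts = []
    for chunk in stream:
        if ttft is None:
            # First chunk observed: record time-to-first-token.
            ttft = time.monotonic() - start
        parts.append(chunk)
    return (ttft if ttft is not None else float("inf")), "".join(parts)

def fake_stream():
    # Simulated model output: a short delay before the first chunk.
    time.sleep(0.05)
    for tok in ["Hello", ", ", "world"]:
        yield tok

ttft, text = measure_ttft(fake_stream())
```

The same helper works with any iterator of text chunks, such as the chunks of a real streaming API response.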
How to deploy Databricks Dolly v2 12b, an instruction-tuned causal language model
Databricks Dolly is an instruction-tuned, 12-billion-parameter causal language model based on EleutherAI's pythia-12b.
It was pretrained on The Pile, GPT-J's pretraining corpus.
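Because Dolly is instruction-tuned, prompts are typically wrapped in an instruction/response template before generation. Below is a sketch of that formatting; treat the exact template wording as an assumption and verify it against the model card. The actual model load is left commented out, since it requires a large GPU.

```python
# Assumed instruction template in the style used with Dolly v2; verify
# the exact wording against the databricks/dolly-v2-12b model card.
INTRO = ("Below is an instruction that describes a task. "
         "Write a response that appropriately completes the request.")

def format_dolly_prompt(instruction: str) -> str:
    """Wrap a plain instruction in the instruction/response template."""
    return f"{INTRO}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = format_dolly_prompt("Explain what a causal language model is.")

# To actually run the model (needs substantial GPU memory):
# from transformers import pipeline
# generate = pipeline(model="databricks/dolly-v2-12b",
#                     torch_dtype="bfloat16", trust_remote_code=True,
#                     device_map="auto")
# print(generate(prompt))
```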
[databricks-dolly-15k](http...
© 2026 Deep Infra. All rights reserved.