
Qwen3.5 122B A10B is Alibaba Cloud's mid-tier multimodal foundation model, released in February 2026. It is a vision-language Mixture-of-Experts model supporting text, image, and video inputs, designed for native multimodal agent applications. It features 122 billion total parameters with 10 billion activated per token through a hybrid architecture that integrates Gated Delta Networks with sparse Mixture-of-Experts across 256 experts — delivering high-throughput inference with minimal latency and cost overhead.
The model supports a 262k token context window (extensible to 1M via YaRN), operates in both thinking and non-thinking modes, and offers expanded support for 201 languages and dialects. Qwen3.5 122B A10B scores 42 on the Artificial Analysis Intelligence Index — well above average among comparable models — and is released under the Apache 2.0 license, enabling commercial use and third-party hosting.
Qwen3.5 122B A10B is now available across multiple inference providers — but they’re not created equal. This analysis breaks down which one delivers the best performance, lowest cost, and fastest response times for your use case.
| Provider | Blended ($/1M) | Input ($/1M) | Output ($/1M) | Speed (t/s) | Latency (TTFT) | E2E Response (s) | Function Calling | JSON Mode | Context |
|---|---|---|---|---|---|---|---|---|---|
| DeepInfra (FP8) | $0.94 | $0.29 | $2.90 | 155.5 | 0.59s | 16.67 / 12.86 | Yes | No | 262k |
| Alibaba Cloud | $1.10 | $0.40 | $3.20 | 137.9 | 2.32s | 20.44 / 14.50 | Yes | Yes | 262k |
| Novita | $1.10 | $0.40 | $3.20 | 83.8 | 1.72s | 31.56 / 23.87 | Yes | Yes | 262k |
| GMI (FP8) | $1.10 | $0.40 | $3.20 | 90.7 | 2.42s | 29.98 / 22.04 | Yes | Yes | 262k |
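A useful sanity check on the table: end-to-end response time is approximately TTFT plus output length divided by throughput. The figures in the first E2E column are consistent with a response of roughly 2,500 output tokens — an assumption on our part, not a published parameter of the benchmark — as a quick sketch shows:

```python
def estimated_e2e(ttft_s: float, speed_tps: float, output_tokens: int = 2500) -> float:
    """Estimate end-to-end response time: time-to-first-token plus
    generation time for `output_tokens` at `speed_tps` tokens/second.
    The 2,500-token default is our assumption to match the table."""
    return round(ttft_s + output_tokens / speed_tps, 2)

# Reproducing the first E2E column from TTFT and speed:
print(estimated_e2e(0.59, 155.5))  # DeepInfra  → ~16.67
print(estimated_e2e(2.32, 137.9))  # Alibaba    → ~20.45
print(estimated_e2e(1.72, 83.8))   # Novita     → ~31.55
```

The close agreement suggests the benchmark's E2E figures are dominated by generation time, which is why output speed matters more than TTFT for long responses.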
Based on benchmarks across 4 tracked providers, DeepInfra (FP8) is the recommended API for production-scale Qwen3.5 122B A10B deployment. It ranks #1 across all three primary metrics — speed, latency, and cost — while undercutting the market price by 17%. The only trade-off is the absence of JSON mode, which is worth noting for structured output workflows.
DeepInfra’s FP8 implementation dominates across all key performance and pricing metrics, making it the clear recommendation for the vast majority of production use cases.
At $0.94 per 1M blended tokens, DeepInfra is approximately 17% cheaper than every other provider in the benchmark — all of which are priced at $1.10. Combined with the fastest output speed and the lowest TTFT in the field, it is the only provider that wins across all three critical dimensions simultaneously.
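Blended prices of this kind are typically a weighted average assuming a 3:1 input-to-output token ratio; that weighting reproduces every blended figure in the table from the per-direction prices:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/1M tokens assuming a 3:1 input:output token mix,
    the weighting that reproduces the table's blended column."""
    return round((3 * input_per_m + output_per_m) / 4, 2)

print(blended_price(0.29, 2.90))  # DeepInfra     → 0.94
print(blended_price(0.40, 3.20))  # other three   → 1.10
```

If your workload is output-heavy (e.g. long generations from short prompts), recompute with your own ratio — DeepInfra's advantage grows, since its output price is also the lowest in the field.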
The one trade-off worth flagging: DeepInfra (FP8) does not currently support JSON mode. Developers requiring deterministic structured outputs should either use prompt engineering to enforce JSON structure or consider Alibaba Cloud as an alternative.
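Without native JSON mode, the usual workaround is to instruct the model to emit JSON in the prompt and then validate the reply client-side, retrying on failure. A minimal extraction/validation helper might look like this (the fence-stripping heuristic is our own sketch, not a DeepInfra API feature):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a model reply, tolerating
    markdown code fences the model may wrap around it."""
    # Strip a ```json ... ``` fence if the model added one
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    candidate = fenced.group(1) if fenced else text
    # Fall back to the outermost brace-delimited span
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model reply")
    return json.loads(candidate[start:end + 1])

print(extract_json('```json\n{"sentiment": "positive"}\n```'))
print(extract_json('Sure, here it is: {"score": 0.9}'))
```

In production you would wrap the provider call in a short retry loop, re-prompting with the parse error on `json.JSONDecodeError` or `ValueError`; for hard guarantees, Alibaba Cloud's native JSON mode remains the safer choice.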
As the model’s creator, Alibaba Cloud offers a solid balance of performance and full feature support, making it the natural fallback for teams requiring JSON mode.
Alibaba Cloud delivers competitive throughput (137.9 t/s) with complete feature support including JSON mode. Its latency (2.32s TTFT) is notably higher than DeepInfra, making it less suitable for real-time interactive applications. For batch workloads or structured output pipelines where JSON mode is required, it is the recommended alternative to DeepInfra.
Both Novita and GMI are priced identically at $1.10/1M blended and offer full feature support (Function Calling + JSON Mode), but neither matches DeepInfra on performance.
For developers already integrated into either ecosystem or with specific regional availability requirements, both are viable options. However, given that DeepInfra outperforms both on every metric while costing 17% less, neither represents the optimal choice for new deployments.
For the vast majority of Qwen3.5 122B A10B deployments, DeepInfra (FP8) is the clear recommendation. It ranks #1 in speed, latency, and cost simultaneously — a rare combination in inference provider benchmarks.