
GLM-4.7-Flash is Z.AI’s open-weights reasoning model released in January 2026. Built on a Mixture-of-Experts (MoE) Transformer architecture, it features 30 billion total parameters with only ~3 billion active per inference — making it exceptionally efficient for its capability class. The model is designed as a lightweight, cost-effective alternative to Z.AI’s flagship GLM-4.7, optimized for coding, agentic workflows, and multi-step reasoning tasks.
GLM-4.7-Flash supports up to 200K context tokens and achieves state-of-the-art performance among open-source models in its size category on benchmarks like SWE-Bench and GPQA. Its efficient architecture enables deployment on consumer hardware while still delivering competitive performance against larger proprietary models.
GLM-4.7-Flash is now available across multiple inference providers, but not all of them are created equal. This analysis breaks down which one delivers the best performance, lowest cost, and fastest response times for your use case.
| Provider | Why Notable | Blended ($/1M) | Input ($/1M) | Output ($/1M) | Latency (TTFT) | Speed (t/s) | Context | JSON | Func | E2E (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| DeepInfra | Best overall: lowest cost + lowest latency; JSON mode supported | $0.14 | $0.06 | $0.40 | 0.75s | 74.6 | 203k | Yes | Yes | 34.28 / 26.82 |
| Amazon Bedrock | Best for throughput-intensive workloads; fastest generation speed | $0.15 | $0.07 | $0.40 | 0.90s | 228.6 | 200k | No | Yes | 11.83 / 8.75 |
| Novita | JSON mode + function calling; not recommended due to high latency | $0.15 | $0.07 | $0.40 | 9.49s | 50.1 | 200k | Yes | Yes | 59.35 / 39.90 |
Based on benchmarks across tracked providers, DeepInfra is the recommended API for production-scale GLM-4.7-Flash deployment. It offers the lowest latency (0.75s TTFT), the lowest blended cost ($0.14/1M tokens), and full support for both JSON Mode and Function Calling. For use cases requiring maximum throughput, Amazon Bedrock leads at 228.6 t/s.
DeepInfra holds the top spot for developers prioritizing interactivity and cost-efficiency. In the context of reasoning models — which inherently require thinking time — minimizing network and processing latency is critical. DeepInfra excels here, offering the snappiest response times in the benchmark.
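TTFT numbers like these are straightforward to reproduce yourself: with a streamed completion, time-to-first-token is just the wall-clock gap between sending the request and receiving the first chunk. A minimal sketch of the measurement (the `stream` iterable stands in for any streaming client response, such as an OpenAI-compatible chat stream):

```python
import time

def measure_ttft(stream):
    """Return (time-to-first-token, tokens, total time) for a token stream."""
    start = time.perf_counter()
    tokens, ttft = [], None
    for tok in stream:
        if ttft is None:
            # First chunk arrived: this gap is the TTFT.
            ttft = time.perf_counter() - start
        tokens.append(tok)
    total = time.perf_counter() - start
    return ttft, tokens, total
```

Dividing the token count by `total - ttft` gives the generation speed (t/s) figures quoted in the table.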
With a TTFT of 0.75s, DeepInfra is the only provider suitable for real-time conversational agents — a decisive advantage for chatbots and interactive applications where user retention drops as wait times increase. It also offers the lowest Total Cost of Ownership with an input price of $0.06/1M tokens and a blended rate of $0.14.
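The blended rate follows directly from the per-direction prices once you fix a traffic mix. A quick sanity check, assuming a common 3:1 input:output token ratio (the benchmark's actual mix isn't stated, so treat that ratio as an assumption):

```python
def blended_price(input_per_m, output_per_m, input_share=0.75):
    """Weighted average price ($/1M tokens) for a given input:output mix."""
    return input_per_m * input_share + output_per_m * (1 - input_share)

# DeepInfra: $0.06 in / $0.40 out -> ~$0.145/1M at a 3:1 mix,
# consistent with the listed $0.14 blended rate.
deepinfra_blended = blended_price(0.06, 0.40)
```

Input-heavy workloads (long documents in, short answers out) pull the effective rate even closer to the $0.06 input price.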
Unlike Amazon Bedrock, DeepInfra supports JSON Mode, ensuring reliable structured data extraction for agentic workflows. Combined with Function Calling support, it provides a robust environment for building structured data extraction pipelines. While its throughput (74.6 t/s) is lower than Amazon’s, it is sufficiently fast for most reading-speed applications and significantly outperforms Novita.
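On an OpenAI-compatible endpoint, enabling JSON Mode usually comes down to one extra request field. A hedged sketch of the payload (the model ID shown is illustrative, and the `response_format` value follows the OpenAI-style convention; check the provider's docs for exact names):

```python
import json

def build_json_mode_request(model, prompt):
    """OpenAI-style chat payload asking the server to enforce JSON output."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Reply with a single JSON object only."},
            {"role": "user", "content": prompt},
        ],
        # JSON Mode: the server constrains decoding to valid JSON.
        "response_format": {"type": "json_object"},
    }

payload = build_json_mode_request(
    "zai-org/GLM-4.7-Flash",
    'Extract the invoice total as {"total": number}.',
)
```

With this constraint in place, the response body can be fed straight into `json.loads` without a repair step.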
Amazon Bedrock demonstrates superior raw computational power, making it the ideal choice for offline tasks where immediate latency is less critical than total completion time.
At 228.6 tokens per second, Bedrock is 3x faster than DeepInfra and 4.5x faster than Novita. Its end-to-end response time for a 500-token output is just 11.83 seconds. For applications requiring bulk text generation where the user is not waiting for the first word to appear — such as background report generation — Amazon’s infrastructure is unmatched.
The key trade-off is the lack of native JSON Mode support. This requires additional prompt engineering and increases the risk of malformed outputs in structured workflows, making it less ideal for complex agentic integrations despite the raw speed advantage.
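Without JSON Mode, the usual workaround is defensive parsing: strip any markdown fences the model wraps around its answer, fall back to the first `{...}` span, and only then attempt to decode. A minimal sketch of that repair step (a generic pattern, not Bedrock-specific behavior):

```python
import json
import re

def parse_model_json(text):
    """Best-effort JSON extraction from free-form model output."""
    # Drop markdown code fences like ```json ... ```
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    # Fall back to the first {...} span if prose surrounds the object.
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match:
        cleaned = match.group(0)
    return json.loads(cleaned)  # raises ValueError if still malformed
```

In production you would wrap this in a retry loop that re-prompts the model on a decode failure, which is exactly the extra engineering that native JSON Mode avoids.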
While Novita offers a feature set comparable to DeepInfra — supporting both JSON Mode and Function Calling — it currently struggles with infrastructure performance.
A TTFT of 9.49 seconds makes this provider unusable for real-time user-facing applications — nearly 13x slower than DeepInfra. It also ranks last in generation speed (50.1 t/s) while matching Amazon’s higher price point ($0.15 blended). Novita is best reserved as a backup provider or for specific non-time-sensitive workloads where other providers are unavailable.
| Metric | DeepInfra | Amazon Bedrock | Novita |
|---|---|---|---|
| Latency (TTFT) | 0.75s ✓ Winner | 0.90s | 9.49s |
| Output Speed (t/s) | 74.6 | 228.6 ✓ Winner | 50.1 |
| Blended Price (/1M) | $0.14 ✓ Winner | $0.15 | $0.15 |
| JSON Mode | Yes ✓ | No | Yes |
| Function Calling | Yes | Yes | Yes |
| Context Window | 203k ✓ | 200k | 200k |
**Which provider offers the cheapest GLM-4.7-Flash API?**
DeepInfra is the cheapest provider, with a blended price of $0.14 per million tokens and an input price of $0.06 per million tokens.
**Does Amazon Bedrock support JSON Mode for GLM-4.7-Flash?**
No. As of the current benchmark, Amazon Bedrock supports Function Calling but does not list native JSON Mode support for this model, unlike DeepInfra and Novita.
**Which provider is the fastest?**
It depends on the metric. DeepInfra is the fastest to start (lowest latency/TTFT at 0.75s), making it best for chat and interactive applications. Amazon Bedrock is the fastest to finish (highest throughput at 228.6 t/s), making it best for generating long documents or code blocks.
**Why is GLM-4.7-Flash so efficient?**
GLM-4.7-Flash uses a Mixture-of-Experts (MoE) architecture with 30B total parameters but only ~3B active per inference. This design allows it to deliver strong performance while requiring significantly less compute than dense models of comparable capability.
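The efficiency claim comes from sparse routing: a small gating network scores all experts per token, but only the top few actually run. A toy sketch of top-k gating (expert count and k are illustrative; GLM-4.7-Flash's actual router configuration isn't published here):

```python
import math

def top_k_route(gate_logits, k=2):
    """Pick the k highest-scoring experts; softmax-normalize their weights."""
    top = sorted(range(len(gate_logits)),
                 key=lambda i: gate_logits[i], reverse=True)[:k]
    exp = [math.exp(gate_logits[i]) for i in top]
    total = sum(exp)
    return {i: e / total for i, e in zip(top, exp)}

def moe_forward(x, experts, gate_logits, k=2):
    """Run only the routed experts; the rest stay idle (the '~3B active' effect)."""
    weights = top_k_route(gate_logits, k)
    return sum(w * experts[i](x) for i, w in weights.items())
```

With, say, 8 experts and k=2, each token touches only a quarter of the expert parameters, which is why a 30B-total model can cost roughly as much to serve as a ~3B dense one.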
For the vast majority of GLM-4.7-Flash use cases, DeepInfra is the recommended provider. It successfully balances the key pillars of API performance: fastest to start (0.75s latency), cheapest to run ($0.14/1M tokens), and full support for JSON Mode and Function Calling.
© 2026 Deep Infra. All rights reserved.