
GLM-4.7-Flash is Z.AI’s open-weights reasoning model released in January 2026. Built on a Mixture-of-Experts (MoE) Transformer architecture, it features 30 billion total parameters with only ~3 billion active per inference — making it exceptionally efficient for its capability class. The model is designed as a lightweight, cost-effective alternative to Z.AI’s flagship GLM-4.7, optimized for coding, agentic workflows, and multi-step reasoning tasks.
GLM-4.7-Flash supports up to 200K context tokens and achieves state-of-the-art performance among open-source models in its size category on benchmarks like SWE-Bench and GPQA. Its efficient architecture enables deployment on consumer hardware while still delivering competitive performance against larger proprietary models.
GLM-4.7-Flash is now available across multiple inference providers — but they’re not created equal. This analysis breaks down which one delivers the best performance, lowest cost, and fastest response times for your use case.
| Provider | Why Notable | Blended ($/1M) | Input ($/1M) | Output ($/1M) | Latency (TTFT) | Speed (t/s) | Context | JSON Mode | Function Calling | E2E (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| DeepInfra | Best overall: lowest cost + lowest latency; JSON mode supported | $0.14 | $0.06 | $0.40 | 0.75s | 74.6 | 203k | Yes | Yes | 34.28 / 26.82 |
| Amazon Bedrock | Best for throughput-intensive workloads; fastest generation speed | $0.15 | $0.07 | $0.40 | 0.90s | 228.6 | 200k | No | Yes | 11.83 / 8.75 |
| Novita | JSON mode + function calling; not recommended due to high latency | $0.15 | $0.07 | $0.40 | 9.49s | 50.1 | 200k | Yes | Yes | 59.35 / 39.90 |
Based on benchmarks across tracked providers, DeepInfra is the recommended API for production-scale GLM-4.7-Flash deployment. It offers the lowest latency (0.75s TTFT), the lowest blended cost ($0.14/1M tokens), and full support for both JSON Mode and Function Calling. For use cases requiring maximum throughput, Amazon Bedrock leads at 228.6 t/s.
DeepInfra holds the top spot for developers prioritizing interactivity and cost-efficiency. In the context of reasoning models — which inherently require thinking time — minimizing network and processing latency is critical. DeepInfra excels here, offering the snappiest response times in the benchmark.
With a TTFT of 0.75s, DeepInfra delivers the fastest first token of the three providers, a decisive advantage for chatbots and interactive applications where user retention drops as wait times increase. It also offers the lowest total cost of ownership, with an input price of $0.06/1M tokens and a blended rate of $0.14/1M.
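To make the cost comparison concrete, the sketch below derives a blended per-million-token price from the table's input and output rates. The 3:1 input:output mix (`input_share=0.75`) is an illustrative assumption, not necessarily the blend the benchmark uses, so the results only roughly reproduce the table's blended figures.

```python
# Blended $/1M-token price from per-direction rates, using the prices in
# the comparison table. The 3:1 input:output mix (input_share=0.75) is an
# illustrative assumption, not necessarily the blend the benchmark uses.
PRICES = {  # name: ($/1M input tokens, $/1M output tokens)
    "DeepInfra": (0.06, 0.40),
    "Amazon Bedrock": (0.07, 0.40),
    "Novita": (0.07, 0.40),
}

def blended_cost(input_price: float, output_price: float, input_share: float = 0.75) -> float:
    """Cost of 1M tokens when input_share of them are input tokens."""
    return input_share * input_price + (1 - input_share) * output_price

for name, (inp, out) in PRICES.items():
    print(f"{name:14s} ${blended_cost(inp, out):.3f} per 1M tokens")
```

At this mix DeepInfra comes out at about $0.145/1M versus $0.1525/1M for the other two, matching the table's ordering even though the exact blended figure depends on the assumed mix.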
Unlike Amazon Bedrock, DeepInfra supports JSON Mode, ensuring reliable structured output for agentic workflows. Combined with Function Calling support, it provides a robust environment for building tool-using agents and data extraction pipelines. While its throughput (74.6 t/s) is lower than Bedrock's, it is comfortably above typical reading speed and significantly outperforms Novita.
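As a sketch of what JSON Mode looks like in practice, the snippet below builds a request body for DeepInfra's OpenAI-compatible chat completions endpoint. The model slug `zai-org/GLM-4.7-Flash` is an assumption; check the provider's model list for the exact identifier.

```python
import json

# Sketch: a JSON Mode request body for DeepInfra's OpenAI-compatible
# chat completions endpoint (POST <base>/chat/completions, with base
# https://api.deepinfra.com/v1/openai). The model slug below is an
# assumption; check the provider's model list for the exact identifier.
payload = {
    "model": "zai-org/GLM-4.7-Flash",  # assumed slug
    "response_format": {"type": "json_object"},  # JSON Mode: output must parse
    "messages": [
        {"role": "system", "content": "Reply with a JSON object only."},
        {
            "role": "user",
            "content": "Extract the model name and release year from: "
            "'GLM-4.7-Flash was released in January 2026.'",
        },
    ],
}

def parse_json_reply(content: str) -> dict:
    """With JSON Mode enabled, the assistant message body is valid JSON."""
    return json.loads(content)

# What a well-formed reply body would parse to:
print(parse_json_reply('{"name": "GLM-4.7-Flash", "year": 2026}'))
```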
Amazon Bedrock demonstrates superior raw computational power, making it the ideal choice for offline tasks where immediate latency is less critical than total completion time.
At 228.6 tokens per second, Bedrock is 3x faster than DeepInfra and 4.5x faster than Novita. Its end-to-end response time for a 500-token output is just 11.83 seconds. For applications requiring bulk text generation where the user is not waiting for the first word to appear — such as background report generation — Amazon’s infrastructure is unmatched.
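The throughput gap can be put into numbers with a simple timing model: visible completion time is roughly TTFT plus output tokens divided by generation speed. This back-of-the-envelope sketch counts only visible output tokens and ignores hidden reasoning tokens, which is why the benchmark's measured E2E figures are much larger than these estimates.

```python
# Rough visible-output timing model: total ~= TTFT + n_tokens / speed.
# Ignores hidden reasoning tokens, so the benchmark's measured E2E times
# are substantially larger than these estimates.
def completion_time(ttft_s: float, speed_tps: float, n_tokens: int) -> float:
    """Visible completion time: time to first token plus generation time."""
    return ttft_s + n_tokens / speed_tps

deepinfra = {"ttft": 0.75, "speed": 74.6}
bedrock = {"ttft": 0.90, "speed": 228.6}

for n in (50, 500):
    t_di = completion_time(deepinfra["ttft"], deepinfra["speed"], n)
    t_br = completion_time(bedrock["ttft"], bedrock["speed"], n)
    print(f"{n:4d} tokens: DeepInfra {t_di:.2f}s, Amazon Bedrock {t_br:.2f}s")

# Break-even output length where Bedrock's throughput advantage cancels
# DeepInfra's 0.15s head start on first-token latency:
breakeven = (bedrock["ttft"] - deepinfra["ttft"]) / (
    1 / deepinfra["speed"] - 1 / bedrock["speed"]
)
print(f"Break-even: ~{breakeven:.0f} output tokens")
```

Under this model, Bedrock finishes sooner for any response longer than roughly 17 tokens; DeepInfra's edge is purely in how quickly the first token appears.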
The key trade-off is the lack of native JSON Mode support. This requires additional prompt engineering and increases the risk of malformed outputs in structured workflows, making it less ideal for complex agentic integrations despite the raw speed advantage.
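Without native JSON Mode, the usual workaround is to instruct the model to emit JSON in the prompt and then parse defensively. A minimal sketch of that parsing step (not Bedrock-specific; the helper name is hypothetical):

```python
import json
import re

# Defensive JSON extraction for providers without native JSON Mode:
# instruct the model to emit JSON in the prompt, then strip markdown code
# fences, isolate the first {...} span, and validate with json.loads so
# malformed output fails loudly instead of corrupting the pipeline.
def extract_json(raw: str) -> dict:
    # Drop markdown code fences if the model wrapped its answer in them.
    cleaned = re.sub(r"```(?:json)?", "", raw).strip()
    # Fall back to the first {...} span if prose surrounds the object.
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

print(extract_json('Sure! Here is the result:\n```json\n{"status": "ok", "items": 3}\n```'))
```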
While Novita offers a feature set comparable to DeepInfra — supporting both JSON Mode and Function Calling — it currently struggles with infrastructure performance.
A TTFT of 9.49 seconds makes this provider unusable for real-time user-facing applications — nearly 13x slower than DeepInfra. It also ranks last in generation speed (50.1 t/s) while matching Amazon’s higher price point ($0.15 blended). Novita is best reserved as a backup provider or for specific non-time-sensitive workloads where other providers are unavailable.
| Metric | DeepInfra | Amazon Bedrock | Novita |
|---|---|---|---|
| Latency (TTFT) | 0.75s ✓ Winner | 0.90s | 9.49s |
| Output Speed (t/s) | 74.6 | 228.6 ✓ Winner | 50.1 |
| Blended Price (/1M) | $0.14 ✓ Winner | $0.15 | $0.15 |
| JSON Mode | Yes ✓ | No | Yes |
| Function Calling | Yes | Yes | Yes |
| Context Window | 203k ✓ | 200k | 200k |
**Which provider offers the cheapest GLM-4.7-Flash API?**

DeepInfra is the cheapest provider with a blended price of $0.14 per million tokens and an input price of $0.06 per million tokens.
**Does Amazon Bedrock support JSON Mode for GLM-4.7-Flash?**

No. As of the current benchmark, Amazon Bedrock supports Function Calling but does not list native JSON Mode support for this model, unlike DeepInfra and Novita.
**Which provider serves GLM-4.7-Flash fastest?**

It depends on the metric. DeepInfra is the fastest to start (lowest latency/TTFT at 0.75s), making it best for chat and interactive applications. Amazon Bedrock is the fastest to finish (highest throughput at 228.6 t/s), making it best for generating long documents or code blocks.
**Why is GLM-4.7-Flash so efficient for its size?**

GLM-4.7-Flash uses a Mixture-of-Experts (MoE) architecture with 30B total parameters but only ~3B active per inference. This design allows it to deliver strong performance while requiring significantly less compute than dense models of comparable capability.
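The efficiency claim is easy to quantify: per-token compute in an MoE model scales with the active parameter count rather than the total.

```python
# Per-token compute in an MoE Transformer scales with ACTIVE parameters,
# not total parameters: only the routed experts run for each token.
total_params_b = 30.0   # total parameters, billions
active_params_b = 3.0   # parameters active per token, billions

active_share = active_params_b / total_params_b
print(f"Active share per token: {active_share:.0%}")
# Per-token FLOPs are therefore roughly those of a ~3B dense model, while
# the full 30B parameter pool still provides specialized expert capacity.
```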
For the vast majority of GLM-4.7-Flash use cases, DeepInfra is the recommended provider. It successfully balances the key pillars of API performance: fastest to start (0.75s latency), cheapest to run ($0.14/1M tokens), and full support for JSON Mode and Function Calling.