
Flan-UL2 is probably the best open-source model available right now for chatbots. In this post we will show you how to get started with it very easily. Flan-UL2 is large: 20B parameters. It is a fine-tuned version of the UL2 model, trained on the Flan dataset. Because it is such a large model, it is not easy to deploy on your own machine, and renting a GPU on AWS will cost you around $1.5 per hour, or about $1080 per month if it runs continuously. With DeepInfra model deployments you only pay for inference time, and we do not charge for cold starts. Our pricing is $0.0005 per second of inference on an Nvidia A100, which translates to about $0.0001 per token generated by Flan-UL2 (i.e., roughly five tokens generated per second).
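To sanity-check those numbers, here is a minimal sketch of the cost arithmetic in Python. The five-tokens-per-second throughput is an assumption implied by the per-second and per-token prices above, not a measured benchmark:

# Rough cost comparison; TOKENS_PER_SECOND is an assumed throughput
# implied by the prices quoted above, not a measured figure.
AWS_GPU_PER_HOUR = 1.50        # renting a GPU on AWS, $/hour
DEEPINFRA_PER_SECOND = 0.0005  # DeepInfra A100 inference, $/second
TOKENS_PER_SECOND = 5          # assumed generation speed

aws_monthly = AWS_GPU_PER_HOUR * 24 * 30
cost_per_token = DEEPINFRA_PER_SECOND / TOKENS_PER_SECOND

print(f"Always-on AWS GPU: ${aws_monthly:.0f}/month")  # ~$1080
print(f"DeepInfra: ${cost_per_token:.4f}/token")       # ~$0.0001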
Also check out the model page at https://deepinfra.com/google/flan-ul2, where you can run inferences directly and find the docs for calling the API via curl.
First, you'll need to get an API key from the DeepInfra dashboard.
You can deploy the google/flan-ul2 model through the web dashboard or the API; it will also be deployed automatically the first time you make an inference request.
You can call it through our REST API. Here's how with curl:
curl -X POST \
-d '{"prompt": "Hello, how are you?"}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer YOUR_API_KEY" \
'https://api.deepinfra.com/v1/inference/google/flan-ul2'
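If you prefer Python, here is an equivalent request using the popular requests library; the endpoint and payload match the curl example above:

import requests

API_KEY = "YOUR_API_KEY"  # get this from the DeepInfra dashboard

resp = requests.post(
    "https://api.deepinfra.com/v1/inference/google/flan-ul2",
    json={"prompt": "Hello, how are you?"},
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
print(resp.json())  # the generated text is in the JSON response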
For the full documentation on how to call this model, check out the model page on the DeepInfra website or the API documentation.
If you want a list of all the models you can use on DeepInfra, you can visit the models page on our website or use the API to get a list of available models.
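For example, here is a short Python sketch that fetches the public model list; the /models/list endpoint path is our best guess and worth verifying against the current API docs:

import requests

# NOTE: the endpoint path below is an assumption; double-check it
# against the current DeepInfra API documentation.
resp = requests.get("https://api.deepinfra.com/models/list")
resp.raise_for_status()
for model in resp.json():
    print(model)  # each entry describes one available model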
There is no easier way to get started with arguably one of the best open-source LLMs. That was quite easy, right? You did not have to deal with Docker, Transformers, PyTorch, and so on. If you have any questions, just reach out to us on our Discord server.