
The text-generation-inference open source project by Hugging Face looked like a promising framework for serving large language models (LLMs). However, Hugging Face announced that it will change the license of the code with version v1.0.0. While the previous license, Apache 2.0, was permissive, the new one is restrictive for our use cases.
We decided to fork the project and continue maintaining it under the Apache 2.0 license. We will keep contributing to the project and keep it up to date. We will accept pull requests from the community, and we will keep the project truly open source and free to use.
Here is a link to the code: https://github.com/deepinfra/text-generation-inference
We hope that, in time, a community of developers and organizations that want to keep this project truly open source will form around it.
Sadly, it is becoming more and more common for popular open source projects to change their license after they gain traction. This happened with MongoDB, Grafana, Elasticsearch, and many others. As a developer, when you decide to adopt a particular open source project, you start investing time and effort into using it. You build your application around it, and you start depending on it. Then, suddenly, the license changes, and you might be forced to find an alternative.
Imagine if Meta changed the license of PyTorch, or if Hugging Face decided tomorrow to change the license of transformers in a similar way to prohibit commercial use.
We believe that changing the license of an open source project mid-flight is an unfriendly move towards the community.
If you need any help, just reach out to us on our Discord server.