
Databricks Dolly is an instruction-tuned, 12-billion-parameter causal language model based on EleutherAI's pythia-12b. It was pretrained on The Pile, GPT-J's pretraining corpus, and then fine-tuned on databricks-dolly-15k, an open-source instruction-following dataset.
To get started, you'll need an API key from DeepInfra.
You can deploy the databricks/dolly-v2-12b model easily through the web dashboard or by using our API. The model will be automatically deployed when you first run an inference request.
You can use it with our REST API. Here's how to call the model using curl:
curl -X POST \
-d '{"prompt": "Who is Elvis Presley?"}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer YOUR_API_KEY" \
'https://api.deepinfra.com/v1/inference/databricks/dolly-v2-12b'
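The same request can be made from Python. The sketch below uses only the standard library (`urllib.request`) and mirrors the curl example above; the `build_request` helper is our own illustrative name, not part of any DeepInfra SDK.

```python
import json
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/databricks/dolly-v2-12b"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a POST request whose JSON body matches the curl example."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending the request requires a valid API key and network access:
# with urllib.request.urlopen(build_request("Who is Elvis Presley?", "YOUR_API_KEY")) as resp:
#     print(json.load(resp))
```

Any HTTP client works the same way; the endpoint only needs the JSON body and the two headers shown.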
We charge for inference by execution time, at $0.0005 per second. Inference runs on NVIDIA A100 cards. For full documentation on calling this model, see the model page on our website.
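To make the per-second pricing concrete, here is a tiny cost calculator; the rate is the one quoted above, and `request_cost` is just an illustrative helper name.

```python
PRICE_PER_SECOND = 0.0005  # USD per second of inference, per the pricing above

def request_cost(execution_seconds: float) -> float:
    """Cost in USD for a single inference request."""
    return execution_seconds * PRICE_PER_SECOND

# A request that runs for 4 seconds costs 4 * 0.0005 = $0.002.
```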
You can browse all available models on our models page.
If you have any questions, just reach out to us on our Discord server.