Inference LoRA adapter model
Published on 2024.12.06 by Askar Aitzhan

Understanding LoRA inference

Concepts

  • Base model: The original pretrained model that serves as the starting point.
  • LoRA adapter model: A small add-on model (a set of low-rank weight updates) that adapts the base model to a specific task.
  • LoRA rank: The rank of the low-rank matrices used to adapt the base model's weights; it determines the size of the adapter (see the sketch after this list).
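To make the rank concrete, here is a minimal, illustrative sketch of the LoRA idea in PyTorch (not DeepInfra's implementation): the adapter stores two small matrices A and B whose product, scaled by alpha / r, is added to a frozen base weight. The dimensions, rank, and alpha below are arbitrary example values.

```python
# Illustrative sketch of the LoRA idea (not DeepInfra's implementation).
# The effective weight is W' = W + (alpha / r) * B @ A, with A and B low-rank.
import torch

d_out, d_in = 4096, 4096   # shape of one frozen base-model weight matrix (example)
r = 16                     # LoRA rank: inner dimension of the low-rank update
alpha = 32                 # scaling factor typically stored with the adapter

W = torch.randn(d_out, d_in)        # frozen base weight
A = torch.randn(r, d_in) * 0.01     # trainable factor, shape (r, d_in)
B = torch.zeros(d_out, r)           # trainable factor, shape (d_out, r)

W_adapted = W + (alpha / r) * (B @ A)   # weight actually used at inference time

# The adapter only ships A and B: r * (d_in + d_out) parameters
# instead of d_in * d_out for a full fine-tune of this matrix.
print(A.numel() + B.numel(), "adapter params vs", W.numel(), "full-matrix params")
```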

What you need to run inference with a LoRA adapter model

  1. A supported base model
  2. A LoRA adapter model hosted on Hugging Face (see the sketch after this list for a quick way to check the adapter repository)
  3. A Hugging Face token, if the LoRA adapter model is private
  4. A DeepInfra account
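Before creating the deployment, you can check that the adapter repository is reachable (and, for private repos, that your token works) and see which base model it was trained against. This is a minimal sketch using the huggingface_hub library and assumes a PEFT-style adapter that ships an adapter_config.json; the repository name and token are placeholders.

```python
# Sketch: verify a (possibly private) LoRA adapter repo on Hugging Face and
# read which base model it expects. Repo name and token are placeholders.
import json
from huggingface_hub import hf_hub_download

ADAPTER_REPO = "your-org/your-lora-adapter"   # placeholder repository name
HF_TOKEN = None                               # set to your token for private repos

config_path = hf_hub_download(
    repo_id=ADAPTER_REPO,
    filename="adapter_config.json",   # standard file in PEFT-style adapters
    token=HF_TOKEN,
)

with open(config_path) as f:
    adapter_config = json.load(f)

# The base model must be one of the base models DeepInfra supports.
print("Base model:", adapter_config.get("base_model_name_or_path"))
print("LoRA rank (r):", adapter_config.get("r"))
```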

How to run inference with a LoRA adapter in DeepInfra

  1. Go to the dashboard
  2. Click on the 'New Deployment' button
  3. Click on the 'LoRA Model' tab
  4. Fill in the form:
    • LoRA model name: the name you will use to reference the deployment
    • Hugging Face Model Name: the adapter's repository name on Hugging Face
    • Hugging Face Token: (optional) a Hugging Face token, required if the LoRA adapter model is private
  5. Click on the 'Upload' button

Note: The supported base models are listed on the same page. If you need a base model that is not listed, please contact us at feedback@deepinfra.com
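Once the deployment is created, you query it like any other DeepInfra model. The sketch below assumes a chat-style LoRA, uses DeepInfra's OpenAI-compatible endpoint, and assumes the model parameter is the LoRA model name you entered in the form; the deployment name and API key are placeholders.

```python
# Sketch: calling a deployed LoRA model through DeepInfra's OpenAI-compatible
# endpoint. Deployment name and API key below are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPINFRA_API_KEY",
    base_url="https://api.deepinfra.com/v1/openai",
)

response = client.chat.completions.create(
    model="your-org/your-lora-deployment",  # the LoRA model name from the form
    messages=[{"role": "user", "content": "Hello from my LoRA adapter!"}],
)
print(response.choices[0].message.content)
```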

Rate limits on LoRA adapter models

The rate limit applies to the combined traffic of all LoRA adapter models that share the same base model. For example, if you have two LoRA adapter models on the same base model and a rate limit of 200, those two adapters together share the limit of 200.

Pricing of LoRA adapter models

Pricing is 50% higher than the base model's price.

How does LoRA adapter model speed compare to base model speed?

A LoRA adapter model is slower than its base model because applying the adapter adds compute and memory overhead. In our benchmarks, LoRA adapter models run about 50-60% slower than the base model.

How can you make a LoRA adapter model faster?

You can merge the LoRA adapter weights into the base model to eliminate the adapter overhead, then serve the merged model as a custom deployment; its speed will be close to the base model's.
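A common way to do the merge is with the PEFT library: merge_and_unload() folds the low-rank update into the base weights, and the result can be saved and served as a regular custom deployment. The base model and adapter names below are placeholders, and loading assumes you have enough memory for the full model.

```python
# Sketch: merging a LoRA adapter into its base model with PEFT so the merged
# weights can be served as a regular (custom) deployment. Names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"   # placeholder base model
ADAPTER = "your-org/your-lora-adapter"         # placeholder adapter repo

base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, ADAPTER)

merged = model.merge_and_unload()   # folds (alpha / r) * B @ A into the base weights

merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained(BASE).save_pretrained("merged-model")
```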
