

The easiest way to build AI applications with Llama 2 LLMs.
Published on 2023.08.02 by Nikola Borisov

The long-awaited Llama 2 models are finally here! We are excited to show you how to use them with DeepInfra. This collection of models represents the state of the art in open-source language models. They are made available by Meta AI, and the license allows you to use them for commercial purposes. So now is the time to build your next AI application with Llama 2 hosted by DeepInfra, and save a ton of money compared to OpenAI's API.

Picking the right model

There are three different sizes of Llama 2 models, as well as a chat-tuned variant of each size:

- meta-llama/Llama-2-7b-hf and meta-llama/Llama-2-7b-chat-hf
- meta-llama/Llama-2-13b-hf and meta-llama/Llama-2-13b-chat-hf
- meta-llama/Llama-2-70b-hf and meta-llama/Llama-2-70b-chat-hf

Depending on the application you are building, you might want a different model. Smaller models are faster and cheaper to run per generated token; larger models take longer and cost more per token, but they are more accurate.
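Switching sizes only means changing the model name in the endpoint URL. Here is a minimal Python sketch, assuming each model is exposed under the same /v1/inference/<model-id> pattern used in the curl example below (the helper name is ours):

# Assumption: every Llama 2 chat model is served under the same
# inference path that the 7B chat example below uses.
BASE_URL = "https://api.deepinfra.com/v1/inference/"

CHAT_MODELS = {
    "7b": "meta-llama/Llama-2-7b-chat-hf",    # fastest, cheapest per token
    "13b": "meta-llama/Llama-2-13b-chat-hf",
    "70b": "meta-llama/Llama-2-70b-chat-hf",  # slowest, most accurate
}

def endpoint(size: str) -> str:
    # Build the inference URL for the chosen model size.
    return BASE_URL + CHAT_MODELS[size]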

Getting started

Simply create an account on DeepInfra and get yourself an API Key.

# set the API key as an environment variable
export AUTH_TOKEN=<your-api-key>

Each model has a detailed API documentation page that will guide you through the process of using it. For example, here is the API documentation for the llama-2-7b-chat model.

Running inference

Making an inference request is as easy as sending a POST request to our API.

curl -X POST \
    -d '{"input": "Who is Bill Clinton?"}'  \
    -H "Authorization: bearer $AUTH_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf'
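The same request from Python, as a minimal sketch using the requests library. The endpoint and payload are copied from the curl command, and the API key is read from the AUTH_TOKEN environment variable set earlier:

import os
import requests

# Same endpoint and body as the curl example above.
resp = requests.post(
    "https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf",
    headers={"Authorization": f"bearer {os.environ['AUTH_TOKEN']}"},
    json={"input": "Who is Bill Clinton?"},
)
resp.raise_for_status()  # fail loudly on a non-2xx response
print(resp.json())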

And you will get output like this:

{
   "inference_status" : {
      "cost" : 0.00454849982634187,
      "runtime_ms" : 9097,
      "status" : "succeeded"
   },
   "request_id" : "RKQsJyO5n7ZLif------",
   "results" : [
      {
         "generated_text" : "Who is Bill Clinton?\n\nAnswer: Bill Clinton is an American politician who served as the 42nd President of the United States from 1993 to 2001. He was born on August 19, 1946, in Hope, Arkansas, and grew up in a poor family. Clinton graduated from Georgetown University and received a Rhodes Scholarship to study at Oxford University. He later attended Yale Law School and became a professor of law at the University of Arkansas.\n\nClinton entered politics in the 1970s and served as Attorney General of Arkansas from 1979 to 1981. He was elected Governor of Arkansas in 1982 and served four terms, from 1983 to 1992. In 1992, Clinton was elected President of the United States, defeating incumbent President George H.W. Bush.\n\nDuring his presidency, Clinton implemented several notable policies, including the Don't Ask, Don't Tell Repeal Act, which allowed LGBT individuals to serve openly in the military, and the North American Free"
      }
   ]
}
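A short sketch of pulling out the pieces you usually care about, using the field names from the JSON above (resp is the response object from the Python sketch earlier):

data = resp.json()

# "results" holds the generated completions; "inference_status" holds billing info.
text = data["results"][0]["generated_text"]
cost = data["inference_status"]["cost"]
runtime_ms = data["inference_status"]["runtime_ms"]

print(f"{runtime_ms} ms, ${cost:.6f}")
print(text)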

It is easy to build AI applications with Llama 2 models hosted by DeepInfra.

If you need any help, just reach out to us on our Discord server.
