
The easiest way to build AI applications with Llama 2 LLMs.
Published on 2023.08.02 by Nikola Borisov

The long-awaited Llama 2 models are finally here! We are excited to show you how to use them with DeepInfra. This collection of models represents the state of the art in open-source language models. They are made available by Meta AI under a license that allows commercial use. So now is the time to build your next AI application with Llama 2 hosted by DeepInfra, and save a ton of money compared to OpenAI's API.

Picking the right model

There are three sizes of Llama 2 models (7B, 13B, and 70B parameters), as well as a chat-tuned variant of each size.

Depending on the application you are building, you might want to use a different model. Smaller models are faster and cheaper to run, per token generated. Larger models take longer to run and cost more per token generated, but they are more accurate.
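When you call the API, each size maps to a model ID in the inference URL. Only the 7b-chat ID is confirmed by the example in this post; the other IDs below are assumed to follow the same Hugging Face naming pattern:

```python
# Model IDs used in the DeepInfra inference URL.
# meta-llama/Llama-2-7b-chat-hf appears in this post; the rest are
# assumed to follow the same Hugging Face naming convention.
LLAMA2_MODELS = {
    ("7b", "chat"): "meta-llama/Llama-2-7b-chat-hf",
    ("7b", "base"): "meta-llama/Llama-2-7b-hf",
    ("13b", "chat"): "meta-llama/Llama-2-13b-chat-hf",
    ("13b", "base"): "meta-llama/Llama-2-13b-hf",
    ("70b", "chat"): "meta-llama/Llama-2-70b-chat-hf",
    ("70b", "base"): "meta-llama/Llama-2-70b-hf",
}

def inference_url(size: str, variant: str = "chat") -> str:
    """Build the full inference endpoint URL for a given model size."""
    return f"https://api.deepinfra.com/v1/inference/{LLAMA2_MODELS[(size, variant)]}"
```

Swapping models is then just a matter of changing the URL; the request and response formats stay the same.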

Getting started

Simply create an account on DeepInfra and get yourself an API Key.

# set the API key as an environment variable
AUTH_TOKEN=<your-api-key>

Each model has a detailed API documentation page that will guide you through the process of using it. For example, here is the API documentation for the llama-2-7b-chat model.

Running inference

Making an inference request is as easy as making a POST request to our API.

curl -X POST \
    -d '{"input": "Who is Bill Clinton?"}'  \
    -H "Authorization: bearer $AUTH_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf'

You will get output like this:

{
   "inference_status" : {
      "cost" : 0.00454849982634187,
      "runtime_ms" : 9097,
      "status" : "succeeded"
   },
   "request_id" : "RKQsJyO5n7ZLif------",
   "results" : [
      {
         "generated_text" : "Who is Bill Clinton?\n\nAnswer: Bill Clinton is an American politician who served as the 42nd President of the United States from 1993 to 2001. He was born on August 19, 1946, in Hope, Arkansas, and grew up in a poor family. Clinton graduated from Georgetown University and received a Rhodes Scholarship to study at Oxford University. He later attended Yale Law School and became a professor of law at the University of Arkansas.\n\nClinton entered politics in the 1970s and served as Attorney General of Arkansas from 1979 to 1981. He was elected Governor of Arkansas in 1982 and served four terms, from 1983 to 1992. In 1992, Clinton was elected President of the United States, defeating incumbent President George H.W. Bush.\n\nDuring his presidency, Clinton implemented several notable policies, including the Don't Ask, Don't Tell Repeal Act, which allowed LGBT individuals to serve openly in the military, and the North American Free"
      }
   ]
}
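The same request can be made from Python using only the standard library. This is a minimal sketch assuming the endpoint and response shape shown above; the helper names are our own:

```python
import json
import os
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build the same POST request as the curl example above."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"input": prompt}).encode("utf-8"),
        headers={
            "Authorization": f"bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def extract_text(response_body: bytes) -> str:
    """Pull the generated text out of the JSON response shown above."""
    payload = json.loads(response_body)
    return payload["results"][0]["generated_text"]

if __name__ == "__main__":
    # Reads the same AUTH_TOKEN environment variable set earlier.
    req = build_request("Who is Bill Clinton?", os.environ["AUTH_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        print(extract_text(resp.read()))
```

The `inference_status` object in the response also reports the cost and runtime of the request, which is handy for tracking spend per call.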

It is easy to build AI applications with Llama 2 models hosted by DeepInfra.

If you need any help, just reach out to us on our Discord server.
