
At DeepInfra, we are excited to announce our integration with LlamaIndex. LlamaIndex is a powerful library that lets you index and search documents using a variety of language models and embeddings. In this blog post, we will show you how to chat with books using DeepInfra and LlamaIndex.
We will fetch the text of Fyodor Dostoevsky's "Crime and Punishment" from Project Gutenberg, then use the Meta Llama 3 70B language model together with the MiniLM embedding model to chat with the book.
First, let's create a virtual environment and activate it:
python3 -m venv venv
source venv/bin/activate
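If you're on Windows, activate it with:
venv\Scripts\activate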
Here are the required packages to install:
llama-index
llama-index-llms-deepinfra
llama-index-embeddings-deepinfra
Let's install them:
pip install llama-index llama-index-llms-deepinfra llama-index-embeddings-deepinfra
Before getting started, we also need a DeepInfra API key. You can create one from your DeepInfra dashboard.
Let's create a .env file in the root directory of the project and add the following line:
DEEPINFRA_API_TOKEN=YOUR_DEEPINFRA_API_KEY
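This step is optional, but you can sanity-check the token with a one-off completion call before building anything bigger. Here is a minimal standalone sketch, using the same model name as the full script below:
from dotenv import load_dotenv, find_dotenv
from llama_index.llms.deepinfra import DeepInfraLLM

load_dotenv(find_dotenv())  # exposes DEEPINFRA_API_TOKEN to the client
llm = DeepInfraLLM(model="meta-llama/Meta-Llama-3-70B-Instruct")
print(llm.complete("Say hello in one short sentence."))
If that prints a greeting, your key is wired up correctly.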
Here's a Python script to chat with the book "Crime and Punishment":
import re
import requests
from dotenv import load_dotenv, find_dotenv

# Load DEEPINFRA_API_TOKEN from .env so the DeepInfra clients can read it.
_ = load_dotenv(find_dotenv())

from llama_index.core import VectorStoreIndex, Document
from llama_index.llms.deepinfra import DeepInfraLLM
from llama_index.embeddings.deepinfra import DeepInfraEmbeddingModel

LLM = "meta-llama/Meta-Llama-3-70B-Instruct"
EMBEDDING = "sentence-transformers/all-MiniLM-L12-v2"
BOOK_TITLE = "Crime and Punishment"

def maybe_get_gutenberg_book_id(title):
    """Search the Gutendex API and return the first matching book id, or None."""
    url = f"https://gutendex.com/books/?search={title}"
    response = requests.get(url)
    books = response.json()["results"]
    for book in books:
        if title.lower() in book["title"].lower():
            return book["id"]
    return None

def get_document(book_id):
    """Download the plain-text edition from Project Gutenberg as a Document."""
    url = f"https://www.gutenberg.org/files/{book_id}/{book_id}-0.txt"
    response = requests.get(url)
    text = response.text
    # Strip non-ASCII characters so the text embeds cleanly.
    text = re.sub(r"[^\x00-\x7F]+", "", text)
    return Document(text=text)

if __name__ == "__main__":
    llm = DeepInfraLLM(model=LLM, max_tokens=1000)
    embed_model = DeepInfraEmbeddingModel(model_id=EMBEDDING)

    book_id = maybe_get_gutenberg_book_id(BOOK_TITLE)
    if book_id is None:
        raise SystemExit(f"Could not find {BOOK_TITLE!r} on Project Gutenberg.")

    document = get_document(book_id)
    index = VectorStoreIndex.from_documents([document], embed_model=embed_model)
    chat_engine = index.as_chat_engine(
        llm=llm, embed_model=embed_model, max_iterations=20
    )
    response = chat_engine.chat(
        "Summarize the discussion between Raskolnikov and Pyotr Petrovich"
    )
    print(response)
    # The conversation between Raskolnikov and Pyotr Petrovich takes place at the office of...
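Save the script and run it inside the activated environment (the filename chat_with_book.py is just an example):
python chat_with_book.py
If you'd like an ongoing conversation rather than a single question, you can wrap the chat engine in a simple input loop. Here is a small sketch building on the chat_engine created above:
while True:
    question = input("You: ").strip()
    if question.lower() in {"quit", "exit"}:
        break
    print(chat_engine.chat(question))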
Voila! You have successfully chatted with the book "Crime and Punishment" using DeepInfra and LlamaIndex. You can now use this code snippet to chat with any book of your choice. Enjoy reading!
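For example, chatting with another public-domain novel is a one-line change, assuming Gutendex can match the title:
BOOK_TITLE = "War and Peace"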
For more information on using DeepInfra with LlamaIndex, please visit our LLM documentation and Embedding documentation.
Feel free to experiment with other books and questions to explore the capabilities of DeepInfra. See you in the next blog post!
Happy chatting! 📚🦙