Art That Talks Back: A Hands-On Tutorial on Talking Images
Published on 2025.03.07 by Oguz Vuruskaner

Imagine walking through an art gallery where the paintings tell their own stories. That’s exactly what "Talking Images" do. This tutorial shows you how to make art speak using two DeepInfra models:

1. deepseek-ai/Janus-Pro-7B
2. hexgrad/Kokoro-82M

Setting Up the Environment

First, let’s set up your environment. You’ll need these packages. Here’s the content of requirements.txt:

gradio
requests
python-dotenv
pillow
scipy
numpy

Virtual Environment Setup


python -m venv venv && (venv\Scripts\activate.bat 2>nul || source venv/bin/activate) && pip install -r requirements.txt

Create .env File

Next, create a .env file in your project folder. Copy your DEEPINFRA_API_TOKEN into it. Your .env file should look like this:

DEEPINFRA_API_TOKEN=your-api-token-here

Replace your-api-token-here with your actual DeepInfra API token.
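To confirm the token is actually picked up before wiring everything together, a quick sanity check with python-dotenv looks like this (a minimal sketch; the script name is up to you):

import os
from dotenv import load_dotenv, find_dotenv

# Load variables from .env into the process environment.
load_dotenv(find_dotenv())

# Print True if the token was found (without revealing its value).
print("Token loaded:", bool(os.environ.get("DEEPINFRA_API_TOKEN")))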

The Code

Here’s the Python code that makes your images talk. It sends the image to Janus-Pro-7B for a first-person description, then passes that text to Kokoro-82M to turn it into audio.

import os
from io import BytesIO
import gradio as gr
import base64
import requests
from dotenv import load_dotenv, find_dotenv
from scipy.io import wavfile
import numpy as np

_ = load_dotenv(find_dotenv())

# Read the API token once at module load so both helper functions can use it.
api_token = os.environ.get("DEEPINFRA_API_TOKEN")

def analyze_image(image) -> str:
    # Ask Janus-Pro-7B to describe the uploaded image in the first person.
    url = "https://api.deepinfra.com/v1/inference/deepseek-ai/Janus-Pro-7B"
    headers = {"Authorization": f"bearer {api_token}"}
    buffered = BytesIO()
    # Convert RGBA images to RGB before re-encoding.
    if image.mode == "RGBA":
        image = image.convert("RGB")
    # Keep JPEG uploads as JPEG; encode everything else as PNG.
    image_format = "JPEG" if image.format == "JPEG" else "PNG"
    image.save(buffered, format=image_format)
    files = {"image": ("my_image." + image_format.lower(), buffered.getvalue(), f"image/{image_format.lower()}")}
    data = {
        "question": "I am this image. You must describe me in my own voice using 'I'. State my colors, shapes, mood, and any notable features with precise detail. Examples: 'I have clouds,' 'I contain sharp lines.' Be vivid, thorough, and factual."
    }
    response = requests.post(url, headers=headers, files=files, data=data)
    return response.json()["response"]

def text_to_speech(text: str) -> tuple:
    # Send the description to Kokoro-82M and get speech back.
    url = "https://api.deepinfra.com/v1/inference/hexgrad/Kokoro-82M"
    headers = {
        "Authorization": f"bearer {api_token}",
        "Content-Type": "application/json"
    }
    data = {
        "text": text
    }
    response = requests.post(url, json=data, headers=headers)
    res_json = response.json()
    # The "audio" field is a base64 data URI; keep only the payload after the comma.
    audio_base64 = res_json["audio"].split(",")[1]
    audio_bytes = base64.b64decode(audio_base64)
    # Decode the WAV bytes into (sample_rate, numpy array) for gr.Audio.
    audio_io = BytesIO(audio_bytes)
    sample_rate, audio_data = wavfile.read(audio_io)
    return sample_rate, audio_data

def make_image_talk(image):
    description = analyze_image(image)
    sample_rate, audio_data = text_to_speech(description)
    return sample_rate, audio_data

if __name__ == "__main__":
    # Build a simple Gradio UI: image in, audio out.
    interface = gr.Interface(
        fn=make_image_talk,
        inputs=gr.Image(type="pil"),
        outputs=gr.Audio(type="numpy"),
        title="Art That Talks Back",
        description="Upload an image and hear it talk!"
    )
    interface.launch()
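Save the script above (for example as app.py) and run python app.py to launch the interface. If you want to smoke-test the two helpers without the Gradio UI, a small driver script like the following also works; the names app.py, sample.jpg, and sample_voice.wav are just placeholders:

from PIL import Image
from scipy.io import wavfile

import app  # the tutorial script above, saved as app.py

# Describe a local image and print the first-person description.
image = Image.open("sample.jpg")
description = app.analyze_image(image)
print(description)

# Synthesize the description and save it as a WAV file.
sample_rate, audio_data = app.text_to_speech(description)
wavfile.write("sample_voice.wav", sample_rate, audio_data)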

Final Look

[Screenshot of the finished Gradio app: app.jpg]

Try It Yourself!

Ready to hear your own art talk back? Grab an image, run the code, and upload it. Don’t forget to follow us on LinkedIn and on X.
