google/embeddinggemma-300m: $0.002 / 1M tokens
EmbeddingGemma is a 300M parameter multilingual open embedding model from Google DeepMind, designed for efficient deployment even on low-resource devices, producing high-quality text vector representations for tasks such as search, classification, clustering, and semantic similarity.
Settings:
- Service Tier: The service tier used for processing the request. When set to 'priority', the request is processed with higher priority.
- Normalize: Whether to normalize the computed embeddings.
- Dimensions: The number of dimensions in the embedding. If not provided, the model's default is used; if a value larger than the model's default is provided, the embedding is padded with zeros. (Default: empty, 32 ≤ dimensions ≤ 8192)
- Custom Instruction: A custom instruction prepended to each input. If empty, no instruction is used. (Default: empty)
[
  [0, 0.5, 1],
  [1, 0.5, 0]
]
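As a rough sketch of how these settings map onto a request: the example below assumes Deep Infra's OpenAI-compatible embeddings endpoint, the model ID google/embeddinggemma-300m, and that the Normalize setting is passed as an extra body field; check the API reference for the exact endpoint and field names.

# Hypothetical request sketch for the settings above, via the OpenAI Python client.
# Assumptions: Deep Infra's OpenAI-compatible base URL, the model ID below, and
# that the Normalize setting maps to an extra body field named "normalize".
from openai import OpenAI

client = OpenAI(
    api_key="<DEEPINFRA_API_KEY>",
    base_url="https://api.deepinfra.com/v1/openai",
)

resp = client.embeddings.create(
    model="google/embeddinggemma-300m",
    input=["Which planet is known as the Red Planet?"],
    dimensions=768,                      # Dimensions setting (32-8192, padded above the default)
    extra_body={"normalize": True},      # assumed field name for the Normalize setting
)

print(len(resp.data[0].embedding))       # 768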
Model Page: EmbeddingGemma
Resources and Technical Documentation:
Terms of Use: Terms
Authors: Google DeepMind
EmbeddingGemma is a 300M parameter, state-of-the-art for its size, open embedding model from Google, built from Gemma 3 (with T5Gemma initialization) and the same research and technology used to create Gemini models. EmbeddingGemma produces vector representations of text, making it well-suited for search and retrieval tasks, including classification, clustering, and semantic similarity search. This model was trained with data in 100+ spoken languages.
The small size and on-device focus make it possible to deploy in environments with limited resources such as mobile phones, laptops, or desktops, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.
Input:
- Text string, such as a question, a prompt, or a document to be embedded (maximum input context length of 2,048 tokens).
Output:
- Numerical vector representation of the input text, 768 dimensions by default, with smaller options (512, 256, or 128) available by truncating the embedding.
These model weights are designed to be used with Sentence Transformers, using the Gemma 3 implementation from Hugging Face Transformers as the backbone.
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("google/embeddinggemma-300m")
# Run inference with queries and documents
query = "Which planet is known as the Red Planet?"
documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
"Jupiter, the largest planet in our solar system, has a prominent red spot.",
"Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
]
query_embeddings = model.encode_query(query)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# (768,) (4, 768)
# Compute similarities to determine a ranking
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.3011, 0.6359, 0.4930, 0.4889]])
NOTE: EmbeddingGemma activations do not support float16. Please use float32 or bfloat16 as appropriate for your hardware.
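For example, one way to load the model in bfloat16 with Sentence Transformers is to pass torch_dtype through model_kwargs (a minimal sketch; float32 remains the default and works everywhere):

import torch
from sentence_transformers import SentenceTransformer

# Load the backbone weights in bfloat16 (activations do not support float16).
# model_kwargs is forwarded to the underlying Hugging Face model constructor.
model = SentenceTransformer(
    "google/embeddinggemma-300m",
    model_kwargs={"torch_dtype": torch.bfloat16},
)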
This model was trained on a dataset of text data that includes a wide variety of sources, totaling approximately 320 billion tokens.
The combination of these diverse data sources is crucial for training a powerful multilingual embedding model that can handle a wide variety of different tasks and data formats.
Data cleaning and filtering methods were applied to the training data.
EmbeddingGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e); for more details, refer to the Gemma 3 model card.
Training was done using JAX and ML Pathways; for more details, refer to the Gemma 3 model card.
The model was evaluated against a large collection of different datasets and metrics to cover different aspects of text understanding.
MTEB (Multilingual, v2)

| Dimensionality | Mean (Task) | Mean (TaskType) |
|---|---|---|
| 768d | 61.15 | 54.31 |
| 512d | 60.71 | 53.89 |
| 256d | 59.68 | 53.01 |
| 128d | 58.23 | 51.77 |

MTEB (English, v2)

| Dimensionality | Mean (Task) | Mean (TaskType) |
|---|---|---|
| 768d | 68.36 | 64.15 |
| 512d | 67.80 | 63.59 |
| 256d | 66.89 | 62.94 |
| 128d | 65.09 | 61.56 |

MTEB (Code, v1)

| Dimensionality | Mean (Task) | Mean (TaskType) |
|---|---|---|
| 768d | 68.76 | 68.76 |
| 512d | 68.48 | 68.48 |
| 256d | 66.74 | 66.74 |
| 128d | 62.96 | 62.96 |
MTEB (Multilingual, v2)

| Quant config (dimensionality) | Mean (Task) | Mean (TaskType) |
|---|---|---|
| Q4_0 (768d) | 60.62 | 53.61 |
| Q8_0 (768d) | 60.93 | 53.95 |
| Mixed Precision* (768d) | 60.69 | 53.82 |

MTEB (English, v2)

| Quant config (dimensionality) | Mean (Task) | Mean (TaskType) |
|---|---|---|
| Q4_0 (768d) | 67.91 | 63.64 |
| Q8_0 (768d) | 68.13 | 63.85 |
| Mixed Precision* (768d) | 67.95 | 63.83 |

MTEB (Code, v1)

| Quant config (dimensionality) | Mean (Task) | Mean (TaskType) |
|---|---|---|
| Q4_0 (768d) | 67.99 | 67.99 |
| Q8_0 (768d) | 68.70 | 68.70 |
| Mixed Precision* (768d) | 68.03 | 68.03 |
Note: QAT models are evaluated after quantization
* Mixed Precision refers to per-channel quantization with int4 for embeddings, feedforward, and projection layers, and int8 for attention (e4_a8_f4_p4).
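The dimensionality tables above report results at 768, 512, 256, and 128 dimensions; the smaller sizes are obtained by truncating the full 768d output. A minimal sketch using the Sentence Transformers truncate_dim option (assuming a recent library version):

from sentence_transformers import SentenceTransformer

# Sketch: produce 256-dimensional embeddings by truncating the 768d output.
# Smaller dimensions trade a little quality (see the tables above) for lower
# storage cost and faster similarity search.
model = SentenceTransformer("google/embeddinggemma-300m", truncate_dim=256)

query_embedding = model.encode_query("Which planet is known as the Red Planet?")
print(query_embedding.shape)  # (256,)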
EmbeddingGemma can generate optimized embeddings for various use cases—such as document retrieval, question answering, and fact verification—or for specific input types—either a query or a document—using prompts that are prepended to the input strings.
Query prompts follow the form `task: {task description} | query:` where the task description varies by the use case, with the default task description being `search result`. Document-style prompts follow the form `title: {title | "none"} | text:` where the title is either `none` (the default) or the actual title of the document. Note that providing a title, if available, will improve model performance for document prompts but may require manual formatting.
Use the following prompts based on your use case and input data type. These may already be available in the EmbeddingGemma configuration in your modeling framework of choice.
| Use Case (task type enum) | Descriptions | Recommended Prompt |
|---|---|---|
| Retrieval (Query) | Used to generate embeddings that are optimized for document search or information retrieval | task: search result \| query: {content} |
| Retrieval (Document) | | title: {title \| "none"} \| text: {content} |
| Question Answering | | task: question answering \| query: {content} |
| Fact Verification | | task: fact checking \| query: {content} |
| Classification | Used to generate embeddings that are optimized to classify texts according to preset labels | task: classification \| query: {content} |
| Clustering | Used to generate embeddings that are optimized to cluster texts based on their similarities | task: clustering \| query: {content} |
| Semantic Similarity | Used to generate embeddings that are optimized to assess text similarity. This is not intended for retrieval use cases. | task: sentence similarity \| query: {content} |
| Code Retrieval | Used to retrieve a code block based on a natural language query, such as sort an array or reverse a linked list. Embeddings of the code blocks are computed using retrieval_document. | task: code retrieval \| query: {content} |
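With Sentence Transformers, encode_query() and encode_document() already apply the default retrieval prompts; for the other task types, a prompt string from the table can be prepended explicitly via the prompt argument of encode(). A short sketch (the prompt strings are copied from the table; whether they are also exposed as named prompts depends on the model configuration in your framework):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")
texts = ["The movie was a delightful surprise from start to finish."]

# encode() prepends the given prompt to every input before embedding.
classification_embeddings = model.encode(texts, prompt="task: classification | query: ")
clustering_embeddings = model.encode(texts, prompt="task: clustering | query: ")
print(classification_embeddings.shape, clustering_embeddings.shape)  # (1, 768) (1, 768)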
These models have certain limitations that users should be aware of.
Open embedding models have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
- Semantic Similarity: Embeddings optimized to assess text similarity, such as recommendation systems and duplicate detection (see the sketch after this list).
- Classification: Embeddings optimized to classify texts according to preset labels, such as sentiment analysis and spam detection.
- Clustering: Embeddings optimized to cluster texts based on their similarities, such as document organization, market research, and anomaly detection.
- Retrieval: Embeddings optimized for document search and information retrieval, covering both documents and queries.
- Question Answering: Embeddings for questions in a question-answering system, optimized for finding documents that answer the question, such as a chatbot.
- Fact Verification: Embeddings for statements that need to be verified, optimized for retrieving documents that contain evidence supporting or refuting the statement, such as automated fact-checking systems.
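As a sketch of the semantic similarity use case flagged above: both sides of the comparison use the same sentence similarity prompt, unlike the asymmetric query/document prompts used for retrieval.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")

# Symmetric similarity scoring: the same STS prompt is applied to both sentences.
sentences = [
    "Mars is often referred to as the Red Planet.",
    "The Red Planet is a common nickname for Mars.",
]
embeddings = model.encode(sentences, prompt="task: sentence similarity | query: ")
print(model.similarity(embeddings[:1], embeddings[1:]))  # a single high similarity score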
Limitations include:
- Training Data: The quality and diversity of the training data influence the model's capabilities; biases or gaps in the data can limit the quality of the resulting embeddings.
- Language Ambiguity and Nuance: Natural language is inherently complex, and the model may struggle to capture subtle nuances or figurative language.
Risks identified and mitigations:
At the time of release, this family of models provides high-performance open embedding model implementations designed from the ground up for responsible AI development. Using the benchmark evaluation metrics described in this document, these models have shown superior performance to other, comparably sized open model alternatives.