
🚶🏻‍♂️ Getting Started

Here you will learn how to use the fastembed package to embed your data into a vector space. The package is designed to be easy to use and fast. It is built on top of the ONNX standard, which allows for fast inference on a variety of hardware through different ONNX runtimes.
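If you are curious which ONNX runtimes (execution providers) are available on your machine, you can ask onnxruntime directly. This is an optional sanity check, not something fastembed requires; onnxruntime is installed as a fastembed dependency.

import onnxruntime as ort

# List the execution providers (runtimes) ONNX can use on this machine,
# e.g. ['CPUExecutionProvider'] on a CPU-only install
print(ort.get_available_providers())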

Quick Start

The fastembed package is designed to be easy to use. We'll be using the TextEmbedding class. It takes a list of strings as input and returns a generator of vectors.

> 💡 You can learn more about generators from the Python Wiki

!pip install -Uqq fastembed
import numpy as np

from fastembed import TextEmbedding


# Example list of documents
documents: list[str] = [
    "This is built to be faster and lighter than other embedding libraries e.g. Transformers, Sentence-Transformers, etc.",
    "fastembed is supported by and maintained by Qdrant.",
]

# This will trigger the model download and initialization
embedding_model = TextEmbedding()
print("The model BAAI/bge-small-en-v1.5 is ready to use.")

embeddings_generator = embedding_model.embed(documents)
embeddings_list = list(embeddings_generator)
len(embeddings_list[0])  # Vector of 384 dimensions
The model BAAI/bge-small-en-v1.5 is ready to use.

384

> 💡 Why do we use generators?
>
> We use them mostly to save memory. Instead of loading all the vectors into memory at once, we can consume them one by one. This is useful when you have a large dataset and don't want to hold all the vectors in memory at the same time.

embeddings_generator = embedding_model.embed(documents)

for doc, vector in zip(documents, embeddings_generator):
    print("Document:", doc)
    print(f"Vector of type: {type(vector)} with shape: {vector.shape}")
Document: This is built to be faster and lighter than other embedding libraries e.g. Transformers, Sentence-Transformers, etc.
Vector of type: <class 'numpy.ndarray'> with shape: (384,)
Document: fastembed is supported by and maintained by Qdrant.
Vector of type: <class 'numpy.ndarray'> with shape: (384,)

embeddings_list = np.array(list(embedding_model.embed(documents)))
embeddings_list.shape
(2, 384)
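As a quick illustration of how these vectors can be used, here is a small sketch that computes the cosine similarity between the two document embeddings with plain numpy, reusing the embeddings_list array from the cell above; nothing here is fastembed-specific.

# Cosine similarity between the two document vectors computed above
a, b = embeddings_list[0], embeddings_list[1]
similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"Cosine similarity: {similarity:.4f}")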

We're using BAAI/bge-small-en-v1.5, a state-of-the-art Flag Embedding model that performs better than OpenAI's text-embedding-ada-002. We've made it even faster by converting it to the ONNX format and quantizing the model for you.

Format of the Document List

  1. List of Strings: Your documents must be in a list, and each document must be a string.
  2. For Retrieval Tasks with our default model: If you're working with queries and passages, you can add special prefixes to them (see the sketch below):
     - Queries: Add "query:" at the beginning of each query string
     - Passages: Add "passage:" at the beginning of each passage string
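A minimal sketch of what that looks like, reusing the embedding_model instance from the Quick Start above; the query and passage strings here are made up for illustration.

# Prefix queries and passages before embedding them with the default model
query = "query: What is fastembed?"
passages = [
    "passage: fastembed is a lightweight library for generating text embeddings.",
    "passage: It is supported and maintained by Qdrant.",
]

query_vector = list(embedding_model.embed([query]))[0]
passage_vectors = list(embedding_model.embed(passages))

print(query_vector.shape, len(passage_vectors))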

Beyond the default model

The default model is built for speed and efficiency. If you need a more accurate model, you can use the same TextEmbedding class to load any model from our list of supported models. You can see the full list with TextEmbedding.list_supported_models().
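For example, you can print the name and embedding size of every supported model like this. Treat the "model" and "dim" keys as an assumption: the exact dictionary layout may differ between fastembed versions.

# List every supported model with its embedding dimensionality.
# Note: the exact keys in each entry may vary across fastembed versions.
for model_info in TextEmbedding.list_supported_models():
    print(model_info["model"], model_info["dim"])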

multilingual_large_model = TextEmbedding("intfloat/multilingual-e5-large")
np.array(
    list(multilingual_large_model.embed(["Hello, world!", "你好世界", "¡Hola Mundo!", "नमस्ते!"]))
).shape  # Vector of 1024 dimensions
(4, 1024)

Next: Check out how to use FastEmbed with Qdrant for similarity search: FastEmbed with Qdrant