Late Interaction Text Embedding Models
As of version 0.3.0, FastEmbed supports Late Interaction Text Embedding Models, starting with one of the most popular models of the family - ColBERT.
What is a Late Interaction Text Embedding Model?
A Late Interaction Text Embedding Model is a kind of information retrieval model which performs query-document interaction at the scoring stage.
To better understand it, we can compare it to models without interaction.
For instance, if you take a sentence-transformer model, compute embeddings for your documents, compute embeddings for your queries, and just compare them by cosine similarity, then you're retrieving points without interaction.
It is an easy and straightforward approach, however, we might be sacrificing some precision due to its simplicity. This is caused by several facts:
- there is no interaction between queries and documents at the early stage (embedding generation) nor at the late stage (during scoring);
- we are trying to encapsulate all the document information in only one pooled embedding, and obviously, some information might be lost.
Late Interaction Text Embedding Models try to address this by computing embeddings for each token in queries and documents, and then finding the most similar pairs via a model-specific operation, e.g. ColBERT (Contextual Late Interaction over BERT) uses the MaxSim operation. With this approach, we not only get a better representation of the documents, but also make queries and documents more aware of each other.
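To make the MaxSim idea concrete, here is a toy sketch on made-up token embeddings (the numbers are illustrative only, not produced by any real model): for each query token we keep the similarity to its best-matching document token, then sum over query tokens.

```python
import numpy as np

# Toy example: 2 query tokens, 3 document tokens, 4-dimensional vectors
query_tokens = np.array([[1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0]])
doc_tokens = np.array([[0.9, 0.1, 0.0, 0.0],
                       [0.0, 0.2, 0.8, 0.0],
                       [0.1, 0.8, 0.0, 0.1]])

# Pairwise similarities, shape: [num_query_tokens, num_doc_tokens]
similarities = query_tokens @ doc_tokens.T

# MaxSim: best-matching document token per query token, summed over query tokens
score = similarities.max(axis=1).sum()
print(score)  # ≈ 1.7 (0.9 for the first query token + 0.8 for the second)
```

Here the first query token best matches the first document token (similarity 0.9), the second best matches the third (0.8), so the document's score is roughly 1.7.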
For more information on ColBERT and the MaxSim operation, you can check out this blogpost by Jina AI.
ColBERT in FastEmbed
FastEmbed provides a simple way to use ColBERT models, with an interface similar to the one it has for TextEmbedding.
from fastembed import LateInteractionTextEmbedding
LateInteractionTextEmbedding.list_supported_models()
embedding_model = LateInteractionTextEmbedding("colbert-ir/colbertv2.0")
documents = [
"ColBERT is a late interaction text embedding model, however, there are also other models such as TwinBERT.",
"On the contrary to the late interaction models, the early interaction models contains interaction steps at embedding generation process",
]
queries = [
"Are there any other late interaction text embedding models except ColBERT?",
"What is the difference between late interaction and early interaction text embedding models?",
]
NOTE: ColBERT computes query and document embeddings differently; make sure to use the corresponding methods.
document_embeddings = list(
    embedding_model.embed(documents)
)  # embed and query_embed return generators,
# which we need to evaluate by writing them to a list
query_embeddings = list(embedding_model.query_embed(queries))
document_embeddings[0].shape, query_embeddings[0].shape
Don't worry about the query embeddings having a bigger shape in this case. ColBERT authors recommend padding queries with [MASK] tokens up to 32 tokens. They also recommend truncating queries to 32 tokens, however, we don't do that in FastEmbed, so you can pass longer queries as-is.
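For illustration only, here is a rough numpy sketch of what this fixed-length padding means for the shapes. Note that this is not FastEmbed's actual implementation: in real ColBERT, the [MASK] tokens are appended before encoding, so each padding row is a contextual embedding rather than a copy of a single vector; the function and constants below are made up for this sketch.

```python
import numpy as np

QUERY_MAX_LEN = 32  # ColBERT's recommended fixed query length
EMB_DIM = 128       # ColBERTv2 token embeddings are 128-dimensional

def pad_query(token_embeddings: np.ndarray, pad_vector: np.ndarray) -> np.ndarray:
    """Pad a [num_tokens, dim] matrix with pad_vector rows up to [QUERY_MAX_LEN, dim]."""
    num_tokens = token_embeddings.shape[0]
    if num_tokens >= QUERY_MAX_LEN:
        # longer queries are passed through unchanged
        return token_embeddings
    padding = np.tile(pad_vector, (QUERY_MAX_LEN - num_tokens, 1))
    return np.vstack([token_embeddings, padding])

short_query = np.random.rand(9, EMB_DIM)   # a 9-token query
mask_embedding = np.random.rand(EMB_DIM)   # stand-in for a [MASK] embedding
padded = pad_query(short_query, mask_embedding)
print(padded.shape)  # (32, 128)
```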
MaxSim operator
Qdrant will support ColBERT as of the next version (v1.10); at the moment, you can compute embedding similarities manually.
import numpy as np
def compute_relevance_scores(query_embedding: np.ndarray, document_embeddings: np.ndarray, k: int):
    """
    Compute relevance scores for top-k documents given a query.

    :param query_embedding: Numpy array representing the query embedding, shape: [num_query_terms, embedding_dim]
    :param document_embeddings: Numpy array representing embeddings for documents, shape: [num_documents, max_doc_length, embedding_dim]
    :param k: Number of top documents to return
    :return: Indices of the top-k documents based on their relevance scores
    """
    # Compute batch dot-product of query_embedding and document_embeddings
    # Resulting shape: [num_documents, num_query_terms, max_doc_length]
    scores = np.matmul(query_embedding, document_embeddings.transpose(0, 2, 1))

    # Apply max-pooling across document terms (axis=2) to find the max similarity per query term
    # Shape after max-pool: [num_documents, num_query_terms]
    max_scores_per_query_term = np.max(scores, axis=2)

    # Sum the scores across query terms to get the total score for each document
    # Shape after sum: [num_documents]
    total_scores = np.sum(max_scores_per_query_term, axis=1)

    # Sort the documents based on their total scores and get the indices of the top-k documents
    sorted_indices = np.argsort(total_scores)[::-1][:k]

    return sorted_indices
sorted_indices = compute_relevance_scores(
    np.array(query_embeddings[0]), np.array(document_embeddings), k=3
)
print("Sorted document indices:", sorted_indices)
print(f"Query: {queries[0]}")
for index in sorted_indices:
    print(f"Document: {documents[index]}")
Use-case recommendation
Although ColBERT allows computing document embeddings independently and sparing some workload offline, it still requires more resources at scoring time than no-interaction models. Due to this, it might be more reasonable to use ColBERT not as a first-stage retriever, but as a re-ranker.
The first-stage retriever would then be a no-interaction model, which retrieves, e.g., the first 100 or 500 candidates, leaving the final ranking to the ColBERT model.
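Such a two-stage pipeline can be sketched as follows. This is a minimal illustration on synthetic data: random vectors stand in for real model outputs, and all sizes and variable names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic corpus: pooled vectors for the first stage (no-interaction model)
# and per-token matrices for ColBERT-style re-ranking
num_docs, dim, doc_len, query_len = 1000, 128, 20, 32
pooled_doc_embeddings = rng.normal(size=(num_docs, dim))
token_doc_embeddings = rng.normal(size=(num_docs, doc_len, dim))

pooled_query = rng.normal(size=dim)
token_query = rng.normal(size=(query_len, dim))

# Stage 1: cheap dot-product retrieval of the top-100 candidates
first_stage_scores = pooled_doc_embeddings @ pooled_query
candidates = np.argsort(first_stage_scores)[::-1][:100]

# Stage 2: MaxSim re-ranking of only those candidates
candidate_tokens = token_doc_embeddings[candidates]
# Shape: [100, query_len, doc_len]
sim = np.matmul(token_query, candidate_tokens.transpose(0, 2, 1))
maxsim_scores = sim.max(axis=2).sum(axis=1)  # shape: [100]

reranked = candidates[np.argsort(maxsim_scores)[::-1]]
print("Top-5 after re-ranking:", reranked[:5])
```

This way the expensive token-level scoring touches only 100 documents instead of the whole corpus, while document token embeddings can still be precomputed offline.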