FastEmbed Multi-GPU Tutorial
This tutorial demonstrates how to leverage multi-GPU support in FastEmbed. FastEmbed can embed both text and images, using modern GPUs for acceleration. Let's walk through using FastEmbed with multiple GPUs step by step.
Prerequisites
To get started, ensure you have the following:
- Python 3.9 or later
- FastEmbed with GPU support (`pip install fastembed-gpu`)
- Refer to this tutorial if you have issues with GPU dependencies
- Access to a multi-GPU server
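Before embedding on multiple GPUs, it can help to confirm that the CUDA runtime is visible to ONNX Runtime, which FastEmbed uses under the hood. A minimal sanity check, assuming `fastembed-gpu` pulled in `onnxruntime-gpu`:

```python
import onnxruntime as ort

# "CUDAExecutionProvider" should appear in this list if the GPU
# dependencies are set up correctly
print(ort.get_available_providers())
```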
Multi-GPU using the cuda argument with the TextEmbedding model
```python
from fastembed import TextEmbedding

# define the documents to embed
docs = ["hello world", "flag embedding"] * 100

# define gpu ids
device_ids = [0, 1]

if __name__ == "__main__":
    # initialize a TextEmbedding model using CUDA
    text_model = TextEmbedding(
        model_name="sentence-transformers/all-MiniLM-L6-v2",
        cuda=True,
        device_ids=device_ids,
        lazy_load=True,
    )

    # generate embeddings
    text_embeddings = list(text_model.embed(docs, batch_size=2, parallel=len(device_ids)))
    print(text_embeddings)
```
In this snippet:
- `cuda=True` enables GPU acceleration.
- `device_ids=[0, 1]` specifies which GPUs to use. Replace `[0, 1]` with the GPU IDs available on your machine.
- `lazy_load=True` defers loading the model until inference, so each spawned worker loads it onto its own GPU (see the note below).
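For reference, `embed` yields one vector per document, and `all-MiniLM-L6-v2` produces 384-dimensional embeddings. A quick illustrative check of the output (shapes assume the snippet above ran as-is):

```python
# docs contains 200 strings, so we expect 200 embeddings of dimension 384
print(len(text_embeddings))      # 200
print(text_embeddings[0].shape)  # (384,)
```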
NOTE: When using multi-GPU settings, it is important to configure `parallel` and `lazy_load` properly to avoid inefficiencies:

- `parallel`: This parameter enables multi-GPU support by spawning a child process for each GPU listed in `device_ids`. To ensure proper utilization, the value of `parallel` must match the number of GPUs in `device_ids`. If you are using a single GPU, this parameter is not necessary.
- `lazy_load`: Enabling `lazy_load` prevents redundant memory usage. Without it, the model is initially loaded into the memory of the first GPU by the main process; when child processes are then spawned for each GPU, the model is reloaded on the first GPU, causing redundant memory consumption and inefficiencies.
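To illustrate the single-GPU case the note mentions, here is a minimal sketch: with one device, `parallel` and `lazy_load` can simply be left at their defaults (the model name and batch size are just the ones used above):

```python
from fastembed import TextEmbedding

# Single-GPU setup: no child processes are spawned, so `parallel` is
# unnecessary and `lazy_load` can be omitted.
text_model = TextEmbedding(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    cuda=True,
    device_ids=[0],
)

embeddings = list(text_model.embed(["hello world"], batch_size=2))
print(embeddings[0].shape)
```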