`OpenAIEmbedder` is used as the default embedder, but other embedders are supported as well:
- OpenAI
- Cohere
- Gemini
- AWS Bedrock
- Azure OpenAI
- Fireworks
- HuggingFace
- Jina
- Mistral
- Ollama
- Qdrant FastEmbed
- Together
- Voyage AI
Batch Embeddings
Many embedding providers support processing multiple texts in a single API call, known as batch embedding. This approach offers several advantages: it reduces the number of API requests, helps avoid rate limits, and significantly improves performance when processing large amounts of text. To enable batch processing, set the `enable_batch` flag to `True` when configuring your embedder. The `batch_size` parameter controls the number of texts sent per batch. The following embedders support batch processing:
- Azure OpenAI
- Cohere
- Fireworks
- Gemini
- Jina
- Mistral
- Nebius
- OpenAI
- Together
- Voyage
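Conceptually, batch embedding splits the input texts into groups of `batch_size` and sends each group to the provider in a single request. The following is a minimal sketch of that chunking logic; `embed_fn` is a hypothetical stand-in for the provider's embedding call, not part of any real embedder's API:

```python
from typing import Callable, List

def embed_in_batches(
    texts: List[str],
    embed_fn: Callable[[List[str]], List[List[float]]],
    batch_size: int = 100,
) -> List[List[float]]:
    """Embed texts in chunks of batch_size, one provider call per chunk."""
    embeddings: List[List[float]] = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        # One API request covers the whole batch instead of one per text.
        embeddings.extend(embed_fn(batch))
    return embeddings

# Illustrative fake embedder: maps each text to a 1-dimensional "embedding".
fake_embed = lambda batch: [[float(len(t))] for t in batch]
vectors = embed_in_batches(["a", "bb", "ccc"], fake_embed, batch_size=2)
```

With `batch_size=2`, the three texts above are embedded in two requests (a batch of two, then a batch of one) rather than three separate calls.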