M2-BERT 80M 32K Retrieval

An 80M-parameter checkpoint of M2-BERT, pretrained with a sequence length of 32768 and fine-tuned for long-context retrieval.

BGE-Base-EN v1.5

This model maps any text to a low-dimensional dense vector using FlagEmbedding.

M2-BERT 80M 8K Retrieval

An 80M-parameter checkpoint of M2-BERT, pretrained with a sequence length of 8192 and fine-tuned for long-context retrieval.

M2-BERT 80M 2K Retrieval

An 80M-parameter checkpoint of M2-BERT, pretrained with a sequence length of 2048 and fine-tuned for long-context retrieval.

UAE-Large v1

A universal English sentence embedding model by WhereIsAI, producing 1024-dimensional embeddings with a 512-token context length.

BGE-Large-EN v1.5

BAAI's v1.5 large model maps text to dense vectors for retrieval, classification, clustering, semantic search, and vector databases for LLMs.
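All of the models above map text to dense vectors that are compared by similarity at retrieval time, typically cosine similarity. A minimal sketch of that ranking step, using small made-up vectors in place of the 768- or 1024-dimensional embeddings a real model would produce:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two dense embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical pre-computed embeddings; a real model (e.g. BGE or M2-BERT)
# would output far higher-dimensional vectors for query and documents.
query = np.array([0.1, 0.9, 0.2])
docs = {
    "doc_a": np.array([0.1, 0.8, 0.3]),
    "doc_b": np.array([0.9, 0.1, 0.0]),
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine_sim(query, docs[d]), reverse=True)
print(ranked)  # doc_a ranks first: its vector points nearly the same way as the query
```

In practice the document vectors are embedded once and stored in a vector index, and only the query is embedded at search time.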
