An 80M-parameter checkpoint of M2-BERT, pretrained with a sequence length of 32768 and fine-tuned for long-context retrieval.
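As a rough sketch of how such a checkpoint can be used locally: the listing does not name the exact Hugging Face id, so the code below assumes Together's published "togethercomputer/m2-bert-80M-32k-retrieval" and follows the loading pattern from its model card (custom modeling code, fixed-length padding); treat the details as assumptions, not a prescribed client.

```python
# Sketch: embedding a document with an M2-BERT long-context retrieval checkpoint.
# Assumed HF id (not named in this listing): togethercomputer/m2-bert-80M-32k-retrieval
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

max_seq_length = 32768

# M2-BERT ships custom modeling code, so trust_remote_code is required.
model = AutoModelForSequenceClassification.from_pretrained(
    "togethercomputer/m2-bert-80M-32k-retrieval",
    trust_remote_code=True,
)
# The checkpoint reuses the standard BERT vocabulary.
tokenizer = AutoTokenizer.from_pretrained(
    "bert-base-uncased", model_max_length=max_seq_length
)

# The architecture expects inputs padded to the full sequence length.
inputs = tokenizer(
    ["A long document to embed for retrieval..."],
    return_tensors="pt",
    padding="max_length",
    return_token_type_ids=False,
    truncation=True,
    max_length=max_seq_length,
)
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs["sentence_embedding"]  # one dense vector per input text
print(embedding.shape)
```

The 8192 and 2048 variants below load the same way, with max_seq_length set to match.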
This model maps any text to a low-dimensional dense vector using FlagEmbedding.
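A minimal sketch of the FlagEmbedding usage described above; the listing does not name the checkpoint, so "BAAI/bge-base-en-v1.5" is assumed here for illustration.

```python
# Sketch: dense retrieval with the FlagEmbedding library.
from FlagEmbedding import FlagModel

model = FlagModel(
    "BAAI/bge-base-en-v1.5",  # assumed checkpoint id, not named in the listing
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
    use_fp16=True,  # halves memory at a small cost in precision
)

passages = [
    "Dense vectors enable semantic search.",
    "BM25 is a lexical retrieval baseline.",
]
queries = ["What enables semantic search?"]

# encode() embeds passages; encode_queries() prepends the retrieval instruction.
p_emb = model.encode(passages)
q_emb = model.encode_queries(queries)

# Embeddings are normalized, so inner product acts as cosine similarity.
scores = q_emb @ p_emb.T
print(scores)
```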
An 80M-parameter checkpoint of M2-BERT, pretrained with a sequence length of 8192 and fine-tuned for long-context retrieval.
An 80M-parameter checkpoint of M2-BERT, pretrained with a sequence length of 2048 and fine-tuned for long-context retrieval.
A universal English sentence embedding model by WhereIsAI, producing 1024-dimensional embeddings with a 512-token context length.
BAAI's v1.5 large model maps text to dense vectors for retrieval, classification, clustering, semantic search, and vector databases for LLMs.
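Both the WhereIsAI and BAAI entries above are standard sentence-embedding models; one common way to run them locally is the sentence-transformers library, sketched below with semantic search. The Hugging Face ids ("BAAI/bge-large-en-v1.5", "WhereIsAI/UAE-Large-V1") are assumptions, since the listing does not name them.

```python
# Sketch: semantic search with a general-purpose sentence embedding model.
from sentence_transformers import SentenceTransformer, util

# Assumed id for the BAAI v1.5 large entry; "WhereIsAI/UAE-Large-V1"
# would load the same way. (For best BGE retrieval quality, prepend the
# model's recommended query instruction to queries.)
model = SentenceTransformer("BAAI/bge-large-en-v1.5")

corpus = [
    "Vector databases store dense embeddings for LLM applications.",
    "Clustering groups similar documents together.",
]
query = "Where are embeddings stored for LLM apps?"

corpus_emb = model.encode(corpus, convert_to_tensor=True, normalize_embeddings=True)
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

# With normalized vectors, cosine similarity reduces to a dot product.
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(hits[0])  # [{'corpus_id': ..., 'score': ...}]
```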