LLaMA-2 Chat (7B)

Chat-optimized LLM leveraging public datasets and 1M+ human annotations.

LLaMA-2 Chat (13B)

Chat-optimized LLM leveraging public datasets and 1M+ human annotations.

Llama 3 8B Instruct Reference

Auto-regressive LLM with an optimized transformer architecture, tuned with SFT and RLHF to align with human preferences for helpfulness and safety.

Llama 3 8B Instruct Lite

Auto-regressive LLM with an optimized transformer architecture, tuned with SFT and RLHF to align with human preferences for helpfulness and safety.

Llama 3 70B Instruct Turbo

Auto-regressive LLM with an optimized transformer architecture, tuned with SFT and RLHF to align with human preferences for helpfulness and safety.

Llama 3 70B Instruct Reference

Auto-regressive LLM with an optimized transformer architecture, tuned with SFT and RLHF to align with human preferences for helpfulness and safety.

Gryphe MythoMax L2 Lite (13B)

Experimental merge of MythoLogic-L2 and Huginn that uses a tensor-intermingling technique to integrate each model's front and end tensors more effectively.

MythoMax-L2

Experimental merge of MythoLogic-L2 and Huginn that uses a tensor-intermingling technique to integrate each model's front and end tensors more effectively.

Typhoon 2 8B Instruct

Thai instruction-tuned large language model with 8 billion parameters, based on Llama 3.1 8B.

Typhoon 2 70B Instruct

Thai instruction-tuned large language model with 70 billion parameters, based on Llama 3.1 70B.

Qwen 2

Transformer-based decoder-only LLM, pretrained on extensive data, offering improvements over the previous Qwen model.

Nous Hermes 2 - Mixtral 8x7B-DPO

Flagship Nous Research MoE model trained on 1M+ GPT-4 and high-quality open dataset entries, excelling across diverse tasks.

Mixtral-8x22B Instruct v0.1

Instruct fine-tuned version of Mixtral-8x22B-v0.1.

Llama 3.1 70B

Multilingual LLM, pre-trained and instruction-tuned, that outperforms many open and closed models on common industry benchmarks.

Mistral (7B) Instruct v0.3

Instruct fine-tuned version of Mistral-7B-v0.3.

Mistral (7B) Instruct v0.2

Improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.

Mixtral 8x7B Instruct v0.1

Pretrained generative sparse mixture-of-experts model.

Mistral Instruct

Instruct fine-tuned version of Mistral-7B-v0.1.

Qwen2.5 7B Instruct Turbo

Instruction-tuned 7.61B-parameter Qwen2.5 causal LLM with 131K context, RoPE, SwiGLU, RMSNorm, and advanced attention mechanisms.

Gemma-2 Instruct (9B)

Lightweight, state-of-the-art open model from Google, built on the research and technology behind the Gemini models.

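Chat and instruct models like those listed above are commonly served behind an OpenAI-compatible chat-completions API. A minimal sketch of assembling such a request in Python; the endpoint URL, API key, and model identifier here are illustrative assumptions, not taken from this page:

```python
def build_chat_request(model: str, prompt: str, api_key: str):
    """Build the URL, headers, and JSON body for an OpenAI-compatible
    chat-completions call. The endpoint URL below is a hypothetical
    placeholder; substitute your provider's base URL."""
    url = "https://api.example.com/v1/chat/completions"  # assumed endpoint
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # provider-specific model identifier
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return url, headers, body

# Usage: serialize `body` to JSON and send with any HTTP client.
url, headers, body = build_chat_request(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # example identifier, assumed
    "Hello!",
    "YOUR_API_KEY",
)
```

The same request shape works across the models in this catalog; only the `model` string changes.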