Chat-optimized LLM trained on public datasets and more than 1M human annotations.
Auto-regressive LLM built on an optimized transformer architecture, aligned for helpfulness and safety via supervised fine-tuning (SFT) and RLHF.
Experimental merge of MythoLogic-L2 and Huginn that intermingles tensors so the front and end of each model are integrated more tightly (a toy sketch of this kind of merge follows this entry).
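The merge named above blends two same-architecture checkpoints with a mixing ratio that varies by position in the network. The sketch below is a minimal illustration of that general idea, not MythoMax's actual recipe; the function name and the linear front-to-end schedule are assumptions.

```python
import torch

def linear_gradient_merge(sd_a: dict, sd_b: dict) -> dict:
    """Toy gradient merge: slide from checkpoint A at the front of the
    network to checkpoint B at the end (not MythoMax's actual recipe)."""
    names = list(sd_a)                      # state_dict preserves layer order
    merged = {}
    for i, name in enumerate(names):
        alpha = i / max(len(names) - 1, 1)  # 0.0 at the first tensor, 1.0 at the last
        merged[name] = (1.0 - alpha) * sd_a[name] + alpha * sd_b[name]
    return merged

# Usage: merged = linear_gradient_merge(model_a.state_dict(), model_b.state_dict())
```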
Instruction-tuned Thai large language model with 8 billion parameters, based on Llama3.1-8B.
Instruction-tuned Thai large language model with 70 billion parameters, based on Llama3.1-70B.
Transformer-based decoder-only LLM, pretrained on extensive data, offering improvements over the previous Qwen model.
Flagship Nous Research MoE model trained on more than 1M entries of primarily GPT-4-generated data plus other high-quality open datasets, excelling across diverse tasks (see the routing sketch below).
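For the MoE entry above, here is a minimal sketch of sparse top-k routing, the mechanism mixture-of-experts models use to activate only a few expert MLPs per token. The class name, expert width, and loop-based dispatch are illustrative assumptions, not that model's implementation.

```python
import torch

class Top2MoE(torch.nn.Module):
    # Toy sparse MoE layer: a linear gate scores E experts per token,
    # the top-k are selected, and their outputs are mixed by the
    # renormalized gate weights.
    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = torch.nn.Linear(dim, num_experts, bias=False)
        self.experts = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Linear(dim, 4 * dim),
                torch.nn.SiLU(),
                torch.nn.Linear(4 * dim, dim),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.gate(x)                             # (tokens, E)
        weights, idx = torch.topk(scores, self.k, dim=-1) # top-k experts per token
        weights = torch.softmax(weights, dim=-1)          # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```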
Multilingual LLM, pre-trained and instruction-tuned, that outperforms many open- and closed-source chat models on common industry benchmarks.
Improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.
Instruction-tuned Qwen2.5 causal LLM with 7.61B parameters and a 131K-token context window, built on a transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias (RMSNorm is sketched below).
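RMSNorm, named in the Qwen2.5 entry above, is simple enough to sketch directly: scale activations by their root mean square, with no mean-centering. This PyTorch version is a minimal illustration; the eps default is an assumption, and production kernels typically upcast to float32 before reducing.

```python
import torch

class RMSNorm(torch.nn.Module):
    # Root-mean-square layer normalization as used in Llama/Qwen-style decoders.
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = torch.nn.Parameter(torch.ones(dim))  # learned per-channel scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the RMS of the last dimension, then apply the scale.
        inv_rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * inv_rms)
```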
Lightweight, state-of-the-art open models from Google, built from the same research and technology behind the Gemini models.
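The models in this list are published as open weights, so one way to try them is locally with the Hugging Face transformers library. The sketch below is a minimal illustration; the model ID, prompt, and generation settings are assumptions for the example, not part of this catalog.

```python
# Minimal sketch of trying one of the chat models above locally.
# Requires: transformers, accelerate, and access to the model weights.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed example ID; swap in any model above
    device_map="auto",                      # spread weights across available GPUs/CPU
)

# Chat-tuned models expect their own chat template; passing role/content
# messages lets the pipeline apply it automatically.
messages = [{"role": "user", "content": "In one sentence, what does RLHF do?"}]

result = chat(messages, max_new_tokens=64, do_sample=False)
# The pipeline returns the conversation with the assistant's reply appended.
print(result[0]["generated_text"][-1]["content"])
```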