Chat-optimized LLM leveraging public datasets and 1M+ human annotations.
Auto-regressive LLM using an optimized transformer architecture, tuned with SFT and RLHF to align with human preferences for helpfulness and safety.
Experimental merge of MythoLogic-L2 and Huginn using tensor intermingling for enhanced front and end tensor integration.
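The "tensor intermingling" recipe itself is not described here; as a rough illustration of the general idea, blending corresponding tensors from two checkpoints with a position-dependent ratio, a minimal sketch might look like the following. The checkpoint paths, layer-name pattern, and ratio schedule are illustrative assumptions, not the actual MythoMax merge procedure.

```python
# Minimal sketch of a linear tensor merge between two checkpoints.
# Paths and the per-layer blend schedule are illustrative assumptions,
# not the recipe actually used for the MythoLogic-L2/Huginn merge.
import torch

def merge_state_dicts(sd_a, sd_b, ratio_for_key):
    """Blend two state dicts tensor by tensor: out = r*A + (1-r)*B."""
    merged = {}
    for key, tensor_a in sd_a.items():
        tensor_b = sd_b[key]
        r = ratio_for_key(key)
        merged[key] = r * tensor_a + (1.0 - r) * tensor_b
    return merged

def front_heavy_ratio(key):
    """Toy schedule: early ('front') layers lean toward model A,
    late ('end') layers toward model B, everything else is a 50/50 mix."""
    if ".layers." in key:
        layer = int(key.split(".layers.")[1].split(".")[0])
        return 0.7 if layer < 16 else 0.3
    return 0.5

if __name__ == "__main__":
    sd_a = torch.load("model_a.pt", map_location="cpu")  # hypothetical path
    sd_b = torch.load("model_b.pt", map_location="cpu")  # hypothetical path
    torch.save(merge_state_dicts(sd_a, sd_b, front_heavy_ratio), "merged.pt")
```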
Instruct Thai large language model with 8 billion parameters, based on Llama3.1-8B.
Instruct Thai large language model with 70 billion parameters, based on Llama3.1-70B.
Transformer-based decoder-only LLM, pretrained on extensive data, offering improvements over the previous Qwen model.
Flagship Nous Research MoE model trained on over 1M entries of primarily GPT-4-generated data along with high-quality open datasets, excelling across diverse tasks.
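The mixture-of-experts routing behind such a model is not explained in the blurb; a generic sketch of a top-2 routed MoE feed-forward layer is shown below, with illustrative dimensions and plain-MLP experts rather than the actual Mixtral 8x7B implementation.

```python
# Generic sketch of a top-2 routed mixture-of-experts feed-forward layer.
# Dimensions, expert count, and the plain-MLP experts are illustrative;
# this is not the actual Mixtral 8x7B implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(10, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([10, 512])
```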
Multilingual LLM, pre-trained and instruction-tuned, that outperforms many open and closed models on common industry benchmarks.
Improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.
Instruction-tuned 7.61B-parameter Qwen2.5 causal LLM with a 131K-token context window, RoPE, SwiGLU, RMSNorm, and grouped-query attention.
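As a usage sketch, an instruction-tuned checkpoint like this can typically be run with the Hugging Face transformers library; the repository id, dtype, and generation settings below are assumptions to adapt to the release actually being used.

```python
# Sketch of chat inference with an instruction-tuned causal LM via Hugging Face
# transformers. The repository id and generation settings are assumptions;
# substitute the checkpoint you actually intend to use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed Hub repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize RoPE in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```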
Lightweight, SOTA open models from Google, leveraging the research and technology behind the Gemini models.