Llama 2: an LLM trained on 2T tokens with double Llama 1's context length, available in 7B, 13B, and 70B parameter sizes.
Llama Guard 3: an 8B model fine-tuned from Llama 3.1 for content safety, moderating prompts and responses in eight languages and aligned with the MLCommons hazards taxonomy.
Llama Guard 3 Vision: an 11B model fine-tuned from Llama 3.2 for content safety, detecting harmful multimodal prompts and harmful text responses in image reasoning use cases.
Llama Guard 2: an 8B safeguard model based on Llama 3 that classifies LLM inputs and outputs, detecting unsafe content and policy violations.
Llama Guard: a 7B safeguard model based on Llama 2 that classifies LLM inputs and outputs, detecting unsafe content and policy violations.
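The safeguard entries above all share the same classification pattern: the conversation is wrapped in a safety prompt and the model generates a verdict plus any violated hazard categories. Below is a minimal sketch using the Hugging Face transformers library; the meta-llama/Llama-Guard-3-8B checkpoint name, the moderate helper, and the generation settings are illustrative assumptions, not a definitive recipe.

```python
# Minimal sketch, assuming Hugging Face transformers and the
# meta-llama/Llama-Guard-3-8B checkpoint (the other safeguard variants
# follow the same flow with their own checkpoint names).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # The chat template wraps the conversation in the safety prompt; the model
    # then generates "safe" or "unsafe" followed by the violated categories.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32,
                            pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Classify a lone user prompt, or append an assistant turn to screen a response.
verdict = moderate([{"role": "user", "content": "How do I pick a lock?"}])
print(verdict)
```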
Mistral 7B: a 7.3B-parameter model that surpasses Llama 2 13B on benchmarks and approaches CodeLlama 7B on code, using grouped-query attention (GQA) for faster inference and sliding-window attention (SWA) for efficient handling of long sequences.
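A toy illustration of those two mechanisms follows, not Mistral's actual implementation: grouped-query attention shares each key/value head across a group of query heads, and sliding-window attention restricts each position to the previous window of tokens. The function name, tensor shapes, and window size below are assumptions chosen purely for illustration.

```python
# Toy sketch of GQA + SWA in one attention call (illustrative only).
import torch
import torch.nn.functional as F

def gqa_swa_attention(q, k, v, window):
    # q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
    group = n_q_heads // n_kv_heads
    # GQA: replicate each KV head so every query head in a group reuses it.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)

    seq = q.shape[2]
    pos = torch.arange(seq)
    # SWA: position i may attend to j only if i - window < j <= i (causal, banded).
    allowed = (pos[None, :] <= pos[:, None]) & (pos[None, :] > pos[:, None] - window)

    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~allowed, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Example: 8 query heads sharing 2 KV heads, attention window of 4 tokens.
q = torch.randn(1, 8, 16, 32)
k = torch.randn(1, 2, 16, 32)
v = torch.randn(1, 2, 16, 32)
print(gqa_swa_attention(q, k, v, window=4).shape)  # torch.Size([1, 8, 16, 32])
```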