Gemma 2B instruct model by Google: a lightweight, open, text-to-text LLM for question answering, summarization, and reasoning, suited to resource-efficient deployment.
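Since this entry highlights resource-efficient deployment, here is a minimal sketch of running a small instruct model like this locally with the Hugging Face transformers library. The google/gemma-2b-it checkpoint, prompt, and generation settings are assumptions for illustration, not details taken from this catalog.

```python
# Minimal local-inference sketch (assumes the google/gemma-2b-it checkpoint on
# Hugging Face; access may require accepting the model license on the hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed repo id, not from this catalog entry
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format a single-turn chat prompt and generate a short completion.
messages = [{"role": "user", "content": "Summarize the benefits of lightweight LLMs."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```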
Multimodal LLM optimized for visual recognition, image reasoning, captioning, and answering image-related questions.
Mixture-of-experts (MoE) LLM trained from scratch and specialized in few-turn interactions for improved performance.
Lightweight Gemma 3 model with 128K context, vision-language input, and multilingual support for on-device AI.
Lightweight Gemma 3 model (1B) with 128K context, vision-language input, and multilingual support for on-device AI.
The lightest Gemma 3 model, with 128K context, vision-language input, and multilingual support for on-device AI.
Small Qwen 1.5B model distilled with reasoning capabilities from DeepSeek-R1. Beats GPT-4o on MATH-500 while being a fraction of the size.
Qwen 14B model distilled with reasoning capabilities from DeepSeek-R1. Outperforms GPT-4o in math and matches o1-mini on coding.
Llama 70B model distilled with reasoning capabilities from DeepSeek-R1. Surpasses GPT-4o with 94.5% on MATH-500 and matches o1-mini on coding.
Multilingual LLM pre-trained and instruction-tuned, surpassing open and closed models on key benchmarks.
Lightweight, state-of-the-art open models from Google, built on the research and technology behind the Gemini models.
70B multilingual LLM, pretrained and instruction-tuned, that excels in dialogue use cases and surpasses open and closed models.
Best-in-class open-source LLM trained with iterated distillation and amplification (IDA) for alignment, reasoning, and self-reflective, agentic applications.
Custom NVIDIA LLM optimized to enhance the helpfulness and relevance of generated responses to user queries.
24B model that rivals GPT-4o mini and larger models such as Llama 3.3 70B. Ideal for chat use cases like customer support, translation, and summarization.
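Hosted catalog models like these are commonly tried through an OpenAI-compatible chat completions endpoint. The sketch below shows that general pattern using the official openai Python client; the base URL, API key variable, and model identifier are placeholders (assumptions), not values taken from this page.

```python
# Minimal sketch of querying a hosted model via an OpenAI-compatible endpoint.
# The endpoint URL, credential variable, and model name are hypothetical.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",            # hypothetical hosted endpoint
    api_key=os.environ.get("MODEL_API_KEY", ""),  # hypothetical credential
)

response = client.chat.completions.create(
    model="example/chat-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "Give a one-sentence summary of mixture-of-experts models."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```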