Best-in-class open-source LLM trained with IDA for alignment, reasoning, and self-reflective, agentic applications.
Chat
Code
Reasoning
API Usage
Endpoint
deepcogito/cogito-v1-preview-qwen-14B
RUN INFERENCE
curl -X POST "https://aitradestore.com/api/v1/chat/completions" \
  -H "Authorization: Bearer $AITRADESTORE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepcogito/cogito-v1-preview-qwen-14B",
    "messages": [{"role": "user", "content": "Given two binary strings `a` and `b`, return their sum as a binary string"}]
  }'
JSON RESPONSE
RUN INFERENCE
from aitradestore import AiTradeStore

client = AiTradeStore()

response = client.chat.completions.create(
    model="deepcogito/cogito-v1-preview-qwen-14B",
    messages=[{"role": "user", "content": "Given two binary strings `a` and `b`, return their sum as a binary string"}],
)
print(response.choices[0].message.content)
JSON RESPONSE
RUN INFERENCE
import AiTradeStore from "ai-tradestore";

const aitradestore = new AiTradeStore();

const response = await aitradestore.chat.completions.create({
  messages: [{"role": "user", "content": "Given two binary strings `a` and `b`, return their sum as a binary string"}],
  model: "deepcogito/cogito-v1-preview-qwen-14B",
});
console.log(response.choices[0].message.content);
JSON RESPONSE
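The sample prompt above asks the model for a binary-string addition routine. For sanity-checking the model's answer, a reference implementation (not part of the API, just a local check) is short:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings and return their sum as a binary string."""
    # Parse each string as base-2, add the integers, and strip the "0b" prefix.
    return bin(int(a, 2) + int(b, 2))[2:]

print(add_binary("11", "1"))      # 3 + 1  -> "100"
print(add_binary("1010", "1011"))  # 10 + 11 -> "10101"
```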
Model Provider:
Deep Cogito
Type:
Chat
Variant:
Parameters:
14B
Deployment:
✔️ Dedicated
Quantization:
Context length:
128K
Pricing:
How to use Cogito V1 Preview Qwen 14B
Model details
The Cogito LLMs are instruction-tuned generative models (text in/text out). All models are released under an open license for commercial use.
- Cogito models are hybrid reasoning models. Each model can answer directly (standard LLM), or self-reflect before answering (like reasoning models).
- The LLMs are trained using Iterated Distillation and Amplification (IDA) - a scalable and efficient alignment strategy for superintelligence using iterative self-improvement.
- The models have been optimized for coding, STEM, instruction following, and general helpfulness, and have significantly stronger multilingual, coding, and tool-calling capabilities than size-equivalent counterparts.
- In both standard and reasoning modes, Cogito v1-preview models outperform their size-equivalent counterparts on common industry benchmarks.
- Each model is trained in over 30 languages and supports a 128K context length.
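The hybrid-mode bullet above can be made concrete at the request level. Cogito preview models are documented to switch into self-reflective mode via a dedicated system prompt; the exact string below follows Deep Cogito's published convention, but treat it as an assumption if your deployment differs. A minimal sketch of building the `messages` payload for each mode:

```python
# Assumed system-prompt string for enabling Cogito's self-reflection;
# verify against your deployment's documentation before relying on it.
DEEP_THINKING_PROMPT = "Enable deep thinking subroutine."

def build_messages(user_content: str, reasoning: bool = False) -> list[dict]:
    """Return a chat `messages` payload, optionally enabling reasoning mode."""
    messages = []
    if reasoning:
        # Reasoning mode: the system prompt asks the model to self-reflect
        # before producing its final answer.
        messages.append({"role": "system", "content": DEEP_THINKING_PROMPT})
    # Direct mode is just the user turn, answered like a standard LLM.
    messages.append({"role": "user", "content": user_content})
    return messages

direct = build_messages("Given two binary strings `a` and `b`, return their sum as a binary string")
reflective = build_messages("Given two binary strings `a` and `b`, return their sum as a binary string", reasoning=True)
```

The returned list drops directly into the `messages` parameter of the chat-completion calls shown earlier.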
Evaluations
We compare our models against state-of-the-art size-equivalent models in both direct mode and reasoning mode. In direct mode, we compare against the Llama/Qwen instruct counterparts; for reasoning, we use DeepSeek's R1-distilled counterparts and Qwen's QwQ model.

LiveBench Global Average:

For detailed evaluations, please refer to the Blog Post.