# Ministral 3: compact, vision-enabled models with near-24B performance, optimized for local and edge use
The Ministral 3 family consists of compact, efficient multimodal language models designed for edge deployment and local inference. All three variants (3B, 8B, and 14B) offer strong instruction-following capabilities, vision support, and broad hardware compatibility. Each model is released in GGUF format across multiple quantization levels, enabling flexible trade-offs between performance and resource usage. All variants are post-trained for instruction tasks.
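The size/quality trade-off between quantization levels can be sketched with simple arithmetic: on-disk size is roughly parameter count times bits per weight divided by eight. The bits-per-weight figures below are approximations for common llama.cpp quantization types; real GGUF files add metadata and keep some tensors at higher precision, so treat these as rough estimates rather than exact file sizes.

```python
# Rough GGUF size estimate: parameters x bits-per-weight / 8.
# APPROX_BPW values are approximations for common llama.cpp quant types,
# not figures taken from this model card.
APPROX_BPW = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5, "F16": 16.0}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Approximate on-disk size in GB for a quantized model."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

# Compare two quantization levels for the 14B variant.
for quant in ("Q4_K_M", "Q8_0"):
    print(f"14B at {quant}: ~{gguf_size_gb(14e9, quant):.1f} GB")
```

By this estimate a 14B model lands around 8-9 GB at 4-bit quantization and roughly 15 GB at 8-bit, which is why lower-bit quants are the usual choice for consumer hardware.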
| Attribute | Details |
|---|---|
| Provider | Mistral AI |
| Architecture | mistral3 |
| Languages | Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic. |
| Tool calling | ✅ |
| Input modalities | Text, Images |
| Output modalities | Text |
| License | Apache 2.0 |
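Since the table lists tool calling as supported, a request typically goes through an OpenAI-compatible chat-completions endpoint, which Docker Model Runner exposes when host TCP access is enabled. The sketch below only builds the request payload; the base URL, port, and the `get_weather` tool are assumptions for illustration, not details from this page.

```python
import json

# Assumed Docker Model Runner endpoint; adjust to your setup.
BASE_URL = "http://localhost:12434/engines/v1"

def build_tool_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload offering one tool."""
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Return current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [weather_tool],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_tool_request("ai/ministral3-vllm", "What's the weather in Paris?")
# To send: requests.post(f"{BASE_URL}/chat/completions", json=payload)
print(json.dumps(payload, indent=2))
```

If the model decides to use the tool, the response contains a `tool_calls` entry with the function name and JSON-encoded arguments instead of plain text content.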
```shell
docker model run ai/ministral3-vllm
```
These models are well suited to private AI deployments where advanced capabilities meet practical hardware constraints.
| Attribute | Details |
|---|---|
| Content type | Model |
| Digest | sha256:66da51373… |
| Size | 9.7 GB |
| Last updated | 4 months ago |
```shell
docker model pull ai/ministral3-vllm
```