# Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use
GGUF version by Unsloth
The Ministral 3 family consists of compact, efficient multimodal language models designed for edge deployment and local inference. All three variants—3B, 8B, and 14B—offer strong instruction-following capabilities, vision support, and broad hardware compatibility. Each model is released in GGUF format across multiple quantization levels, enabling flexible trade-offs between performance and resource usage. All variants are post-trained for instruction tasks.
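Because the models accept image input, a chat message can interleave text and image content parts. Below is a minimal sketch of building such a message in the widely used OpenAI-style chat format; the base64 data-URL encoding and field names follow that convention and are assumptions, not details taken from this model card.

```python
import base64

def image_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build a single user message mixing a text part and an inline image part."""
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }

# Usage: pass real image bytes read from disk, e.g. open("photo.png", "rb").read()
msg = image_message("Describe this image.", b"\x89PNG placeholder")
```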
| Attribute | Details |
|---|---|
| Provider | Mistral AI |
| Architecture | mistral3 |
| Languages | Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic. |
| Tool calling | ✅ |
| Input modalities | Text, Images |
| Output modalities | Text |
| License | Apache 2.0 |
| Model variant | Parameters | Quantization | Context window | VRAM¹ | Size |
|---|---|---|---|---|---|
| `ai/ministral3:8B`<br>`ai/ministral3:8B-Q4_K_M`<br>`ai/ministral3:latest` | 8B | MOSTLY_Q4_K_M | 262K tokens | 5.89 GiB | 4.83 GB |
| `ai/ministral3:14B` | 14B | MOSTLY_Q4_K_M | 262K tokens | 8.87 GiB | 7.78 GB |
| `ai/ministral3:14B-BF16` | 14B | MOSTLY_BF16 | 262K tokens | 25.35 GiB | 25.16 GB |
| `ai/ministral3:14B-UD-Q8_K_XL` | 14B | MOSTLY_Q8_0 | 262K tokens | 16.13 GiB | 15.93 GB |
| `ai/ministral3:8B-BF16` | 8B | MOSTLY_BF16 | 262K tokens | 16.15 GiB | 15.81 GB |
| `ai/ministral3:3B-Q4_K_M` | 3B | MOSTLY_Q4_K_M | 262K tokens | 3.19 GiB | 1.99 GB |
| `ai/ministral3:3B-BF16` | 3B | MOSTLY_BF16 | 262K tokens | 7.59 GiB | 6.39 GB |
¹: VRAM estimated based on model characteristics.
The `latest` tag points to the 8B variant (`latest` → `8B`):

```shell
docker model run ai/ministral3
```
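Once a model is running, Docker Model Runner serves an OpenAI-compatible chat-completions API, so any OpenAI-style client can talk to it. A minimal standard-library sketch of building and sending a request is below; the endpoint URL in the comment is an assumption and depends on how Model Runner is configured on your host.

```python
import json
import urllib.request

def chat_request(prompt: str, model: str = "ai/ministral3") -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(body: dict, url: str) -> dict:
    """POST the request body as JSON and return the decoded response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = chat_request("Summarize GGUF quantization in one sentence.")
# Endpoint URL is an assumption — adjust to your Model Runner setup:
# send(body, "http://localhost:12434/engines/v1/chat/completions")
```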
These models are well suited to private AI deployments where advanced capabilities meet practical hardware constraints.
| Attribute | Details |
|---|---|
| Content type | Model |
| Digest | sha256:10d946aea… |
| Size | 5.6 GB |
| Last updated | 4 months ago |
```shell
docker model pull ai/ministral3
```