ai/ministral3-vllm

Verified Publisher

By Docker

Updated 4 months ago

Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use


ai/ministral3-vllm repository overview

Ministral 3 Instruct 2512


Description

The Ministral 3 family consists of compact, efficient multimodal language models designed for edge deployment and local inference. All three variants—3B, 8B, and 14B—offer strong instruction-following capabilities, vision support, and broad hardware compatibility. Each model is released in GGUF format across multiple quantization levels, enabling flexible trade-offs between performance and resource usage. All variants are post-trained for instruction tasks, making them ideal for:

  • Chat-based applications
  • Assistants and agents
  • On-device inference
  • CPU and GPU constrained environments
  • Multimodal (vision + text) use cases

Characteristics

Provider: Mistral AI
Architecture: mistral3
Languages: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
Tool calling: Supported
Input modalities: Text, Images
Output modalities: Text
License: Apache 2.0

Use this AI model with Docker Model Runner

docker model run ai/ministral3-vllm
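As a sketch of common invocations (assuming Docker Model Runner is enabled in your Docker installation; the exact prompt below is illustrative, not from this page):

```shell
# Start an interactive chat session with the model
docker model run ai/ministral3-vllm

# One-shot prompt: pass the prompt as an argument and print the reply
docker model run ai/ministral3-vllm "Explain quantization trade-offs in one sentence."
```

The first form opens a REPL-style chat; the second returns a single completion and exits, which is convenient for scripting.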

Use cases

Private AI deployments where advanced capabilities meet practical hardware constraints:

  • Private/custom chat and AI assistant deployments in constrained environments
  • Advanced local agentic use cases
  • Fine-tuning and specialization
  • And more, bringing advanced AI capabilities to most environments

Tag summary

Content type: Model
Digest: sha256:66da51373
Size: 9.7 GB
Last updated: 4 months ago

docker model pull ai/ministral3-vllm
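Once pulled and running, Docker Model Runner also exposes an OpenAI-compatible chat-completions API. The sketch below builds a request body for that API; the endpoint URL is an assumption based on Docker Model Runner's documented default (verify the port and path for your installation), and the code only constructs the payload rather than sending it:

```python
import json

# Hypothetical endpoint: Docker Model Runner's OpenAI-compatible API,
# commonly served at this address by default (check your setup).
ENDPOINT = "http://localhost:12434/engines/v1/chat/completions"

# OpenAI-style chat payload referencing the model by its repository name.
payload = {
    "model": "ai/ministral3-vllm",
    "messages": [
        {"role": "user", "content": "Describe the Ministral 3 family in one sentence."}
    ],
}

# Serialize to JSON; this is the request body you would POST to ENDPOINT
# (e.g. with curl or an HTTP client) while the model is running.
body = json.dumps(payload)
print(body)
```

Any OpenAI-compatible client library can also be pointed at the same endpoint by overriding its base URL.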

This week's pulls: 668