
Mistral 3 Arrives With Large 3 and Ministral Models for Enterprise and Edge AI

By Michelle Hawley
Mistral 3 launches with enterprise-grade open multimodal models, strong benchmarks, edge-optimized versions and broad cloud availability.

Mistral AI has dropped the latest version of its open multimodal and multilingual AI model: Mistral 3. 

Mistral 3 includes three small, dense models (14B, 8B and 3B) along with Mistral Large 3, a mixture-of-experts (MoE) model with 41 billion active parameters and 675 billion total parameters. 

The company asserts that this latest model series achieves closed-source-level results while maintaining the transparency and control of open-source models.


How Does Mistral 3 Compare to Other Leading Models? 

According to company officials, Mistral Large 3 beats out both DeepSeek V3.1 and Kimi K2 for general prompts and multilingual prompts. 
Mistral Large 3 benchmarks

The AI company claims its new model is "one of the best permissive open weight models in the world," trained on 3,000 NVIDIA H200 GPUs. Mistral 3 can understand text, images and complex logic across more than 40 native languages. 

On the LMArena leaderboard, Mistral Large 3 ranks No. 2 in the OSS non-reasoning models category and No. 6 among OSS models overall.

LMArena Leaderboard

Related Article: Introducing Kimi K2 Thinking, China's ‘Most Capable’ Open-Source Model

A Deeper Dive Into the Mistral 3 Models 

Mistral worked with NVIDIA, vLLM and Red Hat to make Mistral Large 3 run faster and be more accessible for those in the open-source community. This optimized model: 

  • Uses less memory
  • Runs more efficiently
  • Works either on NVIDIA’s newest supercomputer-style systems or on a single powerful server that many companies already have

This means more developers can run the model themselves instead of needing massive infrastructure.

For edge and local use cases, the Ministral 3 series is available in three model sizes: 3B, 8B and 14B parameters. Each size offers base, instruct and reasoning variants, and all are optimized to run on: 

  • Workstations
  • Consumer PCs and laptops
  • Edge devices and robots

As a result, the same AI model family can run anywhere — from data centers to small devices.

How to Use Mistral 3

Mistral 3 is available on Mistral AI Studio, Amazon Bedrock, Azure Foundry, Hugging Face (Large 3 & Ministral), Modal, IBM WatsonX, OpenRouter, Fireworks, Unsloth AI and Together AI.

It will also soon be available on NVIDIA NIM and AWS SageMaker. 
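For developers starting with Mistral's own platform, the hosted models are served behind an OpenAI-style chat-completions endpoint. The sketch below is a minimal example of calling it with only the standard library; the model identifier `mistral-large-latest` and the exact response shape are assumptions based on Mistral's existing API conventions, not details from this announcement.

```python
import json
import urllib.request

# Mistral's hosted chat-completions endpoint (OpenAI-compatible request shape).
API_URL = "https://api.mistral.ai/v1/chat/completions"


def build_request(prompt: str, model: str = "mistral-large-latest") -> dict:
    """Assemble a chat-completions payload.

    The model identifier is an assumption; check Mistral's model list
    for the exact name once Mistral Large 3 is live on the platform.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str, api_key: str) -> str:
    """Send the prompt and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard chat-completions shape: first choice, assistant message content.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarize Mistral 3 in one sentence.", api_key="YOUR_API_KEY"))
```

The same request shape should work against other OpenAI-compatible hosts in the list above (such as OpenRouter or Together AI) by swapping the URL, key and model name.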


According to company officials, the latest offerings are well suited to both enterprise deployments and edge-optimized solutions. Use cases include: 

  • Coding
  • Creative collaboration
  • Document analysis
  • Tool-use workflows


About the Author
Michelle Hawley

Michelle Hawley is an experienced journalist who specializes in reporting on the impact of technology on society. As editorial director at Simpler Media Group, she oversees the day-to-day operations of VKTR, covering the world of enterprise AI and managing a network of contributing writers. She's also the host of CMSWire's CMO Circle and co-host of CMSWire's CX Decoded. With an MFA in creative writing and background in both news and marketing, she offers unique insights on the topics of tech disruption, corporate responsibility, changing AI legislation and more. She currently resides in Pennsylvania with her husband and two dogs.

Main image: mehaniq41 | Adobe Stock