# Mistral AI
The following Mistral AI models are available in Amazon Bedrock:
| Model | Description |
|---|---|
| Mistral Small | Mistral Small is Mistral AI's cost-efficient model optimized for low-latency tasks like classification, translation, and customer support. |
| Ministral 14B 3.0 | Ministral 14B 3.0 is Mistral AI's 14-billion parameter edge model optimized for on-device deployment with strong performance on knowledge and reasoning tasks. |
| Ministral 3 8B | Ministral 3 8B is Mistral AI's 8-billion parameter compact model for edge and mobile deployment with efficient inference. |
| Ministral 3B | Ministral 3B is Mistral AI's ultra-compact 3-billion parameter model for on-device tasks requiring minimal compute. |
| Mistral Large 3 | Mistral Large 3 is Mistral AI's 675-billion parameter model with strong performance on coding, reasoning, and multilingual tasks. |
| Voxtral Small 24B 2507 | Voxtral Small 24B is Mistral AI's speech-to-text model with 24 billion parameters for high-accuracy transcription and voice understanding. |
| Magistral Small 2509 | Magistral Small 2509 is Mistral AI's reasoning model that uses chain-of-thought to solve complex math, coding, and logic problems. |
| Voxtral Mini 3B 2507 | Voxtral Mini 3B is Mistral AI's compact speech-to-text model for real-time transcription and voice understanding on edge devices. |
| Devstral 2 123B | Devstral 2 123B is Mistral AI's 123-billion parameter coding model optimized for software engineering tasks including code generation, debugging, and refactoring. |
| Pixtral Large | Pixtral Large is Mistral AI's 124-billion parameter multimodal model that processes text and images for visual reasoning and document understanding. |
| Mistral Large | Mistral Large is Mistral AI's flagship model with strong reasoning, multilingual support, and a 32K context window for complex enterprise tasks. |
| Mixtral 8x7B Instruct | Mixtral 8x7B Instruct is Mistral AI's sparse mixture-of-experts model with eight experts of 7B parameters each; only two experts are active per token, delivering strong quality at faster inference speeds. |
| Mistral 7B Instruct | Mistral 7B Instruct is Mistral AI's 7-billion parameter instruction-tuned model with grouped-query attention and sliding window attention for efficient long-context inference. |
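The models above are invoked through the Amazon Bedrock Runtime API. The sketch below shows one way to call a Mistral instruct model with boto3's `invoke_model` operation; the model ID, region, and inference parameters are examples, not recommendations — check the Bedrock console for the IDs and regions available to your account. Mistral instruct models on Bedrock expect the prompt wrapped in `[INST] ... [/INST]` tags.

```python
import json


# Example Bedrock model ID for Mistral 7B Instruct; substitute the ID of the
# model you want from the table above.
MODEL_ID = "mistral.mistral-7b-instruct-v0:2"


def build_request(prompt: str, max_tokens: int = 256,
                  temperature: float = 0.5) -> str:
    """Build the JSON request body for Bedrock's InvokeModel call.

    Wraps the user prompt in the [INST] tags that Mistral's
    instruction-tuned models expect.
    """
    return json.dumps({
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": temperature,
    })


def invoke(prompt: str, region: str = "us-west-2") -> str:
    """Send the prompt to Bedrock and return the model's text output.

    Requires AWS credentials with Bedrock access configured in the
    environment.
    """
    import boto3  # deferred so build_request works without the SDK installed

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(modelId=MODEL_ID,
                                   body=build_request(prompt))
    payload = json.loads(response["body"].read())
    # Mistral models on Bedrock return generations under "outputs".
    return payload["outputs"][0]["text"]
```

For example, `invoke("Classify the sentiment of: 'The delivery was late again.'")` would suit the classification use case Mistral Small is described as targeting. Chat-oriented applications may prefer Bedrock's model-agnostic Converse API over the raw `invoke_model` body shown here.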