Mistral AI - Amazon Bedrock


The following Mistral AI models are available in Amazon Bedrock:

| Model | Description |
| --- | --- |
| Mistral Small | Mistral Small is Mistral AI's cost-efficient model optimized for low-latency tasks like classification, translation, and customer support. |
| Ministral 14B 3.0 | Ministral 14B 3.0 is Mistral AI's 14-billion parameter edge model optimized for on-device deployment with strong performance on knowledge and reasoning tasks. |
| Ministral 3 8B | Ministral 3 8B is Mistral AI's 8-billion parameter compact model for edge and mobile deployment with efficient inference. |
| Ministral 3B | Ministral 3B is Mistral AI's ultra-compact 3-billion parameter model for on-device tasks requiring minimal compute. |
| Mistral Large 3 | Mistral Large 3 is Mistral AI's 675-billion parameter model with strong performance on coding, reasoning, and multilingual tasks. |
| Voxtral Small 24B 2507 | Voxtral Small 24B is Mistral AI's speech-to-text model with 24 billion parameters for high-accuracy transcription and voice understanding. |
| Magistral Small 2509 | Magistral Small 2509 is Mistral AI's reasoning model that uses chain-of-thought to solve complex math, coding, and logic problems. |
| Voxtral Mini 3B 2507 | Voxtral Mini 3B is Mistral AI's compact speech-to-text model for real-time transcription and voice understanding on edge devices. |
| Devstral 2 123B | Devstral 2 123B is Mistral AI's 123-billion parameter coding model optimized for software engineering tasks, including code generation, debugging, and refactoring. |
| Pixtral Large | Pixtral Large is Mistral AI's 124-billion parameter multimodal model that processes text and images for visual reasoning and document understanding. |
| Mistral Large | Mistral Large is Mistral AI's flagship model with strong reasoning, multilingual support, and a 32K context window for complex enterprise tasks. |
| Mixtral 8x7B Instruct | Mixtral 8x7B Instruct is Mistral AI's sparse mixture-of-experts model with eight experts of 7B parameters each, delivering strong performance at fast inference speeds. |
| Mistral 7B Instruct | Mistral 7B Instruct is Mistral AI's 7-billion parameter instruction-tuned model with grouped-query attention and sliding window attention for efficient long-context inference. |
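As a minimal sketch, the models above can be invoked through the Bedrock Converse API with `boto3`. The model ID and Region below are assumptions; check the Bedrock console for the exact IDs and Regions available to your account.

```python
def build_converse_request(model_id, prompt, max_tokens=256):
    """Assemble the keyword arguments for bedrock-runtime converse().

    The request shape (messages with a list of content blocks, plus an
    inferenceConfig) is the same across the Mistral AI models listed above.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.5},
    }


def ask(prompt, model_id="mistral.mistral-large-2402-v1:0",
        region="us-east-1"):
    """Send a single-turn prompt to a Mistral model on Bedrock.

    model_id and region are assumptions for illustration; substitute the
    values that apply to your account.
    """
    import boto3  # imported here so the request builder has no AWS dependency

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]


# Example usage (requires AWS credentials with Bedrock model access):
# print(ask("Classify the sentiment of: 'The delivery was late again.'"))
```

The same `converse()` call works unchanged if you swap in a smaller model ID (for example, a Ministral or Mistral Small variant) for lower-latency tasks.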