Magistral Small 2509 - Amazon Bedrock
Services or capabilities described in AWS documentation might vary by Region. To see the differences applicable to the AWS European Sovereign Cloud Region, see the AWS European Sovereign Cloud User Guide.

Magistral Small 2509

Mistral AI — Magistral Small 2509

Model Details

Magistral Small 2509 is Mistral AI's reasoning model that uses chain-of-thought to solve complex math, coding, and logic problems. For more information about model development and performance, see the model/service card.

  • Model launch date: Sep 2025

  • Model EOL date: N/A

  • End User License Agreements and Terms of Use: View

  • Model lifecycle: Active

  • Context window: 128K tokens

  • Max output tokens: 40K

  • Reasoning: Supported
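Because the 128K-token context window must hold both the prompt and any output budget you reserve, it can help to sanity-check prompt size up front. The sketch below uses a rough ~4-characters-per-token heuristic, which is an assumption for illustration only, not the model's actual tokenizer:

```python
# Rough sketch: check whether a prompt is likely to fit the 128K-token
# context window alongside a reserved output budget. The ~4-characters-
# per-token ratio is a common heuristic, not the model's real tokenizer,
# so treat the result as an estimate only.
CONTEXT_WINDOW = 128_000
MAX_OUTPUT_TOKENS = 40_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (about 4 characters per token)."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, reserved_output: int = MAX_OUTPUT_TOKENS) -> bool:
    """True if the estimated prompt tokens plus reserved output fit the window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_context("Can you explain the features of Amazon Bedrock?"))  # True
```

For exact counts, tokenize with the model's own tokenizer instead of a character heuristic.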

Input Modalities   Output Modalities   APIs supported         Endpoints supported
Audio: No          Embedding: No       Responses: No          bedrock-runtime: Yes
Image: Yes         Image: No           Chat Completions: No   bedrock-mantle: Yes
Speech: No         Speech: No          Invoke: No
Text: Yes          Text: Yes           Converse: No
Video: No          Video: No
Note

Whenever possible, we recommend using the bedrock-mantle endpoint.

Capabilities and Features

Bedrock Features

Features supported using bedrock-mantle endpoint

Features supported using bedrock-runtime endpoint

Pricing

For pricing, see the Amazon Bedrock Pricing page.

Programmatic Access

Use the following model IDs and endpoint URLs to access this model programmatically. For more information about the available APIs and endpoints, see APIs supported and Endpoints supported.

Endpoint Model ID In-Region endpoint URL Geo inference ID Global inference ID
bedrock-runtime mistral.magistral-small-2509 https://bedrock-runtime.{region}.amazonaws.com Not supported Not supported
bedrock-mantle mistral.magistral-small-2509 https://bedrock-mantle.{region}.api.aws/v1 Not supported Not supported

For example, if the region is us-east-1 (N. Virginia), the bedrock-runtime endpoint URL is "https://bedrock-runtime.us-east-1.amazonaws.com" and the bedrock-mantle endpoint URL is "https://bedrock-mantle.us-east-1.api.aws/v1".
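If you target multiple Regions, the two URL patterns can be filled in programmatically. A minimal sketch using only the templates from the table above (the helper name is illustrative):

```python
# Build the two endpoint URLs for a given Region by substituting {region}
# into the patterns from the Programmatic Access table.
RUNTIME_TEMPLATE = "https://bedrock-runtime.{region}.amazonaws.com"
MANTLE_TEMPLATE = "https://bedrock-mantle.{region}.api.aws/v1"

def endpoint_urls(region: str) -> dict:
    """Return both endpoint URLs for one Region."""
    return {
        "bedrock-runtime": RUNTIME_TEMPLATE.format(region=region),
        "bedrock-mantle": MANTLE_TEMPLATE.format(region=region),
    }

print(endpoint_urls("us-east-1"))
```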

Service Tiers

Amazon Bedrock offers multiple service tiers to match your workload requirements. Standard provides pay-per-token access with no commitment. Priority offers higher throughput with a time-based commitment. Flex provides lower-cost access for flexible, non-time-sensitive workloads. Reserved provides dedicated throughput with a term commitment for predictable workloads. For more information, see service tiers.

Standard Priority Flex Reserved
Yes Yes Yes No

Regional Availability

Regional availability at a glance

Amazon Bedrock offers three inference options:

  • In-Region: requests stay within a single Region, for strict compliance requirements.

  • Geo Cross-Region: requests are routed across Regions within a geography (US, EU, and so on), for higher throughput while respecting data residency.

  • Global Cross-Region: requests are routed anywhere worldwide, for maximum throughput when there are no residency constraints.

Refer to the Regional availability page for more details.

Region                    In-Region  Geo  Global
us-east-1 (N. Virginia)   Yes        No   No
us-east-2 (Ohio)          Yes        No   No
us-west-2 (Oregon)        Yes        No   No
eu-south-1 (Milan)        Yes        No   No
eu-west-1 (Ireland)       Yes        No   No
eu-west-2 (London)        Yes        No   No
ap-northeast-1 (Tokyo)    Yes        No   No
ap-south-1 (Mumbai)       Yes        No   No
ap-southeast-2 (Sydney)   Yes        No   No
sa-east-1 (São Paulo)     Yes        No   No

Quotas and Limits

Your AWS account has default quotas to maintain the performance of the service and to ensure appropriate usage of Amazon Bedrock. The default quotas assigned to an account might be updated depending on regional factors, payment history, fraudulent usage, and/or approval of a quota increase request. For more details, please refer to Quotas documentation.

Quota                          Default value
On-demand requests per minute  10,000
On-demand tokens per minute    100,000,000
Max tokens per day             144,000,000,000

These are the default quotas for us-east-1. To see the quotas and limits for your account, sign in to the AWS Management Console.
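A quick sanity check on how these defaults relate to each other: at the full per-minute token quota, the daily token cap lasts exactly one day. A small worked example using the us-east-1 defaults above:

```python
# Relate the per-minute token quota to the daily cap (us-east-1 defaults).
TOKENS_PER_MINUTE = 100_000_000
MAX_TOKENS_PER_DAY = 144_000_000_000

# How long could you sustain the full per-minute quota before hitting
# the daily cap?
minutes_to_exhaust_daily_cap = MAX_TOKENS_PER_DAY // TOKENS_PER_MINUTE
print(minutes_to_exhaust_daily_cap)       # 1440 minutes
print(minutes_to_exhaust_daily_cap / 60)  # 24.0 hours
```

In other words, the daily cap equals the per-minute quota sustained around the clock, so sustained peak traffic is bounded by the per-minute quota rather than the daily one.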

Sample Code

Step 1 - AWS Account: If you have an AWS account already, skip this step. If you are new to AWS, sign up for an AWS account.

Step 2 - API key: Go to the Amazon Bedrock console and generate a long-term API key.

Step 3 - Get the SDK: To use this getting started guide, you must have Python already installed. Then install the relevant software depending on the APIs you are using.

Responses/Chat Completions API
pip install boto3 openai
Invoke/Converse API
pip install boto3

Step 4 - Set environment variables: Configure your environment to use the API key for authentication.

Responses/Chat Completions API
export OPENAI_API_KEY="<provide your Bedrock API key>"
export OPENAI_BASE_URL="https://bedrock-mantle.<your-region>.api.aws/v1"
Invoke/Converse API
export AWS_BEARER_TOKEN_BEDROCK="<provide your Bedrock API key>"

Step 5 - Run your first inference request: Save one of the following examples as bedrock-first-request.py, then run it with python bedrock-first-request.py.

Responses API
from openai import OpenAI

# The client reads OPENAI_API_KEY and OPENAI_BASE_URL from the
# environment variables set in Step 4.
client = OpenAI()

response = client.responses.create(
    model="mistral.magistral-small-2509",
    input="Can you explain the features of Amazon Bedrock?"
)
print(response)
Chat Completions API
from openai import OpenAI

# The client reads OPENAI_API_KEY and OPENAI_BASE_URL from the
# environment variables set in Step 4.
client = OpenAI()

response = client.chat.completions.create(
    model="mistral.magistral-small-2509",
    messages=[{"role": "user", "content": "Can you explain the features of Amazon Bedrock?"}]
)
print(response)
Invoke API
import json

import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')

response = client.invoke_model(
    modelId='mistral.magistral-small-2509',
    body=json.dumps({
        'messages': [{'role': 'user', 'content': 'Can you explain the features of Amazon Bedrock?'}],
        'max_tokens': 1024
    })
)
print(json.loads(response['body'].read()))
Converse API
import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')

response = client.converse(
    modelId='mistral.magistral-small-2509',
    messages=[
        {
            'role': 'user',
            'content': [{'text': 'Can you explain the features of Amazon Bedrock?'}]
        }
    ]
)
print(response)
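Once a Converse call returns, the generated text sits inside a nested output structure. The sketch below extracts it from a stand-in dictionary shaped like a Converse response; the sample payload text is illustrative, not real model output:

```python
# Pull the generated text out of a Converse-style response. The nested
# structure below mirrors the Converse API's output format; the sample
# payload is a stand-in for a real response object.
sample_response = {
    'output': {
        'message': {
            'role': 'assistant',
            'content': [{'text': 'Amazon Bedrock is a fully managed service...'}],
        }
    }
}

def converse_text(response: dict) -> str:
    """Join all text blocks from the assistant message."""
    blocks = response['output']['message']['content']
    return ''.join(block['text'] for block in blocks if 'text' in block)

print(converse_text(sample_response))
```

With a real call, pass the dictionary returned by client.converse(...) to the same helper.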