Qwen3 235B A22B 2507 - Amazon Bedrock
Services or capabilities described in AWS documentation might vary by Region. To see the differences applicable to the AWS European Sovereign Cloud Region, see the AWS European Sovereign Cloud User Guide.

Qwen3 235B A22B 2507

Qwen — Qwen3 235B A22B 2507

Model Details

Qwen3 235B A22B is Qwen's 235-billion parameter mixture-of-experts model with 22 billion active parameters, supporting text and code generation with a 128K context window. For more information about model development and performance, see the model/service card.

  • Model launch date: Apr 28, 2025

  • Model EOL date: N/A

  • End User License Agreements and Terms of Use: View

  • Model lifecycle: Active

  • Context window: 128K tokens

  • Max output tokens: 8K

  • Reasoning: Supported

Input Modalities    Output Modalities    APIs supported           Endpoints supported
Audio: No           Embedding: No        Responses: No            bedrock-runtime: Yes
Image: No           Image: No            Chat Completions: Yes    bedrock-mantle: Yes
Speech: No          Speech: No           Invoke: Yes
Text: Yes           Text: Yes            Converse: Yes
Video: No           Video: No
Note

Whenever possible, we recommend you use the bedrock-mantle endpoint.

Capabilities and Features

Bedrock Features

  • Features supported using the bedrock-mantle endpoint

  • Features supported using the bedrock-runtime endpoint

Pricing

For pricing, see the Amazon Bedrock Pricing page.

Programmatic Access

Use the following model IDs and endpoint URLs to access this model programmatically. For more information about the available APIs and endpoints, see APIs supported and Endpoints supported.

Endpoint          Model ID                         In-Region endpoint URL                           Geo inference ID   Global inference ID
bedrock-runtime   qwen.qwen3-235b-a22b-2507-v1:0   https://bedrock-runtime.{region}.amazonaws.com   Not supported      Not supported
bedrock-mantle    qwen.qwen3-235b-a22b-2507        https://bedrock-mantle.{region}.api.aws/v1       Not supported      Not supported

For example, if the Region is us-east-1 (N. Virginia), the bedrock-runtime endpoint URL is "https://bedrock-runtime.us-east-1.amazonaws.com" and the bedrock-mantle endpoint URL is "https://bedrock-mantle.us-east-1.api.aws/v1".
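The URLs follow the templates in the table above, with the Region substituted in. A minimal sketch of building them in Python (the `endpoint_url` helper and `ENDPOINT_TEMPLATES` mapping are illustrative names, not part of any SDK):

```python
# Endpoint URL templates from the table above; {region} is the placeholder.
ENDPOINT_TEMPLATES = {
    "bedrock-runtime": "https://bedrock-runtime.{region}.amazonaws.com",
    "bedrock-mantle": "https://bedrock-mantle.{region}.api.aws/v1",
}

def endpoint_url(endpoint: str, region: str) -> str:
    """Return the in-Region URL for the given endpoint and AWS Region."""
    return ENDPOINT_TEMPLATES[endpoint].format(region=region)

print(endpoint_url("bedrock-runtime", "us-east-1"))
# https://bedrock-runtime.us-east-1.amazonaws.com
print(endpoint_url("bedrock-mantle", "us-east-1"))
# https://bedrock-mantle.us-east-1.api.aws/v1
```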

Service Tiers

Amazon Bedrock offers multiple service tiers to match your workload requirements. Standard provides pay-per-token access with no commitment. Priority offers higher throughput with a time-based commitment. Flex provides lower-cost access for flexible, non-time-sensitive workloads. Reserved provides dedicated throughput with a term commitment for predictable workloads. For more information, see service tiers.

Standard   Priority   Flex   Reserved
Yes        Yes        Yes    No

Regional Availability

Regional availability at a glance

Bedrock offers three inference options: In-Region keeps requests within a single Region for strict compliance, Geo Cross-Region routes across Regions within a geography (US, EU, etc.) for higher throughput while respecting data residency, and Global Cross-Region routes anywhere worldwide for maximum throughput when there are no residency constraints. Refer to the Regional availability page for more details.

Region                     In-Region   Geo   Global
us-east-2 (Ohio)           Yes         No    No
us-west-2 (Oregon)         Yes         No    No
eu-central-1 (Frankfurt)   Yes         No    No
eu-north-1 (Stockholm)     Yes         No    No
eu-south-1 (Milan)         Yes         No    No
eu-west-2 (London)         Yes         No    No
ap-northeast-1 (Tokyo)     Yes         No    No
ap-south-1 (Mumbai)        Yes         No    No
ap-southeast-2 (Sydney)    Yes         No    No
ap-southeast-3 (Jakarta)   Yes         No    No

Sample Code

Step 1 - AWS Account: If you have an AWS account already, skip this step. If you are new to AWS, sign up for an AWS account.

Step 2 - API key: Go to the Amazon Bedrock console and generate a long-term API key.

Step 3 - Get the SDK: To use this getting started guide, you must have Python already installed. Then install the relevant software depending on the APIs you are using.

Responses/Chat Completions API
pip install boto3 openai
Invoke/Converse API
pip install boto3

Step 4 - Set environment variables: Configure your environment to use the API key for authentication.

Responses/Chat Completions API
export OPENAI_API_KEY="<provide your Bedrock API key>"
export OPENAI_BASE_URL="https://bedrock-mantle.<your-region>.api.aws/v1"
Invoke/Converse API
export AWS_BEARER_TOKEN_BEDROCK="<provide your Bedrock API key>"

Step 5 - Run your first inference request: Save the code for your chosen API as bedrock-first-request.py and run it.

Responses API
from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="qwen.qwen3-235b-a22b-2507",
    input="Can you explain the features of Amazon Bedrock?"
)
print(response)
Chat Completions API
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="qwen.qwen3-235b-a22b-2507",
    messages=[{"role": "user", "content": "Can you explain the features of Amazon Bedrock?"}]
)
print(response)
Invoke API
import json

import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')
response = client.invoke_model(
    modelId='qwen.qwen3-235b-a22b-2507-v1:0',
    body=json.dumps({
        'messages': [{'role': 'user', 'content': 'Can you explain the features of Amazon Bedrock?'}],
        'max_tokens': 1024
    })
)
print(json.loads(response['body'].read()))
Converse API
import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')
response = client.converse(
    modelId='qwen.qwen3-235b-a22b-2507-v1:0',
    messages=[
        {
            'role': 'user',
            'content': [{'text': 'Can you explain the features of Amazon Bedrock?'}]
        }
    ]
)
print(response)
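The Converse API returns the generated text nested under output.message.content as a list of content blocks. A minimal sketch of pulling the text out, run here against a hand-built sample dictionary rather than a live call (a real response also carries fields such as stopReason and usage):

```python
# Hand-built sample mirroring the Converse response shape, for illustration.
sample_response = {
    'output': {
        'message': {
            'role': 'assistant',
            'content': [{'text': 'Amazon Bedrock is a managed service...'}]
        }
    },
    'stopReason': 'end_turn',
}

def converse_text(response: dict) -> str:
    """Concatenate the text parts of a Converse response message."""
    parts = response['output']['message']['content']
    return ''.join(p['text'] for p in parts if 'text' in p)

print(converse_text(sample_response))
# Amazon Bedrock is a managed service...
```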