Llama 4 Maverick 17B Instruct - Amazon Bedrock
Services or capabilities described in AWS documentation might vary by Region. To see the differences applicable to the AWS European Sovereign Cloud Region, see the AWS European Sovereign Cloud User Guide.

Llama 4 Maverick 17B Instruct

Meta — Llama 4 Maverick 17B Instruct

Model Details

Llama 4 Maverick is Meta's 17-billion active parameter mixture-of-experts model with 128 experts, optimized for multimodal chat and instruction following. For more information about model development and performance, see the model/service card.

  • Model launch date: Apr 05, 2025

  • Model EOL date: No sooner than Apr 28, 2026

  • End User License Agreements and Terms of Use: View

  • Model lifecycle: Active

  • Context window: 1M tokens

  • Max output tokens: 8K

  • Knowledge cutoff: Aug 2024

  • Input modalities: Text, Image (Audio, Speech, and Video are not supported)

  • Output modalities: Text (Embedding, Image, Speech, and Video are not supported)

  • APIs supported: Invoke, Converse (Responses and Chat Completions are not supported)

  • Endpoints supported: bedrock-runtime (bedrock-mantle is not supported)
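Because the model accepts both text and image input (and returns text only), a single Converse request can mix the two modalities in one user message. The following is a minimal sketch of the message shape; the prompt text and image bytes are placeholders, not working values:

```python
# Sketch only: the shape of a Converse-API message combining the two
# supported input modalities, text and image. The bytes below are a
# placeholder stand-in, not a valid image file.
placeholder_image_bytes = b'<png bytes>'  # assumption: replace with real PNG data

messages = [
    {
        'role': 'user',
        'content': [
            {'text': 'Describe this image.'},
            {'image': {'format': 'png', 'source': {'bytes': placeholder_image_bytes}}},
        ],
    }
]
# In a real call, pass this list as the messages argument to client.converse().
```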

Pricing

For pricing details, see the Amazon Bedrock Pricing page.

Programmatic Access

Use the following model IDs and endpoint URLs to access this model programmatically. For more information about the available APIs and endpoints, see APIs supported and Endpoints supported.

  • Endpoint: bedrock-runtime

  • Model ID: meta.llama4-maverick-17b-instruct-v1:0

  • In-Region endpoint URL: https://bedrock-runtime.{region}.amazonaws.com

  • Geo inference ID: us.meta.llama4-maverick-17b-instruct-v1:0

  • Global inference ID: Not supported

For example, if the Region is us-east-1 (N. Virginia), the bedrock-runtime endpoint URL is "https://bedrock-runtime.us-east-1.amazonaws.com" and the bedrock-mantle endpoint URL is "https://bedrock-mantle.us-east-1.api.aws/v1".
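The substitution above can be captured in a small helper. This is a sketch that assumes only the standard bedrock-runtime URL pattern shown in the table, with the Region code as input:

```python
def bedrock_runtime_endpoint(region: str) -> str:
    # Substitute the Region code into the in-Region endpoint URL pattern.
    return f"https://bedrock-runtime.{region}.amazonaws.com"

url = bedrock_runtime_endpoint("us-east-1")  # N. Virginia
print(url)
```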

Service Tiers

Amazon Bedrock offers multiple service tiers to match your workload requirements. Standard provides pay-per-token access with no commitment. Priority offers higher throughput with a time-based commitment. Flex provides lower-cost access for flexible, non-time-sensitive workloads. Reserved provides dedicated throughput with a term commitment for predictable workloads. For more information, see service tiers.

Standard  Priority  Flex  Reserved
Yes       No        No    No

Regional Availability

Regional availability at a glance

Bedrock offers three inference options: In-Region keeps requests within a single Region for strict compliance, Geo Cross-Region routes across Regions within a geography (US, EU, etc.) for higher throughput while respecting data residency, and Global Cross-Region routes anywhere worldwide for maximum throughput when there are no residency constraints. Refer to the Regional availability page for more details.

Region                     In-Region  Geo  Global
us-east-1 (N. Virginia)    No         Yes  No
us-east-2 (Ohio)           No         Yes  No
us-west-1 (N. California)  No         Yes  No
us-west-2 (Oregon)         No         Yes  No

Geo inference details

Geo: US

Geo Inference ID: us.meta.llama4-maverick-17b-instruct-v1:0

Source Region              Destination Regions
us-east-1 (N. Virginia)    us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon)
us-east-2 (Ohio)           us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon)
us-west-1 (N. California)  us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-1 (N. California), us-west-2 (Oregon)
us-west-2 (Oregon)         us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon)
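The routing table above can be expressed as a simple lookup, for example to check ahead of time where a Geo-routed request might be served. This is an illustrative sketch transcribed from the table, not an AWS API:

```python
# Geo (US) routing for this model: source Region -> possible destination
# Regions, transcribed from the table above.
GEO_US_DESTINATIONS = {
    'us-east-1': ('us-east-1', 'us-east-2', 'us-west-2'),
    'us-east-2': ('us-east-1', 'us-east-2', 'us-west-2'),
    'us-west-1': ('us-east-1', 'us-east-2', 'us-west-1', 'us-west-2'),
    'us-west-2': ('us-east-1', 'us-east-2', 'us-west-2'),
}

def may_route_to(source: str, destination: str) -> bool:
    # True if a request entering at `source` may be served in `destination`.
    return destination in GEO_US_DESTINATIONS.get(source, ())
```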

Quotas and Limits

Your AWS account has default quotas that maintain service performance and ensure appropriate usage of Amazon Bedrock. The default quotas assigned to an account might change based on regional factors, payment history, fraudulent usage, or approval of a quota increase request. For more details, see the Quotas documentation.

Quota                             Default value
Cross-region requests per minute  800
Cross-region tokens per minute    600,000
Max tokens per day                432,000,000

These are the default quotas for us-east-1. To see the quotas and limits for your account, sign in to the AWS Management Console.
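As a back-of-the-envelope check on how these defaults interact: at the full per-minute token rate, the daily token cap is reached in 12 hours. A quick sketch using the us-east-1 defaults from the table:

```python
# Default us-east-1 quotas from the table above.
TOKENS_PER_MINUTE = 600_000
MAX_TOKENS_PER_DAY = 432_000_000

# Minutes of sustained, full-rate traffic before the daily cap is hit.
minutes_to_daily_cap = MAX_TOKENS_PER_DAY // TOKENS_PER_MINUTE  # 720 minutes
hours_to_daily_cap = minutes_to_daily_cap / 60                  # 12 hours
```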

Sample Code

Step 1 - AWS Account: If you have an AWS account already, skip this step. If you are new to AWS, sign up for an AWS account.

Step 2 - API key: Go to the Amazon Bedrock console and generate a long-term API key.

Step 3 - Get the SDK: To use this getting started guide, you must have Python installed. Then install the relevant SDK for the APIs you are using.

pip install boto3

Step 4 - Set environment variables: Configure your environment to use the API key for authentication.

export AWS_BEARER_TOKEN_BEDROCK="<provide your Bedrock API key>"

Step 5 - Run your first inference request: Save one of the following examples as bedrock-first-request.py and run it with python bedrock-first-request.py.

Invoke API
import json
import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')

response = client.invoke_model(
    modelId='meta.llama4-maverick-17b-instruct-v1:0',
    body=json.dumps({
        'messages': [
            {'role': 'user', 'content': 'Can you explain the features of Amazon Bedrock?'}
        ],
        'max_tokens': 1024
    })
)

print(json.loads(response['body'].read()))
Converse API
import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')

response = client.converse(
    modelId='meta.llama4-maverick-17b-instruct-v1:0',
    messages=[
        {'role': 'user', 'content': [{'text': 'Can you explain the features of Amazon Bedrock?'}]}
    ]
)

print(response)