

AWS::Bedrock::Prompt PromptModelInferenceConfiguration

Contains inference configurations related to model inference for a prompt. For more information, see Inference parameters.

Syntax

To declare this entity in your CloudFormation template, use the following syntax:

JSON

{
  "MaxTokens" : Number,
  "StopSequences" : [ String, ... ],
  "Temperature" : Number,
  "TopP" : Number
}

YAML

MaxTokens: Number
StopSequences:
  - String
Temperature: Number
TopP: Number

Properties

MaxTokens

The maximum number of tokens to return in the response.

Required: No

Type: Number

Minimum: 0

Maximum: 512000

Update requires: No interruption

StopSequences

A list of strings that define sequences after which the model stops generating the response.

Required: No

Type: Array of String

Minimum: 0

Maximum: 4

Update requires: No interruption

Temperature

Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

Required: No

Type: Number

Minimum: 0

Maximum: 1

Update requires: No interruption

TopP

The percentage of most-likely candidates that the model considers for the next token.

Required: No

Type: Number

Minimum: 0

Maximum: 1

Update requires: No interruption
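Example

As an illustration, the following template fragment shows how this property type might be embedded in an AWS::Bedrock::Prompt resource, nested under a variant's InferenceConfiguration. The logical ID, prompt name, variant name, model ID, and prompt text are placeholders, not values defined on this page.

YAML

Resources:
  ExamplePrompt:                 # placeholder logical ID
    Type: AWS::Bedrock::Prompt
    Properties:
      Name: example-prompt       # placeholder prompt name
      Variants:
        - Name: default          # placeholder variant name
          TemplateType: TEXT
          # Placeholder model ID; substitute a model available in your Region.
          ModelId: anthropic.claude-3-haiku-20240307-v1:0
          TemplateConfiguration:
            Text:
              Text: "Summarize the following text: {{input}}"
          InferenceConfiguration:
            Text:
              MaxTokens: 512     # between 0 and 512000
              StopSequences:     # up to 4 sequences
                - "END"
              Temperature: 0.5   # 0-1; lower values give more predictable outputs
              TopP: 0.9          # 0-1; cutoff for most-likely token candidates

All four properties are optional; omit any of them to use the model's default inference parameters.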