AWS Bedrock

You can deploy the following Mistral AI models on the AWS Bedrock service:

  • Mistral 7B Instruct
  • Mixtral 8x7B Instruct
  • Mistral Small
  • Mistral Large

This page provides a straightforward guide on getting started with Mistral Large as an AWS Bedrock foundation model.

Prerequisites

To query the model, you will need:

  • Access to an AWS account within a region that supports the AWS Bedrock service and offers access to Mistral Large: see the AWS documentation for model availability per region.
  • An AWS IAM principal (user or role) with sufficient permissions; see the AWS documentation for more details.
  • Access to the Mistral AI models enabled from the AWS Bedrock home page; see the AWS documentation for more details.
  • A local code environment set up with the relevant AWS SDK components, namely the boto3 Python library used in the examples below (a quick sanity check follows this list).
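
Once boto3 is installed, you can list the Mistral AI models visible to your account. This is a minimal sketch: "eu-west-3" is just the region reused from the example below, and the byProvider filter value is an assumption worth verifying against the boto3 reference.

import boto3

# The "bedrock" control-plane client (distinct from "bedrock-runtime")
# exposes account-level operations such as listing foundation models.
bedrock = boto3.client(service_name="bedrock", region_name="eu-west-3")

# Assumption: "mistral" is the provider filter value; if the list comes
# back empty, check the region and your model access settings.
response = bedrock.list_foundation_models(byProvider="mistral")
for model in response["modelSummaries"]:
    print(model["modelId"])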

Querying the model

Before starting, make sure to properly configure the authentication credentials for your development environment. The AWS documentation provides an in-depth explanation on the required steps.
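
For example, once your credentials are in place, a minimal sketch using the STS GetCallerIdentity call confirms which account and principal your environment resolves to:

import boto3

# If credentials are configured correctly, this prints the AWS account
# and the ARN of the authenticated principal.
sts = boto3.client(service_name="sts")
identity = sts.get_caller_identity()
print(identity["Account"], identity["Arn"])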

import boto3

MISTRAL_LARGE_BEDROCK_ID = "mistral.mistral-large-2402-v1:0"
AWS_REGION = "eu-west-3"

# The "bedrock-runtime" client handles inference requests.
bedrock_client = boto3.client(service_name="bedrock-runtime", region_name=AWS_REGION)

# Conversation history in the Converse API format: a list of messages,
# each with a role and a list of content blocks.
messages = [{"role": "user", "content": [{"text": "What is the best French cheese?"}]}]
temperature = 0.0
max_tokens = 1024

params = {
    "modelId": MISTRAL_LARGE_BEDROCK_ID,
    "messages": messages,
    "inferenceConfig": {"temperature": temperature, "maxTokens": max_tokens},
}

resp = bedrock_client.converse(**params)

# The generated answer is the first content block of the output message.
print(resp["output"]["message"]["content"][0]["text"])
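
If you would rather receive tokens as they are generated, the runtime client also offers a streaming variant of the Converse API. A minimal sketch reusing the params dictionary above:

# converse_stream accepts the same parameters as converse and returns
# an event stream; generated text arrives in contentBlockDelta events.
stream_resp = bedrock_client.converse_stream(**params)

for event in stream_resp["stream"]:
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"]["text"], end="")
print()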

Going further

You can find a more detailed user guide in the AWS documentation on inference requests for Mistral models.
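
As an illustration of what that guide covers, the lower-level InvokeModel API takes a raw prompt in Mistral's instruction format. The sketch below assumes the native request and response shape for Mistral models on Bedrock (a prompt wrapped in [INST] tags, completions returned under "outputs"), so double-check it against the guide:

import json

# Native Mistral request body: a raw prompt with instruction tags.
body = json.dumps({
    "prompt": "<s>[INST] What is the best French cheese? [/INST]",
    "max_tokens": 1024,
    "temperature": 0.0,
})

resp = bedrock_client.invoke_model(modelId=MISTRAL_LARGE_BEDROCK_ID, body=body)

# The completion text is returned under "outputs".
print(json.loads(resp["body"].read())["outputs"][0]["text"])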

For more advanced examples, you can also check out the following notebooks: