module pixeltable.functions.bedrock
Pixeltable UDFs for AWS Bedrock AI models. In order to use them, you must first `pip install boto3` and configure your AWS credentials, as described in the Working with Bedrock tutorial.
func invoke_tools()
Signature
Converts a `converse` response into Pixeltable tool invocations by calling `tools._invoke()`.
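A minimal usage sketch, assuming the same tool-calling pattern as Pixeltable's other LLM integrations; the table name, column name, and the `weather` UDF are hypothetical placeholders:

```python
import pixeltable as pxt
from pixeltable.functions import bedrock

# Hypothetical tool: any Pixeltable UDF the model may call.
@pxt.udf
def weather(city: str) -> str:
    return f'Sunny in {city}'

tools = pxt.tools(weather)
tbl = pxt.get_table('tbl')  # placeholder table with a string column `prompt`
messages = [{'role': 'user', 'content': [{'text': tbl.prompt}]}]

# Ask the model with the tools available, then execute any tool calls it makes.
tbl.add_computed_column(
    response=bedrock.converse(
        messages, model_id='anthropic.claude-3-haiku-20240307-v1:0', tool_config=tools
    )
)
tbl.add_computed_column(tool_output=bedrock.invoke_tools(tools, tbl.response))
```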
udf converse()
Signature
Invokes the Bedrock `converse` API endpoint.
For additional details, see:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime/client/converse.html
PIL images and media file paths in `messages[*].content[*].(image|video|audio).source.bytes` are converted to raw bytes automatically.
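For illustration, the shape of a `converse` message containing an image, written as plain Python data (the text and byte values are placeholders); this follows the Bedrock `converse` content-block schema:

```python
# With the Pixeltable UDF, `source.bytes` may instead hold a PIL image or a
# media file path; it is converted to raw bytes automatically before the call.
message = {
    'role': 'user',
    'content': [
        {'text': 'Describe this picture.'},
        {
            'image': {
                'format': 'png',
                # Raw image bytes (placeholder here); or a PIL image / file path in Pixeltable.
                'source': {'bytes': b'\x89PNG...'},
            }
        },
    ],
}
```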
Requirements:
pip install boto3
Parameters:
- `messages` (Json): Input messages.
- `model_id` (Any): The model that will complete your prompt.
- `system` (Any): An optional system prompt.
- `inference_config` (Any): Base inference parameters to use.
- `additional_model_request_fields` (Any): Additional inference parameters to use.
- `tool_config` (Any): An optional list of Pixeltable tools to use.

Returns:
- `pxt.Json`: A dictionary containing the response and other metadata.
Example: Add a computed column that applies the model `anthropic.claude-3-haiku-20240307-v1:0` to an existing Pixeltable column `tbl.prompt` of the table `tbl`:
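A sketch of that call, assuming the table `tbl` with its string column `prompt` already exists:

```python
import pixeltable as pxt
from pixeltable.functions import bedrock

tbl = pxt.get_table('tbl')
messages = [{'role': 'user', 'content': [{'text': tbl.prompt}]}]
tbl.add_computed_column(
    response=bedrock.converse(messages, model_id='anthropic.claude-3-haiku-20240307-v1:0')
)
```

The model's reply can then be pulled out of the response JSON, e.g. `tbl.response['output']['message']['content'][0]['text']` (path per the Bedrock `converse` response schema).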
udf embed()
Signature
Invokes the Bedrock `invoke_model` API for embedding models.
For additional details, see:
https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html
https://docs.aws.amazon.com/nova/latest/userguide/modality-embedding.html
https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-embed.html
Requirements:
pip install boto3
Parameters:
- `text` (String): Input text to embed.
- `model_id` (String): The embedding model identifier. Supported models:
  - `amazon.titan-embed-text-v1`
  - `amazon.titan-embed-text-v2:0` (supports `dimensions`: 256, 512, 1024)
  - `amazon.nova-2-multimodal-embeddings-v1:0` (supports `dimensions`: 256, 512, 1024, 3072)
  - `cohere.embed-english-v3`
  - `cohere.embed-multilingual-v3`
  - `cohere.embed-v4:0` (supports `dimensions`: 256, 512, 1024, 1536)
- `dimensions` (Int | None, default: None): Output embedding dimensions (model-dependent, optional).

Returns:
- `pxt.Array[(None,), float32]`: Embedding vector.
Example: Embed an existing column `tbl.description` with Nova embeddings and custom dimensions:
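A sketch of that call, assuming a table `tbl` with a string column `description` already exists:

```python
import pixeltable as pxt
from pixeltable.functions import bedrock

tbl = pxt.get_table('tbl')
tbl.add_computed_column(
    embedding=bedrock.embed(
        tbl.description,
        model_id='amazon.nova-2-multimodal-embeddings-v1:0',
        dimensions=1024,  # one of the dimensions this model supports
    )
)
```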
udf invoke_model()
Signature
Invokes the Bedrock `invoke_model` API endpoint, with automatic routing to `StartAsyncInvoke` for models that require it (e.g. video generation, audio/video embeddings).
For additional details, see:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime/client/invoke_model.html
PIL images and media file paths anywhere in the request body are converted automatically
to the encoding expected by the target model. For image-generation models, base64-encoded
images in the response are automatically decoded into PIL.Image objects.
For models that require async invocation, `bedrock.temp_location` must be configured (set the environment variable `BEDROCK_TEMP_LOCATION` or add `temp_location` to the `[bedrock]` section of your Pixeltable configuration file).
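A sketch of the configuration-file entry, assuming the default config location (`~/.pixeltable/config.toml`); the S3 URI is an illustrative placeholder:

```toml
# ~/.pixeltable/config.toml
[bedrock]
temp_location = 's3://my-bucket/bedrock-temp/'
```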
Requirements:
pip install boto3
Parameters:
- `body` (pxt.Json): The prompt and inference parameters as a dictionary.
- `model_id` (pxt.String): The model identifier to invoke.

Returns:
- `pxt.Json`: A dictionary containing the model response. For image-generation models, image fields are decoded to `PIL.Image` objects. For video-generation models, returns a `pxt.Video` path.
Example: Invoke an image-generation model (does not require `bedrock.temp_location` to be configured):
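A sketch of an image-generation call, assuming a table `tbl` with a string column `prompt`; the model ID and request-body fields follow the Amazon Titan Image Generator schema and are illustrative:

```python
import pixeltable as pxt
from pixeltable.functions import bedrock

tbl = pxt.get_table('tbl')
tbl.add_computed_column(
    generated=bedrock.invoke_model(
        body={
            'taskType': 'TEXT_IMAGE',
            'textToImageParams': {'text': tbl.prompt},
            'imageGenerationConfig': {'numberOfImages': 1},
        },
        model_id='amazon.titan-image-generator-v2:0',
    )
)
# Image fields in the response JSON are decoded to PIL.Image objects.
```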
Example: Invoke a video-generation model (requires `bedrock.temp_location`):
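A sketch of a video-generation call, again assuming a table `tbl` with a string column `prompt`; the model ID and body fields follow the Amazon Nova Reel schema and are illustrative:

```python
import pixeltable as pxt
from pixeltable.functions import bedrock

# Async invocation: bedrock.temp_location must be configured first.
tbl = pxt.get_table('tbl')
tbl.add_computed_column(
    video=bedrock.invoke_model(
        body={
            'taskType': 'TEXT_VIDEO',
            'textToVideoParams': {'text': tbl.prompt},
            'videoGenerationConfig': {'durationSeconds': 6, 'fps': 24},
        },
        model_id='amazon.nova-reel-v1:0',
    )
)
# For video-generation models, the UDF returns a pxt.Video path.
```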