module pixeltable.functions.openai
Pixeltable UDFs that wrap various endpoints from the OpenAI API. In order to use them, you must first pip install openai and configure your OpenAI credentials, as described in the Working with OpenAI tutorial.
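For example, one common way to supply credentials is through the OPENAI_API_KEY environment variable (a minimal sketch; see the tutorial for the full set of configuration options):

    import os

    # Make the key available before any OpenAI UDFs are evaluated.
    os.environ['OPENAI_API_KEY'] = 'sk-...'  # replace with your actual API key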
func invoke_tools()
Signature
Converts an OpenAI response dict with tool calls into a Pixeltable tools._invoke() expression.
udf chat_completions()
Signature
Wraps the OpenAI chat/completions API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/chat-completions
Request throttling:
Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available
request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
messages (pxt.Json): A list of messages to use for chat completion, as described in the OpenAI API documentation.
model (pxt.String): The model to use for chat completion.
model_kwargs (pxt.Json | None): Additional keyword args for the OpenAI chat/completions API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/chat/create
Returns:
pxt.Json: A dictionary containing the response and other metadata.
Example:
Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.prompt of the table tbl:
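A minimal sketch of such a column (the column name response and the single-message structure are illustrative; it assumes the table tbl already exists with a prompt column):

    from pixeltable.functions import openai

    # Build the chat messages from the existing prompt column, then store the API response.
    messages = [{'role': 'user', 'content': tbl.prompt}]
    tbl.add_computed_column(response=openai.chat_completions(messages, model='gpt-4o-mini'))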
udf embeddings()
Signature
Wraps the OpenAI embeddings API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/embeddings
Request throttling:
Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available
request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
input (pxt.String): The text to embed.
model (pxt.String): The model to use for the embedding.
model_kwargs (pxt.Json | None): Additional keyword args for the OpenAI embeddings API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/embeddings
Returns:
pxt.Array[(None,), Float]: An array representing the application of the given embedding to input.
Examples:
Add a computed column that applies the model text-embedding-3-small to an existing Pixeltable column tbl.text of the table tbl:
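One possible form (the column name embedding is illustrative; assumes tbl already exists with a text column):

    from pixeltable.functions import openai

    # Store one embedding vector per row of tbl.text.
    tbl.add_computed_column(embedding=openai.embeddings(tbl.text, model='text-embedding-3-small'))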
Add an embedding index to an existing column text, using the model text-embedding-3-small:
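A sketch of such an index (the embedding= keyword reflects one way of binding the model via .using(); assumes the text column already exists):

    from pixeltable.functions import openai

    # Index the text column for similarity search using OpenAI embeddings.
    tbl.add_embedding_index('text', embedding=openai.embeddings.using(model='text-embedding-3-small'))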
udf image_generations()
Signature
Wraps the OpenAI images/generations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/images
Request throttling:
Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate
limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
prompt (pxt.String): Prompt for the image.
model (pxt.String): The model to use for the generations.
model_kwargs (pxt.Json | None): Additional keyword args for the OpenAI images/generations API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/images/create
Returns:
pxt.Image: The generated image.
Example:
Add a computed column that applies the model dall-e-2 to an existing Pixeltable column tbl.text of the table tbl:
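A sketch (the column name generated is illustrative; assumes tbl already exists with a text column holding the prompts):

    from pixeltable.functions import openai

    # Generate one image per prompt stored in tbl.text.
    tbl.add_computed_column(generated=openai.image_generations(tbl.text, model='dall-e-2'))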
udf moderations()
Signature
Wraps the OpenAI moderations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/moderation
Request throttling:
Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate
limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
input (pxt.String): Text to analyze with the moderations model.
model (pxt.String): The model to use for moderations.
Returns:
pxt.Json: Details of the moderations results.
Example:
Add a computed column that applies the model text-moderation-stable to an existing Pixeltable column tbl.input of the table tbl:
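A sketch (the column name moderation is illustrative; assumes tbl already exists with an input column):

    from pixeltable.functions import openai

    # Run the moderations model over each row of tbl.input.
    tbl.add_computed_column(moderation=openai.moderations(tbl.input, model='text-moderation-stable'))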
udf speech()
Signature
Wraps the OpenAI audio/speech API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/text-to-speech
Request throttling:
Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate
limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
input (pxt.String): The text to synthesize into speech.
model (pxt.String): The model to use for speech synthesis.
voice (pxt.String): The voice profile to use for speech synthesis. Supported options include: alloy, echo, fable, onyx, nova, and shimmer.
model_kwargs (pxt.Json | None): Additional keyword args for the OpenAI audio/speech API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createSpeech
Returns:
pxt.Audio: An audio file containing the synthesized speech.
Example:
Add a computed column that applies the model tts-1 to an existing Pixeltable column tbl.text of the table tbl:
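A sketch (the column name audio and the choice of the nova voice are illustrative; assumes tbl already exists with a text column):

    from pixeltable.functions import openai

    # Synthesize one audio file per row of tbl.text.
    tbl.add_computed_column(audio=openai.speech(tbl.text, model='tts-1', voice='nova'))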
udf transcriptions()
Signature
Wraps the OpenAI audio/transcriptions API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling:
Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate
limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
audio (pxt.Audio): The audio to transcribe.
model (pxt.String): The model to use for speech transcription.
model_kwargs (pxt.Json | None): Additional keyword args for the OpenAI audio/transcriptions API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranscription
Returns:
pxt.Json: A dictionary containing the transcription and other metadata.
Example:
Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
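A sketch (the column name transcription is illustrative; assumes tbl already exists with an audio column):

    from pixeltable.functions import openai

    # Transcribe each audio file referenced by tbl.audio.
    tbl.add_computed_column(transcription=openai.transcriptions(tbl.audio, model='whisper-1'))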
udf translations()
Signature
Wraps the OpenAI audio/translations API endpoint.
For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling:
Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate
limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
audio (pxt.Audio): The audio to translate.
model (pxt.String): The model to use for speech transcription and translation.
model_kwargs (pxt.Json | None): Additional keyword args for the OpenAI audio/translations API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranslation
Returns:
pxt.Json: A dictionary containing the translation and other metadata.
Example:
Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
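A sketch (the column name translation is illustrative; assumes tbl already exists with an audio column):

    from pixeltable.functions import openai

    # Translate each audio file referenced by tbl.audio into English text.
    tbl.add_computed_column(translation=openai.translations(tbl.audio, model='whisper-1'))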
udf vision()
Signature
Requirements:
pip install openai
Parameters:
prompt (pxt.String): A prompt for the OpenAI vision request.
image (pxt.Image): The image to analyze.
model (pxt.String): The model to use for OpenAI vision.
Returns:
pxt.String: The response from the OpenAI vision API.
Example:
Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.image of the table tbl:
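A sketch (the column name description and the prompt text are illustrative; assumes tbl already exists with an image column):

    from pixeltable.functions import openai

    # Ask the vision model to describe each image in tbl.image.
    tbl.add_computed_column(
        description=openai.vision('Describe this image.', tbl.image, model='gpt-4o-mini')
    )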