module pixeltable.functions.openai
Pixeltable UDFs that wrap various endpoints of the OpenAI API. To use them, you must first `pip install openai` and configure your OpenAI credentials, as described in the Working with OpenAI tutorial.
func invoke_tools()
Converts an OpenAI response dict to Pixeltable tool invocation format and calls `tools._invoke()`.
udf chat_completions()
Creates a model response for the given chat conversation. Equivalent to the OpenAI `chat/completions` API endpoint. For additional details, see: https://platform.openai.com/docs/guides/chat-completions
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- `messages` (`Json`): A list of messages to use for chat completion, as described in the OpenAI API documentation.
- `model` (`String`): The model to use for chat completion.
- `model_kwargs` (`Json | None`): Additional keyword args for the OpenAI `chat/completions` API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/chat/create
Returns:
`Json`: A dictionary containing the response and other metadata.
Example: Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.prompt of the table tbl:
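A minimal sketch of this example, assuming a table `tbl` with a string column `prompt` already exists (running it requires the `openai` package and configured credentials):

```python
import pixeltable as pxt
from pixeltable.functions import openai

# Assumes an existing table `tbl` with a string column `prompt`.
tbl = pxt.get_table('tbl')

# Build the chat message list from the column reference.
messages = [{'role': 'user', 'content': tbl.prompt}]

# Add a computed column; the completion is computed for each row.
tbl.add_computed_column(
    response=openai.chat_completions(messages=messages, model='gpt-4o-mini')
)
```

The response text can then be extracted from the returned JSON, e.g. via `tbl.response['choices'][0]['message']['content']`.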
udf embeddings()
Creates an embedding vector representing the input text. Equivalent to the OpenAI `embeddings` API endpoint. For additional details, see: https://platform.openai.com/docs/guides/embeddings
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- `input` (`String`): The text to embed.
- `model` (`String`): The model to use for the embedding.
- `model_kwargs` (`Json | None`): Additional keyword args for the OpenAI `embeddings` API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/embeddings
Returns:
`Array[(None,), Float]`: An array representing the application of the given embedding to `input`.
Example: Add a computed column that applies the model text-embedding-3-small to an existing Pixeltable column tbl.text of the table tbl:
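A minimal sketch of this example, assuming a table `tbl` with a string column `text` already exists:

```python
import pixeltable as pxt
from pixeltable.functions.openai import embeddings

tbl = pxt.get_table('tbl')  # assumes a string column `text`

# Each row's text is embedded into a float array column.
tbl.add_computed_column(
    embed=embeddings(tbl.text, model='text-embedding-3-small')
)
```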
Example: Add an embedding index to an existing Pixeltable column tbl.text, using the model text-embedding-3-small:
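A sketch of the index variant; the `embedding=` keyword and `.using()` partial binding are taken from Pixeltable's embedding-index API and may differ by version:

```python
import pixeltable as pxt
from pixeltable.functions.openai import embeddings

tbl = pxt.get_table('tbl')

# Bind the model parameter with .using() so the UDF can serve as an
# embedding function for similarity search over the `text` column.
tbl.add_embedding_index(
    tbl.text,
    embedding=embeddings.using(model='text-embedding-3-small')
)
```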
udf image_generations()
Creates an image given a prompt. Equivalent to the OpenAI `images/generations` API endpoint. For additional details, see: https://platform.openai.com/docs/guides/images
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `prompt` (`String`): Prompt for the image.
- `model` (`String`): The model to use for the generations.
- `model_kwargs` (`Json | None`): Additional keyword args for the OpenAI `images/generations` API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/images/create
Returns:
`Image`: The generated image.
Example: Add a computed column that applies the model dall-e-2 to an existing Pixeltable column tbl.text of the table tbl:
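A minimal sketch of this example, assuming a table `tbl` with a string column `text` already exists:

```python
import pixeltable as pxt
from pixeltable.functions.openai import image_generations

tbl = pxt.get_table('tbl')

# Each row's text is used as the image-generation prompt.
tbl.add_computed_column(
    gen_image=image_generations(tbl.text, model='dall-e-2')
)
```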
udf moderations()
Classifies whether text is potentially harmful. Equivalent to the OpenAI `moderations` API endpoint. For additional details, see: https://platform.openai.com/docs/guides/moderation
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `input` (`String`): Text to analyze with the moderations model.
- `model` (`String`): The model to use for moderations.
Returns:
`Json`: Details of the moderations results.
Example: Add a computed column that applies the model text-moderation-stable to an existing Pixeltable column tbl.input of the table tbl:
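A minimal sketch of this example, assuming a table `tbl` with a string column `input` already exists (the output column name `moderation` is illustrative):

```python
import pixeltable as pxt
from pixeltable.functions.openai import moderations

tbl = pxt.get_table('tbl')

# Each row's input text is classified by the moderations model.
tbl.add_computed_column(
    moderation=moderations(tbl.input, model='text-moderation-stable')
)
```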
udf speech()
Generates audio from the input text. Equivalent to the OpenAI `audio/speech` API endpoint. For additional details, see: https://platform.openai.com/docs/guides/text-to-speech
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `input` (`String`): The text to synthesize into speech.
- `model` (`String`): The model to use for speech synthesis.
- `voice` (`String`): The voice profile to use for speech synthesis. Supported options include: `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
- `model_kwargs` (`Json | None`): Additional keyword args for the OpenAI `audio/speech` API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createSpeech
Returns:
`Audio`: An audio file containing the synthesized speech.
Example: Add a computed column that applies the model tts-1 to an existing Pixeltable column tbl.text of the table tbl:
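A minimal sketch of this example, assuming a table `tbl` with a string column `text` already exists (the `nova` voice is an arbitrary pick from the supported options):

```python
import pixeltable as pxt
from pixeltable.functions.openai import speech

tbl = pxt.get_table('tbl')

# Synthesize each row's text into an audio file; `voice` is required.
tbl.add_computed_column(
    audio=speech(tbl.text, model='tts-1', voice='nova')
)
```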
udf transcriptions()
Transcribes audio into the input language. Equivalent to the OpenAI `audio/transcriptions` API endpoint. For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `audio` (`Audio`): The audio to transcribe.
- `model` (`String`): The model to use for speech transcription.
- `model_kwargs` (`Json | None`): Additional keyword args for the OpenAI `audio/transcriptions` API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranscription
Returns:
`Json`: A dictionary containing the transcription and other metadata.
Example: Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
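A minimal sketch of this example, assuming a table `tbl` with an audio column `audio` already exists:

```python
import pixeltable as pxt
from pixeltable.functions.openai import transcriptions

tbl = pxt.get_table('tbl')

# Each row's audio is transcribed; the result is a JSON dict.
tbl.add_computed_column(
    transcription=transcriptions(tbl.audio, model='whisper-1')
)
```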
udf translations()
Translates audio into English. Equivalent to the OpenAI `audio/translations` API endpoint. For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- `audio` (`Audio`): The audio to translate.
- `model` (`String`): The model to use for speech transcription and translation.
- `model_kwargs` (`Json | None`): Additional keyword args for the OpenAI `audio/translations` API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranslation
Returns:
`Json`: A dictionary containing the translation and other metadata.
Example: Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
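A minimal sketch of this example, assuming a table `tbl` with an audio column `audio` already exists:

```python
import pixeltable as pxt
from pixeltable.functions.openai import translations

tbl = pxt.get_table('tbl')

# Each row's audio is translated into English text.
tbl.add_computed_column(
    translation=translations(tbl.audio, model='whisper-1')
)
```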
udf vision()
Analyzes an image with the OpenAI vision capability. For additional details, see: https://platform.openai.com/docs/guides/vision
Requirements:
pip install openai
Parameters:
- `prompt` (`String`): A prompt for the OpenAI vision request.
- `image` (`Image`): The image to analyze.
- `model` (`String`): The model to use for OpenAI vision.
Returns:
`String`: The response from the OpenAI vision API.
Example: Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.image of the table tbl:
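A minimal sketch of this example, assuming a table `tbl` with an image column `image` already exists (the prompt string is illustrative):

```python
import pixeltable as pxt
from pixeltable.functions.openai import vision

tbl = pxt.get_table('tbl')

# The same prompt is applied to each row's image; the result is
# the model's text response.
tbl.add_computed_column(
    response=vision("Describe what's in this image.", tbl.image, model='gpt-4o-mini')
)
```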