Install the openai package (pip install openai) and configure your OpenAI credentials, as described in the Working with OpenAI tutorial.
Functions
invoke_tools()
Converts an OpenAI response dict to Pixeltable tool invocation format and calls tools._invoke().
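The sketch below illustrates the intended tool-calling workflow rather than the exact signature: it assumes that UDFs can be registered as tools with pxt.tools() and that chat_completions() accepts the resulting tools object, as in Pixeltable's tool-calling examples. Table and column names are hypothetical.

```python
import pixeltable as pxt
from pixeltable.functions import openai

@pxt.udf
def stock_price(ticker: str) -> float:
    """Hypothetical tool: return the price for a ticker symbol."""
    return 123.45

tbl = pxt.get_table('my_table')  # hypothetical table with a string column `prompt`
tools = pxt.tools(stock_price)   # assumed helper that wraps UDFs as OpenAI tools

# Ask the model, allowing it to request tool calls ...
messages = [{'role': 'user', 'content': tbl.prompt}]
tbl.add_computed_column(
    response=openai.chat_completions(messages, model='gpt-4o-mini', tools=tools)
)

# ... then execute any requested tool calls against the registered UDFs.
tbl.add_computed_column(tool_output=openai.invoke_tools(tools, tbl.response))
```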
UDFs
chat_completions() udf
Creates a model response for the given chat conversation.
Equivalent to the OpenAI chat/completions API endpoint. For additional details, see: https://platform.openai.com/docs/guides/chat-completions
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- messages (Json): A list of messages to use for chat completion, as described in the OpenAI API documentation.
- model (String): The model to use for chat completion.
- model_kwargs (Optional[Json]): Additional keyword args for the OpenAI chat/completions API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/chat/create
Returns:
- Json: A dictionary containing the response and other metadata.
Example: Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.prompt of the table tbl:
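A minimal sketch of that pattern (the import path, table name, and output column name are assumptions; adapt them to your schema):

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with a string column `prompt`

# One user message per row, built from the `prompt` column.
messages = [{'role': 'user', 'content': tbl.prompt}]

# Store the full chat/completions response as a computed column.
tbl.add_computed_column(response=openai.chat_completions(messages, model='gpt-4o-mini'))

# The generated text can then be referenced as a JSON path, e.g.:
# tbl.response.choices[0].message.content
```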
embeddings() udf
Creates an embedding vector representing the input text.
Equivalent to the OpenAI embeddings API endpoint. For additional details, see: https://platform.openai.com/docs/guides/embeddings
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- input (String): The text to embed.
- model (String): The model to use for the embedding.
- model_kwargs (Optional[Json]): Additional keyword args for the OpenAI embeddings API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/embeddings
Returns:
- Array[(None,), Float]: An array representing the application of the given embedding to input.
Examples: Add a computed column that applies the model text-embedding-3-small to an existing Pixeltable column tbl.text of the table tbl:
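A minimal sketch, assuming tbl already exists with a string column text (table and column names are illustrative):

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with a string column `text`

# Store one embedding vector per row.
tbl.add_computed_column(embed=openai.embeddings(tbl.text, model='text-embedding-3-small'))
```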
Add an embedding index to an existing column text, using the model text-embedding-3-small:
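A sketch of the index variant; it assumes add_embedding_index() accepts a string_embed function created with .using(), and that the index can then back similarity queries, as in Pixeltable's embedding-index examples:

```python
# Build and maintain an embedding index over the `text` column.
tbl.add_embedding_index(
    tbl.text,
    string_embed=openai.embeddings.using(model='text-embedding-3-small'),
)

# Rank rows by similarity to a query string (illustrative query).
sim = tbl.text.similarity('sample query')
results = tbl.order_by(sim, asc=False).limit(5).collect()
```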
image_generations() udf
Creates an image given a prompt.
Equivalent to the OpenAI images/generations API endpoint. For additional details, see: https://platform.openai.com/docs/guides/images
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- prompt (String): Prompt for the image.
- model (String): The model to use for the generations.
- model_kwargs (Optional[Json]): Additional keyword args for the OpenAI images/generations API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/images/create
Returns:
- Image: The generated image.
Example: Add a computed column that applies the model dall-e-2 to an existing Pixeltable column tbl.text of the table tbl:
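A minimal sketch (table name and output column name are assumptions):

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with a string column `text`

# Generate one image per row from the text prompt.
tbl.add_computed_column(gen_image=openai.image_generations(tbl.text, model='dall-e-2'))
```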
moderations() udf
Classifies if text is potentially harmful.
Equivalent to the OpenAI moderations API endpoint. For additional details, see: https://platform.openai.com/docs/guides/moderation
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- input (String): Text to analyze with the moderations model.
- model (String): The model to use for moderations.
Returns:
- Json: Details of the moderations results.
Example: Add a computed column that applies the model text-moderation-stable to an existing Pixeltable column tbl.input of the table tbl:
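A minimal sketch under the same naming assumptions as above:

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with a string column `input`

# Classify each row's text; the full moderation result is stored as JSON.
tbl.add_computed_column(moderation=openai.moderations(tbl.input, model='text-moderation-stable'))
```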
speech() udf
Generates audio from the input text.
Equivalent to the OpenAI audio/speech API endpoint. For additional details, see: https://platform.openai.com/docs/guides/text-to-speech
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- input (String): The text to synthesize into speech.
- model (String): The model to use for speech synthesis.
- voice (String): The voice profile to use for speech synthesis. Supported options include: alloy, echo, fable, onyx, nova, and shimmer.
- model_kwargs (Optional[Json]): Additional keyword args for the OpenAI audio/speech API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createSpeech
Returns:
- Audio: An audio file containing the synthesized speech.
Example: Add a computed column that applies the model tts-1 to an existing Pixeltable column tbl.text of the table tbl:
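A minimal sketch (the table name, output column name, and voice choice are illustrative):

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with a string column `text`

# Synthesize speech for each row; any of the supported voices can be used here.
tbl.add_computed_column(audio=openai.speech(tbl.text, model='tts-1', voice='onyx'))
```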
transcriptions() udf
Transcribes audio into the input language.
Equivalent to the OpenAI audio/transcriptions API endpoint. For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- audio (Audio): The audio to transcribe.
- model (String): The model to use for speech transcription.
- model_kwargs (Optional[Json]): Additional keyword args for the OpenAI audio/transcriptions API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranscription
Returns:
- Json: A dictionary containing the transcription and other metadata.
Example: Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
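A minimal sketch (table and output column names are assumptions):

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with an audio column `audio`

# Transcribe each audio file; the response (text plus metadata) is stored as JSON.
tbl.add_computed_column(transcription=openai.transcriptions(tbl.audio, model='whisper-1'))
```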
translations() udf
Translates audio into English.
Equivalent to the OpenAI audio/translations API endpoint. For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
pip install openai
Parameters:
- audio (Audio): The audio to translate.
- model (String): The model to use for speech transcription and translation.
- model_kwargs (Optional[Json]): Additional keyword args for the OpenAI audio/translations API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranslation
Returns:
- Json: A dictionary containing the translation and other metadata.
Example: Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
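A minimal sketch, mirroring the transcription example above (names are illustrative):

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with an audio column `audio`

# Translate each audio file into English; the result is stored as JSON.
tbl.add_computed_column(translation=openai.translations(tbl.audio, model='whisper-1'))
```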
vision() udf
Analyzes an image with the OpenAI vision capability. This is a convenience function that takes an image and
prompt, and constructs a chat completion request that utilizes OpenAI vision.
For additional details, see: https://platform.openai.com/docs/guides/vision
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
pip install openai
Parameters:
- prompt (String): A prompt for the OpenAI vision request.
- image (Image): The image to analyze.
- model (String): The model to use for OpenAI vision.
Returns:
- String: The response from the OpenAI vision API.
Example: Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.image of the table tbl:
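A minimal sketch (the table name, output column name, and prompt text are illustrative):

```python
import pixeltable as pxt
from pixeltable.functions import openai

tbl = pxt.get_table('my_table')  # hypothetical table with an image column `image`

# Ask the model to describe each image; the response text is stored as a string column.
tbl.add_computed_column(
    description=openai.vision("Describe what's in this image.", tbl.image, model='gpt-4o-mini')
)
```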