Pixeltable UDFs that wrap various endpoints from the OpenAI API. To use them, you must first pip install openai and configure your OpenAI credentials, as described in the Working with OpenAI tutorial.

UDFs


chat_completions() udf

Creates a model response for the given chat conversation. Equivalent to the OpenAI chat/completions API endpoint. For additional details, see: https://platform.openai.com/docs/guides/chat-completions
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
  • pip install openai
Signature:
chat_completions(
    messages: Json,
    model: String,
    model_kwargs: Optional[Json],
    tools: Optional[Json],
    tool_choice: Optional[Json]
) -> Json
Parameters:
  • messages (Json): A list of messages to use for chat completion, as described in the OpenAI API documentation.
  • model (String): The model to use for chat completion.
  • model_kwargs (Optional[Json]): Additional keyword args for the OpenAI chat/completions API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/chat/create
  • tools (Optional[Json]): An optional list of tools that the model may call, as described in the OpenAI API documentation.
  • tool_choice (Optional[Json]): An optional specification of which tool, if any, the model should call.
Returns:
  • Json: A dictionary containing the response and other metadata.
Example: Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.prompt of the table tbl:
messages = [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': tbl.prompt}]
tbl.add_computed_column(response=chat_completions(messages, model='gpt-4o-mini'))
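Fields of the JSON response can be extracted with further computed columns, and additional chat/completions options can be passed through model_kwargs. A minimal sketch, continuing the example above (the answer and creative column names and the temperature setting are illustrative, not part of the API reference):
tbl.add_computed_column(answer=tbl.response.choices[0].message.content)
tbl.add_computed_column(
    creative=chat_completions(messages, model='gpt-4o-mini', model_kwargs={'temperature': 0.9})
)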

embeddings() udf

Creates an embedding vector representing the input text. Equivalent to the OpenAI embeddings API endpoint. For additional details, see: https://platform.openai.com/docs/guides/embeddings
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
  • pip install openai
Signature:
embeddings(
    input: String,
    model: String,
    model_kwargs: Optional[Json]
) -> Array[(None,), Float]
Parameters:
  • input (String): The text to embed.
  • model (String): The model to use for the embedding.
  • model_kwargs (Optional[Json]): Additional keyword args for the OpenAI embeddings API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/embeddings/create
Returns:
  • Array[(None,), Float]: An array representing the application of the given embedding to input.
Example: Add a computed column that applies the model text-embedding-3-small to an existing Pixeltable column tbl.text of the table tbl:
tbl.add_computed_column(embed=embeddings(tbl.text, model='text-embedding-3-small'))
Add an embedding index to an existing column text, using the model text-embedding-3-small:
tbl.add_embedding_index(tbl.text, embedding=embeddings.using(model='text-embedding-3-small'))
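Once the index exists, it can be used for similarity search over the column. A minimal sketch, assuming the index created above (the query string and result limit are illustrative):
sim = tbl.text.similarity('sample query')
results = tbl.order_by(sim, asc=False).limit(5).select(tbl.text, score=sim).collect()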

image_generations() udf

Creates an image given a prompt. Equivalent to the OpenAI images/generations API endpoint. For additional details, see: https://platform.openai.com/docs/guides/images
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
  • pip install openai
Signature:
image_generations(
    prompt: String,
    model: String,
    model_kwargs: Optional[Json]
) -> Image
Parameters:
  • prompt (String): The prompt to use for image generation.
  • model (String): The model to use for image generation.
  • model_kwargs (Optional[Json]): Additional keyword args for the OpenAI images/generations API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/images/create
Returns:
  • Image: The generated image.
Example: Add a computed column that applies the model dall-e-2 to an existing Pixeltable column tbl.text of the table tbl:
tbl.add_computed_column(gen_image=image_generations(tbl.text, model='dall-e-2'))
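Generation options such as the image size can be passed through model_kwargs. A minimal sketch (the gen_image_small column name and the size value are illustrative):
tbl.add_computed_column(
    gen_image_small=image_generations(tbl.text, model='dall-e-2', model_kwargs={'size': '512x512'})
)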

invoke_tools() udf

Converts an OpenAI response dict to Pixeltable tool invocation format and calls tools._invoke().
Signature:
invoke_tools(
    tools: pixeltable.func.tools.Tools,
    response: pixeltable.exprs.expr.Expr
) -> pixeltable.exprs.inline_expr.InlineDict
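A minimal sketch of the intended tool-calling workflow, assuming a table tbl with a prompt column; the stock_price UDF and the column names are hypothetical, and chat_completions is assumed to accept the Tools object produced by pxt.tools() for its tools parameter (see the Pixeltable tool-calling documentation for the authoritative pattern):
import pixeltable as pxt
from pixeltable.functions.openai import chat_completions, invoke_tools

# Hypothetical UDF exposed to the model as a callable tool.
@pxt.udf
def stock_price(ticker: str) -> float:
    """Returns the latest price for the given ticker symbol."""
    return 123.45  # placeholder implementation

tools = pxt.tools(stock_price)
messages = [{'role': 'user', 'content': tbl.prompt}]

# Let the model decide whether to call the tool, then execute any requested tool calls.
tbl.add_computed_column(response=chat_completions(messages, model='gpt-4o-mini', tools=tools))
tbl.add_computed_column(tool_output=invoke_tools(tools, tbl.response))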

moderations() udf

Classifies if text is potentially harmful. Equivalent to the OpenAI moderations API endpoint. For additional details, see: https://platform.openai.com/docs/guides/moderation
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
  • pip install openai
Signature:
moderations(
    input: String,
    model: String
) -> Json
Parameters:
  • input (String): Text to analyze with the moderations model.
  • model (String): The model to use for moderations.
Returns:
  • Json: Details of the moderations results.
Example: Add a computed column that applies the model text-moderation-stable to an existing Pixeltable column tbl.text of the table tbl:
tbl.add_computed_column(moderations=moderations(tbl.text, model='text-moderation-stable'))
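Individual fields of the moderation results can then be extracted with further computed columns. A sketch, assuming the column added above (the flagged column name is illustrative; the results structure follows the OpenAI moderations response format):
tbl.add_computed_column(flagged=tbl.moderations.results[0].flagged)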

speech() udf

Generates audio from the input text. Equivalent to the OpenAI audio/speech API endpoint. For additional details, see: https://platform.openai.com/docs/guides/text-to-speech
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
  • pip install openai
Signature:
speech(
    input: String,
    model: String,
    voice: String,
    model_kwargs: Optional[Json]
) -> Audio
Parameters:
  • input (String): The text to synthesize into speech.
  • model (String): The model to use for speech synthesis.
  • voice (String): The voice profile to use for speech synthesis. Supported options include: alloy, echo, fable, onyx, nova, and shimmer.
  • model_kwargs (Optional[Json]): Additional keyword args for the OpenAI audio/speech API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createSpeech
Returns:
  • Audio: An audio file containing the synthesized speech.
Example: Add a computed column that applies the model tts-1 to an existing Pixeltable column tbl.text of the table tbl:
tbl.add_computed_column(audio=speech(tbl.text, model='tts-1', voice='nova'))
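Synthesis options such as playback speed can be passed through model_kwargs. A minimal sketch (the audio_fast column name and the speed value are illustrative):
tbl.add_computed_column(audio_fast=speech(tbl.text, model='tts-1', voice='nova', model_kwargs={'speed': 1.25}))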

transcriptions() udf

Transcribes audio into the input language. Equivalent to the OpenAI audio/transcriptions API endpoint. For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
  • pip install openai
Signature:
transcriptions(
    audio: Audio,
    model: String,
    model_kwargs: Optional[Json]
) -> Json
Parameters:
  • audio (Audio): The audio to transcribe.
  • model (String): The model to use for the transcription.
  • model_kwargs (Optional[Json]): Additional keyword args for the OpenAI audio/transcriptions API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranscription
Returns:
  • Json: A dictionary containing the transcription and other metadata.
Example: Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
tbl.add_computed_column(transcription=transcriptions(tbl.audio, model='whisper-1', model_kwargs={'language': 'en'}))
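The transcript text can then be extracted from the JSON response. A sketch, assuming the column added above (the transcript_text column name is illustrative; the text field follows the OpenAI transcription response format):
tbl.add_computed_column(transcript_text=tbl.transcription.text)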

translations() udf

Translates audio into English. Equivalent to the OpenAI audio/translations API endpoint. For additional details, see: https://platform.openai.com/docs/guides/speech-to-text
Request throttling: Applies the rate limit set in the config (section openai.rate_limits; use the model id as the key). If no rate limit is configured, uses a default of 600 RPM.
Requirements:
  • pip install openai
Signature:
translations(
    audio: Audio,
    model: String,
    model_kwargs: Optional[Json]
) -> Json
Parameters:
  • audio (Audio): The audio to translate.
  • model (String): The model to use for the translation.
  • model_kwargs (Optional[Json]): Additional keyword args for the OpenAI audio/translations API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/audio/createTranslation
Returns:
  • Json: A dictionary containing the translation and other metadata.
Example: Add a computed column that applies the model whisper-1 to an existing Pixeltable column tbl.audio of the table tbl:
tbl.add_computed_column(translation=translations(tbl.audio, model='whisper-1'))

vision() udf

Analyzes an image with the OpenAI vision capability. This is a convenience function that takes an image and prompt, and constructs a chat completion request that utilizes OpenAI vision. For additional details, see: https://platform.openai.com/docs/guides/vision
Request throttling: Uses the rate limit-related headers returned by the API to throttle requests adaptively, based on available request and token capacity. No configuration is necessary.
Requirements:
  • pip install openai
Signature:
vision(
    prompt: String,
    image: Image,
    model: String,
    model_kwargs: Optional[Json]
) -> String
Parameters:
  • prompt (String): A prompt for the OpenAI vision request.
  • image (Image): The image to analyze.
  • model (String): The model to use for OpenAI vision.
  • model_kwargs (Optional[Json]): Additional keyword args for the underlying OpenAI chat/completions API. For details on the available parameters, see: https://platform.openai.com/docs/api-reference/chat/create
Returns:
  • String: The response from the OpenAI vision API.
Example: Add a computed column that applies the model gpt-4o-mini to an existing Pixeltable column tbl.image of the table tbl:
tbl.add_computed_column(response=vision("What's in this image?", tbl.image, model='gpt-4o-mini'))