module  pixeltable.functions.runwayml

Pixeltable UDFs that wrap various endpoints from the RunwayML API. To use them, first run pip install runwayml and configure your RunwayML credentials by setting the RUNWAYML_API_SECRET environment variable.
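A minimal setup sketch, assuming credentials are supplied via the environment; the key value below is a placeholder, and the imports mirror the unqualified names used in the examples that follow:

import os

import pixeltable as pxt
from pixeltable.functions.runwayml import (
    image_to_video, text_to_image, text_to_video, video_to_video
)

# Credentials are read from the RUNWAYML_API_SECRET environment variable;
# replace the placeholder with a real API secret.
os.environ['RUNWAYML_API_SECRET'] = 'key_...'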

udf  image_to_video()

Signature
@pxt.udf
image_to_video(
    prompt_image: pxt.Image,
    model: pxt.String,
    ratio: pxt.String,
    *,
    prompt_text: pxt.String | None = None,
    duration: pxt.Int | None = None,
    seed: pxt.Int | None = None,
    audio: pxt.Bool | None = None,
    model_kwargs: pxt.Json | None = None
) -> pxt.Json
Generate videos from images. For additional details, see: Image to video
Requirements:
  • pip install runwayml
Parameters:
  • prompt_image (pxt.Image): Input image to use as the first frame.
  • model (pxt.String): The model to use.
  • ratio (pxt.String): Aspect ratio of the generated video.
  • prompt_text (pxt.String | None): Text description to guide generation.
  • duration (pxt.Int | None): Duration in seconds.
  • seed (pxt.Int | None): Seed for reproducibility.
  • audio (pxt.Bool | None): Whether to generate audio.
  • model_kwargs (pxt.Json | None): Additional API parameters.
Returns:
  • pxt.Json: A dictionary containing the response and metadata.
Examples: Add a computed column that generates videos from images:
tbl.add_computed_column(
    response=image_to_video(
        tbl.image,
        model='gen4',
        ratio='16:9',
        prompt_text='Slow motion',
        duration=5,
    )
)
tbl.add_computed_column(video=tbl.response['output'].astype(pxt.Video))
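An end-to-end sketch of the same workflow, assuming a hypothetical table; the table name, column names, and image URL are placeholders:

import pixeltable as pxt
from pixeltable.functions.runwayml import image_to_video

# Hypothetical table with a single image column; the URL is a placeholder.
tbl = pxt.create_table('frames', {'image': pxt.Image})
tbl.insert([{'image': 'https://example.com/photo.jpg'}])

tbl.add_computed_column(
    response=image_to_video(
        tbl.image, model='gen4', ratio='16:9', prompt_text='Slow motion', duration=5
    )
)
tbl.add_computed_column(video=tbl.response['output'].astype(pxt.Video))

# Retrieve the generated videos.
tbl.select(tbl.video).collect()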

udf  text_to_image()

Signature
@pxt.udf
text_to_image(
    prompt_text: pxt.String,
    reference_images: pxt.Json,
    model: pxt.String,
    ratio: pxt.String,
    *,
    seed: pxt.Int | None = None,
    model_kwargs: pxt.Json | None = None
) -> pxt.Json
Generate images from text prompts and reference images. For additional details, see: Text/Image to Image
Requirements:
  • pip install runwayml
Parameters:
  • prompt_text (pxt.String): Text description of the image to generate.
  • reference_images (pxt.Json): List of 1-3 reference images.
  • model (pxt.String): The model to use.
  • ratio (pxt.String): Aspect ratio of the generated image.
  • seed (pxt.Int | None): Seed for reproducibility.
  • model_kwargs (pxt.Json | None): Additional API parameters.
Returns:
  • pxt.Json: A dictionary containing the response and metadata.
Examples: Add a computed column that generates images from prompts:
tbl.add_computed_column(
    response=text_to_image(
        tbl.prompt, [tbl.ref_image], model='gen4_image', ratio='16:9'
    )
)
tbl.add_computed_column(image=tbl.response['output'][0].astype(pxt.Image))
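Since reference_images accepts a list of 1-3 images, a variant sketch can pass more than one; the style_image column and the output column names here are hypothetical:

tbl.add_computed_column(
    response_styled=text_to_image(
        tbl.prompt,
        [tbl.ref_image, tbl.style_image],
        model='gen4_image',
        ratio='16:9',
        seed=42,
    )
)
tbl.add_computed_column(
    image_styled=tbl.response_styled['output'][0].astype(pxt.Image)
)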

udf  text_to_video()

Signature
@pxt.udf
text_to_video(
    prompt_text: pxt.String,
    model: pxt.String,
    ratio: pxt.String,
    *,
    duration: pxt.Int | None = None,
    audio: pxt.Bool | None = None,
    model_kwargs: pxt.Json | None = None
) -> pxt.Json
Generate videos from text prompts. For additional details, see: Text to video
Requirements:
  • pip install runwayml
Parameters:
  • prompt_text (pxt.String): Text description of the video to generate.
  • model (pxt.String): The model to use.
  • ratio (pxt.String): Aspect ratio of the generated video.
  • duration (pxt.Int | None): Duration in seconds.
  • audio (pxt.Bool | None): Whether to generate audio.
  • model_kwargs (pxt.Json | None): Additional API parameters.
Returns:
  • pxt.Json: A dictionary containing the response and metadata.
Examples: Add a computed column that generates videos from prompts:
tbl.add_computed_column(
    response=text_to_video(
        tbl.prompt, model='veo3.1', ratio='16:9', duration=4
    )
)
tbl.add_computed_column(video=tbl.response['output'].astype(pxt.Video))
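A variant of the example above that also requests audio, assuming the chosen model supports it; the output column names are hypothetical:

tbl.add_computed_column(
    response_with_audio=text_to_video(
        tbl.prompt, model='veo3.1', ratio='16:9', duration=4, audio=True
    )
)
tbl.add_computed_column(
    video_with_audio=tbl.response_with_audio['output'].astype(pxt.Video)
)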

udf  video_to_video()

Signature
@pxt.udf
video_to_video(
    video_uri: pxt.String,
    prompt_text: pxt.String,
    model: pxt.String,
    ratio: pxt.String,
    *,
    seed: pxt.Int | None = None,
    model_kwargs: pxt.Json | None = None
) -> pxt.Json
Transform videos with text guidance. For additional details, see: Video to video
Requirements:
  • pip install runwayml
Parameters:
  • video_uri (pxt.String): HTTPS URL to the input video.
  • prompt_text (pxt.String): Text description of the transformation.
  • model (pxt.String): The model to use.
  • ratio (pxt.String): Aspect ratio of the output video.
  • seed (pxt.Int | None): Seed for reproducibility.
  • model_kwargs (pxt.Json | None): Additional API parameters.
Returns:
  • pxt.Json: A dictionary containing the response and metadata.
Examples: Add a computed column that transforms videos:
tbl.add_computed_column(
    response=video_to_video(
        tbl.video_url, 'Anime style', model='gen4_aleph', ratio='16:9'
    )
)
tbl.add_computed_column(video=tbl.response['output'].astype(pxt.Video))
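An end-to-end sketch, assuming a hypothetical table whose video_url column holds HTTPS URLs to the input videos; the table name and URL are placeholders:

import pixeltable as pxt
from pixeltable.functions.runwayml import video_to_video

# Hypothetical table; video_uri must be an HTTPS URL to the input video.
tbl = pxt.create_table('clips', {'video_url': pxt.String})
tbl.insert([{'video_url': 'https://example.com/clip.mp4'}])

tbl.add_computed_column(
    response=video_to_video(
        tbl.video_url, 'Anime style', model='gen4_aleph', ratio='16:9'
    )
)
tbl.add_computed_column(video=tbl.response['output'].astype(pxt.Video))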