Who: AI/App Developers
Output: AI-powered application
Add multimodal intelligence to applications with two deployment patterns.
Same foundation, different intent: This workflow uses the same Pixeltable capabilities as Data Wrangling for ML — tables, multimodal types, computed columns, iterators. The difference is the output: training datasets vs. live application intelligence.

Data Lifecycle

1. Create Tables

Define the schema with native multimodal types; Pixeltable handles storage and references. Key APIs: create_table(), pxt.Image, pxt.Video, pxt.Audio, pxt.Document, pxt.Json
import pixeltable as pxt

# Create the 'app' namespace before creating tables under it
pxt.create_dir('app')

# Native multimodal types
t = pxt.create_table('app.docs', {
    'pdf': pxt.Document,
    'metadata': pxt.Json
})
2. Ingest Data

Load from any source: local files, URLs, cloud storage, or databases. Key APIs: insert(), import_csv(), S3/GCS/Azure
# Insert with URLs, local paths, or direct upload
t.insert([
    {'pdf': 'https://example.com/report.pdf'},
    {'pdf': '/local/path/to/doc.pdf'},
    {'pdf': 's3://bucket/documents/spec.pdf'}
])
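
The capabilities called out above (computed columns, iterators) then turn raw documents into application-ready data. A minimal sketch, assuming Pixeltable's DocumentSplitter iterator; the splitting parameters used here (separators, limit) are assumptions and may differ by version:

from pixeltable.iterators import DocumentSplitter

# Chunk each PDF into passages; the view stays in sync as new rows arrive
chunks = pxt.create_view(
    'app.doc_chunks',
    t,
    iterator=DocumentSplitter.create(
        document=t.pdf,
        separators='token_limit',  # split on a token budget (assumed strategy)
        limit=300                  # approx. tokens per chunk (assumption)
    )
)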

Deployment Patterns

When: Keep existing RDBMS + blob storage. Pixeltable processes media, runs models, then exports results to your existing systems.
# Process in Pixeltable with media stored directly to S3/GCS/Azure
# ('videos' is assumed to be a frame view over a video table, e.g. one
# created with FrameIterator, so it exposes a per-frame 'frame' column)
videos.add_computed_column(
    thumbnail=videos.frame.resize((256, 256)),
    destination='s3://my-bucket/thumbnails/'  # generated media written straight to blob storage
)

# Export metadata to an external RDBMS via SQLAlchemy
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:pass@host/db')  # hypothetical connection string
df = videos.select(videos.video, videos.transcript).collect().to_pandas()
df.to_sql('video_metadata', engine, if_exists='append')
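
This split keeps responsibilities clean: heavy media artifacts are written straight to blob storage, while only lightweight metadata and model outputs land in the relational database your application already queries.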

Orchestration Pattern Guide

Process → Export to your existing infrastructure
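
A minimal sketch of that loop, reusing videos and engine from above; because computed columns recompute incrementally on insert, orchestration reduces to insert-then-export (the 'app.videos' table path is hypothetical):

# Inserting into the base table triggers incremental recomputation of
# every dependent computed column and view
base = pxt.get_table('app.videos')  # hypothetical table path
base.insert([{'video': 's3://bucket/incoming/clip.mp4'}])

# Export the refreshed results to the external RDBMS as before
out = videos.select(videos.video, videos.transcript).collect().to_pandas()
out.to_sql('video_metadata', engine, if_exists='replace')  # full refresh for simplicity; a production job would export only new rows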

End-to-End Examples

More sample apps: Check out the sample-apps directory for chat applications, multimodal search, and more.