From language models to computer vision frameworks, Pixeltable integrates with the broader AI ecosystem. All integrations ship out-of-the-box with the Pixeltable installation; no additional setup is required unless noted.
If there is a framework you would like us to integrate with, please reach out. You can also use Pixeltable's UDFs to build your own integration.

Cloud LLM providers

Anthropic Claude

Integrate Claude models for advanced language understanding and generation with multimodal capabilities

Google Gemini

Access Google’s Gemini models via Google AI Studio or Vertex AI for state-of-the-art multimodal AI capabilities

OpenAI

Leverage GPT models for text generation, embeddings, and image analysis

Azure OpenAI

Use OpenAI models via Azure with enterprise security and compliance

Mistral AI

Use Mistral’s efficient language models for various NLP tasks

Together AI

Access a variety of open-source models through Together AI’s platform

Fireworks

Use Fireworks.ai’s optimized model inference infrastructure

DeepSeek

Leverage DeepSeek’s powerful language and code models for text and code generation

AWS Bedrock

Access a variety of AI models through AWS Bedrock’s unified API

Groq

Access Groq’s models for text generation

OpenRouter

Unified access to 100+ LLMs from various providers through a single API

Embeddings & Reranking

Voyage AI

High-quality embeddings and reranking for text, images, and video

Jina AI

Embeddings and reranking optimized for search and RAG pipelines

Video Understanding

Twelve Labs

Multimodal video understanding, search, and analysis with state-of-the-art foundation models

Media Generation

fal.ai

Fast image generation with Flux, Stable Diffusion, and other models

Reve

AI-powered video generation and editing capabilities

RunwayML

AI video generation with Gen-3 Alpha and other Runway models

Local LLM runtimes

Llama.cpp

High-performance C++ implementation for running LLMs on CPU and GPU

Ollama

Easy-to-use toolkit for running and managing open-source models locally

Computer vision

YOLOX

State-of-the-art object detection with YOLOX models

Voxel51

Advanced video and image dataset management with Voxel51

Annotation tools

Label Studio

Comprehensive platform for data annotation and labeling workflows

Audio processing

Whisper/WhisperX

High-quality speech recognition and transcription using OpenAI’s Whisper models

Enterprise Platforms

Microsoft Fabric

Azure OpenAI integration through Microsoft Fabric for enterprise AI workloads

Data Wrangling

Pandas

Import from and export to Pandas DataFrames

Usage examples

OpenAI chat completion in a computed column:

import pixeltable as pxt
from pixeltable.functions import openai

# Create a table with a computed column for OpenAI chat completions
table = pxt.create_table('responses', {'prompt': pxt.String})

table.add_computed_column(
    response=openai.chat_completions(
        messages=[{'role': 'user', 'content': table.prompt}],
        model='gpt-4'
    )
)

YOLOX object detection on video frames:

from pixeltable.functions.yolox import yolox

# Add object detection to video frames
frames_view.add_computed_column(
    detections=yolox(
        frames_view.frame,
        model_id='yolox_l'
    )
)

Whisper transcription of audio files:

from pixeltable.functions import openai

# Transcribe audio files with OpenAI's Whisper API
audio_table.add_computed_column(
    transcription=openai.transcriptions(
        audio=audio_table.file,
        model='whisper-1'
    )
)

Integration features

1. Easy Setup: most integrations work out-of-the-box with simple API configuration.
2. Computed Columns: use integrations directly in computed columns for automated processing.
3. Batch Processing: efficient handling of batch operations with automatic optimization.
Check the provider notebooks for detailed usage instructions for each integration.
Need help setting up integrations? Join our Discord community for support.
Last modified on March 15, 2026