Turn a collection of images into an animated video with Ken Burns
effects, text overlays, logos, and background music — using a fully
declarative Pixeltable pipeline.
Problem
You have a set of images and want to produce a polished video — an ad, a
product reel, a social media clip. Typically this means a video editor
or a complex ffmpeg scripting pipeline.
Solution
What’s in this recipe:
- Convert still images into animated video clips with pan effects
- Control pan direction per image from table data (no Python loops)
- Add per-image captions and a logo overlay
- Concatenate all clips and add background music
The key insight: store per-clip metadata (caption text, pan direction,
logo) as table columns. One chained computed column expression
handles the entire pipeline, and Pixeltable evaluates it per row.
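A rough plain-Python analogy for this idea (illustration only, not the Pixeltable API): each row carries its own parameters, and a single expression is evaluated once per row.

```python
# Plain-Python analogy (illustration only, not the Pixeltable API):
# per-clip variation lives in the data, and one expression is
# evaluated once per row. Pixeltable performs this mapping for you.
rows = [
    {'caption': 'DISCOVER NATURE', 'pan_sign': 1},
    {'caption': 'WILD FORESTS', 'pan_sign': -1},
]

def render_clip(row):
    # Stand-in for the chained pan/resize/overlay expression.
    direction = 'right' if row['pan_sign'] > 0 else 'left'
    return f"{row['caption']} (pan {direction})"

clips = [render_clip(r) for r in rows]
# clips == ['DISCOVER NATURE (pan right)', 'WILD FORESTS (pan left)']
```

In the real pipeline there is no loop at all: the computed column expression is defined once, and Pixeltable applies it to every row, including rows inserted later.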
Setup
%pip install -qU pixeltable
Note: you may need to restart the kernel to use updated packages.
import pixeltable as pxt
from pixeltable.functions.video import concat_videos_agg, with_audio
pxt.drop_dir('slideshow_demo', force=True)
pxt.create_dir('slideshow_demo')
Created directory ‘slideshow_demo’.
Step 1: Define the data
Each row is one clip in the final video. Per-clip variation comes from
table columns:
caption: text overlay for each clip
pan_sign: +1 for pan-right, -1 for pan-left
logo: image to overlay in the corner
This is what makes the pipeline fully declarative — no Python loops, no
conditional logic.
LOGO_URL = 'https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/pixeltable-logo-large.png'
t = pxt.create_table(
'slideshow_demo/clips',
{
'image': pxt.Image,
'seq': pxt.Int,
'caption': pxt.String,
'logo': pxt.Image,
'pan_sign': pxt.Int,
},
)
t.insert(
[
{
'image': 'https://images.unsplash.com/photo-1506744038136-46273834b3fb?w=1920&q=80',
'seq': 0,
'caption': 'DISCOVER NATURE',
'logo': LOGO_URL,
'pan_sign': 1,
},
{
'image': 'https://images.unsplash.com/photo-1470071459604-3b5ec3a7fe05?w=1920&q=80',
'seq': 1,
'caption': 'WILD FORESTS',
'logo': LOGO_URL,
'pan_sign': -1,
},
{
'image': 'https://images.unsplash.com/photo-1441974231531-c6227db76b6e?w=1920&q=80',
'seq': 2,
'caption': 'SUNLIT CANOPY',
'logo': LOGO_URL,
'pan_sign': 1,
},
{
'image': 'https://images.unsplash.com/photo-1507525428034-b723cf961d3e?w=1920&q=80',
'seq': 3,
'caption': 'OCEAN BREEZE',
'logo': LOGO_URL,
'pan_sign': -1,
},
{
'image': 'https://images.unsplash.com/photo-1519681393784-d120267933ba?w=1920&q=80',
'seq': 4,
'caption': 'EXPLORE MORE',
'logo': LOGO_URL,
'pan_sign': 1,
},
]
)
Inserted 5 rows with 0 errors in 1.04 s (4.81 rows/s)
5 rows inserted.
Step 2: Build the video pipeline
One computed column chains the entire transformation:
image → static video → resize → pan effect → resize → logo overlay → text overlay
pan() is a convenience wrapper around scroll() that computes
viewport size, start position, and speed automatically. It accepts
column expressions for per-row direction:
pan_sign = +1 → pans right
pan_sign = -1 → pans left
For lower-level control (custom speed, diagonal pans, asymmetric crops),
use scroll() directly.
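To build intuition for the crop_pct convention, here is the viewport arithmetic in plain Python. This is an illustration of the geometry only, under the assumption that pan() crops to (1 - crop_pct) of the frame width and scrolls across the remaining margin; the actual implementation may differ.

```python
def pan_geometry(width, crop_pct, pan_sign):
    """Illustrative only: derive a pan viewport from crop_pct and pan_sign.

    Assumes the frame is cropped to (1 - crop_pct) of its width, and the
    viewport scrolls across the leftover margin: left-to-right for
    pan_sign=+1, right-to-left for pan_sign=-1.
    """
    viewport_w = int(width * (1 - crop_pct))
    margin = width - viewport_w          # total horizontal travel
    if pan_sign > 0:
        return {'viewport_width': viewport_w, 'start_x': 0, 'end_x': margin}
    return {'viewport_width': viewport_w, 'start_x': margin, 'end_x': 0}

# With W=1280 and CROP=0.25: a 960-px viewport travels 320 px.
g = pan_geometry(1280, 0.25, 1)
# g == {'viewport_width': 960, 'start_x': 0, 'end_x': 320}
```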
W, H, DUR, CROP = 1280, 720, 4.0, 0.25
# Base: still image → video → uniform resolution
t.add_computed_column(
base=t.image.to_video(duration=DUR).resize(width=W, height=H)
)
# Full pipeline: pan → resize → logo → caption
# pan() accepts column expressions for per-row direction
t.add_computed_column(
clip=t.base.pan(x_sign=t.pan_sign, crop_pct=CROP)
.resize(width=W, height=H)
.overlay_image(
t.logo,
scale=0.10,
opacity=0.85,
horizontal_align='right',
vertical_align='top',
horizontal_margin=15,
vertical_margin=15,
)
.overlay_text(
t.caption,
font_size=44,
color='white',
horizontal_align='center',
vertical_align='bottom',
vertical_margin=50,
box=True,
box_color='black',
box_opacity=0.5,
box_border=[8, 16],
start_time=0.5,
end_time=3.0,
)
)
Added 5 column values with 0 errors in 6.43 s (0.78 rows/s)
5 rows updated.
Step 3: Preview individual clips
Each row now has a fully rendered video clip. Let’s inspect them.
t.select(
t.seq, t.caption, t.pan_sign, dur=t.clip.get_duration()
).order_by(t.seq).collect()
Step 4: Concatenate into final video
concat_videos_agg merges all clips in seq order into a single video.
video_path = (
t.group_by()
.select(v=concat_videos_agg(t.seq, t.clip))
.collect()[0]['v']
)
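In plain-Python terms, the ordering contract looks like this (illustration only; concat_videos_agg does the actual video-level concatenation):

```python
def concat_in_order(pairs):
    # Sort (seq, clip) pairs by seq, then join clips in that order.
    # Stand-in for concat_videos_agg's ordering behavior.
    return [clip for seq, clip in sorted(pairs)]

order = concat_in_order([(2, 'c.mp4'), (0, 'a.mp4'), (1, 'b.mp4')])
# order == ['a.mp4', 'b.mp4', 'c.mp4']
```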
Step 5: Add background music
with_audio() replaces (or adds) the audio track on a video. The audio
is trimmed to match the video duration automatically.
MUSIC_URL = 'https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/sample-background-music.m4a'
final = pxt.create_table(
'slideshow_demo/final', {'video': pxt.Video, 'music': pxt.Audio}
)
final.insert([{'video': video_path, 'music': MUSIC_URL}])
final.add_computed_column(out=with_audio(final.video, final.music))
final.select(final.out).collect()
How it works
The entire pipeline is declarative: per-clip variation comes from data,
not code.
- One computed column expression handles the full transformation chain.
- Pixeltable evaluates it per row, pulling caption, logo, and pan
  direction from each row's data.
Alternative effects
Swap the pan for other built-in effects:
# Each alternative gets its own column name; `clip` already exists.
# Zoom instead of pan
t.add_computed_column(clip_zoom=t.base.zoom(start_scale=1.0, end_scale=1.3))
# Fade-through-black transitions (add to each clip before concat)
t.add_computed_column(clip_fade=t.base.fade_in(duration=0.5).fade_out(duration=0.5))
# Combine: pan + fade
t.add_computed_column(
    clip_pan_fade=t.base.pan(x_sign=1)
    .resize(width=1280, height=720)
    .fade_in(duration=0.5)
    .fade_out(duration=0.5)
)
Audio
This recipe uses with_audio() to replace the soundtrack. To blend
a second audio track into a video that already has audio (e.g. add
background music under narration), use mix_audio():
t.video.mix_audio(t.music, audio_volume=0.3, original_volume=1.0)
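Conceptually, mixing scales each track by its volume and sums the samples. A minimal sketch of that arithmetic in plain Python (an illustration of the volume parameters only; real mixing also handles resampling, channel layout, and clipping):

```python
def mix_samples(original, music, original_volume=1.0, audio_volume=0.3):
    """Blend two equal-length sample sequences by weighted sum.

    Illustrative sketch of the volume parameters above, not the
    library's implementation.
    """
    return [o * original_volume + m * audio_volume
            for o, m in zip(original, music)]

mixed = mix_samples([1.0, 0.5], [0.5, 1.0], original_volume=1.0, audio_volume=0.5)
# mixed == [1.25, 1.0]
```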