Problem
You have a batch of images that need AI-powered transformations—like turning photos into paintings, adding stylistic effects, or modifying content based on text prompts.
Solution
What’s in this recipe:
- Transform images using text prompts with Hugging Face Stable Diffusion models
- Control transformation strength and quality settings
- Process batches of images automatically
Use .select() with .collect() to preview results on sample images—nothing is stored in your table. If you want to collect only the first few rows, use .head(n) instead of .collect(). Once you’re satisfied, use .add_computed_column() to apply the transformation to all images in your table.
For more on this workflow, see Get fast feedback on
transformations.
Setup
Load images
Connected to Pixeltable database at: postgresql+psycopg://postgres:@/pixeltable?host=/Users/cpestano/.pixeltable/pgdata
Created directory ‘img2img_demo’.
<pixeltable.catalog.dir.Dir at 0x14f787820>
Created table ‘images’.
Inserted 2 rows with 0 errors in 0.49 s (4.07 rows/s)
2 rows inserted.
Iterate: test transformation on a single image
Use .select() to define the transformation, then .head(n) to preview results on a subset of images. Nothing is stored in your table.
The image_to_image function requires:
- image: The source image to transform
- prompt: Text describing the desired output
- model_id: A Hugging Face model ID that supports image-to-image (e.g., stable-diffusion-v1-5/stable-diffusion-v1-5)
Iterate: adjust transformation strength
You control how much the model modifies the original image using strength (0.0-1.0):
- Lower values (0.3-0.5): Subtle changes, preserves more of the original
- Higher values (0.7-1.0): Dramatic changes, more creative freedom
You can pass additional generation options through model_kwargs. For example, negative_prompt takes text describing what you don’t want the output to be.
Add: apply transformation to all images
Once you’re satisfied with the results, use .add_computed_column()
with the same expression. This processes all rows and stores the results
permanently in your table.
Added 2 column values with 0 errors in 53.83 s (0.04 rows/s)
2 rows updated.
Get reproducible results with seeds
Set a seed parameter to get the same output every time you run the transformation.
Added 2 column values with 0 errors in 96.24 s (0.02 rows/s)
2 rows updated.
Explanation
How image-to-image works: Image-to-image diffusion models take an existing image and a text prompt, then generate a new image that blends the structure of the original with the guidance from the prompt. The strength parameter
controls the balance—lower values preserve more of the original, while
higher values allow more dramatic transformations.
Model compatibility:
The image_to_image UDF uses AutoPipelineForImage2Image from the
diffusers library, which automatically detects the model type and
selects the appropriate pipeline. You can use any compatible model:
- stable-diffusion-v1-5/stable-diffusion-v1-5: General-purpose, runs on most hardware
- stabilityai/stable-diffusion-xl-base-1.0: Higher quality, needs more VRAM
Key parameters:
- strength (0.0-1.0): How much to transform the image
- negative_prompt: Text describing what to avoid in the generated image (e.g., “blurry, low quality”)
- num_inference_steps: Quality vs. speed tradeoff (more steps = better quality)
- guidance_scale: How closely to follow the prompt (7-8 is typical)
- seed: For reproducible results