Draw bounding boxes on images to visualize object detection results.
Problem
You’ve run object detection on images, but you still need to visualize the
results: see where objects were detected and verify the model’s accuracy.
Solution
What’s in this recipe:
- Run object detection with YOLOX
- Draw bounding boxes on images
- Color-code by object class
You create a pipeline that detects objects and then draws the results on
the original image.
Setup
%pip install -qU pixeltable torch torchvision
import pixeltable as pxt
from pixeltable.functions.yolox import yolox
from pixeltable.functions.vision import draw_bounding_boxes
# Create a fresh directory
pxt.drop_dir('viz_demo', force=True)
pxt.create_dir('viz_demo')
Connected to Pixeltable database at: postgresql+psycopg://postgres:@/pixeltable?host=/Users/pjlb/.pixeltable/pgdata
Created directory ‘viz_demo’.
<pixeltable.catalog.dir.Dir at 0x138534d00>
Create detection and visualization pipeline
# Create table for images
images = pxt.create_table(
'viz_demo.images',
{'image': pxt.Image}
)
Created table ‘images’.
# Step 1: Run object detection
images.add_computed_column(
detections=yolox(images.image, model_id='yolox_m', threshold=0.5)
)
Added 0 column values with 0 errors.
No rows affected.
# Step 2: Draw bounding boxes on the image
# Note: draw_bounding_boxes takes image, boxes, and labels (scores are not used for drawing)
images.add_computed_column(
annotated=draw_bounding_boxes(
images.image,
images.detections.bboxes,
images.detections.labels
)
)
Added 0 column values with 0 errors.
No rows affected.
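If you also want the confidence scores to appear in the drawn labels, one option is to combine labels and scores into display strings with a user-defined function before passing them to draw_bounding_boxes. The sketch below is illustrative: format_labels and annotated_with_scores are made-up names, and it assumes Pixeltable UDFs accept list-valued JSON arguments.
# Optional sketch: combine class names and confidence scores into display labels
@pxt.udf
def format_labels(labels: list, scores: list) -> list:
    # Produces strings like 'cat 0.91'
    return [f'{label} {score:.2f}' for label, score in zip(labels, scores)]

images.add_computed_column(
    annotated_with_scores=draw_bounding_boxes(
        images.image,
        images.detections.bboxes,
        format_labels(images.detections.labels, images.detections.scores)
    )
)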
Detect and visualize
# Insert sample images
base_url = 'https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/images'
image_urls = [
f'{base_url}/000000000036.jpg', # cats
f'{base_url}/000000000139.jpg', # elephants
]
images.insert([{'image': url} for url in image_urls])
Inserting rows into `images`: 2 rows [00:00, 236.29 rows/s]
Inserted 2 rows with 0 errors.
2 rows inserted, 8 values computed.
# View original vs annotated images side by side
images.select(images.image, images.annotated).collect()
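collect() returns the computed rows, and image columns come back as PIL images, so you can also save an annotated result to disk. A minimal sketch (the filename is illustrative):
# Fetch the rows; image columns are returned as PIL.Image objects
result = images.select(images.image, images.annotated).collect()
# Save the first annotated image locally (filename is illustrative)
result[0]['annotated'].save('annotated_0.jpg')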
# View detection details
images.select(images.detections).collect()
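You can also drill into individual fields of the detections JSON directly in a query, for example to inspect labels and confidence scores without the bounding-box coordinates:
# Select individual fields from the detections JSON
images.select(images.detections.labels, images.detections.scores).collect()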
Explanation
Pipeline flow:
Image → YOLOX detection → Bounding boxes + labels → draw_bounding_boxes → Annotated image
Detection output format:
The yolox function returns a dict with:
- bboxes - List of [x1, y1, x2, y2] coordinates
- labels - List of class names (e.g., “cat”, “dog”)
- scores - List of confidence scores (0-1)
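For a single image, the stored detections value therefore has this shape (the values below are illustrative, not actual model output):
# Illustrative structure of one detections value (not real output)
{
    'bboxes': [[12.4, 35.1, 248.9, 310.6], [260.2, 41.7, 470.0, 305.3]],
    'labels': ['cat', 'cat'],
    'scores': [0.91, 0.87],
}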
YOLOX model options:
The model_id parameter selects the YOLOX variant. YOLOX is published in several sizes (nano, tiny, s, m, l, x); larger variants are generally more accurate but slower. This recipe uses yolox_m as a middle ground.
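For example, a faster but less accurate variant could be computed alongside the existing column for comparison. This is a sketch: the detections_small column name is made up, and it assumes the smaller variant is available under the id yolox_s.
# Trade accuracy for speed with a smaller YOLOX variant (model id is an assumption)
images.add_computed_column(
    detections_small=yolox(images.image, model_id='yolox_s', threshold=0.5)
)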
See also