In this tutorial, we’ll explore Pixeltable’s flexible handling of RAG operations on unstructured text. In a traditional AI workflow, such operations might be implemented as a Python script that runs on a periodic schedule or in response to certain events. In Pixeltable, as with everything else, they are implemented as persistent table operations that update incrementally as new data becomes available. In our tutorial workflow, we’ll chunk PDF documents in various ways with a document splitter, then apply several kinds of embeddings to the chunks.

Set Up the Table Structure

We start by installing the necessary dependencies, creating a Pixeltable directory rag_ops_demo (if it doesn’t already exist), and setting up the table structure for our new workflow.
%pip install -qU pixeltable sentence-transformers spacy tiktoken
import pixeltable as pxt

# Ensure a clean slate for the demo
pxt.drop_dir('rag_ops_demo', force=True)
# Create the Pixeltable workspace
pxt.create_dir('rag_ops_demo')

Creating Tables and Views

Now we’ll create the tables that represent our workflow, starting with a table to hold references to source documents. The table contains a single column, source_doc, whose values have type pxt.Document, Pixeltable’s general document type. In this tutorial we’ll be working with PDF documents, but Pixeltable supports a range of other document types, such as Markdown and HTML.
docs = pxt.create_table(
    'rag_ops_demo.docs',
    {'source_doc': pxt.Document}
)
Created table ‘docs’.
If we take a peek at the docs table, we see its very simple structure.
docs
Next we create a view to represent chunks of our PDF documents. A Pixeltable view is a virtual table, dynamically derived from a source table by applying a transformation and/or selecting a subset of the data. In this case, our view represents a one-to-many transformation from source documents into individual sentences, implemented with Pixeltable’s built-in document_splitter iterator. Note that the docs table is currently empty, so creating this view doesn’t actually do anything yet: it simply defines an operation that we want Pixeltable to execute whenever it sees new data.
from pixeltable.functions.document import document_splitter

sentences = pxt.create_view(
    'rag_ops_demo.sentences',  # Name of the view
    docs,  # Table from which the view is derived
    iterator=document_splitter(
        docs.source_doc,
        separators='sentence',  # Chunk docs into sentences
        metadata='title,heading,sourceline'
    )
)
Let’s take a peek at the new sentences view.
sentences
We see that sentences inherits the source_doc column from docs, together with some new fields:
- pos: The position in the source document where the sentence appears.
- text: The text of the sentence.
- title, heading, and sourceline: The metadata we requested when we set up the view.

Data Ingestion

OK, now it’s time to insert some data into our workflow. A document in Pixeltable is referenced by a URL; the following command inserts a single row into the docs table with its source_doc field set to the specified URL:
docs.insert([{'source_doc': 'https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/rag-demo/Argus-Market-Digest-June-2024.pdf'}])
Inserting rows into `docs`: 1 rows [00:00, 292.76 rows/s]
Inserting rows into `sentences`: 217 rows [00:00, 42910.00 rows/s]
Inserted 218 rows with 0 errors.
218 rows inserted, 2 values computed.
We can see that two things happened. First, a single row was inserted into docs, containing the URL representing our source PDF. Then, the view sentences was incrementally updated by applying the document_splitter according to the definition of the view. This illustrates an important principle in Pixeltable: by default, anytime Pixeltable sees new data, the update is incrementally propagated to any downstream views or computed columns. We can see the effect of the insertion with the select command. There’s a single row in docs:
docs.select(docs.source_doc.fileurl).show()
And here are the first 20 rows in sentences. The content of the PDF is broken into individual sentences, as expected.
sentences.select(sentences.text, sentences.heading).show(20)
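If you want to confirm what was materialized, you can also query the row counts and the metadata columns directly. Here is a minimal sketch, assuming the standard count(), select(), and head() methods on Pixeltable tables and views:

print(docs.count())       # 1 source document so far
print(sentences.count())  # 217 sentence chunks derived from it

# Peek at the metadata columns we requested, plus the splitter's pos column
sentences.select(sentences.pos, sentences.title, sentences.sourceline).head()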

Experimenting with Chunking

Of course, chunking into sentences isn’t the only way to split a document. Perhaps we want to experiment with different chunking strategies to see which one performs best in a particular application. Pixeltable makes this easy: simply create several views of the same source table. Here are a few examples. Notice that as each new view is created, it is initially populated from the data already in docs.
chunks = pxt.create_view(
    'rag_ops_demo.chunks', docs,
    iterator=document_splitter(
        docs.source_doc,
        separators='sentence,token_limit',
        limit=2048,
        overlap=0,
        metadata='title,heading,sourceline'
    )
)
Inserting rows into `chunks`: 217 rows [00:00, 47827.85 rows/s]
short_chunks = pxt.create_view(
    'rag_ops_demo.short_chunks', docs,
    iterator=document_splitter(
        docs.source_doc,
        separators='sentence,token_limit',
        limit=72,
        overlap=0,
        metadata='title,heading,sourceline'
    )
)
Inserting rows into `short_chunks`: 219 rows [00:00, 49104.70 rows/s]
short_char_chunks = pxt.create_view(
    'rag_ops_demo.short_char_chunks', docs,
    iterator=document_splitter(
        docs.source_doc,
        separators='sentence,char_limit',
        limit=72,
        overlap=0,
        metadata='title,heading,sourceline'
    )
)
Inserting rows into `short_char_chunks`: 459 rows [00:00, 63241.10 rows/s]
chunks.select(chunks.text, chunks.heading).show(20)
short_chunks.select(short_chunks.text, short_chunks.heading).show(20)
short_char_chunks.select(short_char_chunks.text, short_char_chunks.heading).show(20)
Now let’s add a few more documents to our workflow. Notice how all of the downstream views are updated incrementally, processing just the new documents as they are inserted.
urls = [
    'https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/rag-demo/Argus-Market-Watch-June-2024.pdf',
    'https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/rag-demo/Company-Research-Alphabet.pdf',
    'https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/rag-demo/Zacks-Nvidia-Report.pdf'
]
docs.insert({'source_doc': url} for url in urls)
Inserting rows into `docs`: 3 rows [00:00, 1969.77 rows/s]
Inserting rows into `chunks`: 742 rows [00:00, 61926.41 rows/s]
Inserting rows into `short_chunks`: 747 rows [00:00, 67743.68 rows/s]
Inserting rows into `sentences`: 742 rows [00:00, 67949.90 rows/s]
Inserting rows into `short_char_chunks`: 1165 rows [00:00, 3603.41 rows/s]
Inserted 3399 rows with 0 errors.
3399 rows inserted, 6 values computed.
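Each chunking strategy now covers the same four documents but yields a different number of chunks. A quick way to compare them side by side, as a minimal sketch assuming the standard count() method:

# Compare how many chunks each splitting strategy produced
for name, view in [
    ('sentences', sentences),
    ('chunks', chunks),
    ('short_chunks', short_chunks),
    ('short_char_chunks', short_char_chunks),
]:
    print(f'{name}: {view.count()} rows')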

Further Experiments

This is a good time to mention another important guiding principle of Pixeltable. The preceding examples all used the built-in document_splitter with various configurations. That’s fine as a first cut or for quick prototyping, and it may be all some applications need. Others, however, will want more sophisticated chunking, implementing their own specialized logic or leveraging third-party tools. Pixeltable imposes no constraints on the AI or RAG operations a workflow uses: the iterator interface is highly general, and it’s easy to implement new operations or to adapt existing code and third-party tools into a Pixeltable workflow.
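Writing a full custom iterator is beyond the scope of this tutorial, but as a small illustration of plugging your own logic into a workflow, here is a hypothetical sketch that wraps an ordinary Python function as a Pixeltable UDF and applies it to the chunk text as a computed column. The normalize_whitespace helper is our own invention, not part of Pixeltable:

import re
import pixeltable as pxt

@pxt.udf
def normalize_whitespace(text: str) -> str:
    # Collapse runs of whitespace left over from PDF extraction into single spaces
    return re.sub(r'\s+', ' ', text).strip()

# The custom logic becomes a computed column; it runs over all existing chunks
# and will run automatically on chunks created by future inserts into docs
chunks.add_computed_column(clean_text=normalize_whitespace(chunks.text))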

Computing Embeddings

Next, let’s look at how embeddings can be added seamlessly to an existing Pixeltable workflow. To compute our embeddings, we’ll use the Hugging Face sentence-transformers library, running it over the chunks view that split our documents into sentence-based chunks. Pixeltable has a built-in sentence_transformer adapter, so all we have to do is add a new column that leverages it. Pixeltable takes care of the rest, computing the new column for all existing data in the view.
from pixeltable.functions.huggingface import sentence_transformer

chunks.add_computed_column(minilm_embed=sentence_transformer(
    chunks.text,
    model_id='paraphrase-MiniLM-L6-v2'
))
Added 959 column values with 0 errors.
959 rows updated, 959 values computed.
The new column is a computed column: it is defined as a function on top of existing data and updated incrementally as new data are added to the workflow. Let’s have a look at how the new column affected the chunks view.
chunks
chunks.select(chunks.text, chunks.heading, chunks.minilm_embed).head()
Similarly, we might want to add a CLIP embedding to our workflow; once again, it’s just another computed column:
from pixeltable.functions.huggingface import clip

chunks.add_computed_column(clip_embed=clip(
    chunks.text, model_id='openai/clip-vit-base-patch32'
))
Added 959 column values with 0 errors.
959 rows updated, 959 values computed.
chunks
chunks.select(chunks.text, chunks.heading, chunks.clip_embed).head()
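With embeddings in place, a natural next step is similarity search. The following is a sketch only, assuming your Pixeltable version provides the add_embedding_index method and the similarity expression described in the Pixeltable documentation; it builds an index over the chunk text and retrieves the chunks closest to a query:

from pixeltable.functions.huggingface import sentence_transformer

# Build an embedding index over the chunk text
chunks.add_embedding_index(
    'text',
    string_embed=sentence_transformer.using(model_id='paraphrase-MiniLM-L6-v2')
)

# Retrieve the five chunks most similar to a query string
sim = chunks.text.similarity('What is the outlook for Nvidia?')
chunks.order_by(sim, asc=False).limit(5).select(chunks.text, sim).collect()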