Open in Kaggle  Open in Colab  Download Notebook
This documentation page is also available as an interactive notebook. You can launch the notebook in Kaggle or Colab, or download it for use with an IDE or local Jupyter installation, by clicking one of the above links.
Two popular taxonomies describe the building blocks of agentic AI systems:
  • Cognitive / reasoning-oriented (Taxonomy 1): Reflection, Tool Use, ReAct, Planning, Multi-Agent — asks “how does the agent think?”
  • Architectural / system-design-oriented (Taxonomy 2): Prompt Chaining, Routing, Parallelization, Tool Use, Evaluator-Optimizer, Orchestrator-Worker — asks “how do you wire LLM calls together?”
(See OpenAI’s Practical Guide to Building Agents, Anthropic’s multi-agent research system, and Pydantic AI’s multi-agent delegation.) Mapping one taxonomy against the other reveals substantial overlap: Tool Use appears in both, Reflection is the cognitive counterpart of Evaluator-Optimizer, and Multi-Agent corresponds to Orchestrator-Worker.
The cleanest framing is six architectural patterns that describe how you structure LLM calls, plus two cross-cutting reasoning strategies (ReAct and Planning) that can be layered inside any of them. This cookbook implements all eight in Pixeltable, where your agent is a table.

Setup

%pip install -qU pixeltable openai
import getpass
import os

if 'OPENAI_API_KEY' not in os.environ:
    os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key: ')
import pixeltable as pxt
from pixeltable.functions import openai

pxt.drop_dir('agentic_patterns', force=True)
pxt.create_dir('agentic_patterns')
Created directory ‘agentic_patterns’.
<pixeltable.catalog.dir.Dir at 0x32639cb90>

Pattern 1: Prompt Chaining

Break a complex task into sequential steps, where each step’s output feeds the next. Imperative approach: a chain of function calls or an explicit pipeline object. Pixeltable approach: each step is a computed column. The engine resolves dependencies automatically.
input → step 1 (outline) → step 2 (draft) → step 3 (polish) → output
# Create a table with a single input column
chain = pxt.create_table('agentic_patterns/chain', {'topic': pxt.String})
Created table ‘chain’.
# Step 1: generate an outline
chain.add_computed_column(
    outline_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Create a 3-point outline for a short article about: '
                + chain.topic,
            }
        ],
        model='gpt-4o-mini',
    )
)
chain.add_computed_column(
    outline=chain.outline_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.00 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 2: write a draft from the outline
chain.add_computed_column(
    draft_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Write a short article (2-3 paragraphs) based on this outline:\n\n'
                + chain.outline,
            }
        ],
        model='gpt-4o-mini',
    )
)
chain.add_computed_column(
    draft=chain.draft_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 3: polish the draft
chain.add_computed_column(
    polish_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Edit this article for clarity and conciseness. '
                'Return only the improved text:\n\n' + chain.draft,
            }
        ],
        model='gpt-4o-mini',
    )
)
chain.add_computed_column(
    final_article=chain.polish_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Insert a topic — all three steps execute automatically
chain.insert([{'topic': 'the benefits of declarative AI pipelines'}])

chain.select(
    chain.topic, chain.outline, chain.draft, chain.final_article
).collect()
Inserted 1 row with 0 errors in 14.58 s (0.07 rows/s)
Every intermediate result (outline, draft, final_article) is persisted in the table. Inserting another topic reuses the same pipeline — no code changes needed. If the same topic is inserted again, cached results are returned instantly.
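For contrast, the imperative equivalent of this chain is a sequence of function calls whose intermediate results live only in local variables. A minimal sketch with a stubbed `call_llm` (a real version would hit the chat-completions API):

```python
# Imperative prompt chaining: each step's output feeds the next explicitly.
# `call_llm` is a stub standing in for a real chat-completion call.
def call_llm(prompt: str) -> str:
    return f'<response to: {prompt[:40]}>'

def run_chain(topic: str) -> dict:
    outline = call_llm('Create a 3-point outline for: ' + topic)
    draft = call_llm('Write a short article based on:\n' + outline)
    final = call_llm('Edit for clarity:\n' + draft)
    # Intermediate results must be returned explicitly to be kept at all --
    # unlike computed columns, nothing is persisted or cached automatically.
    return {'outline': outline, 'draft': draft, 'final_article': final}

result = run_chain('declarative AI pipelines')
```

The table-based version gets persistence, caching, and reuse across rows for free; the imperative version re-runs everything on every call.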

Pattern 2: Routing

Classify an input and route it to a specialized handler. This is the agent equivalent of a switch/case statement. Imperative approach: a triage agent that performs handoffs to specialized agents. Pixeltable approach: one computed column classifies; a UDF selects the prompt; a second LLM call generates the response.
input → classify intent → select specialized prompt → generate response
router = pxt.create_table(
    'agentic_patterns/router', {'query': pxt.String}
)
Created table ‘router’.
# Step 1: classify the query intent
router.add_computed_column(
    classify_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Classify this customer query into exactly one category: '
                'technical, billing, or general. Reply with the single word only.\n\n'
                'Query: ' + router.query,
            }
        ],
        model='gpt-4o-mini',
    )
)
router.add_computed_column(
    intent=router.classify_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 2: route to a specialized system prompt based on the classification
@pxt.udf
def route_prompt(intent: str, query: str) -> list[dict]:
    """Select a system prompt based on the classified intent."""
    system_prompts = {
        'technical': 'You are a senior technical support engineer. '
        'Provide precise, step-by-step troubleshooting guidance.',
        'billing': 'You are a billing specialist. '
        'Be empathetic and clear about charges, refunds, and payment options.',
        'general': 'You are a friendly customer service representative. '
        'Answer helpfully and concisely.',
    }
    # Default to general if classification is unexpected
    system = system_prompts.get(
        intent.strip().lower(), system_prompts['general']
    )
    return [
        {'role': 'system', 'content': system},
        {'role': 'user', 'content': query},
    ]


router.add_computed_column(
    routed_messages=route_prompt(router.intent, router.query)
)
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 3: generate the specialized response
router.add_computed_column(
    response_raw=openai.chat_completions(
        messages=router.routed_messages, model='gpt-4o-mini'
    )
)
router.add_computed_column(
    response=router.response_raw.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.00 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Insert queries spanning different intents
router.insert(
    [
        {
            'query': 'My API calls are returning 429 errors since this morning'
        },
        {'query': 'I was charged twice for my subscription last month'},
        {'query': 'What programming languages do you support?'},
    ]
)

router.select(router.query, router.intent, router.response).collect()
Inserted 3 rows with 0 errors in 6.93 s (0.43 rows/s)
Each query was classified and then handled by a specialized system prompt. The intent column is inspectable for every row, making it easy to audit routing decisions.
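The switch/case analogy can be made literal. Stripped of the LLM calls, the routing logic is a dictionary dispatch with a fallback, the same control flow `route_prompt` implements (a plain-Python sketch with stub handlers):

```python
# Dictionary dispatch: a normalized intent selects a handler, with a fallback.
handlers = {
    'technical': lambda q: f'[tech support] {q}',
    'billing':   lambda q: f'[billing] {q}',
    'general':   lambda q: f'[general] {q}',
}

def route(intent: str, query: str) -> str:
    # Normalize the classifier output; fall back to 'general' on anything unexpected.
    handler = handlers.get(intent.strip().lower(), handlers['general'])
    return handler(query)

print(route(' Billing \n', 'I was charged twice'))   # billing handler
print(route('spam', 'hello'))                        # falls back to general
```

Normalizing and defaulting matters because the classifier is itself an LLM call and may return unexpected casing, whitespace, or an off-list label.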

Pattern 3: Parallelization

Run multiple independent LLM calls on the same input simultaneously, then combine the results. Imperative approach: asyncio.gather or thread pools. Pixeltable approach: add independent computed columns. The engine parallelizes them automatically because they share no dependencies.
        ┌→ sentiment ──┐
input ──┼→ entities ───┼→ merge → combined output
        └→ summary ────┘
parallel = pxt.create_table(
    'agentic_patterns/parallel', {'text': pxt.String}
)
Created table ‘parallel’.
# Three independent LLM calls — Pixeltable runs them in parallel automatically
parallel.add_computed_column(
    sentiment_raw=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Analyze the sentiment of this text. '
                'Reply with: positive, negative, or neutral.\n\n'
                + parallel.text,
            }
        ],
        model='gpt-4o-mini',
    )
)
parallel.add_computed_column(
    sentiment=parallel.sentiment_raw.choices[0].message.content.astype(
        pxt.String
    )
)

parallel.add_computed_column(
    entities_raw=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Extract all named entities (people, companies, locations) '
                'from this text. Return a comma-separated list.\n\n'
                + parallel.text,
            }
        ],
        model='gpt-4o-mini',
    )
)
parallel.add_computed_column(
    entities=parallel.entities_raw.choices[0].message.content.astype(
        pxt.String
    )
)

parallel.add_computed_column(
    summary_raw=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Summarize this text in one sentence.\n\n'
                + parallel.text,
            }
        ],
        model='gpt-4o-mini',
    )
)
parallel.add_computed_column(
    summary=parallel.summary_raw.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.00 s
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.00 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Merge the parallel results into a single structured report
@pxt.udf
def merge_analysis(sentiment: str, entities: str, summary: str) -> dict:
    """Combine parallel analysis results into one report."""
    return {
        'sentiment': sentiment.strip(),
        'entities': entities.strip(),
        'summary': summary.strip(),
    }


parallel.add_computed_column(
    report=merge_analysis(
        parallel.sentiment, parallel.entities, parallel.summary
    )
)
Added 0 column values with 0 errors in 0.01 s
No rows affected.
parallel.insert(
    [
        {
            'text': 'Apple announced record quarterly revenue of $124 billion, '
            'driven by strong iPhone sales in Europe and Asia. CEO Tim Cook '
            "expressed optimism about the company's AI initiatives, while "
            'some analysts remain cautious about increased R&D spending.'
        }
    ]
)

parallel.select(
    parallel.text, parallel.sentiment, parallel.entities, parallel.summary
).collect()
The three LLM calls (sentiment, entities, summary) have no dependency on each other, so Pixeltable dispatches them concurrently. The merge_analysis UDF waits for all three before combining the results. No async code required.
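The imperative alternative mentioned above uses `asyncio.gather`. A minimal sketch with stubbed analysis coroutines (real code would await API calls inside each one):

```python
import asyncio

# Stub coroutines standing in for the three independent LLM calls.
async def sentiment(text: str) -> str:
    return 'positive'

async def entities(text: str) -> str:
    return 'Apple, Tim Cook'

async def summary(text: str) -> str:
    return text[:30] + '...'

async def analyze(text: str) -> dict:
    # gather() runs all three concurrently and preserves argument order.
    s, e, m = await asyncio.gather(sentiment(text), entities(text), summary(text))
    return {'sentiment': s, 'entities': e, 'summary': m}

report = asyncio.run(analyze('Apple announced record quarterly revenue...'))
```

The computed-column version expresses the same fan-out/fan-in without any event-loop plumbing, because concurrency falls out of the dependency graph.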

Pattern 4: Tool Use

Give an LLM access to external functions it can call to gather information or take action. Imperative approach: @function_tool decorator, tool loop that re-prompts until the LLM stops requesting tools. Pixeltable approach: pxt.tools() bundles UDFs into tool definitions; invoke_tools() executes the LLM’s choices — both as computed columns.
input → LLM (with tools) → invoke_tools() → results
For a deeper walkthrough including MCP servers, see Use tool calling with LLMs.
# Define tool functions as UDFs
@pxt.udf
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    weather_data = {
        'new york': 'Sunny, 72F',
        'london': 'Cloudy, 58F',
        'tokyo': 'Rainy, 65F',
        'paris': 'Partly cloudy, 68F',
    }
    return weather_data.get(
        city.lower(), f'Weather data not available for {city}'
    )


@pxt.udf
def get_stock_price(symbol: str) -> str:
    """Get the current stock price for a ticker symbol."""
    prices = {'AAPL': '$178.50', 'GOOGL': '$141.25', 'MSFT': '$378.90'}
    return prices.get(symbol.upper(), f'Price not available for {symbol}')


# Bundle into a Tools object
tools = pxt.tools(get_weather, get_stock_price)
# Create the tool-calling pipeline
tool_agent = pxt.create_table(
    'agentic_patterns/tool_agent', {'query': pxt.String}
)

# LLM decides which tool(s) to call
tool_agent.add_computed_column(
    response=openai.chat_completions(
        messages=[{'role': 'user', 'content': tool_agent.query}],
        model='gpt-4o-mini',
        tools=tools,
    )
)

# Execute the tool calls automatically
tool_agent.add_computed_column(
    tool_output=openai.invoke_tools(tools, tool_agent.response)
)
Created table ‘tool_agent’.
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
tool_agent.insert(
    [
        {'query': "What's the weather in Tokyo?"},
        {'query': "What's Apple's stock price?"},
        {
            'query': "What's the weather in Paris and Microsoft's stock price?"
        },
    ]
)

for row in tool_agent.select(
    tool_agent.query, tool_agent.tool_output
).collect():
    print(f'Query: {row["query"]}')
    for tool_name, results in (row['tool_output'] or {}).items():
        if results:
            print(f'  -> {tool_name}: {results}')
    print()
The LLM chose which tools to invoke (including multiple tools for the last query). invoke_tools() executed them and stored results. The full LLM response is also persisted in the response column for debugging.
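Conceptually, executing tool calls means reading the model’s `tool_calls`, looking each one up in a registry, and invoking it with the JSON-decoded arguments. A simplified sketch of that dispatch (not Pixeltable’s actual implementation; the response dict mirrors the OpenAI tool-call shape):

```python
import json

def get_weather(city: str) -> str:
    return {'tokyo': 'Rainy, 65F'}.get(city.lower(), 'unknown')

registry = {'get_weather': get_weather}

def dispatch_tool_calls(response: dict) -> dict:
    """Execute every tool call in an OpenAI-style chat response."""
    results = {}
    for call in response['choices'][0]['message'].get('tool_calls', []):
        fn = call['function']
        args = json.loads(fn['arguments'])   # arguments arrive as a JSON string
        results.setdefault(fn['name'], []).append(registry[fn['name']](**args))
    return results

fake_response = {'choices': [{'message': {'tool_calls': [
    {'function': {'name': 'get_weather', 'arguments': '{"city": "Tokyo"}'}}
]}}]}
print(dispatch_tool_calls(fake_response))   # {'get_weather': ['Rainy, 65F']}
```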

Pattern 5: Evaluator-Optimizer

One LLM generates output, a second LLM evaluates it, and the results are used to decide whether to refine. This is the architectural cousin of the Reflection pattern from Taxonomy 1 — an agent critiques its own output and iteratively improves it. Imperative approach: a while-loop that re-prompts until a quality threshold is met (see Pixelagent’s reflection example). Pixeltable approach: chained computed columns — generate, evaluate, then conditionally refine. The evaluation score is stored alongside the content for analysis.
input → generate → evaluate (score + feedback) → refine if needed → output
evaluator = pxt.create_table(
    'agentic_patterns/evaluator', {'product_brief': pxt.String}
)
Created table ‘evaluator’.
# Step 1: generate initial marketing copy
evaluator.add_computed_column(
    gen_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Write a short marketing tagline (one sentence) for this product:\n\n'
                + evaluator.product_brief,
            }
        ],
        model='gpt-4o-mini',
    )
)
evaluator.add_computed_column(
    first_draft=evaluator.gen_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.00 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 2: evaluate the draft with an LLM-as-judge
evaluator.add_computed_column(
    eval_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Rate this marketing tagline on a scale of 1-10 for clarity, '
                'creativity, and persuasiveness. Then provide one sentence of feedback '
                'for improvement.\n\n'
                'Tagline: ' + evaluator.first_draft + '\n\n'
                'Reply in this exact format:\n'
                'Score: <number>\nFeedback: <one sentence>',
            }
        ],
        model='gpt-4o-mini',
    )
)
evaluator.add_computed_column(
    evaluation=evaluator.eval_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 3: refine using the feedback
evaluator.add_computed_column(
    refine_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Improve this marketing tagline based on the feedback below. '
                'Return only the improved tagline.\n\n'
                'Original: ' + evaluator.first_draft + '\n\n'
                'Feedback: ' + evaluator.evaluation,
            }
        ],
        model='gpt-4o-mini',
    )
)
evaluator.add_computed_column(
    refined=evaluator.refine_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
evaluator.insert(
    [
        {
            'product_brief': 'A noise-canceling headphone designed for open-plan offices, '
            'with 30-hour battery life and a built-in microphone for calls.'
        },
        {
            'product_brief': 'An AI-powered code review tool that catches bugs, suggests '
            "improvements, and learns your team's coding style over time."
        },
    ]
)

evaluator.select(
    evaluator.product_brief,
    evaluator.first_draft,
    evaluator.evaluation,
    evaluator.refined,
).collect()
Inserted 2 rows with 0 errors in 2.95 s (0.68 rows/s)
Both the first draft and the refined version are stored side-by-side with the evaluation. This makes it straightforward to compare outputs, audit the judge’s reasoning, or filter rows where the score fell below a threshold.

Pattern 6: Orchestrator-Worker

A central agent decomposes a task, delegates sub-tasks to specialized worker agents, and synthesizes the results. This is the architectural cousin of the Multi-Agent pattern from Taxonomy 1, and the same structure Anthropic uses in their multi-agent research system — a lead agent coordinates parallel subagents, each with their own context and tools. Imperative approach: an orchestrator agent class that spawns worker agent instances and collects their outputs. Pixeltable approach: each worker is a table with computed columns, wrapped as a callable function via pxt.udf(table, return_value=...). The orchestrator table calls these functions as computed columns.
input → decompose ─┬→ worker A (summarizer)   ─┐
                   └→ worker B (fact-checker) ─┴→ synthesize → output
For more on table UDFs, see Use a table pipeline as a reusable function.

Build worker agents as tables

# Worker A: summarizer
summarizer_tbl = pxt.create_table(
    'agentic_patterns/summarizer', {'text': pxt.String}
)
summarizer_tbl.add_computed_column(
    response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Summarize this text in 2-3 sentences:\n\n'
                + summarizer_tbl.text,
            }
        ],
        model='gpt-4o-mini',
    )
)
summarizer_tbl.add_computed_column(
    summary=summarizer_tbl.response.choices[0].message.content.astype(
        pxt.String
    )
)

# Wrap as a callable function
summarize = pxt.udf(summarizer_tbl, return_value=summarizer_tbl.summary)
Created table ‘summarizer’.
Added 0 column values with 0 errors in 0.10 s
Added 0 column values with 0 errors in 0.06 s
# Worker B: fact-checker
checker_tbl = pxt.create_table(
    'agentic_patterns/checker', {'claim': pxt.String}
)
checker_tbl.add_computed_column(
    response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Assess whether this claim is plausible. '
                'Reply with: PLAUSIBLE or DUBIOUS, followed by a one-sentence explanation.\n\n'
                'Claim: ' + checker_tbl.claim,
            }
        ],
        model='gpt-4o-mini',
    )
)
checker_tbl.add_computed_column(
    assessment=checker_tbl.response.choices[0].message.content.astype(
        pxt.String
    )
)

# Wrap as a callable function
fact_check = pxt.udf(checker_tbl, return_value=checker_tbl.assessment)
Created table ‘checker’.
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.02 s

Build the orchestrator

# Orchestrator table: delegates to workers, then synthesizes
orchestrator = pxt.create_table(
    'agentic_patterns/orchestrator', {'article': pxt.String}
)

# Dispatch to worker A (summarizer) and worker B (fact-checker) in parallel
orchestrator.add_computed_column(
    summary=summarize(text=orchestrator.article)
)
orchestrator.add_computed_column(
    fact_check_result=fact_check(claim=orchestrator.article)
)
Created table ‘orchestrator’.
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Synthesize worker outputs into a final briefing
orchestrator.add_computed_column(
    synth_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Based on the summary and fact-check below, write a brief '
                'editorial note (2-3 sentences) about this article.\n\n'
                'Summary: ' + orchestrator.summary + '\n\n'
                'Fact-check: ' + orchestrator.fact_check_result,
            }
        ],
        model='gpt-4o-mini',
    )
)
orchestrator.add_computed_column(
    briefing=orchestrator.synth_response.choices[
        0
    ].message.content.astype(pxt.String)
)
Added 0 column values with 0 errors in 0.02 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
orchestrator.insert(
    [
        {
            'article': 'A recent study published in Nature found that global sea levels '
            'rose by 4.5 mm per year over the last decade, nearly double the rate observed '
            'in the 1990s. Researchers attribute the acceleration primarily to ice sheet '
            'loss in Greenland and Antarctica, compounded by thermal expansion of ocean '
            'water. The findings suggest coastal cities may face significant flooding risks '
            'by 2050 without aggressive mitigation strategies.'
        }
    ]
)

orchestrator.select(
    orchestrator.summary,
    orchestrator.fact_check_result,
    orchestrator.briefing,
).collect()
Inserted 1 row with 0 errors in 4.69 s (0.21 rows/s)
The orchestrator table called two independent worker pipelines (summarize and fact_check), each backed by their own table with full intermediate-result persistence. The synthesis step consumed both outputs to produce the final briefing. Adding a new worker (e.g., a tone analyzer) requires only creating another table, wrapping it with pxt.udf(), and adding one more computed column to the orchestrator.
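Stripped to its control flow, orchestrator-worker is fan-out to independent workers followed by a synthesis step over their outputs. A plain-Python sketch with stub workers (real workers would be the table UDFs above):

```python
# Stub workers standing in for the summarizer and fact-checker tables.
def summarize(text: str) -> str:
    return 'Sea levels rose faster over the last decade.'

def fact_check(claim: str) -> str:
    return 'PLAUSIBLE: consistent with published observations.'

# Each worker is just a callable; adding a new one means adding a dict entry.
workers = {'summary': summarize, 'fact_check': fact_check}

def orchestrate(article: str) -> dict:
    outputs = {name: worker(article) for name, worker in workers.items()}
    # Synthesis consumes every worker output (stubbed here as a join; the
    # real pipeline makes another LLM call).
    outputs['briefing'] = ' | '.join(outputs.values())
    return outputs

result = orchestrate('A recent study found that global sea levels...')
```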

Strategy A: ReAct

ReAct is not a wiring pattern — it is a reasoning strategy that can be applied inside any of the six patterns above. The agent alternates between reasoning about the next step and acting on it (typically via tools), observing the result before deciding what to do next. Imperative approach: a while-loop that parses the LLM’s THOUGHT/ACTION output, calls tools, and feeds observations back (see Pixelagent’s ReAct example). Pixeltable approach: the reasoning loop lives in a UDF that inserts rows into a tool-calling table and reads back results. The table stores every thought-action-observation triple for full observability.
question → [THOUGHT → ACTION → OBSERVATION] × N → final answer
# Define a tool for the ReAct agent


@pxt.udf
def lookup_population(country: str) -> str:
    """Look up the approximate population of a country."""
    populations = {
        'united states': '331 million',
        'china': '1.4 billion',
        'india': '1.4 billion',
        'germany': '84 million',
        'brazil': '214 million',
        'japan': '125 million',
    }
    return populations.get(
        country.lower(), f'Population data not available for {country}'
    )


react_tools = pxt.tools(lookup_population)
# Build a tool-calling table that the ReAct loop will insert into
react_steps = pxt.create_table(
    'agentic_patterns/react_steps',
    {'step': pxt.Int, 'prompt': pxt.String, 'system_prompt': pxt.String},
)

react_steps.add_computed_column(
    response=openai.chat_completions(
        messages=[
            {'role': 'system', 'content': react_steps.system_prompt},
            {'role': 'user', 'content': react_steps.prompt},
        ],
        model='gpt-4o-mini',
        tools=react_tools,
    )
)
react_steps.add_computed_column(
    answer=react_steps.response.choices[0].message.content.astype(
        pxt.String
    )
)
react_steps.add_computed_column(
    tool_output=openai.invoke_tools(react_tools, react_steps.response)
)
Created table ‘react_steps’.
Added 0 column values with 0 errors in 0.00 s
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.00 s
No rows affected.
# The ReAct loop: reason → act → observe, repeated until done
REACT_SYSTEM = (
    "You are a research assistant. Answer the user's question step by step.\n"
    'Available tools: lookup_population\n\n'
    'On each turn, respond in this exact format:\n'
    'THOUGHT: <your reasoning>\n'
    'ACTION: <tool name to call, or FINAL if ready to answer>\n\n'
    'When ACTION is FINAL, include your final answer after it.\n'
    'Current step: {step} of {max_steps}.'
)

question = 'Which country has a larger population, Brazil or Germany?'
max_steps = 4
history = []

for step in range(1, max_steps + 1):
    # Build prompt with accumulated observations
    prompt = question
    if history:
        prompt += '\n\nPrevious observations:\n' + '\n'.join(history)

    system = REACT_SYSTEM.format(step=step, max_steps=max_steps)

    react_steps.insert(
        [{'step': step, 'prompt': prompt, 'system_prompt': system}]
    )

    # Read back the result for this step
    row = (
        react_steps.where(react_steps.step == step)
        .select(react_steps.answer, react_steps.tool_output)
        .collect()
    )
    answer_text = row['answer'][0] or ''
    tool_out = row['tool_output'][0]

    # Record observation from tool output (if any)
    if tool_out:
        history.append(f'Step {step} tool result: {tool_out}')

    # Check if the agent decided to finalize
    if 'FINAL' in answer_text.upper():
        break

print(f'Completed in {step} steps')
for row in react_steps.select(
    react_steps.step, react_steps.answer, react_steps.tool_output
).collect():
    print(f'Step {row["step"]}:')
    if row['answer']:
        print(f'  {row["answer"][:200]}')
    for tool_name, results in (row['tool_output'] or {}).items():
        if results:
            print(f'  -> {tool_name}: {results}')
    print()
Every thought, action, and observation is persisted as a row in the react_steps table. The loop itself is plain Python; the LLM calls and tool execution happen declaratively via computed columns. This makes the reasoning trace fully queryable after the fact — useful for debugging or evaluation.
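The loop above stops on a substring check for `FINAL`. A slightly more robust approach parses the THOUGHT/ACTION structure the system prompt requests (a sketch; the regexes assume the exact format above):

```python
import re

def parse_react(answer: str) -> dict:
    """Split a ReAct reply into its THOUGHT and ACTION parts."""
    thought = re.search(r'THOUGHT:\s*(.+)', answer)
    action = re.search(r'ACTION:\s*(\S+)', answer)
    return {
        'thought': thought.group(1).strip() if thought else None,
        'action': action.group(1).strip() if action else None,
    }

step = parse_react(
    'THOUGHT: I have both populations now.\nACTION: FINAL Brazil is larger.'
)
print(step['action'] == 'FINAL')   # True
```

Parsed thoughts and actions could also be stored as their own computed columns, making the trace queryable by action type.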

Strategy B: Planning

Planning is the second cross-cutting reasoning strategy. Instead of acting step-by-step (ReAct), the agent first generates a complete plan, then executes each step. This is especially effective for complex tasks where the structure of the solution can be determined upfront. Imperative approach: an LLM generates a plan as structured JSON, then a loop executes each step (see Pixelagent’s planning example). Pixeltable approach: a prompt-chaining pipeline where the first column generates the plan and a UDF parses it into executable steps. Each step then feeds into subsequent computed columns.
question → generate plan → execute step 1 → execute step 2 → … → synthesize
import json as json_mod

planner = pxt.create_table(
    'agentic_patterns/planner', {'question': pxt.String}
)

# Step 1: generate a plan as structured JSON
planner.add_computed_column(
    plan_response=openai.chat_completions(
        messages=[
            {
                'role': 'user',
                'content': 'Break this question into 2-3 research steps. '
                'Return ONLY a JSON object like {"steps": ["sub-question 1", "sub-question 2"]}. '
                'No other text.\n\n'
                'Question: ' + planner.question,
            }
        ],
        model='gpt-4o-mini',
    )
)
planner.add_computed_column(
    plan_text=planner.plan_response.choices[0].message.content.astype(
        pxt.String
    )
)
Created table ‘planner’.
Added 0 column values with 0 errors in 0.00 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 2: parse the plan and execute each sub-question, then synthesize
@pxt.udf
def execute_plan(plan_json: str, original_question: str) -> list[dict]:
    """Parse the plan JSON and return structured sub-questions."""
    try:
        data = json_mod.loads(plan_json)
        # Handle both {"steps": [...]} and direct [...]
        steps = (
            data
            if isinstance(data, list)
            else data.get('steps', data.get('questions', []))
        )
        return [
            {'step': i + 1, 'sub_question': q}
            for i, q in enumerate(steps)
        ]
    except (json_mod.JSONDecodeError, TypeError):
        return [{'step': 1, 'sub_question': original_question}]


planner.add_computed_column(
    plan_steps=execute_plan(planner.plan_text, planner.question)
)
Added 0 column values with 0 errors in 0.01 s
No rows affected.
# Step 3: execute the plan — answer each sub-question, then synthesize
@pxt.udf
def format_plan_for_execution(
    plan_steps: list[dict], original_question: str
) -> str:
    """Format the plan steps into a single execution prompt."""
    step_list = '\n'.join(
        f'{s["step"]}. {s["sub_question"]}' for s in plan_steps
    )
    return (
        f'Answer each of these research sub-questions briefly, '
        f'then provide a final synthesis that answers the original question.\n\n'
        f'Original question: {original_question}\n\n'
        f'Sub-questions:\n{step_list}'
    )


planner.add_computed_column(
    exec_prompt=format_plan_for_execution(
        planner.plan_steps, planner.question
    )
)

planner.add_computed_column(
    exec_response=openai.chat_completions(
        messages=[{'role': 'user', 'content': planner.exec_prompt}],
        model='gpt-4o-mini',
    )
)
planner.add_computed_column(
    final_answer=planner.exec_response.choices[0].message.content.astype(
        pxt.String
    )
)
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
Added 0 column values with 0 errors in 0.01 s
No rows affected.
planner.insert(
    [
        {
            'question': 'What are the economic and environmental trade-offs of electric vehicles vs hydrogen fuel cells?'
        }
    ]
)

row = planner.select(
    planner.question, planner.plan_text, planner.final_answer
).collect()
print('Plan:', row['plan_text'][0])
print()
print('Answer:', row['final_answer'][0][:500])
The plan (stored in plan_steps) is fully inspectable. The execution step answers all sub-questions in a single LLM call, but this could also use parallelization (Pattern 3) to answer each sub-question independently and merge the results. Planning and ReAct compose naturally with any of the six architectural patterns.
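If you did fan the sub-questions out via Pattern 3, each one would become its own prompt. A small helper sketching that fan-out (plain Python; the single merged execution prompt above would be replaced by one prompt per sub-question):

```python
def fan_out_prompts(plan_steps: list[dict], question: str) -> list[str]:
    """Build one standalone prompt per sub-question from the parsed plan."""
    return [
        f'In the context of: {question}\n\nAnswer briefly: {s["sub_question"]}'
        for s in plan_steps
    ]

prompts = fan_out_prompts(
    [{'step': 1, 'sub_question': 'What do EVs cost to run?'},
     {'step': 2, 'sub_question': 'How green is hydrogen production?'}],
    'EVs vs hydrogen fuel cells?',
)
print(len(prompts))   # 2
```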

Choosing a Pattern

Six architectural patterns

  • Prompt Chaining — sequential steps, each output feeding the next
  • Routing — classify the input, then dispatch to a specialized handler
  • Parallelization — independent calls on the same input, merged afterward
  • Tool Use — the LLM selects external functions to call
  • Evaluator-Optimizer — generate, judge with an LLM, refine
  • Orchestrator-Worker — decompose, delegate to worker agents, synthesize

Two cross-cutting reasoning strategies

  • ReAct — interleave reasoning and tool-backed actions, observing each result before the next step
  • Planning — generate a complete plan upfront, then execute its steps

Patterns compose naturally. An orchestrator-worker system might use routing in the orchestrator, tool use within a worker, and ReAct reasoning inside the tool-calling loop. Because each pattern is just a set of computed columns on a table, combining them requires no special glue code.

See Also

Pixelagent examples (imperative implementations of the same patterns):
  • Reflection loop — main agent + critic agent with iterative refinement
  • ReAct / Planning — step-by-step reasoning with tool calls
  • Tool calling — OpenAI, Anthropic, and Bedrock tool integration
  • Memory — persistent and semantic memory management
Last modified on February 24, 2026