Enable LLMs to call functions and tools, then execute the results
automatically.
Problem
You want an LLM to decide which functions to call based on user
queries—for agents, chatbots, or automated workflows.
Solution
What’s in this recipe:
- Define tools as Python functions
- Let LLMs decide which tool to call
- Automatically execute tool calls with
invoke_tools
You define tools with JSON schemas, pass them to the LLM, and use
invoke_tools to execute the function calls.
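Under the hood, `pxt.tools` derives a JSON schema for each UDF from its signature and docstring. As a rough illustration, a hand-written OpenAI-style schema for the weather tool might look like the sketch below (this is the standard Chat Completions tool format, not necessarily Pixeltable's exact output):

```python
import json

# Sketch: an OpenAI-style tool schema that a function like
# get_weather(city: str) would typically map to.
weather_tool_schema = {
    'type': 'function',
    'function': {
        'name': 'get_weather',
        'description': 'Get the current weather for a city.',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {'type': 'string'},
            },
            'required': ['city'],
        },
    },
}

print(json.dumps(weather_tool_schema, indent=2))
```

With `pxt.tools` you never write this by hand; the type hints and docstring supply the same information.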
Setup
%pip install -qU pixeltable openai
import os
import getpass
if 'OPENAI_API_KEY' not in os.environ:
    os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key: ')
import pixeltable as pxt
from pixeltable.functions import openai
# Create a fresh directory
pxt.drop_dir('tools_demo', force=True)
pxt.create_dir('tools_demo')
Connected to Pixeltable database at: postgresql+psycopg://postgres:@/pixeltable?host=/Users/pjlb/.pixeltable/pgdata
Created directory `tools_demo`.
<pixeltable.catalog.dir.Dir at 0x17de70850>
# Define tool functions as Pixeltable UDFs
@pxt.udf
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # In production, call a real weather API
    weather_data = {
        'new york': 'Sunny, 72°F',
        'london': 'Cloudy, 58°F',
        'tokyo': 'Rainy, 65°F',
        'paris': 'Partly cloudy, 68°F',
    }
    return weather_data.get(city.lower(), f'Weather data not available for {city}')
@pxt.udf
def get_stock_price(symbol: str) -> str:
    """Get the current stock price for a symbol."""
    # In production, call a real stock API
    prices = {
        'AAPL': '$178.50',
        'GOOGL': '$141.25',
        'MSFT': '$378.90',
        'AMZN': '$185.30',
    }
    return prices.get(symbol.upper(), f'Price not available for {symbol}')
# Create a Tools object with our functions
tools = pxt.tools(get_weather, get_stock_price)
# Create table for queries
queries = pxt.create_table(
    'tools_demo.queries',
    {'query': pxt.String}
)
Created table `queries`.
# Add LLM call with tools
queries.add_computed_column(
    response=openai.chat_completions(
        messages=[{'role': 'user', 'content': queries.query}],
        model='gpt-4o-mini',
        tools=tools  # Pass tools to the LLM
    )
)
Added 0 column values with 0 errors.
No rows affected.
# Automatically execute tool calls and get results
queries.add_computed_column(
    tool_results=openai.invoke_tools(tools, queries.response)
)
Added 0 column values with 0 errors.
No rows affected.
# Insert queries that require tool calls
sample_queries = [
    {'query': "What's the weather in Tokyo?"},
    {'query': "What's the stock price of Apple?"},
    {'query': "What's the weather in Paris and the price of Microsoft stock?"},
]
queries.insert(sample_queries)
Inserting rows into `queries`: 3 rows [00:00, 330.87 rows/s]
Inserted 3 rows with 0 errors.
3 rows inserted, 9 values computed.
# View results
queries.select(queries.query, queries.tool_results).collect()
Explanation
Tool calling flow:
Query → LLM decides tool → invoke_tools executes → Results
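The dispatch step that `invoke_tools` performs can be sketched in plain Python. This is an illustrative stand-in, not Pixeltable's implementation: it parses an OpenAI-style response, looks up each requested tool by name, and calls it with the model-supplied arguments.

```python
import json

def get_weather(city: str) -> str:
    """Mock weather lookup, mirroring the UDF defined above."""
    data = {'tokyo': 'Rainy, 65°F', 'paris': 'Partly cloudy, 68°F'}
    return data.get(city.lower(), f'Weather data not available for {city}')

# Registry mapping tool names to callables (the role played by pxt.tools).
TOOLS = {'get_weather': get_weather}

def dispatch_tool_calls(response: dict) -> dict:
    """Execute each tool call found in an OpenAI-style chat completion."""
    results = {}
    message = response['choices'][0]['message']
    for call in message.get('tool_calls', []):
        name = call['function']['name']
        args = json.loads(call['function']['arguments'])  # arguments arrive as a JSON string
        results[name] = TOOLS[name](**args)
    return results

# A simulated LLM response requesting one tool call.
fake_response = {
    'choices': [{
        'message': {
            'tool_calls': [{
                'function': {'name': 'get_weather',
                             'arguments': '{"city": "Tokyo"}'}
            }]
        }
    }]
}
print(dispatch_tool_calls(fake_response))  # {'get_weather': 'Rainy, 65°F'}
```

In the recipe, this entire loop is handled for you: `invoke_tools` reads the stored `response` column and writes the executed results into `tool_results`.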
Key components:
- @pxt.udf turns a plain Python function into a callable tool; the type hints and docstring supply its schema
- pxt.tools() bundles UDFs into a Tools object to pass to the LLM
- chat_completions(..., tools=tools) lets the model decide which tool to call
- invoke_tools(tools, response) executes the requested calls and stores the results
Supported providers:
See also