Core Pixeltable API for table operations, data processing, and UDF management.

UDFs


array() udf

Signature:
array(elements: Iterable) -> exprs.Expr

configure_logging() udf

Configure logging. Signature:
configure_logging(
    *,
    to_stdout: Optional[bool] = None,
    level: Optional[int] = None,
    add: Optional[str] = None,
    remove: Optional[str] = None
) -> None
Parameters:
  • to_stdout (Optional[bool]): if True, also log to stdout
  • level (Optional[int]): default log level
  • add (Optional[str]): comma-separated list of 'module name:log level' pairs; e.g., add='video:10'
  • remove (Optional[str]): comma-separated list of module names

create_dir() udf

Create a directory. Signature:
create_dir(
    path: str,
    if_exists: Literal['error', 'ignore', 'replace', 'replace_force'] = 'error',
    parents: bool = False
) -> Optional[catalog.Dir]
Parameters:
  • path (str): Path to the directory.
  • if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Determines the behavior if the path already exists. Must be one of the following:
    • 'error': raise an error
    • 'ignore': do nothing and return the existing directory handle
    • 'replace': if the existing directory is empty, drop it and create a new one
    • 'replace_force': drop the existing directory and all its children, and create a new one
  • parents (bool) = False: Create missing parent directories.
Returns:
  • Optional[catalog.Dir]: A handle to the newly created directory, or to an already existing directory at the path when if_exists='ignore'. Please note the existing directory may not be empty.
Example:
pxt.create_dir('my_dir')
Create a subdirectory:
pxt.create_dir('my_dir.sub_dir')
Create a subdirectory only if it does not already exist, otherwise do nothing:
pxt.create_dir('my_dir.sub_dir', if_exists='ignore')
Create a directory and replace if it already exists:
pxt.create_dir('my_dir', if_exists='replace_force')
Create a subdirectory along with its ancestors:
pxt.create_dir('parent1.parent2.sub_dir', parents=True)

create_snapshot() udf

Create a snapshot of an existing table object (which itself can be a view or a snapshot or a base table). Signature:
create_snapshot(
    path_str: str,
    base: catalog.Table | DataFrame,
    *,
    additional_columns: Optional[dict[str, Any]] = None,
    iterator: Optional[tuple[type[ComponentIterator], dict[str, Any]]] = None,
    num_retained_versions: int = 10,
    comment: str = '',
    media_validation: Literal['on_read', 'on_write'] = 'on_write',
    if_exists: Literal['error', 'ignore', 'replace', 'replace_force'] = 'error'
) -> Optional[catalog.Table]
Parameters:
  • path_str (str): A name for the snapshot; can be either a simple name such as my_snapshot, or a pathname such as dir1.my_snapshot.
  • base (catalog.Table | DataFrame): Table (i.e., table or view or snapshot) or DataFrame to base the snapshot on.
  • additional_columns (Optional[dict[str, Any]]): If specified, will add these columns to the snapshot once it is created. The format of the additional_columns parameter is identical to the format of the schema_or_df parameter in create_table.
  • iterator (Optional[tuple[type[ComponentIterator], dict[str, Any]]]): The iterator to use for this snapshot. If specified, then this snapshot will be a one-to-many view of the base table.
  • num_retained_versions (int) = 10: Number of versions of the view to retain.
  • comment (str) = '': Optional comment for the snapshot.
  • media_validation (Literal['on_read', 'on_write']) = 'on_write': Media validation policy for the snapshot.
    • 'on_read': validate media files at query time
    • 'on_write': validate media files during insert/update operations
  • if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Determines the behavior if the path already exists. Must be one of the following:
    • 'error': raise an error
    • 'ignore': do nothing and return the existing snapshot handle
    • 'replace': if the existing snapshot has no dependents, drop and replace it with a new one
    • 'replace_force': drop the existing snapshot and all its dependents, and create a new one
Returns:
  • Optional[catalog.Table]: A handle to the Table representing the newly created snapshot. Please note the schema or base of the existing snapshot may not match those provided in the call.
Example: Create a snapshot my_snapshot of a table my_table:
tbl = pxt.get_table('my_table')
snapshot = pxt.create_snapshot('my_snapshot', tbl)
Create a snapshot my_snapshot of a view my_view with additional int column col3, if my_snapshot does not already exist:
view = pxt.get_table('my_view')
snapshot = pxt.create_snapshot('my_snapshot', view, additional_columns={'col3': pxt.Int}, if_exists='ignore')
Create a snapshot my_snapshot on a table my_table, and replace any existing snapshot named my_snapshot:
tbl = pxt.get_table('my_table')
snapshot = pxt.create_snapshot('my_snapshot', tbl, if_exists='replace_force')

create_table() udf

Create a new base table. Exactly one of schema or source must be provided. If a schema is provided, then an empty table will be created with the specified schema. If a source is provided, then Pixeltable will attempt to infer a data source format and table schema from the contents of the specified data, and the data will be imported from the specified source into the new table. The source format and/or schema can be specified directly via the source_format and schema_overrides parameters. Signature:
create_table(
    path: str,
    schema: Optional[dict[str, Any]] = None,
    *,
    source: Optional[TableDataSource] = None,
    source_format: Optional[Literal['csv', 'excel', 'parquet', 'json']] = None,
    schema_overrides: Optional[dict[str, Any]] = None,
    on_error: Literal['abort', 'ignore'] = 'abort',
    primary_key: str | list[str] | None = None,
    num_retained_versions: int = 10,
    comment: str = '',
    media_validation: Literal['on_read', 'on_write'] = 'on_write',
    if_exists: Literal['error', 'ignore', 'replace', 'replace_force'] = 'error',
    extra_args: Optional[dict[str, Any]] = None
) -> catalog.Table
Parameters:
  • path (str): Pixeltable path (qualified name) of the table, such as 'my_table' or 'my_dir.my_subdir.my_table'.
  • schema (Optional[dict[str, Any]]): Schema for the new table, mapping column names to Pixeltable types.
  • source (Optional[TableDataSource]): A data source (file, URL, DataFrame, or list of rows) to import from.
  • source_format (Optional[Literal['csv', 'excel', 'parquet', 'json']]): Must be used in conjunction with a source. If specified, then the given format will be used to read the source data. (Otherwise, Pixeltable will attempt to infer the format from the source data.)
  • schema_overrides (Optional[dict[str, Any]]): Must be used in conjunction with a source. If specified, then columns in schema_overrides will be given the specified types. (Pixeltable will attempt to infer the types of any columns not specified.)
  • on_error (Literal['abort', 'ignore']) = 'abort': Determines the behavior if an error occurs while evaluating a computed column or detecting an invalid media file (such as a corrupt image) for one of the inserted rows.
    • If on_error='abort', then an exception will be raised and the rows will not be inserted.
    • If on_error='ignore', then execution will continue and the rows will be inserted. Any cells with errors will have a None value for that cell, with information about the error stored in the corresponding tbl.col_name.errortype and tbl.col_name.errormsg fields.
  • primary_key (str | list[str] | None): An optional column name or list of column names to use as the primary key(s) of the table.
  • num_retained_versions (int) = 10: Number of versions of the table to retain.
  • comment (str) = '': An optional comment; its meaning is user-defined.
  • media_validation (Literal['on_read', 'on_write']) = 'on_write': Media validation policy for the table.
    • 'on_read': validate media files at query time
    • 'on_write': validate media files during insert/update operations
  • if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Determines the behavior if a table already exists at the specified path location.
    • 'error': raise an error
    • 'ignore': do nothing and return the existing table handle
    • 'replace': if the existing table has no views or snapshots, drop and replace it with a new one; raise an error if the existing table has views or snapshots
    • 'replace_force': drop the existing table and all its views and snapshots, and create a new one
  • extra_args (Optional[dict[str, Any]]): Must be used in conjunction with a source. If specified, then additional arguments will be passed along to the source data provider.
Returns:
  • catalog.Table: A handle to the newly created table, or to an already existing table at the path when if_exists='ignore'. Please note the schema of the existing table may not match the schema provided in the call.
Example: Create a table with an int and a string column:
tbl = pxt.create_table('my_table', schema={'col1': pxt.Int, 'col2': pxt.String})
Create a table from a select statement over an existing table orig_table (this will create a new table containing the exact contents of the query):
tbl1 = pxt.get_table('orig_table')
tbl2 = pxt.create_table('new_table', tbl1.where(tbl1.col1 < 10).select(tbl1.col2))
Create a table if it does not already exist, otherwise get the existing table:
tbl = pxt.create_table('my_table', schema={'col1': pxt.Int, 'col2': pxt.String}, if_exists='ignore')
Create a table with an int and a float column, and replace any existing table:
tbl = pxt.create_table('my_table', schema={'col1': pxt.Int, 'col2': pxt.Float}, if_exists='replace')
Create a table from a CSV file:
tbl = pxt.create_table('my_table', source='data.csv')

create_view() udf

Create a view of an existing table object (which itself can be a view or a snapshot or a base table). Signature:
create_view(
    path: str,
    base: catalog.Table | DataFrame,
    *,
    additional_columns: Optional[dict[str, Any]] = None,
    is_snapshot: bool = False,
    iterator: Optional[tuple[type[ComponentIterator], dict[str, Any]]] = None,
    num_retained_versions: int = 10,
    comment: str = '',
    media_validation: Literal['on_read', 'on_write'] = 'on_write',
    if_exists: Literal['error', 'ignore', 'replace', 'replace_force'] = 'error'
) -> Optional[catalog.Table]
Parameters:
  • path (str): A name for the view; can be either a simple name such as my_view, or a pathname such as dir1.my_view.
  • base (catalog.Table | DataFrame): Table (i.e., table or view or snapshot) or DataFrame to base the view on.
  • additional_columns (Optional[dict[str, Any]]): If specified, will add these columns to the view once it is created. The format of the additional_columns parameter is identical to the format of the schema_or_df parameter in create_table.
  • is_snapshot (bool) = False: Whether the view is a snapshot. Setting this to True is equivalent to calling create_snapshot.
  • iterator (Optional[tuple[type[ComponentIterator], dict[str, Any]]]): The iterator to use for this view. If specified, then this view will be a one-to-many view of the base table.
  • num_retained_versions (int) = 10: Number of versions of the view to retain.
  • comment (str) = '': Optional comment for the view.
  • media_validation (Literal['on_read', 'on_write']) = 'on_write': Media validation policy for the view.
    • 'on_read': validate media files at query time
    • 'on_write': validate media files during insert/update operations
  • if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Determines the behavior if the path already exists. Must be one of the following:
    • 'error': raise an error
    • 'ignore': do nothing and return the existing view handle
    • 'replace': if the existing view has no dependents, drop and replace it with a new one
    • 'replace_force': drop the existing view and all its dependents, and create a new one
Returns:
  • Optional[catalog.Table]: A handle to the Table representing the newly created view. If the path already exists and if_exists='ignore', returns a handle to the existing view. Please note the schema or the base of the existing view may not match those provided in the call.
Example: Create a view my_view of an existing table my_table, filtering on rows where col1 is greater than 10:
tbl = pxt.get_table('my_table')
view = pxt.create_view('my_view', tbl.where(tbl.col1 > 10))
Create a view my_view of an existing table my_table, filtering on rows where col1 is greater than 10, if it does not already exist; otherwise, get the existing view named my_view:
tbl = pxt.get_table('my_table')
view = pxt.create_view('my_view', tbl.where(tbl.col1 > 10), if_exists='ignore')
Create a view my_view of an existing table my_table, filtering on rows where col1 is greater than 100, and replace any existing view named my_view:
tbl = pxt.get_table('my_table')
view = pxt.create_view('my_view', tbl.where(tbl.col1 > 100), if_exists='replace_force')

drop_dir() udf

Remove a directory. Signature:
drop_dir(
    path: str,
    force: bool = False,
    if_not_exists: Literal['error', 'ignore'] = 'error'
) -> None
Parameters:
  • path (str): Name or path of the directory.
  • force (bool) = False: If True, will also drop all tables and subdirectories of this directory, recursively, along with any views or snapshots that depend on any of the dropped tables.
  • if_not_exists (Literal['error', 'ignore']) = 'error': Determines the behavior if the path does not exist. Must be one of the following:
    • 'error': raise an error
    • 'ignore': do nothing and return
Example: Remove a directory, if it exists and is empty:
pxt.drop_dir('my_dir')
Remove a subdirectory:
pxt.drop_dir('my_dir.sub_dir')
Remove an existing directory if it is empty, but do nothing if it does not exist:
pxt.drop_dir('my_dir.sub_dir', if_not_exists='ignore')
Remove an existing directory and all its contents:
pxt.drop_dir('my_dir', force=True)

drop_table() udf

Drop a table, view, snapshot, or replica. Signature:
drop_table(
    table: str | catalog.Table,
    force: bool = False,
    if_not_exists: Literal['error', 'ignore'] = 'error'
) -> None
Parameters:
  • table (str | catalog.Table): Fully qualified name or table handle of the table to be dropped; or a remote URI of a cloud replica to be deleted.
  • force (bool) = False: If True, will also drop all views and sub-views of this table.
  • if_not_exists (Literal['error', 'ignore']) = 'error': Determines the behavior if the path does not exist. Must be one of the following:
    • 'error': raise an error
    • 'ignore': do nothing and return
Example: Drop a table by its fully qualified name:
pxt.drop_table('subdir.my_table')
Drop a table by its handle:
t = pxt.get_table('subdir.my_table')
pxt.drop_table(t)
Drop a table if it exists, otherwise do nothing:
pxt.drop_table('subdir.my_table', if_not_exists='ignore')
Drop a table and all its dependents:
pxt.drop_table('subdir.my_table', force=True)

expr_udf() udf

Signature:
expr_udf(
    *args: Any,
    **kwargs: Any
) -> Any

get_dir_contents() udf

Get the contents of a Pixeltable directory. Signature:
get_dir_contents(
    dir_path: str = '',
    recursive: bool = True
) -> DirContents
Parameters:
  • dir_path (str) = '': Path to the directory. Defaults to the root directory.
  • recursive (bool) = True: If False, returns only those tables and directories that are directly contained in specified directory; if True, returns all tables and directories that are descendants of the specified directory, recursively.
Returns:
  • DirContents: A DirContents object representing the contents of the specified directory.
Example: Get contents of top-level directory:
pxt.get_dir_contents()
Get contents of 'dir1':
pxt.get_dir_contents('dir1')

get_table() udf

Get a handle to an existing table, view, or snapshot. Signature:
get_table(
    path: str,
    if_not_exists: Literal['error', 'ignore'] = 'error'
) -> catalog.Table | None
Parameters:
  • path (str): Path to the table.
  • if_not_exists (Literal['error', 'ignore']) = 'error': Determines the behavior if the path does not exist. Must be one of the following:
    • 'error': raise an error
    • 'ignore': do nothing and return None
Returns:
  • catalog.Table | None: A handle to the Table.
Example: Get handle for a table in the top-level directory:
tbl = pxt.get_table('my_table')
For a table in a subdirectory:
tbl = pxt.get_table('subdir.my_table')
Handles to views and snapshots are retrieved in the same way:
tbl = pxt.get_table('my_snapshot')
Get a handle to a specific version of a table:
tbl = pxt.get_table('my_table:722')

init() udf

Initializes the Pixeltable environment. Signature:
init(config_overrides: Optional[dict[str, Any]] = None) -> None

list_dirs() udf

List the directories in a directory. Signature:
list_dirs(
    path: str = '',
    recursive: bool = True
) -> list[str]
Parameters:
  • path (str) = '': Name or path of the directory.
  • recursive (bool) = True: If True, lists all descendants of this directory recursively.
Returns:
  • list[str]: List of directory paths.
Example:
pxt.list_dirs('my_dir', recursive=True)

list_functions() udf

Returns information about all registered functions. Signature:
list_functions() -> Styler
Returns:
  • Styler: Pandas DataFrame with columns 'Path', 'Name', 'Parameters', 'Return Type', 'Is Agg', 'Library'

list_tables() udf

List the Tables in a directory. Signature:
list_tables(
    dir_path: str = '',
    recursive: bool = True
) -> list[str]
Parameters:
  • dir_path (str) = '': Path to the directory. Defaults to the root directory.
  • recursive (bool) = True: If False, returns only those tables that are directly contained in specified directory; if True, returns all tables that are descendants of the specified directory, recursively.
Returns:
  • list[str]: A list of Table paths.
Example: List tables in top-level directory:
pxt.list_tables()
List tables in 'dir1':
pxt.list_tables('dir1')

ls() udf

List the contents of a Pixeltable directory. This function returns a Pandas DataFrame representing a human-readable listing of the specified directory, including various attributes such as version and base table, as appropriate. To get a programmatic list of the directory’s contents, use [get_dir_contents()][pixeltable.get_dir_contents] instead. Signature:
ls(path: str = '') -> pd.DataFrame

mcp_udfs() udf

Signature:
mcp_udfs(url: str) -> list['pxt.func.Function']

move() udf

Move a schema object to a new directory and/or rename a schema object. Signature:
move(
    path: str,
    new_path: str
) -> None
Parameters:
  • path (str): absolute path to the existing schema object.
  • new_path (str): absolute new path for the schema object.
Example: Move a table to a different directory:
pxt.move('dir1.my_table', 'dir2.my_table')
Rename a table:
pxt.move('dir1.my_table', 'dir1.new_name')

publish() udf

Publishes a replica of a local Pixeltable table to Pixeltable cloud. A given table can be published to at most one URI per Pixeltable cloud database. Signature:
publish(
    source: str | catalog.Table,
    destination_uri: str,
    bucket_name: str | None = None,
    access: Literal['public', 'private'] = 'private'
) -> None
Parameters:
  • source (str | catalog.Table): Path or table handle of the local table to be published.
  • destination_uri (str): Remote URI where the replica will be published, such as 'pxt://org_name/my_dir/my_table'.
  • bucket_name (str | None): The name of the bucket used to store the replica's data. The bucket must be registered with Pixeltable cloud. If no bucket_name is provided, the default storage bucket for the destination database will be used.
  • access (Literal['public', 'private']) = 'private': Access control for the replica.
    • 'public': Anyone can access this replica.
    • 'private': Only the host organization can access.

query() udf

Signature:
query(
    *args: Any,
    **kwargs: Any
) -> Any

replicate() udf

Retrieve a replica from Pixeltable cloud as a local table. This will create a full local copy of the replica in a way that preserves the table structure of the original source data. Once replicated, the local table can be queried offline just as any other Pixeltable table. Signature:
replicate(
    remote_uri: str,
    local_path: str
) -> catalog.Table
Parameters:
  • remote_uri (str): Remote URI of the table to be replicated, such as 'pxt://org_name/my_dir/my_table'.
  • local_path (str): Local table path where the replica will be created, such as 'my_new_dir.my_new_tbl'. It can be the same or different from the cloud table name.
Returns:
  • catalog.Table: A handle to the newly created local replica table.

retrieval_udf() udf

Constructs a retrieval UDF for the given table. The retrieval UDF is a UDF whose parameters are columns of the table and whose return value is a list of rows from the table. The return value of
f(col1=x, col2=y, ...)
will be a list of all rows from the table that match the specified arguments. Signature:
retrieval_udf(
    table: catalog.Table,
    name: Optional[str] = None,
    description: Optional[str] = None,
    parameters: Optional[Iterable[str | exprs.ColumnRef]] = None,
    limit: Optional[int] = 10
) -> func.QueryTemplateFunction
Parameters:
  • table (catalog.Table): The table to use as the dataset for the retrieval tool.
  • name (Optional[str]): The name of the tool. If not specified, then the name of the table will be used by default.
  • description (Optional[str]): The description of the tool. If not specified, then a default description will be generated.
  • parameters (Optional[Iterable[str | exprs.ColumnRef]]): The columns of the table to use as parameters. If not specified, all data columns (non-computed columns) will be used as parameters. All of the specified parameters will be required parameters of the tool, regardless of their status as columns.
  • limit (Optional[int]) = 10: The maximum number of rows to return. If set to None, all matching rows will be returned.
Returns:
  • func.QueryTemplateFunction: A UDF that, when invoked, returns a list of dictionaries containing data from the table, one per row that matches the input arguments. If there are no matching rows, an empty list is returned.

tool() udf

Specifies a Pixeltable UDF to be used as an LLM tool with customizable metadata. See the documentation for [pxt.tools()][pixeltable.tools] for more details. Signature:
tool(
    fn: func.Function,
    name: Optional[str] = None,
    description: Optional[str] = None
) -> func.tools.Tool
Parameters:
  • fn (func.Function): The UDF to use as a tool.
  • name (Optional[str]): The name of the tool. If not specified, then the unqualified name of the UDF will be used by default.
  • description (Optional[str]): The description of the tool. If not specified, then the entire contents of the UDF docstring will be used by default.
Returns:
  • func.tools.Tool: A Tool instance that can be passed to an LLM tool-calling API.

tools() udf

Specifies a collection of UDFs to be used as LLM tools. Pixeltable allows any UDF to be used as an input into an LLM tool-calling API. To use one or more UDFs as tools, wrap them in a pxt.tools call and pass the return value to an LLM API. The UDFs can be specified directly or wrapped inside a [pxt.tool()][pixeltable.tool] invocation. If a UDF is specified directly, the tool name will be the (unqualified) UDF name, and the tool description will consist of the entire contents of the UDF docstring. If a UDF is wrapped in a pxt.tool() invocation, then the name and/or description may be customized. Signature:
tools(*args: func.Function | func.tools.Tool) -> func.tools.Tools
Parameters:
  • args (func.Function | func.tools.Tool): The UDFs to use as tools.
Returns:
  • func.tools.Tools: A Tools instance that can be passed to an LLM tool-calling API or invoked to generate tool results.
Example: Create a tools instance with a single UDF:
tools = pxt.tools(stock_price)
Create a tools instance with several UDFs:
tools = pxt.tools(stock_price, weather_quote)
Create a tools instance, some of whose UDFs have customized metadata:
tools = pxt.tools(
    stock_price,
    pxt.tool(weather_quote, description='Returns information about the weather in a particular location.'),
    pxt.tool(traffic_quote, name='traffic_conditions'),
)

uda() udf

Decorator for user-defined aggregate functions. The decorated class must inherit from Aggregator and implement the following methods:
  • init(self, …) to initialize the aggregator
  • update(self, …) to update the aggregator with a new value
  • value(self) to return the final result
The decorator creates an AggregateFunction instance from the class and adds it to the module where the class is defined. Parameters:
  • requires_order_by: if True, the first parameter to the function is the order-by expression
  • allows_std_agg: if True, the function can be used as a standard aggregate function without a window
  • allows_window: if True, the function can be used with a window
Signature:
uda(
    *args,
    **kwargs
)

udf() udf

A decorator to create a Function from a function definition. Signature:
udf(
    *args,
    **kwargs
)
Example:
@pxt.udf
def my_function(x: int) -> int:
    return x + 1