UDFs
array()
udf
Signature:
configure_logging()
udf
Configure logging.
Parameters:
- to_stdout (Optional[bool]): if True, also log to stdout
- level (Optional[int]): default log level
- add (Optional[str]): comma-separated list of 'module name:log level' pairs; ex.: add='video:10'
- remove (Optional[str]): comma-separated list of module names
create_dir()
udf
Create a directory.
Parameters:
- path (str): Path to the directory.
- if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Directive regarding how to handle if the path already exists. Must be one of the following:
  - 'error': raise an error
  - 'ignore': do nothing and return the existing directory handle
  - 'replace': if the existing directory is empty, drop it and create a new one
  - 'replace_force': drop the existing directory and all its children, and create a new one
- parents (bool) = False: Create missing parent directories.

Returns:
- Optional[catalog.Dir]: A handle to the newly created directory, or to an already existing directory at the path when if_exists='ignore'. Please note the existing directory may not be empty.
create_snapshot()
udf
Create a snapshot of an existing table object (which itself can be a view or a snapshot or a base table).
Parameters:
- path_str (str): A name for the snapshot; can be either a simple name such as my_snapshot, or a pathname such as dir1.my_snapshot.
- base (catalog.Table | DataFrame): Table (i.e., table or view or snapshot) or DataFrame to base the snapshot on.
- additional_columns (Optional[dict[str, Any]]): If specified, will add these columns to the snapshot once it is created. The format of the additional_columns parameter is identical to the format of the schema_or_df parameter in create_table.
- iterator (Optional[tuple[type[ComponentIterator], dict[str, Any]]]): The iterator to use for this snapshot. If specified, then this snapshot will be a one-to-many view of the base table.
- num_retained_versions (int) = 10: Number of versions of the snapshot to retain.
- comment (str) = '': Optional comment for the snapshot.
- media_validation (Literal['on_read', 'on_write']) = 'on_write': Media validation policy for the snapshot.
  - 'on_read': validate media files at query time
  - 'on_write': validate media files during insert/update operations
- if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Directive regarding how to handle if the path already exists. Must be one of the following:
  - 'error': raise an error
  - 'ignore': do nothing and return the existing snapshot handle
  - 'replace': if the existing snapshot has no dependents, drop and replace it with a new one
  - 'replace_force': drop the existing snapshot and all its dependents, and create a new one

Returns:
- Optional[catalog.Table]: A handle to the Table representing the newly created snapshot. If the path already exists and if_exists='ignore', returns a handle to the existing snapshot. Please note the schema or base of the existing snapshot may not match those provided in the call.
Examples:

Create a snapshot my_snapshot of a table my_table:

Create a snapshot my_snapshot of a view my_view with additional int column col3, if my_snapshot does not already exist:

Create a snapshot my_snapshot on a table my_table, and replace any existing snapshot named my_snapshot:
create_table()
udf
Create a new base table. Exactly one of schema or source must be provided.

If a schema is provided, then an empty table will be created with the specified schema.

If a source is provided, then Pixeltable will attempt to infer a data source format and table schema from the contents of the specified data, and the data will be imported from the specified source into the new table. The source format and/or schema can be specified directly via the source_format and schema_overrides parameters.
Parameters:
- path (str): Pixeltable path (qualified name) of the table, such as 'my_table' or 'my_dir.my_subdir.my_table'.
- schema (Optional[dict[str, Any]]): Schema for the new table, mapping column names to Pixeltable types.
- source (Optional[TableDataSource]): A data source (file, URL, DataFrame, or list of rows) to import from.
- source_format (Optional[Literal['csv', 'excel', 'parquet', 'json']]): Must be used in conjunction with a source. If specified, then the given format will be used to read the source data. (Otherwise, Pixeltable will attempt to infer the format from the source data.)
- schema_overrides (Optional[dict[str, Any]]): Must be used in conjunction with a source. If specified, then columns in schema_overrides will be given the specified types. (Pixeltable will attempt to infer the types of any columns not specified.)
- on_error (Literal['abort', 'ignore']) = 'abort': Determines the behavior if an error occurs while evaluating a computed column or detecting an invalid media file (such as a corrupt image) for one of the inserted rows.
  - If on_error='abort', then an exception will be raised and the rows will not be inserted.
  - If on_error='ignore', then execution will continue and the rows will be inserted. Any cells with errors will have a None value for that cell, with information about the error stored in the corresponding tbl.col_name.errortype and tbl.col_name.errormsg fields.
- primary_key (str | list[str] | None): An optional column name or list of column names to use as the primary key(s) of the table.
- num_retained_versions (int) = 10: Number of versions of the table to retain.
- comment (str) = '': An optional comment; its meaning is user-defined.
- media_validation (Literal['on_read', 'on_write']) = 'on_write': Media validation policy for the table.
  - 'on_read': validate media files at query time
  - 'on_write': validate media files during insert/update operations
- if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Determines the behavior if a table already exists at the specified path location.
  - 'error': raise an error
  - 'ignore': do nothing and return the existing table handle
  - 'replace': if the existing table has no views or snapshots, drop and replace it with a new one; raise an error if the existing table has views or snapshots
  - 'replace_force': drop the existing table and all its views and snapshots, and create a new one
- extra_args (Optional[dict[str, Any]]): Must be used in conjunction with a source. If specified, then additional arguments will be passed along to the source data provider.

Returns:
- catalog.Table: A handle to the newly created table, or to an already existing table at the path when if_exists='ignore'. Please note the schema of the existing table may not match the schema provided in the call.
Example:

Create a table from a query over an existing table orig_table (this will create a new table containing the exact contents of the query):
create_view()
udf
Create a view of an existing table object (which itself can be a view or a snapshot or a base table).
Parameters:
- path (str): A name for the view; can be either a simple name such as my_view, or a pathname such as dir1.my_view.
- base (catalog.Table | DataFrame): Table (i.e., table or view or snapshot) or DataFrame to base the view on.
- additional_columns (Optional[dict[str, Any]]): If specified, will add these columns to the view once it is created. The format of the additional_columns parameter is identical to the format of the schema_or_df parameter in create_table.
- is_snapshot (bool) = False: Whether the view is a snapshot. Setting this to True is equivalent to calling create_snapshot.
- iterator (Optional[tuple[type[ComponentIterator], dict[str, Any]]]): The iterator to use for this view. If specified, then this view will be a one-to-many view of the base table.
- num_retained_versions (int) = 10: Number of versions of the view to retain.
- comment (str) = '': Optional comment for the view.
- media_validation (Literal['on_read', 'on_write']) = 'on_write': Media validation policy for the view.
  - 'on_read': validate media files at query time
  - 'on_write': validate media files during insert/update operations
- if_exists (Literal['error', 'ignore', 'replace', 'replace_force']) = 'error': Directive regarding how to handle if the path already exists. Must be one of the following:
  - 'error': raise an error
  - 'ignore': do nothing and return the existing view handle
  - 'replace': if the existing view has no dependents, drop and replace it with a new one
  - 'replace_force': drop the existing view and all its dependents, and create a new one

Returns:
- Optional[catalog.Table]: A handle to the Table representing the newly created view. If the path already exists and if_exists='ignore', returns a handle to the existing view. Please note the schema or the base of the existing view may not match those provided in the call.
Examples:

Create a view my_view of an existing table my_table, filtering on rows where col1 is greater than 10:

Create a view my_view of an existing table my_table, filtering on rows where col1 is greater than 10, if it does not already exist; otherwise, get the existing view named my_view:

Create a view my_view of an existing table my_table, filtering on rows where col1 is greater than 100, and replace any existing view named my_view:
drop_dir()
udf
Remove a directory.
Parameters:
- path (str): Name or path of the directory.
- force (bool) = False: If True, will also drop all tables and subdirectories of this directory, recursively, along with any views or snapshots that depend on any of the dropped tables.
- if_not_exists (Literal['error', 'ignore']) = 'error': Directive regarding how to handle if the path does not exist. Must be one of the following:
  - 'error': raise an error
  - 'ignore': do nothing and return
drop_table()
udf
Drop a table, view, snapshot, or replica.
Parameters:
- table (str | catalog.Table): Fully qualified name or table handle of the table to be dropped; or a remote URI of a cloud replica to be deleted.
- force (bool) = False: If True, will also drop all views and sub-views of this table.
- if_not_exists (Literal['error', 'ignore']) = 'error': Directive regarding how to handle if the path does not exist. Must be one of the following:
  - 'error': raise an error
  - 'ignore': do nothing and return
expr_udf()
udf
Signature:
get_dir_contents()
udf
Get the contents of a Pixeltable directory.
Parameters:
- dir_path (str) = '': Path to the directory. Defaults to the root directory.
- recursive (bool) = True: If False, returns only those tables and directories that are directly contained in the specified directory; if True, returns all tables and directories that are descendants of the specified directory, recursively.

Returns:
- DirContents: A DirContents object representing the contents of the specified directory.
get_table()
udf
Get a handle to an existing table, view, or snapshot.
Parameters:
- path (str): Path to the table.
- if_not_exists (Literal['error', 'ignore']) = 'error': Directive regarding how to handle if the path does not exist. Must be one of the following:
  - 'error': raise an error
  - 'ignore': do nothing and return None

Returns:
- catalog.Table | None: A handle to the Table.
init()
udf
Initializes the Pixeltable environment.
Signature:
list_dirs()
udf
List the directories in a directory.
Parameters:
- path (str) = '': Name or path of the directory.
- recursive (bool) = True: If True, lists all descendants of this directory recursively.

Returns:
- list[str]: List of directory paths.
list_functions()
udf
Returns information about all registered functions.
Returns:
- Styler: Pandas DataFrame with columns 'Path', 'Name', 'Parameters', 'Return Type', 'Is Agg', 'Library'.
list_tables()
udf
List the Tables in a directory.

Parameters:
- dir_path (str) = '': Path to the directory. Defaults to the root directory.
- recursive (bool) = True: If False, returns only those tables that are directly contained in the specified directory; if True, returns all tables that are descendants of the specified directory, recursively.

Returns:
- list[str]: A list of Table paths.
ls()
udf
List the contents of a Pixeltable directory.
This function returns a Pandas DataFrame representing a human-readable listing of the specified directory, including various attributes such as version and base table, as appropriate.
To get a programmatic list of the directory’s contents, use [get_dir_contents()][pixeltable.get_dir_contents] instead.
Signature:
mcp_udfs()
udf
Signature:
move()
udf
Move a schema object to a new directory and/or rename a schema object.
Parameters:
- path (str): Absolute path to the existing schema object.
- new_path (str): Absolute new path for the schema object.
publish()
udf
Publishes a replica of a local Pixeltable table to Pixeltable cloud. A given table can be published to at most one
URI per Pixeltable cloud database.
Parameters:
- source (str | catalog.Table): Path or table handle of the local table to be published.
- destination_uri (str): Remote URI where the replica will be published, such as 'pxt://org_name/my_dir/my_table'.
- bucket_name (str | None): The name of the bucket to use to store the replica's data. The bucket must be registered with Pixeltable cloud. If no bucket_name is provided, the default storage bucket for the destination database will be used.
- access (Literal['public', 'private']) = 'private': Access control for the replica.
  - 'public': Anyone can access this replica.
  - 'private': Only the host organization can access.
query()
udf
Signature:
replicate()
udf
Retrieve a replica from Pixeltable cloud as a local table. This will create a full local copy of the replica in a
way that preserves the table structure of the original source data. Once replicated, the local table can be queried offline just as any other Pixeltable table.
Parameters:
- remote_uri (str): Remote URI of the table to be replicated, such as 'pxt://org_name/my_dir/my_table'.
- local_path (str): Local table path where the replica will be created, such as 'my_new_dir.my_new_tbl'. It can be the same as or different from the cloud table name.

Returns:
- catalog.Table: A handle to the newly created local replica table.
retrieval_udf()
udf
Constructs a retrieval UDF for the given table. The retrieval UDF is a UDF whose parameters are columns of the table and whose return value is a list of rows from the table that match the supplied arguments.

Parameters:
- table (catalog.Table): The table to use as the dataset for the retrieval tool.
- name (Optional[str]): The name of the tool. If not specified, then the name of the table will be used by default.
- description (Optional[str]): The description of the tool. If not specified, then a default description will be generated.
- parameters (Optional[Iterable[str | exprs.ColumnRef]]): The columns of the table to use as parameters. If not specified, all data columns (non-computed columns) will be used as parameters. All of the specified parameters will be required parameters of the tool, regardless of their status as columns.
- limit (Optional[int]) = 10: The maximum number of rows to return. If not specified, then all matching rows will be returned.

Returns:
- func.QueryTemplateFunction: A list of dictionaries containing data from the table, one per row that matches the input arguments. If there are no matching rows, an empty list will be returned.
tool()
udf
Specifies a Pixeltable UDF to be used as an LLM tool with customizable metadata. See the documentation for
[pxt.tools()][pixeltable.tools] for more details.
Parameters:
- fn (func.Function): The UDF to use as a tool.
- name (Optional[str]): The name of the tool. If not specified, then the unqualified name of the UDF will be used by default.
- description (Optional[str]): The description of the tool. If not specified, then the entire contents of the UDF docstring will be used by default.

Returns:
- func.tools.Tool: A Tool instance that can be passed to an LLM tool-calling API.
tools()
udf
Specifies a collection of UDFs to be used as LLM tools. Pixeltable allows any UDF to be used as an input into an
LLM tool-calling API. To use one or more UDFs as tools, wrap them in a pxt.tools
call and pass the return value to an LLM API.
The UDFs can be specified directly or wrapped inside a [pxt.tool()][pixeltable.tool] invocation. If a UDF is specified directly, the tool name will be the (unqualified) UDF name, and the tool description will consist of the entire contents of the UDF docstring. If a UDF is wrapped in a pxt.tool()
invocation, then the name and/or description may be customized.
Parameters:
- args (func.Function | func.tools.Tool): The UDFs to use as tools.

Returns:
- func.tools.Tools: A Tools instance that can be passed to an LLM tool-calling API or invoked to generate tool results.
uda()
udf
Decorator for user-defined aggregate functions.
The decorated class must inherit from Aggregator and implement the following methods:
- init(self, …) to initialize the aggregator
- update(self, …) to update the aggregator with a new value
- value(self) to return the final result
Parameters:
- requires_order_by: if True, the first parameter to the function is the order-by expression
- allows_std_agg: if True, the function can be used as a standard aggregate function w/o a window
- allows_window: if True, the function can be used with a window