class  pixeltable.Query

Represents a query for retrieving and transforming data from Pixeltable tables.

method  collect()

collect() -> ResultSet
Execute the Query and return all result rows as a ResultSet.
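For example, to execute a Query and retrieve its rows (a minimal sketch; the table t and its columns name and age are assumptions):
results = t.where(t.age > 30).select(t.name, t.age).collect()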

method  distinct()

distinct() -> Query
Remove duplicate rows from this Query. Grouping is applied to the rows based on the select clause of this Query; if there is no select clause, all columns are used for the grouping. Examples: Select unique addresses from table addresses:
results = addresses.distinct()
Select unique cities in table addresses:
results = addresses.city.distinct()
Select unique locations (street, city) in the state of CA:
results = (
    addresses.select(addresses.street, addresses.city)
    .where(addresses.state == 'CA')
    .distinct()
)
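Because distinct() returns a Query, it can be combined with collect() to materialize the unique rows (a sketch reusing the addresses table above):
unique_cities = addresses.select(addresses.city).distinct().collect()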

method  group_by()

group_by(*grouping_items: Any) -> Query
Add a group-by clause to this Query. Variants:
  • group_by(base_tbl): group the rows of a component view by their respective base table rows
  • group_by(expr1, expr2, expr3): group by the given expressions
Grouping takes effect when the Query is used with an aggregation function such as sum() or count(). Parameters:
  • grouping_items (Any): expressions to group by
Returns:
  • Query: A new Query with the specified group-by clause.
Examples: Given the Query book from a table t with all its columns and rows:
book = t.select()
Group the above Query book by the ‘genre’ column (referenced in table t):
query = book.group_by(t.genre)
Use the above Query grouped by genre to count the number of books for each ‘genre’:
query = (
    book.group_by(t.genre).select(t.genre, count=count(t.genre)).show()
)
Use the above Query grouped by genre to compute the total price of books for each ‘genre’:
query = book.group_by(t.genre).select(t.genre, total=sum(t.price)).show()

method  head()

head(n: int = 10) -> ResultSet
Return the first n rows of the Query, in insertion order of the underlying Table. head() is not supported for joins. Parameters:
  • n (int, default: 10): Number of rows to select. Default is 10.
Returns:
  • ResultSet: A ResultSet with the first n rows of the Query.
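For example, to preview the first five rows of a hypothetical table t:
preview = t.head(n=5)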

method  join()

join(
    other: catalog.Table,
    on: exprs.Expr | Sequence[exprs.ColumnRef] | None = None,
    how: plan.JoinType.LiteralType = 'inner'
) -> Query
Join this Query with a table. Parameters:
  • other (catalog.Table): the table to join with
  • on (exprs.Expr | Sequence[exprs.ColumnRef] | None): the join condition, which can be either a) references to one or more columns or b) a boolean expression.
    • column references: implies an equality predicate that matches columns in both this Query and other by name.
      • column in other: A column with that same name must be present in this Query, and it must be unique (otherwise the join is ambiguous).
      • column in this Query: A column with that same name must be present in other.
    • boolean expression: The expressions must be valid in the context of the joined tables.
  • how (plan.JoinType.LiteralType, default: 'inner'): the type of join to perform.
    • 'inner': only keep rows that have a match in both
    • 'left': keep all rows from this Query and only matching rows from the other table
    • 'right': keep all rows from the other table and only matching rows from this Query
    • 'full_outer': keep all rows from both this Query and the other table
    • 'cross': Cartesian product; no on condition allowed
Returns:
  • Query: A new Query.
Examples: Perform an inner join between t1 and t2 on the column id:
join1 = t1.join(t2, on=t2.id)
Perform a left outer join of join1 with t3, also on id (note that we can’t specify on=t3.id here, because that would be ambiguous, since both t1 and t2 have a column named id):
join2 = join1.join(t3, on=t2.id, how='left')
Do the same, but now with an explicit join predicate:
join2 = join1.join(t3, on=t2.id == t3.id, how='left')
Join t with d, which has a composite primary key (columns pk1 and pk2, with corresponding foreign key columns d1 and d2 in t):
query = t.join(d, on=(t.d1 == d.pk1) & (t.d2 == d.pk2), how='left')
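A cross join takes no on condition (a sketch reusing t1 and t2 from the examples above):
pairs = t1.join(t2, how='cross')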

method  limit()

limit(n: int) -> Query
Limit the number of rows in the Query. Parameters:
  • n (int): Number of rows to select.
Returns:
  • Query: A new Query with the specified limited rows.
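Examples: Build a Query for the ten most expensive books in a hypothetical table t (the column price is an assumption), combining order_by() with limit():
query = t.order_by(t.price, asc=False).limit(10)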

method  order_by()

order_by(*expr_list: exprs.Expr, asc: bool = True) -> Query
Add an order-by clause to this Query. Parameters:
  • expr_list (exprs.Expr): expressions to order by
  • asc (bool, default: True): whether to order in ascending order (True) or descending order (False). Default is True.
Returns:
  • Query: A new Query with the specified order-by clause.
Examples: Given the Query book from a table t with all its columns and rows:
book = t.select()
Order the above Query book by two columns (price, pages) in descending order:
query = book.order_by(t.price, t.pages, asc=False)
Order the above Query book by price in descending order, but order the pages in ascending order:
query = book.order_by(t.price, asc=False).order_by(t.pages)

method  sample()

sample(
    n: int | None = None,
    n_per_stratum: int | None = None,
    fraction: float | None = None,
    seed: int | None = None,
    stratify_by: Any = None
) -> Query
Return a new Query that specifies a sample of rows from this Query, drawn in shuffled order. The size of the sample can be specified in three ways:
  • n: the total number of rows to produce as a sample
  • n_per_stratum: the number of rows to produce per stratum as a sample
  • fraction: the fraction of available rows to produce as a sample
The sample can be stratified by one or more columns, which means that the sample will be selected from each stratum separately. The data is shuffled before creating the sample. Parameters:
  • n (int | None): Total number of rows to produce as a sample.
  • n_per_stratum (int | None): Number of rows to produce per stratum as a sample. This parameter is only valid if stratify_by is specified. Only one of n or n_per_stratum can be specified.
  • fraction (float | None): Fraction of available rows to produce as a sample. This parameter is not usable with n or n_per_stratum. The fraction must be between 0.0 and 1.0.
  • seed (int | None): Random seed for reproducible shuffling.
  • stratify_by (Any): If specified, the sample will be stratified by these values.
Returns:
  • Query: A new Query which specifies the sampled rows
Examples: Given the Table person containing the field ‘age’, we can create samples of the table in various ways. Sample 100 rows from the above Table:
query = person.sample(n=100)
Sample 10% of the rows from the above Table:
query = person.sample(fraction=0.1)
Sample 10% of the rows from the above Table, stratified by the column ‘age’:
query = person.sample(fraction=0.1, stratify_by=person.age)
Equal allocation sampling: Sample 2 rows from each age present in the above Table:
query = person.sample(n_per_stratum=2, stratify_by=person.age)
Sampling is compatible with the where clause, so we can also sample from a filtered Query:
query = person.where(person.age > 30).sample(n=100)
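Passing a seed makes the shuffle, and therefore the sample, reproducible across runs:
query = person.sample(fraction=0.1, seed=42, stratify_by=person.age)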

method  select()

select(*items: Any, **named_items: Any) -> Query
Select columns or expressions from the Query. Parameters:
  • items (Any): expressions to be selected
  • named_items (Any): named expressions to be selected
Returns:
  • Query: A new Query with the specified select list.
Examples: Given the Query person from a table t with all its columns and rows:
person = t.select()
Select the columns ‘name’ and ‘age’ (referenced in table t) from the Query person:
query = person.select(t.name, t.age)
Select the columns ‘name’ (referenced in table t) from the Query person, and a named column ‘is_adult’ from the expression age >= 18 where ‘age’ is another column in table t:
query = person.select(t.name, is_adult=(t.age >= 18))

method  show()

show(n: int = 20) -> ResultSet
Execute the Query and return at most n rows as a ResultSet.
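For example, to display up to five rows of a hypothetical table t with columns name and age:
t.select(t.name, t.age).show(5)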

method  tail()

tail(n: int = 10) -> ResultSet
Return the last n rows of the Query, in insertion order of the underlying Table. tail() is not supported for joins. Parameters:
  • n (int, default: 10): Number of rows to select. Default is 10.
Returns:
  • ResultSet: A ResultSet with the last n rows of the Query.
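For example, to inspect the ten most recently inserted rows of a hypothetical table t:
latest = t.tail(n=10)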

method  to_coco_dataset()

to_coco_dataset() -> Path
Convert the Query to a COCO dataset. This Query must return a single json-typed output column in the following format:
{
    'image': PIL.Image.Image,
    'annotations': [
        {
            'bbox': [x: int, y: int, w: int, h: int],
            'category': str | int,
        },
        ...
    ],
}
Returns:
  • Path: Path to the COCO dataset file.
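As a sketch, assuming a table images whose column coco_row already holds a json value in the format above (both names are hypothetical):
path = images.select(images.coco_row).to_coco_dataset()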

method  to_pytorch_dataset()

to_pytorch_dataset(image_format: str = 'pt') -> torch.utils.data.IterableDataset
Convert the Query to a pytorch IterableDataset suitable for parallel loading with torch.utils.data.DataLoader. This method requires pyarrow >= 13, torch, and torchvision. The data is serialized and cached to disk so it can be read efficiently and repeatedly without re-executing the query. Parameters:
  • image_format (str, default: 'pt'): format of the images; can be ‘pt’ (pytorch tensor) or ‘np’ (numpy array). With ‘np’, image columns are returned as RGB uint8 arrays of shape HxWxC. With ‘pt’, image columns are returned as CxHxW tensors with values in [0, 1] and dtype torch.float32 (the format produced by torchvision.transforms.ToTensor()).
Returns:
  • torch.utils.data.IterableDataset: A PyTorch IterableDataset. Columns become fields of the dataset; each row is returned as a dictionary compatible with torch.utils.data.DataLoader's default collation.
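A sketch of parallel loading with a DataLoader, assuming a table images with columns img and label (the names are assumptions):
import torch.utils.data

ds = images.select(images.img, images.label).to_pytorch_dataset(image_format='pt')
loader = torch.utils.data.DataLoader(ds, batch_size=32, num_workers=4)
for batch in loader:
    # with default collation, each batch is a dict keyed by column name
    print(batch['img'].shape)  # e.g. torch.Size([32, 3, H, W])
    break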

method  where()

where(pred: exprs.Expr) -> Query
Filter rows based on a predicate. Parameters:
  • pred (exprs.Expr): the predicate to filter rows
Returns:
  • Query: A new Query with the specified predicate replacing the where-clause.
Examples: Given the Query person from a table t with all its columns and rows:
person = t.select()
Filter the above Query person to only include rows where the column ‘age’ (referenced in table t) is greater than 30:
query = person.where(t.age > 30)

attr  schema

schema: dict[str, ColumnType]
Column names and types in this Query.
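For example, to inspect the output columns of a Query (a sketch assuming a table t with columns name and age):
query = t.select(t.name, is_adult=(t.age >= 18))
print(query.schema)  # expected keys: 'name' and 'is_adult', mapped to their ColumnTypes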