Pixeltable UDFs for Ollama local models. Provides integration with Ollama for running large language models locally, including chat completions and embeddings.

UDFs


chat() udf

Generate the next message in a chat with a provided model. Signature:
chat(
    messages: Json,
    model: String,
    tools: Optional[Json],
    format: Optional[String],
    options: Optional[Json]
) -> Json
Parameters:
  • messages (Json): The messages of the chat history; each message is a dict with 'role' and 'content' fields.
  • model (String): The model name.
  • tools (Optional[Json]): Tools for the model to use.
  • format (Optional[String]): The format of the response; must be one of 'json' or None.
  • options (Optional[Json]): Additional options to pass to the chat call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.

embed() udf

Generate embeddings from a model. Signature:
embed(
    input: String,
    model: String,
    truncate: Bool,
    options: Optional[Json]
) -> Array[(None,), Float]
Parameters:
  • input (String): The input text to generate embeddings for.
  • model (String): The model name.
  • truncate (Bool): If True, truncates the end of each input to fit within the model's context length. If False, an error is returned when the context length is exceeded.
  • options (Optional[Json]): Additional options to pass to the embed call. For details, see the Valid Parameters and Values section of the Ollama documentation.

generate() udf

Generate a response for a given prompt with a provided model. Signature:
generate(
    prompt: String,
    model: String,
    suffix: String,
    system: String,
    template: String,
    context: Optional[Json],
    raw: Bool,
    format: Optional[String],
    options: Optional[Json]
) -> Json
Parameters:
  • prompt (String): The prompt to generate a response for.
  • model (String): The model name.
  • suffix (String): The text to append after the model's response.
  • system (String): System message.
  • template (String): Prompt template to use.
  • context (Optional[Json]): The context parameter returned from a previous call to generate().
  • raw (Bool): If True, no formatting will be applied to the prompt.
  • format (Optional[String]): The format of the response; must be one of 'json' or None.
  • options (Optional[Json]): Additional options to pass to the generate call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.