Pixeltable UDFs for Ollama local models. These functions integrate with Ollama to run large language models locally, providing chat completions, text generation, and embeddings.

udf chat()

chat(
    messages: Json,
    *,
    model: String,
    tools: Json | None = None,
    format: String | None = None,
    options: Json | None = None
) -> Json
Generate the next message in a chat with a provided model; a usage sketch follows the parameter list. Parameters:
  • messages (Json): The chat history: a list of messages, each with 'role' and 'content' fields.
  • model (String): The model name.
  • tools (Json | None): Tools for the model to use.
  • format (String | None): The format of the response; either 'json' or None.
  • options (Json | None): Additional options to pass to the chat call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
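A minimal sketch of calling chat() from a computed column, assuming a local Ollama server is running and the llama3.2 model has been pulled; the table and column names are illustrative:

```python
import pixeltable as pxt
from pixeltable.functions.ollama import chat

# Create a table with a string column to hold user prompts.
t = pxt.create_table('ollama_demo', {'prompt': pxt.String})

# Build the chat history inline; column references are filled in per row.
messages = [{'role': 'user', 'content': t.prompt}]

# Store the full JSON response from Ollama in one computed column ...
t.add_computed_column(response=chat(messages, model='llama3.2'))
# ... and pull the assistant's text out of it in another.
t.add_computed_column(answer=t.response.message.content)

t.insert([{'prompt': 'Why is the sky blue?'}])
print(t.select(t.answer).collect())
```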

udf embed()

embed(
    input: String,
    *,
    model: String,
    truncate: Bool = True,
    options: Json | None = None
) -> Array[(None,), Float]
Generate embeddings from a model; see the sketch after the parameter list. Parameters:
  • input (String): The input text to generate embeddings for.
  • model (String): The model name.
  • truncate (Bool): If True, truncates the end of each input to fit within the model's context length. If False, an error is returned when the context length is exceeded.
  • options (Json | None): Additional options to pass to the embed call. For details, see the Valid Parameters and Values section of the Ollama documentation.
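A minimal sketch of using embed() both as a computed column and, via Pixeltable's general .using() binding, as the embedding function for a similarity index; it assumes the nomic-embed-text model has been pulled, and the names are illustrative:

```python
import pixeltable as pxt
from pixeltable.functions.ollama import embed

t = pxt.create_table('embed_demo', {'text': pxt.String})

# Store the embedding vector for each row in a computed column.
t.add_computed_column(embedding=embed(t.text, model='nomic-embed-text'))

# Alternatively, bind the model name with .using() and back an
# embedding index for similarity search.
t.add_embedding_index('text', string_embed=embed.using(model='nomic-embed-text'))

t.insert([{'text': 'Pixeltable runs Ollama models locally.'}])
sim = t.text.similarity('local language models')
print(t.order_by(sim, asc=False).limit(3).select(t.text, sim=sim).collect())
```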

udf generate()

generate(
    prompt: String,
    *,
    model: String,
    suffix: String = '',
    system: String = '',
    template: String = '',
    context: Json | None = None,
    raw: Bool = False,
    format: String | None = None,
    options: Json | None = None
) -> Json
Generate a response for a given prompt with a provided model; see the sketch after the parameter list. Parameters:
  • prompt (String): The prompt to generate a response for.
  • model (String): The model name.
  • suffix (String): Text to append after the model's response.
  • system (String): The system message.
  • template (String): The prompt template to use.
  • context (Json | None): The context parameter returned from a previous call to generate(); can be used to carry a short conversational memory across calls.
  • raw (Bool): If True, no formatting is applied to the prompt.
  • format (String | None): The format of the response; either 'json' or None.
  • options (Json | None): Additional options to pass to the generate call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
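A minimal sketch of generate() in a computed column, again assuming a local Ollama server with llama3.2 pulled; the system message and sampling options shown are illustrative:

```python
import pixeltable as pxt
from pixeltable.functions.ollama import generate

t = pxt.create_table('generate_demo', {'prompt': pxt.String})

# Sampling parameters are passed through the `options` dict.
t.add_computed_column(
    completion=generate(
        t.prompt,
        model='llama3.2',
        system='You are a concise technical assistant.',
        options={'temperature': 0.2, 'top_p': 0.9},
    )
)
# Ollama's generate response carries the generated text in its 'response' field.
t.add_computed_column(text=t.completion.response)

t.insert([{'prompt': 'Explain what a UDF is in one sentence.'}])
print(t.select(t.text).collect())
```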