udf chat()
- messages (Json): The messages of the chat.
- model (String): The model name.
- tools (Json | None): Tools for the model to use.
- format (String | None): The format of the response; must be one of 'json' or None.
- options (Json | None): Additional options to pass to the chat call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
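A minimal sketch of the argument shapes chat() expects; the model name, message contents, and option values below are illustrative, not part of this reference:

```python
# Hypothetical chat() arguments, shown as plain Python values.
# messages: a list of role/content dicts, one per chat turn.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# options: sampling parameters forwarded to the chat call.
options = {"temperature": 0.7, "top_p": 0.9, "top_k": 40, "max_tokens": 128}

kwargs = {
    "messages": messages,
    "model": "llama3.2",  # any model pulled into the local Ollama instance
    "tools": None,        # no tool definitions in this sketch
    "format": None,       # None -> plain text; 'json' -> JSON-formatted response
    "options": options,
}
```

Passing format='json' instructs the model to emit valid JSON; with None the response is returned as plain text.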
udf embed()
- input (String): The input text to generate embeddings for.
- model (String): The model name.
- truncate (Bool): Truncates the end of each input to fit within the context length. If False and the context length is exceeded, an error is returned.
- options (Json | None): Additional options to pass to the embed call. For details, see the Valid Parameters and Values section of the Ollama documentation.
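A sketch of the embed() argument shapes; the model name and option values are hypothetical:

```python
# Hypothetical embed() arguments, shown as plain Python values.
kwargs = {
    "input": "A short sentence to embed.",
    "model": "nomic-embed-text",  # any embedding model available in Ollama
    "truncate": True,   # drop tokens past the context window instead of erroring
    "options": {"num_ctx": 2048},  # forwarded to the embed call (illustrative)
}
```

With truncate=False, over-long input raises an error rather than being silently shortened, which is the safer choice when every token of the input matters.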
udf generate()
- prompt (String): The prompt to generate a response for.
- model (String): The model name.
- suffix (String): The text after the model response.
- format (String | None): The format of the response; must be one of 'json' or None.
- system (String): System message.
- template (String): Prompt template to use.
- context (Json | None): The context parameter returned from a previous call to generate().
- raw (Bool): If True, no formatting will be applied to the prompt.
- options (Json | None): Additional options to pass to the generate call, such as max_tokens, temperature, top_p, and top_k. For details, see the Valid Parameters and Values section of the Ollama documentation.
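A sketch of the generate() argument shapes; all values below are hypothetical:

```python
# Hypothetical generate() arguments, shown as plain Python values.
kwargs = {
    "prompt": "Write a haiku about autumn.",
    "model": "llama3.2",      # any model pulled into the local Ollama instance
    "suffix": "",             # text appended after the model response
    "format": "json",         # 'json' for JSON output, None for plain text
    "system": "You are a poet.",
    "template": "",           # custom prompt template, if any
    "context": None,          # pass a prior call's context to continue a session
    "raw": False,             # True skips all prompt formatting/templating
    "options": {"temperature": 0.8, "top_k": 50},
}
```

To continue a conversation, a follow-up call would pass the context value returned by the previous generate() call instead of None; with raw=True the prompt is sent to the model verbatim, bypassing system and template.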