Source: factory.ai/package.mcp/ObjAiMcpOllama.py
Ollama implementation for the MCP AI interface.
| Method | Signature | Description |
|---|---|---|
| prompt | `prompt(role: str = '', prompt: str = '', image_base64: str = '', options: dict \| None = None) -> str` | Send a prompt to the Ollama model and return the response. |
| embed | `embed(text: str = '') -> list` | Generate an embedding vector for text using the local Ollama model. |
| chat | `chat(message: str, role: str = '', options: dict \| None = None) -> str` | Send a message in a multi-turn conversation, accumulating conversation history across calls. |
| reset_context | `reset_context() -> None` | Clear accumulated multi-turn conversation history. |
| stream_prompt | `stream_prompt(role: str = '', prompt: str = '', options: dict \| None = None)` | Stream a prompt response token by token from the Ollama model. |
| get_model_registry | `get_model_registry() -> list[dict]` | Return the curated model list. |
| pull_model_set | `pull_model_set(default_only: bool = True) -> list[str]` | Pull models from the curated registry. |
| smoke_test | `smoke_test() -> bool` | Send a minimal prompt to verify the model actually responds. |
| warm_model | `warm_model() -> bool` | Pre-load the model into VRAM so the first real request is fast. |
| get_loaded_models | `get_loaded_models() -> list[str]` | Return the names of models currently loaded into VRAM. |
| get_capabilities | `get_capabilities() -> dict` | Inspect model metadata to detect capabilities. |