Ported 2026-02-23; the implementation now delegates to ObjAiMcpOllama. Prefer ObjAiVision or the MCP layer in new code:

```python
from ObjAiVision import ObjAiVision

vision = ObjAiVision(db=0, model="llava")
result = vision.analyse_url("https://example.com/image.png")
```
LLaVA (Large Language and Vision Assistant) combines visual
understanding with language generation via the Ollama runtime.
```python
from ObjAILlmLlava import ObjPrompt

p = ObjPrompt(DB=0, model="llava")

# From a file path
response = p.query_prompt(
    role="You are an analyst.",
    prompt="Describe this receipt.",
    image_base64="/path/to/image.jpg",
)

# From a base64-encoded string
response = p.query_prompt(
    role="You are an analyst.",
    prompt="What do you see?",
    image_base64=base64_string,
)
```
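Here `base64_string` is any base64-encoded image payload. A minimal sketch of producing one with the standard library (the file path is a placeholder):

```python
import base64

# Read an image file and encode it as a base64 string for query_prompt.
with open("/path/to/image.jpg", "rb") as fh:
    base64_string = base64.b64encode(fh.read()).decode("ascii")
```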
| Method | Description |
|---|---|
| `__init__(DB, model)` | Initialises the provider; defaults to `llava` |
| `set_model(model)` | Switches the active model |
| `query_prompt(role, prompt, image_base64)` | Runs a vision prompt; accepts a file path or a base64 string |
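Because the implementation delegates to the Ollama runtime (via ObjAiMcpOllama), the request that ultimately reaches the model is a standard Ollama vision call. A minimal sketch against Ollama's documented `/api/generate` endpoint; the endpoint, payload shape, and default local host/port come from Ollama itself, not from this module, and the image path is a placeholder:

```python
import base64
import json
import urllib.request

# Encode the image and call Ollama's /api/generate directly (assumes a local
# Ollama server on the default port with the llava model already pulled).
with open("/path/to/image.jpg", "rb") as fh:
    image_b64 = base64.b64encode(fh.read()).decode("ascii")

payload = json.dumps({
    "model": "llava",
    "prompt": "Describe this receipt.",
    "images": [image_b64],
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```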
To build the module as an optimised extension in place (Python 3 language level, with an annotated HTML report):

```sh
cythonize -3 -a -i ObjAILlmLlava.py
```
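After an in-place build the compiled extension sits next to the source file and is imported under the same name; a quick, hypothetical smoke check:

```python
# The compiled extension is picked up under the same import path as the .py file.
from ObjAILlmLlava import ObjPrompt

p = ObjPrompt(DB=0, model="llava")
print(type(p))
```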