Source: factory.core/ObjAI.py
Handles AI-related operations, including model management, embeddings, and prompting.
| Method | Signature | Description |
|---|---|---|
| set_model | set_model(model: str = '') -> None | Sets the AI model to use for prompting. |
| has_ollama | has_ollama() -> bool | Return True if the Ollama service is installed and reachable. |
| has_gpu | has_gpu() -> bool | Return True if an AI accelerator is physically present. |
| has_ai | has_ai() -> bool | Return True if this machine may run AI workloads. |
| vram_gb | vram_gb() -> int | Return total VRAM across detected GPUs, in GB, or 0 when no GPU is detected. |
| cpu_cores | cpu_cores() -> int | Return the number of logical CPU cores, or 1 as a floor. |
| get_model | get_model() -> str | Return the generation (chat) model for this machine. |
| get_model_vision | get_model_vision() -> str | Return the vision (multimodal) model for this machine. |
| get_model_embedding | get_model_embedding() -> str | Return the embedding model for this machine. |
| check_service_on_gpu | check_service_on_gpu(service_name: str = 'ollama') | Checks if a given service is running on the GPU. |
| local_process_text | local_process_text(text_to_process: str) -> str | Processes a string by removing newlines, underscores, and asterisks. |
| get_documents_query | get_documents_query(sql = '') | Executes a SQL query and processes the results to populate the documents list. |
| compute_embeddings | compute_embeddings(embedding_model: str = '') | Computes embeddings for the documents in the documents list and adds them to the collection. |
| find_prompt_match | find_prompt_match(prompt: str = '') -> str | Finds the best matching document in the collection for a given prompt. |
| llm_factory | llm_factory(model_type: str = '', model_set: str = '') -> Optional[ModuleType] | A factory for creating LLM objects. |
| mcp_factory | mcp_factory(provider: str = '') -> Optional[ModuleType] | A factory for creating MCP provider objects. |
| list_providers | list_providers() -> list[str] | Return available MCP provider names by scanning for installed provider modules. |
| prompt | prompt(role: str = '', prompt: str = '', image_base64: str = '') -> str | Sends a prompt to the AI model and returns the response (see the usage sketch after the table). |
| capture | capture(url: str, provider: str = 'playwright', width: int = 1280, height: int = 1200) -> str | Capture an image from url using the named MCP provider. |
| review_sql | review_sql(package: str = '', group_name: str = '', model: str = 'mistral', limit: int = 30, fetch_limit: int = 200, exec_limit: int = 50, chunk_size: int = 15) -> str | Fetch SQL from def_calculations and ask Ollama to review it for correctness. |
| review_efficiency | review_efficiency(package: str = '', group_name: str = '', model: str = 'mistral', limit: int = 30, fetch_limit: int = 200, exec_limit: int = 50, chunk_size: int = 15) -> str | Fetch SQL from def_calculations and ask Ollama to review it for efficiency. |
| workflow_prompt | workflow_prompt(model: str = '', role: str = '', prompt: str = '', image_base64: str = '') -> str | A wrapper around the prompt method for use in workflows. |
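
Taken together, the methods above imply a typical retrieval-augmented flow: check whether the machine can run AI work, pick a model, load and embed documents, then match a prompt against the collection before sending it to the model. The sketch below is illustrative only; the import path, the no-argument ObjAI() constructor, and the example SQL are assumptions, while the method names and signatures come from the table.

```python
# Illustrative sketch only: the import path, ObjAI() constructor, and the SQL
# statement are assumptions; the methods and signatures follow the table above.
from factory.core.ObjAI import ObjAI

ai = ObjAI()

# Only run AI work when the machine qualifies (e.g. Ollama reachable, GPU/VRAM available).
if ai.has_ai():
    ai.set_model(ai.get_model())                      # generation model for this machine

    # Build the document collection: load rows, embed them, then match a prompt.
    ai.get_documents_query(sql="SELECT title, body FROM documents")   # assumed schema
    ai.compute_embeddings(embedding_model=ai.get_model_embedding())
    context = ai.find_prompt_match(prompt="How do I configure the factory?")

    # Pass the best-matching document as context alongside the question.
    answer = ai.prompt(
        role="You are a concise assistant.",
        prompt=f"{context}\n\nQuestion: How do I configure the factory?",
    )
    print(answer)
```

Note that find_prompt_match returns a single best-matching document, so assembling a larger context (for example, concatenating several matches) would have to happen in calling code.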
The module also includes configuration, preflight, and test helpers:

- Read a single prompt value from ObjAI.yaml without requiring an ObjAI instance.
- Runs a preflight check to ensure the AI environment is set up correctly.
- Runs a test of the AI model with a sample prompt.
- Runs a test of the AI model with a sample image.
- Runs a chained test of the AI model with an image and a follow-up prompt (see the sketch after this list).
- Runs a test of the AI model with a RAG query.
- Runs a test of the MCP functionality with OpenAI.
- Runs a test of the MCP functionality with Anthropic Claude.
- Runs a test of the MCP functionality with Google Gemini.
- Runs a test of the MCP functionality with Ollama.
- Runs a test of the MCP functionality with Hugging Face.
- Runs a test of the MCP functionality with Mistral AI.
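
The chained image test described above can be approximated with the public methods from the table: capture a screenshot, describe it with the vision model, then ask a follow-up question in plain text. In the sketch below, the import path, the ObjAI() constructor, and the assumption that capture() returns a base64-encoded image are not confirmed by this document.

```python
# Illustrative sketch of a chained vision prompt; see the assumptions noted above.
from factory.core.ObjAI import ObjAI

ai = ObjAI()
ai.set_model(ai.get_model_vision())       # switch to the multimodal model

# Capture a screenshot through the default MCP provider (playwright).
image_b64 = ai.capture("https://example.com", provider="playwright")

# Describe the image, then follow up with a text-only question about it.
description = ai.prompt(
    role="You are an image analyst.",
    prompt="Describe this page.",
    image_base64=image_b64,
)
follow_up = ai.prompt(
    role="You are an image analyst.",
    prompt=f"Given this description:\n{description}\nList the main navigation links.",
)
print(follow_up)
```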