Model Integration¶
Goal¶
Connect your model in a way that is robust for multi-step agent runs.
Default Engine-driven model path¶
When AgentModule.decide(...) returns None:
- Engine calls `agent.prepare(state)`.
- Engine adds the system prompt from `build_system_prompt`.
- Engine retrieves memory messages.
- Engine calls `agent.llm(messages)`.
- Parser maps the model output to a `Decision`.
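The steps above can be sketched as plain Python. This is an illustrative outline, not qitos's actual engine code; the `engine_step` function, the message-dict shapes, and the parser's `parse(...)` method name are assumptions made for this sketch.

```python
def engine_step(agent, state, observation, memory):
    """Hypothetical engine step showing the default model path
    (illustrative only; the real qitos engine may differ)."""
    decision = agent.decide(state, observation)
    if decision is not None:
        return decision  # agent decided directly, no model call needed

    user_msg = agent.prepare(state)  # 1. build the user message
    messages = (
        [{"role": "system", "content": agent.build_system_prompt(state)}]  # 2. system prompt
        + memory                                                           # 3. memory messages
        + [{"role": "user", "content": user_msg}]
    )
    raw = agent.llm(messages)              # 4. model call
    return agent.model_parser.parse(raw)   # 5. map output to a Decision
```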
Minimal model wiring¶
```python
from qitos import AgentModule
from qitos.kit.parser import ReActTextParser

class MyAgent(AgentModule):
    def __init__(self, llm):
        super().__init__(tool_registry=..., llm=llm, model_parser=ReActTextParser())

    def build_system_prompt(self, state):
        return "You are a precise coding assistant."

    def prepare(self, state):
        # The engine calls prepare(state); prior observations reach the
        # model through the memory messages the engine retrieves.
        return f"Task: {state.task}"

    def decide(self, state, observation):
        return None  # defer to the engine-driven model path
```
Config recommendation¶
Use environment variables rather than hardcoded API keys.
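A minimal sketch of that pattern, assuming a helper of your own; the variable name `MY_LLM_API_KEY` and the function `load_api_key` are illustrative, not qitos conventions:

```python
import os

def load_api_key(var_name: str = "MY_LLM_API_KEY") -> str:
    # Read the key from the environment so it never lands in source control.
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before running the agent.")
    return key
```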
Reliability checklist¶
- Parser supports your output format (JSON/XML/ReAct/function-like).
- Prompt instructs exact output protocol.
- Parser has fallback behavior for malformed/truncated outputs.
- Trace includes model name and parser name for audit.
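To make the fallback item concrete, here is one way a JSON-protocol parser might degrade gracefully on malformed or truncated output. This is a sketch of the idea, not the qitos parser API; `parse_with_fallback` and its error shape are names chosen for illustration.

```python
import json

def parse_with_fallback(raw: str) -> dict:
    """Try strict JSON first, then salvage the outermost {...} span,
    else return a structured error the agent can surface next turn."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Model output often wraps JSON in prose; try the outermost braces.
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(raw[start:end + 1])
        except json.JSONDecodeError:
            pass
    # Last resort: a well-formed error instead of an exception.
    return {"error": "malformed_output", "raw": raw}
```

Returning a structured error instead of raising keeps a multi-step run alive: the agent can feed the failure back to the model and ask it to re-emit valid output.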