Agent: dynamic models from aibroker, MCP agent tool per service, error display #27
Reference: lhumina_code/hero_router#27
No description provided.
Changes needed
1. Dynamic model list from hero_aibroker
The agent tab model dropdown is hard-coded to 3 models (gpt-4o-mini, gpt-4o, claude-sonnet). It should dynamically fetch all available models from hero_aibroker across all connected providers.
- `GET /api/models` endpoint that proxies the `ai.models` RPC call to hero_aibroker
- Model `<select>` populated with JS on tab open

2. MCP agent tool per service

Each service already gets per-service MCP tools from its OpenRPC spec. Add an `agent_run` tool to every service MCP so Claude Desktop / MCP clients can invoke the agent directly.

- Add the `agent_run` tool to the `tools/list` response alongside the OpenRPC-derived tools
- Handle `agent_run` in `tools/call` by delegating to `agent::run_agent()`
- Input schema: `{prompt: string, model?: string, max_retries?: integer}`

3. Better error display in agent UI
When the agent fails, show the actual error and script details prominently instead of just a generic failure message. Show per-attempt error details when retries occur.
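The `tools/call` delegation described in item 2 could look roughly like the minimal Rust sketch below. The `AgentArgs` struct and the `run_agent` stub are assumptions standing in for the real `agent::run_agent()`, which generates and executes Python against the service; only the tool name and the `{prompt, model?, max_retries?}` schema come from the issue.

```rust
// Hypothetical argument struct mirroring the proposed input schema:
// {prompt: string, model?: string, max_retries?: integer}
#[derive(Debug)]
struct AgentArgs {
    prompt: String,
    model: Option<String>,     // falls back to a broker default when absent
    max_retries: Option<u32>,  // optional retry budget
}

// Stand-in for agent::run_agent(); the real function generates and
// executes Python code against the service.
fn run_agent(args: &AgentArgs) -> Result<String, String> {
    Ok(format!("ran agent with prompt: {}", args.prompt))
}

// tools/call dispatch: OpenRPC-derived tools are handled elsewhere;
// `agent_run` is special-cased and delegated to the agent.
fn handle_tools_call(tool: &str, args: AgentArgs) -> Result<String, String> {
    match tool {
        "agent_run" => run_agent(&args),
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    let result = handle_tools_call(
        "agent_run",
        AgentArgs { prompt: "list users".into(), model: None, max_retries: Some(3) },
    );
    assert!(result.is_ok());
    println!("{result:?}");
}
```

Keeping `agent_run` as a special case in the dispatcher, rather than generating it from the OpenRPC spec, matches the issue's framing: it sits alongside the derived tools without changing how they are built.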
Implemented on branch `development_27` (commit `6f0bf37`). Changes:

- Dynamic models endpoint (`GET /api/models`) — proxies the `ai.models` JSON-RPC call to hero_aibroker over a Unix socket and returns all available models from all connected providers.
- Dynamic model dropdown — replaces the 3 hard-coded model options (GPT-4o Mini, GPT-4o, Claude Sonnet) with a JS fetch to `/api/models` on page load, populating the dropdown with all available models.
- `agent_run` MCP tool per service — every service's MCP server now includes an `agent_run` tool in `tools/list`. When called via `tools/call`, it delegates to `agent::run_agent()`, which generates and executes Python code against the service. This lets MCP clients (like Claude Code) use the AI agent through the standard MCP protocol.
- Improved error display — agent failures now show a styled error alert with the actual error message, plus any LLM-generated explanation. Network errors include guidance about checking hero_aibroker. Success results get simple markdown-like formatting.
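The JSON-RPC request that the models endpoint sends to hero_aibroker might look like the sketch below. The `ai.models` method name comes from the description above; the request `id` and the empty `params` array are assumptions about the framing, not confirmed details of the broker protocol.

```rust
// Build the JSON-RPC 2.0 request body for the `ai.models` call that the
// GET /api/models handler proxies to hero_aibroker over its Unix socket.
// (Hand-rolled string here to keep the sketch dependency-free; the real
// code would typically use serde_json.)
fn models_request(id: u64) -> String {
    format!(r#"{{"jsonrpc":"2.0","id":{id},"method":"ai.models","params":[]}}"#)
}

fn main() {
    let req = models_request(1);
    assert!(req.contains(r#""method":"ai.models""#));
    println!("{req}");
}
```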
Files changed:

- `agent.rs` — made `send_unix_rpc` and `aibroker_socket_path` public, added `fetch_models()`
- `mcp.rs` — added the `agent_run` tool to `tools/list`, handled it in `tools/call`
- `routes.rs` — added the `GET /api/models` route and `models_handler`
- `service.html` — dynamic model loading, error formatting helpers, improved result display

All tests pass (7/7).
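From the client side, an MCP client invoking the new tool would send a `tools/call` request along these lines. The payload is illustrative: only the tool name and the argument keys come from the issue; the prompt text and `id` are made up for the example.

```rust
// Construct an example MCP tools/call request for the agent_run tool,
// as a client like Claude Code might send it.
fn agent_run_call(prompt: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{{"name":"agent_run","arguments":{{"prompt":"{prompt}","max_retries":3}}}}}}"#
    )
}

fn main() {
    let req = agent_run_call("list all users in this service");
    assert!(req.contains(r#""name":"agent_run""#));
    println!("{req}");
}
```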