Track A: Token/cost display
- Read agent usage attrs (session_prompt_tokens, session_completion_tokens,
session_estimated_cost_usd) after run_conversation in streaming.py
- Add input_tokens, output_tokens, estimated_cost fields to Session model
- Include usage in done SSE event payload
- Store usage on S.lastUsage in messages.js done handler
- Render usage badge below last assistant message (input/output/cost)
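A minimal sketch of assembling the usage payload for the done SSE event. The three attribute names come from the commit; the helper name and the use of `getattr` defaults are assumptions, not the actual implementation:

```python
def build_usage_payload(agent):
    """Collect session usage attrs from the agent after run_conversation.

    Missing attributes default to zero so the 'done' SSE event always
    carries a complete usage object for the frontend badge.
    """
    return {
        "input_tokens": getattr(agent, "session_prompt_tokens", 0),
        "output_tokens": getattr(agent, "session_completion_tokens", 0),
        "estimated_cost": getattr(agent, "session_estimated_cost_usd", 0.0),
    }
```

The frontend can then store this object on S.lastUsage verbatim and render it below the last assistant message.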
Track B: Subagent delegation cards
- Add subagent_progress to toolIcon map with shuffle emoji
- Special-case subagent_progress in buildToolCard: "Subagent" label,
strip double emoji from preview, add tool-card-subagent CSS class
- Indented border-left styling for subagent cards
- Clean delegate_task display name
Track C: Skill picker in cron create form
- Add skill search input + tag chips to cron create form HTML
- Skill picker JS in panels.js: search/filter, click-to-add tags,
remove tag chips, pre-fetch skill list on form open
- submitCronCreate sends skills array in POST body
- Skill picker dropdown + tag CSS
Track D: Skill linked files viewer
- Add file query param to /api/skills/content endpoint
- Serve linked files from skill directory with path traversal protection
- Ensure linked_files key always present in skill content response
- Render linked files section below SKILL.md content in preview panel
- openSkillFile function for viewing individual linked files
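The path traversal protection might look like the following sketch (the function name is hypothetical; the technique of resolving both paths and checking containment is standard):

```python
from pathlib import Path

def resolve_linked_file(skill_dir: str, requested: str) -> Path:
    """Resolve a linked file inside the skill directory, rejecting any
    path that escapes it (e.g. '../../etc/passwd' or a symlinked parent)."""
    base = Path(skill_dir).resolve()
    target = (base / requested).resolve()
    # resolve() collapses '..' segments; a safe path stays under base
    if base not in target.parents and target != base:
        raise ValueError("path escapes skill directory")
    return target
```

The endpoint would call this with the `file` query param and return 400/404 on ValueError rather than leaking the attempted path.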
Track E: Bug fixes and code quality
- Expand Session.__init__ and compact() to readable multi-line format
- Remove inline import json as _j2 inside loop in streaming.py
- Fix tool_calls: capture args from assistant messages, skip unresolved names
- Store args snapshot in persisted tool_calls for reload display
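A sketch of the args-capture pass, assuming OpenAI-style message dicts (the function name and exact message shape are assumptions):

```python
def collect_tool_calls(messages):
    """Walk session messages and snapshot tool-call args keyed by call id.

    Args live on the assistant message that issued the call, not on the
    tool-result message, so they must be captured here to survive a
    session reload. Calls whose function name never resolved are skipped.
    """
    calls = {}
    for msg in messages:
        if msg.get("role") != "assistant":
            continue
        for tc in msg.get("tool_calls") or []:
            fn = tc.get("function") or {}
            if not fn.get("name"):  # skip unresolved names
                continue
            calls[tc["id"]] = {
                "name": fn["name"],
                "args": fn.get("arguments", ""),
            }
    return calls
```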
6 new tests. Total: 421 (409 passing).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two fixes for Camanji rate limit UX:
1. api/streaming.py — pass fallback_model from profile config to AIAgent
The agent already supports fallback_model (a dict with provider/model/base_url)
for automatic rate-limit recovery, but streaming.py never read it from config
or passed it to AIAgent. Now reads get_config().get('fallback_model') at
call time (not module-level snapshot) and passes it through.
Also reads platform_toolsets.cli from the active profile's config at call
time so profiles with custom toolset lists use the right tools.
Camanji has fallback_model: {provider: openrouter, model: anthropic/claude-sonnet-4.6}
so hitting the direct-Anthropic rate limit will now automatically retry via
OpenRouter before giving up.
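The call-time read can be sketched as below. The `fallback_model` and `platform_toolsets.cli` keys come from the commit; the helper name and return shape are illustrative:

```python
def build_agent_kwargs(get_config):
    """Read fallback_model and the CLI toolset list at call time, not at
    import time, so a profile switch or config edit applies to the very
    next request instead of requiring a server restart."""
    cfg = get_config()  # fresh snapshot per request
    return {
        "fallback_model": cfg.get("fallback_model"),  # {provider, model, base_url}
        "toolsets": (cfg.get("platform_toolsets") or {}).get("cli"),
    }
```

The key design point is passing the getter, not a config dict captured at import: a module-level snapshot would pin whatever profile was active when the server booted.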
2. api/streaming.py + static/messages.js — show error inline, not 'Connection lost'
Previously: agent threw -> put('error', msg) -> SSE connection closed ->
browser's network-level 'error' event fired -> generic 'Connection lost'.
The actual error message was invisible to the user.
Fix: renamed server-side error event to 'apperror' (distinct from the SSE
spec's network error event). Added source.addEventListener('apperror', ...)
in messages.js that renders the error as a styled assistant message:
⏱️ Rate limit reached: <full message>
*Rate limit reached. Fallback model exhausted. Try again in a moment.*
Also added source.addEventListener('warning', ...) for non-fatal notices
(future use: fallback-activated status bar update).
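Server-side, emitting a custom-named SSE event is just a framing detail; a minimal sketch (helper name is illustrative):

```python
def sse_event(event: str, data: str) -> str:
    """Format a named SSE event. A custom name like 'apperror' avoids
    colliding with the browser EventSource's built-in 'error' event,
    which fires on any network-level failure."""
    lines = "".join(f"data: {line}\n" for line in data.splitlines() or [""])
    return f"event: {event}\n{lines}\n"
```

On the client, `source.addEventListener('apperror', ...)` receives this with the full message in `event.data`, while the generic `onerror` handler stays reserved for real connection loss.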
Tests: 426 passed, 0 failed.
Add full profile management to the web UI, matching the hermes-agent CLI
profile system. Profiles are isolated HERMES_HOME instances with their own
config, skills, memory, cron, and API keys.
Backend: new api/profiles.py wrapping hermes_cli.profiles, dynamic config
reloading, 5 new API endpoints, profile-aware path resolution, HERMES_HOME
env save/restore in streaming, module-level cache patching for skills_tool
and cron/jobs.
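The HERMES_HOME save/restore in streaming might be a context manager along these lines (a sketch; the actual code may inline the try/finally):

```python
import os
from contextlib import contextmanager

@contextmanager
def hermes_home(path):
    """Temporarily point HERMES_HOME at a profile's directory and restore
    the previous value (or unset it) afterwards, even if the agent raises."""
    prev = os.environ.get("HERMES_HOME")
    os.environ["HERMES_HOME"] = str(path)
    try:
        yield
    finally:
        if prev is None:
            os.environ.pop("HERMES_HOME", None)
        else:
            os.environ["HERMES_HOME"] = prev
```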
Frontend: profile chip in topbar with dropdown, Profiles sidebar panel with
CRUD UI, boot-time profile fetch, cascade refresh on switch.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Tracebacks exposed file paths, module names, and potentially secret
values from local variables. Now logged server-side only; clients
receive a generic error message.
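The pattern is log-then-sanitize; a minimal sketch (handler name and message wording are illustrative):

```python
import logging
import traceback

log = logging.getLogger(__name__)

def safe_error_response(exc: BaseException) -> dict:
    """Log the full traceback server-side; return only a generic message
    to the client so file paths, module names, and local-variable values
    are never exposed."""
    log.error("unhandled error:\n%s", "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)))
    return {"error": "An internal error occurred."}
```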
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Update _PROVIDER_MODELS['minimax'] from stale ABAB 6.5 models to
current MiniMax-M2.7/M2.5/M2.1 lineup (matching hermes-agent upstream)
- Update _PROVIDER_MODELS['zai'] from GLM-4 to current GLM-5/4.7/4.5
lineup (matching hermes-agent upstream)
- Extend resolve_model_provider() to also return base_url from config.yaml,
so providers with custom endpoints (MiniMax, Z.AI) are routed correctly
- Pass base_url to AIAgent in both streaming and sync chat paths
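A sketch of the extended resolver. The name resolve_model_provider comes from the commit; the known-provider set, the return shape, and where base_url lives in the config dict are assumptions:

```python
def resolve_model_provider(model_id, config):
    """Strip a known direct-API provider prefix from the model ID and
    return (model, provider, base_url). The prefix check covers every
    known provider, not just the configured one, so a model restored
    from a different provider group still routes correctly."""
    known = {"anthropic", "openai", "minimax", "zai"}  # illustrative set
    provider = (config.get("model") or {}).get("provider")
    prefix, _, rest = model_id.partition("/")
    if rest and prefix in known:
        model_id, provider = rest, prefix
    base_url = (config.get(provider) or {}).get("base_url") if provider else None
    return model_id, provider, base_url
```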
Fixes #6
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace duplicated inline provider resolution in routes.py and streaming.py
with a shared resolve_model_provider() helper in config.py.
Improvements over original:
- If model ID has a prefix matching any known direct-API provider
(not just the config provider), strip it and route correctly.
This handles edge cases like localStorage restoring a model from
a different provider group.
- Single source of truth for the resolution logic.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When the model dropdown sends a prefixed ID like "anthropic/claude-xxx",
AIAgent interprets the provider/model format as an OpenRouter path and
routes through OpenRouter instead of the direct Anthropic API.
Fix: read the configured provider from config.yaml model section. If
the model ID starts with the configured provider name followed by "/",
strip that prefix and pass the provider explicitly to AIAgent. This
ensures direct API providers (Anthropic, OpenAI, etc.) are used when
configured, regardless of the model ID format from the dropdown.