Resolved CHANGELOG.md and SPRINTS.md conflicts: master added v0.29
(Sprint 23: Agentic Transparency), CLI bridge becomes v0.30.
Updated all version references to v0.30.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
get_cli_sessions() and get_cli_session_messages() referenced HOME, but
it was not imported from api.config. This caused /api/sessions to 500
on every request, breaking the entire session list.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Read CLI sessions from the agent's state.db and surface them in the
WebUI sidebar alongside local sessions, with read-only display and
import-on-click to avoid data duplication.
Key changes:
- get_cli_sessions(): reads sessions list via parameterized SQL,
wrapped in sqlite3 context manager (no connection leaks)
- get_cli_session_messages(): reads messages for a CLI session
via parameterized SQL, also context-managed
- GET /api/sessions: merges WebUI + CLI sessions with dedup
(WebUI takes priority on same session_id)
- GET /api/session: falls back to CLI store if not a WebUI session
- POST /api/session/import_cli: imports a CLI session into the
WebUI store (idempotent, no duplicates on re-import)
- Imported sessions use get_last_workspace() for the workspace field
(not a hardcoded string) and carry the active profile tag
- CSS: .cli-session with ::after 'cli' indicator (no theme changes)
Fixes review feedback:
- SQLite connections use 'with' context managers (no leaks)
- Workspace uses real path via get_last_workspace()
- Profile awareness via api.profiles.get_active_profile_name()
- Parameterized SQL queries throughout (no injection risk)
- Graceful fallback when sqlite3 or state.db is missing
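A minimal sketch of the read path described above, assuming state.db has a sessions table with session_id/title/updated_at columns (the real schema may differ). Note that `with sqlite3.connect(...)` only manages the transaction, so `contextlib.closing` is used here to actually guarantee the connection is released:

```python
import sqlite3
from contextlib import closing
from pathlib import Path

STATE_DB = Path.home() / ".hermes" / "state.db"  # assumed location

def get_cli_sessions(db_path=STATE_DB):
    """Read CLI sessions read-only; return [] if the DB is missing or broken."""
    if not Path(db_path).exists():
        return []  # graceful fallback when state.db is absent
    try:
        # closing() guarantees the connection is released (no leaks)
        with closing(sqlite3.connect(db_path)) as conn:
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                "SELECT session_id, title, updated_at FROM sessions "
                "ORDER BY updated_at DESC LIMIT ?",
                (200,),  # parameterized: no SQL injection risk
            ).fetchall()
            return [dict(r) for r in rows]
    except sqlite3.Error:
        return []  # corrupt or incompatible DB: degrade gracefully
```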
Use v(\d+\.\d+(?:\.\d+)?) instead of v(.*) to only match real
version numbers (v0.29, v0.29.1), not arbitrary strings.
Keep latest unconditional since all tag pushes are releases.
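For illustration, the tightened pattern reproduced in Python (the workflow itself uses docker/metadata-action's type=match, so this only approximates its matching behavior):

```python
import re

# Anchors added for clarity; only 2- or 3-part versions pass.
TAG_RE = re.compile(r"^v(\d+\.\d+(?:\.\d+)?)$")

def extract_version(tag):
    """Return the version string for a real release tag, else None."""
    m = TAG_RE.fullmatch(tag)
    return m.group(1) if m else None
```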
Based on review of PR #52 approach vs #53.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The semver pattern in docker/metadata-action requires 3-part versions
(e.g. v0.29.0). Our tags use 2-part (v0.29), causing the metadata
step to produce empty tags, which made build-push-action fail.
Fix: use type=match with regex to extract the version string directly,
plus type=raw for the latest tag unconditionally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Token usage display:
- Add 'show_token_usage' boolean to settings (default: false)
- Settings panel: checkbox 'Show token usage after responses'
- /usage slash command: instant toggle with toast feedback, persists to
server, updates checkbox if settings panel is open, re-renders messages
- Boot: load show_token_usage alongside send_key on startup
- ui.js: gate usage badge on window._showTokenUsage flag
Timestamps:
- streaming.py: stamp 'timestamp' on every message that lacks one at
conversation completion; old messages (no timestamp field) now get a
wall-clock time the first time they're touched by a new turn
- messages.js: stamp _ts on the last assistant message at done-event time
so the time shows immediately on the current turn before next reload
- Timestamps already render in the UI (Sprint 14): faint time on each
role header line, full opacity on hover, full date in title tooltip
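The completion-time stamping can be sketched as follows (function name and message shape are assumptions, not the project's actual code):

```python
import time

def stamp_timestamps(messages):
    """At conversation completion, give every message lacking a 'timestamp'
    the current wall-clock time; existing timestamps are left untouched."""
    now = time.time()
    for msg in messages:
        msg.setdefault("timestamp", now)
    return messages
```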
Store expanded directory paths in localStorage keyed by workspace path
(key: 'hermes-webui-expanded:{workspacePath}'). On root load (loadDir('.')),
restore the saved set for the current workspace and pre-fetch dir contents
for any restored expanded directories so the tree renders fully on first
paint without requiring a second click to expand.
Saves on every expand/collapse toggle. Switching workspaces automatically
picks up that workspace's own saved state. Per-workspace (not per-session)
so the same tree state is shared across sessions using the same workspace,
which is the natural expectation.
- routes.py: reject glob wildcards (* ? [ ]) in skill name param to
prevent rglob wildcard injection when serving linked files
- panels.js: replace inline onclick+esc() with data-* attributes and
addEventListener for skill tag removal and linked-file clicks;
esc() is HTML-safe but not JS-safe -- apostrophes in names caused
JS syntax errors and _cronSelectedSkills array corruption
- ui.js: fix _fmtTokens(null/undefined) returning 'null'/'undefined'
by guarding with (!n||n<0) -> '0'; add data-role attribute to msg-row
elements so usage badge correctly targets the last assistant row
instead of the last row regardless of speaker
- tests: rename test_sprint24.py -> test_sprint23.py (wrong sprint #);
add 3 new tests: path traversal rejection, wildcard name rejection,
cron create with skills; strengthen existing tests to assert field
presence explicitly (was using .get(field, 0)==0 which never caught
a missing field)
Track A: Token/cost display
- Read agent usage attrs (session_prompt_tokens, session_completion_tokens,
session_estimated_cost_usd) after run_conversation in streaming.py
- Add input_tokens, output_tokens, estimated_cost fields to Session model
- Include usage in done SSE event payload
- Store usage on S.lastUsage in messages.js done handler
- Render usage badge below last assistant message (input/output/cost)
Track B: Subagent delegation cards
- Add subagent_progress to toolIcon map with shuffle emoji
- Special-case subagent_progress in buildToolCard: "Subagent" label,
strip double emoji from preview, add tool-card-subagent CSS class
- Indented border-left styling for subagent cards
- Clean delegate_task display name
Track C: Skill picker in cron create form
- Add skill search input + tag chips to cron create form HTML
- Skill picker JS in panels.js: search/filter, click-to-add tags,
remove tag chips, pre-fetch skill list on form open
- submitCronCreate sends skills array in POST body
- Skill picker dropdown + tag CSS
Track D: Skill linked files viewer
- Add file query param to /api/skills/content endpoint
- Serve linked files from skill directory with path traversal protection
- Ensure linked_files key always present in skill content response
- Render linked files section below SKILL.md content in preview panel
- openSkillFile function for viewing individual linked files
Track E: Bug fixes and code quality
- Expand Session.__init__ and compact() to readable multi-line format
- Remove inline import json as _j2 inside loop in streaming.py
- Fix tool_calls: capture args from assistant messages, skip unresolved names
- Store args snapshot in persisted tool_calls for reload display
6 new tests. Total: 421 (409 passing).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- README: add GHCR pre-built images to Docker section, update line counts
and test count (426 tests, 22 files), add CI/CD to architecture tree
- ROADMAP: update header to v0.28.1/426 tests, mark all user-requested
features as shipped, collapse completed Waves 3-7 into summary table,
update architecture line counts, add CI/CD row
- CHANGELOG: add v0.28.1 entry for CI pipeline + multi-arch Docker builds,
update footer version
- SPRINTS: update header and footer to v0.28.1
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The SHA-pinned versions from the security hardening commit referenced
non-existent commit hashes, causing the workflow to fail with 'unable
to resolve action'. Switch to standard major version tags (v4, v3, v2,
v6, v5) which are the recommended approach for GitHub-maintained and
well-known actions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pinned all 7 third-party actions from mutable version tags to immutable
commit SHAs. Mutable tags (e.g. @v4) can be force-pushed by the action
author (or a compromised account) to inject malicious code into the workflow,
which runs with write access to the repo and GHCR registry.
Also moved 'permissions' from workflow level to job level (best practice:
scope permissions as narrowly as possible).
Pin mapping:
actions/checkout@v4 -> @11bd71901bbe... (v4.2.2)
softprops/action-gh-release@v2 -> @c062e08bd532... (v2.2.1)
docker/setup-qemu-action@v3 -> @49b3bc8e6bdd... (v3.2.0)
docker/setup-buildx-action@v3 -> @c47758b77c97... (v3.7.1)
docker/login-action@v3 -> @9780b0c442fb... (v3.3.0)
docker/metadata-action@v5 -> @369eb591f429... (v5.6.1)
docker/build-push-action@v6 -> @ca877d9245fe... (v6.10.0)
BUG-1 (medium): _validate_profile_name() used re.match() with a $ anchor.
re.match() with $ is truthy for 'name\n' because $ matches just before a
trailing newline. Changed to re.fullmatch(), which requires the entire string
to match; trailing newlines are now correctly rejected.
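The difference in one runnable snippet (the character class mirrors the profile-name regex used elsewhere in this series; treat it as illustrative):

```python
import re

NAME_PATTERN = r"[a-z0-9][a-z0-9_-]{0,63}"

# re.match() with a trailing $ still accepts 'name\n':
# $ matches just before a final newline.
assert re.match(NAME_PATTERN + r"$", "webui\n")

# re.fullmatch() requires the entire string to match, newline included:
assert re.fullmatch(NAME_PATTERN, "webui\n") is None
assert re.fullmatch(NAME_PATTERN, "webui")
```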
BUG-2 (medium/defense-in-depth): create_profile_api() validated 'name' via
_validate_profile_name() but passed clone_from directly to hermes_cli and
_create_profile_fallback() without validation. Added clone_from validation
inside create_profile_api() (skipping 'default' which is a valid clone source).
routes.py already validates it at the HTTP layer; this adds API-layer defense.
BUG-3 (low): When hermes_cli is not importable (the exact Docker case this PR
targets), list_profiles_api() returns only the stub default dict and cannot
find the newly created profile by name. The fallback return was a 2-key dict
{name, path}, incomplete compared with the 9-key schema used everywhere else.
Expanded it to the full profile dict with all fields so API clients get
consistent data regardless of hermes_cli availability.
OBS-4 (low/TOCTOU): _create_profile_fallback() checked profile_dir.exists()
then called mkdir(exist_ok=True). If a concurrent request created the dir
between those two calls, mkdir silently succeeded — defeating the
FileExistsError guard. Changed to mkdir(exist_ok=False) so the OS raises
FileExistsError atomically if the dir appears in the race window.
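A sketch of the atomic variant (helper name is an assumption):

```python
from pathlib import Path

def make_profile_dir(profile_dir):
    """Create the profile directory, failing atomically if it already exists.
    exist_ok=False makes the OS itself raise FileExistsError, closing the
    check-then-create race window; existing parents are fine with parents=True."""
    try:
        profile_dir.mkdir(parents=True, exist_ok=False)
    except FileExistsError:
        raise ValueError(f"Profile already exists: {profile_dir.name}") from None
```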
Tests: 423 passed, 0 failed.
When hermes-agent is not discoverable (common in Docker), create_profile_api()
raised a hard RuntimeError while list and delete already had manual fallbacks.
Changes:
- Add _create_profile_fallback() that bootstraps profile directory structure
directly (matching upstream hermes_cli.profiles: 8 subdirs + config clone)
- Extract _validate_profile_name() so validation works without hermes_cli
- Add constants _PROFILE_ID_RE, _PROFILE_DIRS, _CLONE_CONFIG_FILES matching
upstream hermes-agent
- Remove :ro from docker-compose.yml hermes home mount so profiles dir is
writable inside the container
Closes #44
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
On tag push (v*):
- Creates a GitHub Release with auto-generated release notes
- Builds multi-arch Docker image (linux/amd64, linux/arm64)
- Pushes to ghcr.io/nesquena/hermes-webui with semver tags
- Uses GitHub Actions cache for faster builds
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two fixes for Camanji rate limit UX:
1. api/streaming.py — pass fallback_model from profile config to AIAgent
The agent already supports fallback_model (a dict with provider/model/base_url)
for automatic rate-limit recovery, but streaming.py never read it from config
or passed it to AIAgent. Now reads get_config().get('fallback_model') at
call time (not module-level snapshot) and passes it through.
Also reads platform_toolsets.cli from the active profile's config at call
time so profiles with custom toolset lists use the right tools.
Camanji has fallback_model: {provider: openrouter, model: anthropic/claude-sonnet-4.6}
so hitting the direct-Anthropic rate limit will now automatically retry via
OpenRouter before giving up.
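A hedged sketch of the call-time read: get_config and the agent class here stand in for the real api.config / hermes-agent objects, and their signatures are assumptions.

```python
def build_agent(get_config, agent_cls):
    """Construct the agent with config read per call (not a module-level
    snapshot), so profile switches take effect on the very next stream."""
    cfg = get_config()
    return agent_cls(
        fallback_model=cfg.get("fallback_model"),  # e.g. {"provider": "openrouter", ...}
        toolsets=(cfg.get("platform_toolsets") or {}).get("cli"),
    )
```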
2. api/streaming.py + static/messages.js — show error inline, not 'Connection lost'
Previously: agent threw -> put('error', msg) -> SSE connection closed ->
browser's network-level 'error' event fired -> generic 'Connection lost'.
The actual error message was invisible to the user.
Fix: renamed server-side error event to 'apperror' (distinct from the SSE
spec's network error event). Added source.addEventListener('apperror', ...)
in messages.js that renders the error as a styled assistant message:
⏱️ Rate limit reached: <full message>
*Rate limit reached. Fallback model exhausted. Try again in a moment.*
Also added source.addEventListener('warning', ...) for non-fatal notices
(future use: fallback-activated status bar update).
Tests: 426 passed, 0 failed.
A session with messages belongs to the profile it was created under. Switching
profiles while a conversation is in progress should not retag that session or
update its workspace/model in place — that would corrupt the session's context.
New behavior:
- Session has NO messages (empty): profile switch updates it in place (model,
workspace). Works exactly as before — nothing was started yet.
- Session HAS messages (in progress): profile switch calls newSession() to
start a fresh session tagged to the new profile. The old session is left
untouched. Toast: 'Switched to profile: X — new conversation started'.
- Agent busy: blocked as before, no change.
Also: S._profileDefaultWorkspace is now consumed (set to null) inside
newSession() after the first use, so it doesn't keep forcing the same
workspace on every subsequent new session after a switch.
Root cause: resolve_model_provider() had a branch:
if config_provider and config_provider != 'openrouter' and prefix in _PROVIDER_MODELS:
return bare, prefix, None
When Camanji profile (config_provider='anthropic') picked openai/gpt-5.4-mini
from the OpenRouter dropdown, prefix='openai' matched _PROVIDER_MODELS and
config_provider was not 'openrouter', so it returned ('gpt-5.4-mini', 'openai', None).
The agent then demanded OPENAI_API_KEY directly -> not found -> RuntimeError ->
stream crashed -> 'Connection lost'.
Fix: if prefix != config_provider (cross-provider selection), always route through
openrouter with the full provider/model string. Only strip the prefix and call a
direct provider API when the config_provider EXACTLY matches the model prefix.
Cases verified:
openrouter + openai/gpt-5.4-mini -> (openai/gpt-5.4-mini, openrouter) ✓
anthropic + openai/gpt-5.4-mini -> (openai/gpt-5.4-mini, openrouter) ✓ FIXED
anthropic + anthropic/claude-... -> (claude-..., anthropic) ✓
anthropic + claude-sonnet-4-6 bare -> (claude-sonnet-4-6, anthropic) ✓
openrouter + anthropic/claude-... -> (anthropic/claude-..., openrouter) ✓
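The fixed rule can be sketched and checked against the cases above. This is a reconstruction from this message, not the project's actual code; the third tuple element from the original snippet is dropped for brevity.

```python
_PROVIDER_MODELS = {"openai", "anthropic"}  # providers with direct APIs (assumed)

def resolve_model_provider(config_provider, model):
    """Return (model_string, provider) per the fixed routing rule."""
    if "/" in model:
        prefix, bare = model.split("/", 1)
        # Direct provider API only on an EXACT config/prefix match.
        if prefix == config_provider and prefix in _PROVIDER_MODELS:
            return bare, prefix
        # Any cross-provider selection routes through OpenRouter
        # with the full provider/model string.
        return model, "openrouter"
    # Bare model IDs go straight to the configured provider.
    return model, config_provider
```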
Tests: 426 passed, 0 failed.
Root cause: three interacting bugs caused the model picker to show the wrong
model or flicker after a profile switch.
Bug 1 — syncTopbar() fought switchToProfile().
After switchToProfile() set the picker to the profile's model, syncTopbar()
was called (via renderSessionList -> loadSession, then explicitly at the end)
and overwrote it with S.session.model -- the old session's model.
Fix: added S._pendingProfileModel flag. switchToProfile() sets it;
syncTopbar() checks it first, applies the override, then clears it.
S.session.model is also updated to the resolved value so subsequent
syncTopbar() calls are consistent.
Bug 2 — Raw option injected at top of list for mismatched model IDs.
Profile configs store model IDs like 'claude-sonnet-4-6' (hermes-agent
format: hyphens, no namespace prefix) but the dropdown has
'anthropic/claude-sonnet-4.6' (OpenRouter format: dots, with prefix).
The old code did sel.value = id, found no match, then injected a new
<option> at the top of the list -- creating a lowercase duplicate that
didn't match any real provider group entry.
Fix: _findModelInDropdown() normalises both sides (strip prefix, hyphens->dots,
lowercase) and finds the best matching existing option. No new options are ever
injected for profile switching.
Bug 3 — populateModelDropdown() injected raw option on cold load.
Same issue: if default_model from /api/models didn't exactly match a dropdown
value, an extra option was added. Fixed by using _applyModelToDropdown()
which only selects existing options.
New helpers in ui.js:
_findModelInDropdown(modelId, sel) -- smart fuzzy match, returns matched value
_applyModelToDropdown(modelId, sel) -- sets picker, returns resolved value
Tests: 426 passed, 0 failed.
Three interrelated fixes:
1. api/workspace.py — clean workspace isolation with auto-migration
_clean_workspace_list(): sanitizes any workspace list by:
- Removing test artifacts (webui-mvp-test, test-workspace paths)
- Removing paths that no longer exist on disk
- Removing cross-profile leaks (paths under ~/.hermes/profiles/*)
- Renaming 'default' workspace label to 'Home' (avoids confusion
with the 'default' profile name)
_migrate_global_workspaces(): one-time migration for upgrading users.
Reads the legacy global workspaces.json, runs _clean_workspace_list,
rewrites it cleaned. This runs automatically on first load after upgrade
for the default profile only.
load_workspaces(): now cleans on every read and persists the cleaned version
if anything changed. Named profiles always start fresh (no global leak).
Empty results fall back to 'Home' entry pointing at profile's workspace.
Default label for auto-generated single-entry lists is 'Home', not 'default'.
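A sketch of the sanitizer's filter rules (entry shape, artifact markers, and names are assumptions):

```python
from pathlib import Path

_TEST_ARTIFACTS = ("webui-mvp-test", "test-workspace")  # assumed markers
PROFILES_ROOT = Path.home() / ".hermes" / "profiles"

def clean_workspace_list(workspaces):
    """Drop test artifacts, dead paths, and cross-profile leaks;
    rename the 'default' label to 'Home'."""
    cleaned = []
    for ws in workspaces:
        path = Path(ws.get("path", ""))
        if any(a in str(path) for a in _TEST_ARTIFACTS):
            continue  # test artifact
        if not path.exists():
            continue  # no longer on disk
        if PROFILES_ROOT in path.parents:
            continue  # leaked from another profile's HERMES_HOME
        if ws.get("name") == "default":
            ws = {**ws, "name": "Home"}  # avoid clash with the 'default' profile
        cleaned.append(ws)
    return cleaned
```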
2. api/models.py — legacy session profile backfill (already committed,
this commit adds the sessions.js filter tightening counterpart)
3. static/sessions.js — strict profile filter
Removed the '!s.profile' escape hatch from the profile filter.
Server now backfills profile='default' on legacy sessions, so every
session has an explicit tag. Filter is now exact:
s.profile === S.activeProfile
Named profiles see zero legacy clutter. Default profile sees its own
sessions. 'All profiles' toggle still shows everything.
Migration story for users pulling this update:
- Existing sessions (profile=null) -> attributed to 'default' at read time
- Global workspaces.json -> cleaned of test artifacts and cross-profile paths
on first server start after upgrade
- Named profile workspace files -> cleaned on first read, persisted clean
- No manual intervention needed
Tests: 426 passed, 0 failed.
Root cause: sessions created before Sprint 22 have no profile tag (profile=None).
The client filter was '!s.profile || s.profile === S.activeProfile' -- the
'!s.profile' guard made ALL 33 legacy sessions visible under every profile,
so switching to Camanji still showed the entire default session history.
Fix:
- api/models.py all_sessions(): backfill profile='default' on sessions with
no profile tag before returning. This is in-memory only (no disk writes) --
legacy sessions just get attributed to the default profile at read time.
Applied to both the index-path and the full-scan fallback path.
- static/sessions.js: tighten the client filter to s.profile === S.activeProfile
(remove the '!s.profile' escape hatch -- now redundant since server fills it).
Every session now has an explicit profile, so the filter is precise.
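The backfill is a read-time normalization, roughly (function name assumed):

```python
def backfill_profiles(sessions):
    """Attribute legacy sessions (profile missing or None) to 'default'.
    In-memory only: callers never write this back to disk."""
    for s in sessions:
        if not s.get("profile"):
            s["profile"] = "default"
    return sessions
```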
Result: switching to Camanji shows only Camanji sessions. Default profile shows
legacy + default-tagged sessions. 'All profiles' toggle still shows everything.
S.activeProfile defaults to 'default' in the S object so first render is safe.
Tests: 426 passed, 0 failed.
1. _profile_default_workspace() now checks terminal.cwd
Profile config.yaml files don't have a 'workspace' or 'default_workspace' key
— they store the working directory as terminal.cwd (the hermes-agent CLI
setting). Added it as the third fallback after 'workspace' and
'default_workspace', so switching to camanji correctly resolves
~/Camanji, webui resolves ~/webui-mvp, etc.
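The lookup order, sketched (the config dict is assumed to mirror the profile's config.yaml):

```python
def profile_default_workspace(config):
    """Resolve a profile's workspace: 'workspace', then 'default_workspace',
    then the hermes-agent CLI's terminal.cwd setting."""
    for key in ("workspace", "default_workspace"):
        if config.get(key):
            return config[key]
    return (config.get("terminal") or {}).get("cwd")
```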
2. Workspace dropdown opens upward (bottom: calc(100% + 4px))
The dropdown is now anchored at the bottom of the sidebar. Opening it
downward (top: 100%) caused it to clip off screen. Flipped to open upward
with an upward shadow so it expands into the session list area instead.
Tests: 426 passed, 0 failed.
Two changes:
1. Workspace updates correctly on profile switch
switchToProfile() now applies data.default_workspace from the switch
response to the current session via /api/session/update, updates
S.session.workspace in-memory, and stores S._profileDefaultWorkspace
so the next new session also inherits the profile's workspace.
newSession() in sessions.js picks up S._profileDefaultWorkspace when
creating a new session after a profile switch.
2. Workspace chip removed from topbar
The workspace was shown in two places: the topbar chip (wsChip) AND
the sidebar bottom display (sidebarWsDisplay with name + full path).
The topbar chip was redundant, cluttered the topbar, and pushed other
chips (profile, model, clear, settings) off screen.
Removed wsChip from the topbar entirely. The sidebar display is now
the sole workspace UI, consistent and unambiguous.
Moved wsDropdown to live inside the sidebar position:relative wrapper
so it opens downward from sidebarWsDisplay. Updated the click-outside
listener to close on clicks outside sidebarWsDisplay/wsDropdown.
Removed stale wsChip update code from syncTopbar() in ui.js.
Tests: 426 passed, 0 failed.
Root cause: _DEFAULT_HERMES_HOME was evaluated at module import time from
os.getenv('HERMES_HOME'). HERMES_HOME is a MUTABLE env var -- init_profile_state()
at server startup calls _set_hermes_home() which writes to os.environ['HERMES_HOME'].
If the sticky active_profile file pointed to e.g. 'webui', HERMES_HOME was set to
~/.hermes/profiles/webui BEFORE api/profiles.py imported. So _DEFAULT_HERMES_HOME
resolved to ~/.hermes/profiles/webui. Then switch_profile('webui') computed:
home = ~/.hermes/profiles/webui / 'profiles' / 'webui'
= ~/.hermes/profiles/webui/profiles/webui -- doesn't exist -> 404 ValueError
Fix: replace the one-liner assignment with _resolve_base_hermes_home() which:
1. Checks HERMES_BASE_HOME env var (explicit override)
2. Checks HERMES_HOME -- but if it looks like a profiles/ subdir (parent.name ==
'profiles'), walks up two levels to the actual base
3. Falls back to Path.home() / '.hermes'
This means the server can start with HERMES_HOME pointing to any profile and
_DEFAULT_HERMES_HOME will still correctly point to ~/.hermes.
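The resolver can be sketched as (a reconstruction from this message; the env parameter is added here for testability):

```python
import os
from pathlib import Path

def resolve_base_hermes_home(env=None):
    """Resolve the BASE ~/.hermes even if HERMES_HOME was already mutated
    to point inside a profile directory."""
    env = os.environ if env is None else env
    if env.get("HERMES_BASE_HOME"):
        return Path(env["HERMES_BASE_HOME"])  # 1. explicit override
    if env.get("HERMES_HOME"):
        p = Path(env["HERMES_HOME"])
        if p.parent.name == "profiles":       # 2. looks like a profiles/ subdir:
            return p.parent.parent            #    walk up two levels to the base
        return p
    return Path.home() / ".hermes"            # 3. default
```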
Also fix: api() helper in workspace.js was throwing new Error(await res.text())
which surfaced raw JSON to the UI: 'Switch failed: {"error":"Profile X does not exist."}'
Now parses the JSON and extracts j.error so the toast shows clean human-readable text.
Regression tests added in test_sprint23.py:
- test_profile_switch_base_home_not_subdir: static analysis verifying the resolver
- test_api_helper_returns_clean_error_message: verifies api() parses JSON errors
- test_profile_switch_resolve_base_home_logic: verifies the profiles/ subdir detection
Tests: 426 passed, 0 failed.
BUG-1 (critical): api/profiles.py _DEFAULT_HERMES_HOME used Path.home()/.hermes
hardcoded, ignoring the HERMES_HOME env var. conftest.py sets HERMES_HOME to a
test-isolated state dir -- but profiles.py bypassed it and read/wrote real ~/.hermes
during every test run (active_profile file, .env loading). Fixed by reading
os.getenv('HERMES_HOME', ...) at module load time.
BUG-7 (medium): api/workspace.py load_workspaces() fell back to the global
workspaces.json for ALL profiles when their profile-local file didn't exist yet.
New named profiles silently inherited the default profile's workspace list instead
of starting clean. Fixed: the global file fallback now only applies to the default
profile (migration path); named profiles start with a fresh default entry.
BUG-4 (high): test_sessions_list_includes_profile had a vacuous 'if matching:'
guard -- if the session wasn't found the assert was silently skipped and the test
passed. Fixed with hard assert. Also changed to use /api/session?session_id=
directly instead of scanning /api/sessions (which filters out empty Untitled
sessions with 0 messages, causing the test to always see an empty match list).
BUG-5 / test ordering regression: test_profile_switch_returns_default_model_and_workspace
failed with 409 because test_chat_stream_opens_successfully (runs earlier in the
suite) starts a real LLM stream that stays alive in STREAMS. Added a wait loop
(up to 30s) polling /health active_streams before attempting the profile switch.
BUG-8 (low): Removed dead import _profile_default_workspace in switch_profile()
-- was imported but never used (get_last_workspace() already delegates to it).
Also: test_profile_active_endpoint hardcoded assert data['name'] == 'default'
which fails if a prior run left a non-default active_profile on disk. Changed
to assert name is a non-empty string (the endpoint contract), not a specific value.
Tests: 423 passed, 0 failed.
Fix five coherence bugs in profile switching:
1. Model picker ignored profile default (localStorage stale key)
2. Workspace list was global (not profile-scoped)
3. DEFAULT_WORKSPACE was a boot-time singleton
4. Session list showed all profiles (no filtering)
5. switchToProfile() didn't refresh workspaces or sessions
Backend: workspace storage is now profile-local for named profiles,
switch_profile() returns default_model and default_workspace.
Frontend: switchToProfile() clears stale model pref, refreshes
workspace list and session list, sessions.js filters by active profile
with 'Show N from other profiles' toggle.
8 new tests. 400 pass / 23 fail (identical to baseline).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
BUG-3 (high): /api/profile/delete missing RuntimeError catch. When
deleting the active profile while an agent was running, delete_profile_api()
called switch_profile('default') which raises RuntimeError('Cannot switch
profiles while agent is running'). This propagated to the 500 handler
giving the user 'Internal server error' with no context. Added the same
except RuntimeError -> 409 pattern that /api/profile/switch already uses.
INFO-1 (defense-in-depth): /api/profile/create had no server-side name
validation before delegating to hermes_cli.validate_profile_name. Added
server-side ^[a-z0-9][a-z0-9_-]{0,63}$ check, consistent with client-side
regex in submitProfileCreate(). Prevents path-traversal-ish names from
reaching hermes_cli even if the client-side guard is bypassed.
INFO-2 (defense-in-depth): clone_from parameter was passed directly to
hermes_cli with no validation. Applied the same name regex check to
clone_from before delegating.
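A sketch of the shared guard (function name is an assumption; the regex is the one quoted above):

```python
import re

_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9_-]{0,63}$")

def check_profile_names(name, clone_from=None):
    """Validate both user-supplied identifiers before delegating to
    hermes_cli, so path-traversal-ish names never reach the filesystem."""
    for value in (name, clone_from):
        if value is not None and not _NAME_RE.fullmatch(value):
            raise ValueError(f"Invalid profile name: {value!r}")
```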
BUG-11 (low): toggleProfileDropdown() and toggleWsDropdown() could both
be open simultaneously. Added cross-dropdown close calls: opening the
profile dropdown now closes the workspace dropdown, and vice versa.
Tests: 415 passed, 0 failed.
Add full profile management to the web UI, matching the hermes-agent CLI
profile system. Profiles are isolated HERMES_HOME instances with their own
config, skills, memory, cron, and API keys.
Backend: new api/profiles.py wrapping hermes_cli.profiles, dynamic config
reloading, 5 new API endpoints, profile-aware path resolution, HERMES_HOME
env save/restore in streaming, module-level cache patching for skills_tool
and cron/jobs.
Frontend: profile chip in topbar with dropdown, Profiles sidebar panel with
CRUD UI, boot-time profile fetch, cascade refresh on switch.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>