v0.46.0: security, Docker UID/GID, model discovery, i18n, cancel fix
* fix: decode HTML entities before markdown processing + zh/zh-Hant translations (#239)

  Adds decode() helper in renderMd() to fix double-escaping of HTML entities from LLM output (e.g. <code> becoming &lt;code&gt; instead of rendering). XSS-safe: decode runs before esc(), only 5 entity patterns. Also adds 40+ missing zh (Simplified Chinese) translation keys and a new zh-Hant (Traditional Chinese) locale with 163 keys. Fix applied: removed duplicate settings_label_notifications key in both zh and zh-Hant locales. Fixes #240

* fix: restore custom model list discovery with config api key (#238)

  get_available_models() now reads api_key from config.yaml before env vars:
  1. model.api_key
  2. providers.<active>.api_key / providers.custom.api_key
  3. env var fallbacks (HERMES_API_KEY, OPENAI_API_KEY, etc.)

  Also adds an OpenAI/Python User-Agent header and a regression test covering authenticated /v1/models discovery. Fixes users with LM Studio / Ollama custom endpoints configured in config.yaml whose model picker silently collapsed to the default model.

* feat: Docker UID/GID matching to avoid root-owned .hermes files (#237)

  Adds docker_init.bash with the hermeswebuitoo/hermeswebui user pattern so container files match the host user UID/GID. Prevents .hermes volume mounts from being owned by root when using a non-root host user. Configure via WANTED_UID and WANTED_GID env vars (default 1000/1000). README updated with setup instructions. Fix applied: removed duplicate WANTED_GID=1000 line in docker-compose.yml that was overriding the ${GID:-1000} variable expansion.

* security: redact credentials from API responses and fix credential file permissions (#243)

  Adds response-layer credential redaction to three endpoints and the SSE stream:
  - GET /api/session — messages[], tool_calls[], and title
  - GET /api/session/export — download also redacted
  - SSE done event — session payload in stream
  - GET /api/memory — MEMORY.md and USER.md content

  Adds api/startup.py with fix_credential_permissions() at server startup. Adds 13 tests in tests/test_security_redaction.py. Merged with #237 container detection changes in server.py.

* fix: cancel button now interrupts agent and cleans up UI state (#244)

  Wires agent.interrupt() into cancel_stream() so the backend actually stops tool execution when the user clicks Cancel, rather than only stopping the SSE stream while the agent keeps running. Changes:
  - api/config.py: adds AGENT_INSTANCES dict (stream_id -> AIAgent)
  - api/streaming.py: stores agent in AGENT_INSTANCES after creation, checks CANCEL_FLAGS immediately after store (race condition fix), calls agent.interrupt() in cancel_stream(), cleans up in finally block
  - static/boot.js: removes stale setStatus(cancelling) call
  - static/messages.js: setBusy(false)/setStatus('') unconditionally on cancel

  Race condition fix: after storing the agent in AGENT_INSTANCES, immediately checks if CANCEL_FLAGS[stream_id] is already set (cancel arrived during agent init) and interrupts before starting. The check is inside the same STREAMS_LOCK acquisition, making it atomic. New test file: tests/test_cancel_interrupt.py with 6 unit tests.

* docs: v0.46.0 release notes, bump version, update test counts

---------

Co-authored-by: Nathan Esquenazi <nesquena@gmail.com>
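The decode-before-escape ordering described for PR #239 can be sketched in Python (the actual fix lives in the web UI's JavaScript renderMd(); the function names and the exact five entity patterns below follow the PR description, while the bodies are illustrative):

```python
import html

# The five entity patterns the fix decodes, per the PR description.
_ENTITIES = {'&lt;': '<', '&gt;': '>', '&amp;': '&', '&quot;': '"', '&#39;': "'"}

def decode_entities(text: str) -> str:
    """Collapse already-escaped entities so the renderer does not escape them twice."""
    # Decode &amp; last so "&amp;lt;" becomes "&lt;" (one level), not "<".
    for ent in ('&lt;', '&gt;', '&quot;', '&#39;'):
        text = text.replace(ent, _ENTITIES[ent])
    return text.replace('&amp;', '&')

def render(text: str) -> str:
    """Decode first, then escape exactly once: mirrors decode() running before esc()."""
    return html.escape(decode_entities(text), quote=True)

print(render('&lt;code&gt;'))  # escaped exactly once: &lt;code&gt;
print(render('<code>'))        # raw input reaches the same single-escaped form
```

Because decoding runs before the single escape pass, pre-escaped and raw input converge on the same output, which is why the ordering is XSS-safe.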
CHANGELOG.md (26 lines changed)
@@ -6,6 +6,32 @@
 ---
 
+## [v0.46.0] — 2026-04-11
+
+### Features
+
+- **Docker UID/GID matching** (PR #237 by @mmartial): New `docker_init.bash` entrypoint adds `hermeswebui`/`hermeswebuitoo` user pattern so container-created files match the host user UID/GID. Prevents `.hermes` volume mounts from being owned by root. Configure via `WANTED_UID` and `WANTED_GID` env vars (default 1000/1000). README updated with setup instructions.
+  - `Dockerfile` — two-user pattern with passwordless sudo; `/.within_container` marker for in-container detection; starts as `hermeswebuitoo`, switches to correct UID/GID
+  - `docker-compose.yml` — mounts `.hermes` at `/home/hermeswebui/.hermes`; uses `${UID:-1000}/${GID:-1000}` for UID/GID passthrough
+  - `server.py` — detects `/.within_container` and prints a note when binding to 0.0.0.0
+
+### Security
+
+- **Credential redaction in API responses** (PR #243 by @kcclaw001): All API endpoints now redact credentials from responses at the response layer. Session files on disk are unchanged; only the API output is masked.
+  - `api/helpers.py` — `redact_session_data()` and `_redact_value()` apply pattern-based redaction to messages, tool_calls, and title; covers GitHub PATs, OpenAI/Anthropic keys, AWS keys, Slack tokens, HuggingFace tokens, Authorization Bearer headers, and PEM private key blocks
+  - `api/routes.py` — `GET /api/session`, `GET /api/session/export`, `GET /api/memory` all wrapped with redaction
+  - `api/streaming.py` — SSE `done` event payload redacted before broadcast
+  - `api/startup.py` — new `fix_credential_permissions()` called at startup; `chmod 600` on `.env`, `google_token.json`, `auth.json`, `.signing_key` if they have group/other read bits set
+  - `tests/test_security_redaction.py` — 13 new tests covering redaction functions and endpoint structural verification
+
+### Bug Fixes
+
+- **Custom model list discovery with config API key** (PR #238 by @ccqqlo): `get_available_models()` now reads `api_key` from `config.yaml` before env vars when fetching `/v1/models` from custom endpoints (LM Studio, Ollama, etc.). Priority: `model.api_key` → `providers.<active>.api_key` → `providers.custom.api_key` → env vars. Also adds `OpenAI/Python 1.0` User-Agent header. Fixes model picker collapsing to single default model for config-only setups. 1 new regression test.
+- **HTML entity decode before markdown processing** (PR #239 by @Argonaut790): Adds `decode()` helper in `renderMd()` to fix double-escaping of HTML entities from LLM output (e.g. `<code>` becoming `&lt;code&gt;` instead of rendering). XSS-safe: decode runs before `esc()`, only 5 entity patterns (`<`, `>`, `&`, `"`, `'`).
+- **Simplified Chinese translations completed** (PR #239 by @Argonaut790): 40+ missing keys added to `zh` locale (123 → 164 keys). New `zh-Hant` (Traditional Chinese) locale with 163 keys.
+- **Cancel button now interrupts agent execution** (PR #244 by @huangzt): `cancel_stream()` now calls `agent.interrupt()` to stop backend tool execution, not just the SSE stream. `AGENT_INSTANCES` dict (protected by `STREAMS_LOCK`) tracks active agents. Race condition fixed: after storing agent, immediately checks if cancel was already requested. Frontend: removes stale "Cancelling..." status text; `setBusy(false)` always called on cancel. 6 new unit tests in `tests/test_cancel_interrupt.py`.
+
+**624 tests (up from 604 on v0.45.0 — +20 new tests)**
+
+---
+
 ## [v0.45.0] — 2026-04-10
 
 ### Features
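The check-after-store pattern from PR #244 can be sketched as follows; `AGENT_INSTANCES`, `CANCEL_FLAGS`, and `STREAMS_LOCK` mirror the names in the changelog, while `StubAgent` stands in for the real `AIAgent`:

```python
import threading

STREAMS_LOCK = threading.Lock()
AGENT_INSTANCES: dict = {}
CANCEL_FLAGS: dict = {}

class StubAgent:
    def __init__(self):
        self.interrupted = False
    def interrupt(self):
        self.interrupted = True

def register_agent(stream_id: str, agent: StubAgent) -> bool:
    """Store the agent, then check for a cancel that raced with agent init.

    Both the store and the flag check happen under the same lock acquisition,
    so a cancel can never slip between them unobserved.
    Returns True if the stream should proceed, False if already cancelled.
    """
    with STREAMS_LOCK:
        AGENT_INSTANCES[stream_id] = agent
        if CANCEL_FLAGS.get(stream_id):
            agent.interrupt()
            return False
    return True

def cancel_stream(stream_id: str) -> None:
    """Set the flag, and interrupt the agent if it is already registered."""
    with STREAMS_LOCK:
        CANCEL_FLAGS[stream_id] = True
        agent = AGENT_INSTANCES.get(stream_id)
        if agent is not None:
            agent.interrupt()

# Cancel arrives before the agent finishes initialising:
CANCEL_FLAGS['s1'] = True
a = StubAgent()
print(register_agent('s1', a), a.interrupted)  # False True
```

Whichever order the two sides run in, either `cancel_stream` sees the registered agent, or `register_agent` sees the flag; the shared lock rules out the in-between case.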
Dockerfile (76 lines changed)
@@ -3,21 +3,79 @@ FROM python:3.12-slim
 LABEL maintainer="nesquena"
 LABEL description="Hermes Web UI — browser interface for Hermes Agent"
 
-WORKDIR /app
-
-# Copy source
-COPY . /app
-
-# Install Python dependencies
-RUN pip install --no-cache-dir -r requirements.txt
+# Install system packages
+ENV DEBIAN_FRONTEND=noninteractive
+
+# Make use of apt-cacher-ng if available
+RUN if [ "A${BUILD_APT_PROXY:-}" != "A" ]; then \
+      echo "Using APT proxy: ${BUILD_APT_PROXY}"; \
+      printf 'Acquire::http::Proxy "%s";\n' "$BUILD_APT_PROXY" > /etc/apt/apt.conf.d/01proxy; \
+    fi \
+    && apt-get update \
+    && apt-get install -y --no-install-recommends ca-certificates wget gnupg \
+    && rm -rf /var/lib/apt/lists/* \
+    && apt-get clean
+
+RUN apt-get update -y --fix-missing --no-install-recommends \
+    && apt-get install -y --no-install-recommends \
+      apt-utils \
+      locales \
+      ca-certificates \
+      sudo \
+      curl \
+      rsync \
+    && apt-get upgrade -y \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
+
+# UTF-8
+RUN localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
+ENV LANG=en_US.utf8
+ENV LC_ALL=C
+
+# Set environment variables
+ENV PYTHONDONTWRITEBYTECODE=1 \
+    PYTHONUNBUFFERED=1 \
+    PYTHONIOENCODING=utf-8
+
+WORKDIR /apptoo
+
+# Every sudo group user does not need a password
+RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
+
+# Create a new group for the hermeswebui and hermeswebuitoo users
+RUN groupadd -g 1024 hermeswebui \
+    && groupadd -g 1025 hermeswebuitoo
+
+# The hermeswebui (resp. hermeswebuitoo) user will have UID 1024 (resp. 1025),
+# be part of the hermeswebui (resp. hermeswebuitoo) and users groups and be sudo capable (passwordless)
+RUN useradd -u 1024 -d /home/hermeswebui -g hermeswebui -s /bin/bash -m hermeswebui \
+    && usermod -G users hermeswebui \
+    && adduser hermeswebui sudo
+RUN useradd -u 1025 -d /home/hermeswebuitoo -g hermeswebuitoo -s /bin/bash -m hermeswebuitoo \
+    && usermod -G users hermeswebuitoo \
+    && adduser hermeswebuitoo sudo
+RUN chown -R hermeswebuitoo:hermeswebuitoo /apptoo
+
+USER root
+
+COPY --chmod=555 docker_init.bash /hermeswebui_init.bash
+
+RUN touch /.within_container
+
+# Remove APT proxy configuration and clean up APT downloaded files
+RUN rm -rf /var/lib/apt/lists/* /etc/apt/apt.conf.d/01proxy \
+    && apt-get clean
+
+USER hermeswebuitoo
+
+COPY . /apptoo
 
 # Default to binding all interfaces (required for container networking)
 ENV HERMES_WEBUI_HOST=0.0.0.0
 ENV HERMES_WEBUI_PORT=8787
 
-# State directory (mount as volume for persistence)
-ENV HERMES_WEBUI_STATE_DIR=/data
-
 EXPOSE 8787
 
-CMD ["python", "server.py"]
+CMD ["/hermeswebui_init.bash"]
README.md (25 lines changed)
@@ -122,14 +122,23 @@ That is it! The script will:
 
 **Pre-built images** (amd64 + arm64) are published to GHCR on every release:
 
+Make sure the `HERMES_WEBUI_STATE_DIR` folder (by default `~/.hermes/webui-mvp`, as detailed in the `.env.example` file) exists with the UID/GID of the owner of the `.hermes` folder.
+The container will also mount your configured workspace (also from `.env.example`) as `/workspace`. Adapt the location as needed.
+
 ```bash
 docker pull ghcr.io/nesquena/hermes-webui:latest
-docker run -d -p 8787:8787 -v ~/.hermes:/root/.hermes ghcr.io/nesquena/hermes-webui:latest
+docker run -d \
+  -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` \
+  -v ~/.hermes:/home/hermeswebui/.hermes -e HERMES_WEBUI_STATE_DIR=/home/hermeswebui/.hermes/webui-mvp \
+  -v ~/workspace:/workspace \
+  -p 8787:8787 ghcr.io/nesquena/hermes-webui:latest
 ```
 
 Or run with Docker Compose (recommended):
 
 ```bash
+# Check the docker-compose.yml and make sure to adapt as needed, at minimum WANTED_UID/WANTED_GID
 docker compose up -d
 ```
 
@@ -137,7 +146,11 @@ Or build locally:
 
 ```bash
 docker build -t hermes-webui .
-docker run -d -p 8787:8787 -v ~/.hermes:/root/.hermes hermes-webui
+docker run -d \
+  -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` \
+  -v ~/.hermes:/home/hermeswebui/.hermes -e HERMES_WEBUI_STATE_DIR=/home/hermeswebui/.hermes/webui-mvp \
+  -v ~/workspace:/workspace \
+  -p 8787:8787 hermes-webui
 ```
 
 Open http://localhost:8787 in your browser.
@@ -145,11 +158,13 @@ Open http://localhost:8787 in your browser.
 To enable password protection:
 
 ```bash
-docker run -d -p 8787:8787 -e HERMES_WEBUI_PASSWORD=your-secret -v ~/.hermes:/root/.hermes ghcr.io/nesquena/hermes-webui:latest
+docker run -d \
+  -e WANTED_UID=`id -u` -e WANTED_GID=`id -g` \
+  -v ~/.hermes:/home/hermeswebui/.hermes -e HERMES_WEBUI_STATE_DIR=/home/hermeswebui/.hermes/webui-mvp \
+  -v ~/workspace:/workspace \
+  -p 8787:8787 -e HERMES_WEBUI_PASSWORD=your-secret ghcr.io/nesquena/hermes-webui:latest
 ```
 
-Session data persists in a named volume (`hermes-data`) across restarts.
-
 > **Note:** By default, Docker Compose binds to `127.0.0.1` (localhost only).
 > To expose on a network, change the port to `"8787:8787"` in `docker-compose.yml`
 > and set `HERMES_WEBUI_PASSWORD` to enable authentication.
@@ -3,7 +3,7 @@
 > Goal: Full 1:1 parity with the Hermes CLI experience via a clean dark web UI.
 > Everything you can do from the CLI terminal, you can do from this UI.
 >
-> Last updated: v0.45.0 (April 10, 2026) — 604 tests, 604 passing
+> Last updated: v0.46.0 (April 11, 2026) — 624 tests, 624 passing
 > Tests: 604 total (604 passing, 0 failures)
 > Source: <repo>/
 
@@ -42,6 +42,7 @@
 | Sprint 23 | Agentic transparency | Token/cost display, subagent cards, skill picker in cron, skill linked files, workspace tree persistence, timestamp fixes | 424 |
 | v0.44.0 patch | Fix batch: approval card, login CSP, update diagnostics, Lucide icons | PRs #221 #225 #226 #227 #228 | 579 |
 | v0.45.0 | Custom endpoint in new profile form | Base URL + API key fields; server-side URL validation; config.yaml merge; 9 new tests (PR #233, fixes #170) | 604 |
+| v0.46.0 | Security, Docker UID/GID, model discovery, i18n, cancel fix | Credential redaction in API responses (PR #243); Docker UID/GID matching (PR #237); custom model API key discovery (PR #238); HTML entity decode + zh/zh-Hant i18n (PR #239); cancel interrupts agent (PR #244); +20 tests | 624 |
 | v0.32 | Auto-compaction handling | Compression detection, /compact command, real context window indicator | 424 |
 | v0.33 | /insights sync | Opt-in state.db sync so `hermes /insights` includes WebUI sessions | 424 |
 | v0.34 | Sprint 26 — Pluggable themes | Dark, Light, Slate, Solarized, Monokai, Nord; settings unsaved-changes guard; /theme command | 433 |
@@ -8,7 +8,7 @@
 > Prerequisites: SSH tunnel is active on port 8786. Open http://localhost:8786 in browser.
 > Server health check: curl http://127.0.0.1:8786/health should return {"status":"ok"}.
 >
-> Automated tests: 604 total (604 passing, 0 skipped, 0 known failures)
+> Automated tests: 624 total (624 passing, 0 skipped, 0 known failures)
 > Run: `pytest tests/ -v --timeout=60`
 
 ---
api/config.py
@@ -613,15 +613,33 @@ def get_available_models() -> dict:
     except ValueError:
         pass
 
-    # Resolve API key from environment (check profile .env keys too)
+    # Resolve API key for the custom / OpenAI-compatible endpoint.
+    # Priority:
+    #   1. model.api_key in config.yaml
+    #   2. provider-specific providers.<active>.api_key / providers.custom.api_key
+    #   3. env/.env fallbacks
     headers = {}
-    api_key_vars = ('HERMES_API_KEY', 'HERMES_OPENAI_API_KEY', 'OPENAI_API_KEY',
-                    'LOCAL_API_KEY', 'OPENROUTER_API_KEY', 'API_KEY')
-    for key in api_key_vars:
-        api_key = all_env.get(key) or os.getenv(key)
-        if api_key:
-            headers['Authorization'] = f'Bearer {api_key}'
-            break
+    api_key = ''
+    if isinstance(model_cfg, dict):
+        api_key = (model_cfg.get('api_key') or '').strip()
+    if not api_key:
+        providers_cfg = cfg.get('providers', {})
+        if isinstance(providers_cfg, dict):
+            for provider_key in filter(None, [active_provider, 'custom']):
+                provider_cfg = providers_cfg.get(provider_key, {})
+                if isinstance(provider_cfg, dict):
+                    api_key = (provider_cfg.get('api_key') or '').strip()
+                    if api_key:
+                        break
+    if not api_key:
+        api_key_vars = ('HERMES_API_KEY', 'HERMES_OPENAI_API_KEY', 'OPENAI_API_KEY',
+                        'LOCAL_API_KEY', 'OPENROUTER_API_KEY', 'API_KEY')
+        for key in api_key_vars:
+            api_key = (all_env.get(key) or os.getenv(key) or '').strip()
+            if api_key:
+                break
+    if api_key:
+        headers['Authorization'] = f'Bearer {api_key}'
 
     # Fetch model list from endpoint (with SSRF protection)
     import socket
@@ -641,6 +659,7 @@ def get_available_models() -> dict:
         except socket.gaierror:
             pass  # DNS resolution failed -- let urllib handle it
     req = urllib.request.Request(endpoint_url, method='GET')
+    req.add_header('User-Agent', 'OpenAI/Python 1.0')
     for k, v in headers.items():
         req.add_header(k, v)
     with urllib.request.urlopen(req, timeout=10) as response:
@@ -789,6 +808,7 @@ CHAT_LOCK = threading.Lock()
 STREAMS: dict = {}
 STREAMS_LOCK = threading.Lock()
 CANCEL_FLAGS: dict = {}
+AGENT_INSTANCES: dict = {}  # stream_id -> AIAgent instance for interrupt propagation
 SERVER_START_TIME = time.time()
 
 # ── Thread-local env context ─────────────────────────────────────────────────
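The three-step key resolution above can be sketched as a minimal standalone helper, assuming a plain dict for the parsed config and the profile env; `resolve_api_key` is a hypothetical name for illustration, not the actual module function:

```python
import os

API_KEY_VARS = ('HERMES_API_KEY', 'HERMES_OPENAI_API_KEY', 'OPENAI_API_KEY',
                'LOCAL_API_KEY', 'OPENROUTER_API_KEY', 'API_KEY')

def resolve_api_key(cfg: dict, active_provider, all_env: dict) -> str:
    model_cfg = cfg.get('model')
    if isinstance(model_cfg, dict):
        key = (model_cfg.get('api_key') or '').strip()
        if key:
            return key                      # 1. model.api_key wins
    providers_cfg = cfg.get('providers', {})
    if isinstance(providers_cfg, dict):
        # filter(None, ...) skips a missing active provider name
        for name in filter(None, [active_provider, 'custom']):
            pcfg = providers_cfg.get(name, {})
            if isinstance(pcfg, dict):
                key = (pcfg.get('api_key') or '').strip()
                if key:
                    return key              # 2. provider-specific key
    for var in API_KEY_VARS:
        key = (all_env.get(var) or os.getenv(var) or '').strip()
        if key:
            return key                      # 3. env/.env fallback
    return ''

cfg = {'model': {'api_key': ' sk-local '},
       'providers': {'custom': {'api_key': 'sk-prov'}}}
print(resolve_api_key(cfg, 'custom', {'OPENAI_API_KEY': 'sk-env'}))  # sk-local
```

The point of the ordering is that a key written into `config.yaml` (the LM Studio / Ollama case from issue #238) is found before any environment variable, so config-only setups no longer fall through to an empty key.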
api/helpers.py
@@ -2,6 +2,7 @@
 Hermes Web UI -- HTTP helper functions.
 """
 import json as _json
+import re as _re
 from pathlib import Path
 from api.config import IMAGE_EXTS, MD_EXTS
 
@@ -80,6 +81,88 @@ def t(handler, payload, status: int=200, content_type: str='text/plain; charset=
 MAX_BODY_BYTES = 20 * 1024 * 1024  # 20MB limit for non-upload POST bodies
 
 
+# ── Credential redaction ──────────────────────────────────────────────────────
+
+def _build_redact_fn():
+    """Return redact_sensitive_text from hermes-agent if available, else a fallback."""
+    try:
+        from agent.redact import redact_sensitive_text
+        return redact_sensitive_text
+    except ImportError:
+        pass
+
+    # Minimal fallback covering the most common credential prefixes
+    _CRED_RE = _re.compile(
+        r"(?<![A-Za-z0-9_-])("
+        r"sk-[A-Za-z0-9_-]{10,}"            # OpenAI / Anthropic / OpenRouter
+        r"|ghp_[A-Za-z0-9]{10,}"            # GitHub PAT (classic)
+        r"|github_pat_[A-Za-z0-9_]{10,}"    # GitHub PAT (fine-grained)
+        r"|gho_[A-Za-z0-9]{10,}"            # GitHub OAuth token
+        r"|ghu_[A-Za-z0-9]{10,}"            # GitHub user-to-server token
+        r"|ghs_[A-Za-z0-9]{10,}"            # GitHub server-to-server token
+        r"|ghr_[A-Za-z0-9]{10,}"            # GitHub refresh token
+        r"|AKIA[A-Z0-9]{16}"                # AWS Access Key ID
+        r"|xox[baprs]-[A-Za-z0-9-]{10,}"    # Slack tokens
+        r"|hf_[A-Za-z0-9]{10,}"             # HuggingFace token
+        r"|SG\.[A-Za-z0-9_-]{10,}"          # SendGrid API key
+        r")(?![A-Za-z0-9_-])"
+    )
+    _AUTH_HDR_RE = _re.compile(r"(Authorization:\s*Bearer\s+)(\S+)", _re.IGNORECASE)
+    _ENV_RE = _re.compile(
+        r"([A-Z0-9_]{0,50}(?:API_?KEY|TOKEN|SECRET|PASSWORD|PASSWD|CREDENTIAL|AUTH)[A-Z0-9_]{0,50})"
+        r"\s*=\s*(['\"]?)(\S+)\2"
+    )
+    _PRIVKEY_RE = _re.compile(
+        r"-----BEGIN[A-Z ]*PRIVATE KEY-----[\s\S]*?-----END[A-Z ]*PRIVATE KEY-----"
+    )
+
+    def _mask(token: str) -> str:
+        return f"{token[:6]}...{token[-4:]}" if len(token) >= 18 else "***"
+
+    def _fallback_redact(text: str) -> str:
+        if not isinstance(text, str) or not text:
+            return text
+        text = _CRED_RE.sub(lambda m: _mask(m.group(1)), text)
+        text = _AUTH_HDR_RE.sub(lambda m: m.group(1) + _mask(m.group(2)), text)
+        text = _ENV_RE.sub(
+            lambda m: f"{m.group(1)}={m.group(2)}{_mask(m.group(3))}{m.group(2)}", text
+        )
+        text = _PRIVKEY_RE.sub("[REDACTED PRIVATE KEY]", text)
+        return text
+
+    return _fallback_redact
+
+
+_redact_text = _build_redact_fn()
+
+
+def _redact_value(v):
+    """Recursively redact credentials from strings, dicts, and lists."""
+    if isinstance(v, str):
+        return _redact_text(v)
+    if isinstance(v, dict):
+        return {k: _redact_value(val) for k, val in v.items()}
+    if isinstance(v, list):
+        return [_redact_value(item) for item in v]
+    return v
+
+
+def redact_session_data(session_dict: dict) -> dict:
+    """Redact credentials from message content and tool_call data before API response.
+
+    Applies to: messages[], tool_calls[], and title.
+    The underlying session file is not modified; redaction is response-layer only.
+    """
+    result = dict(session_dict)
+    if isinstance(result.get('title'), str):
+        result['title'] = _redact_text(result['title'])
+    if 'messages' in result:
+        result['messages'] = _redact_value(result['messages'])
+    if 'tool_calls' in result:
+        result['tool_calls'] = _redact_value(result['tool_calls'])
+    return result
+
+
 def read_body(handler) -> dict:
     """Read and JSON-parse a POST request body (capped at 20MB)."""
     length = int(handler.headers.get('Content-Length', 0))
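A standalone recreation of the fallback masking behaviour shown above, trimmed to two token patterns for illustration (the real helpers live in `api/helpers.py` and cover many more prefixes):

```python
import re

# One alternation instead of the full pattern list; same lookaround guards
# so partial tokens inside longer identifiers are not matched.
CRED_RE = re.compile(
    r"(?<![A-Za-z0-9_-])(ghp_[A-Za-z0-9]{10,}|sk-[A-Za-z0-9_-]{10,})(?![A-Za-z0-9_-])"
)

def mask(token: str) -> str:
    # Long tokens keep a 6-char prefix and 4-char suffix; short ones collapse to ***
    return f"{token[:6]}...{token[-4:]}" if len(token) >= 18 else "***"

def redact(text: str) -> str:
    return CRED_RE.sub(lambda m: mask(m.group(1)), text)

token = "ghp_" + "A" * 36  # shaped like a classic GitHub PAT, not a real one
print(redact(f"pushed with {token}"))  # pushed with ghp_AA...AAAA
```

Keeping a short prefix/suffix (rather than blanking the token) lets a user recognise which credential leaked into a transcript without the response ever containing the full secret.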
api/routes.py
@@ -20,7 +20,7 @@ from api.config import (
     IMAGE_EXTS, MD_EXTS, MIME_MAP, MAX_FILE_BYTES, MAX_UPLOAD_BYTES,
     CHAT_LOCK, load_settings, save_settings,
 )
-from api.helpers import require, bad, safe_resolve, j, t, read_body, _security_headers, _sanitize_error
+from api.helpers import require, bad, safe_resolve, j, t, read_body, _security_headers, _sanitize_error, redact_session_data, _redact_text
 
 # ── CSRF: validate Origin/Referer on POST ────────────────────────────────────
 import re as _re
@@ -203,10 +203,11 @@ def handle_get(handler, parsed) -> bool:
             return j(handler, {'error': 'session_id is required'}, status=400)
         try:
             s = get_session(sid)
-            return j(handler, {'session': s.compact() | {
+            raw = s.compact() | {
                 'messages': s.messages,
                 'tool_calls': getattr(s, 'tool_calls', []),
-            }})
+            }
+            return j(handler, {'session': redact_session_data(raw)})
         except KeyError:
             # Not a WebUI session -- try CLI store
             msgs = get_cli_session_messages(sid)
@@ -232,7 +233,7 @@ def handle_get(handler, parsed) -> bool:
                     'messages': msgs,
                     'tool_calls': [],
                 }
-                return j(handler, {'session': sess})
+                return j(handler, {'session': redact_session_data(sess)})
             return bad(handler, 'Session not found', 404)
 
     if parsed.path == '/api/sessions':
@@ -817,7 +818,8 @@ def _handle_session_export(handler, parsed):
     if not sid: return bad(handler, 'session_id is required')
     try: s = get_session(sid)
     except KeyError: return bad(handler, 'Session not found', 404)
-    payload = json.dumps(s.__dict__, ensure_ascii=False, indent=2)
+    safe = redact_session_data(s.__dict__)
+    payload = json.dumps(safe, ensure_ascii=False, indent=2)
     handler.send_response(200)
     handler.send_header('Content-Type', 'application/json; charset=utf-8')
     handler.send_header('Content-Disposition', f'attachment; filename="hermes-{sid}.json"')
@@ -1043,7 +1045,7 @@ def _handle_memory_read(handler):
     memory = mem_file.read_text(encoding='utf-8', errors='replace') if mem_file.exists() else ''
     user = user_file.read_text(encoding='utf-8', errors='replace') if user_file.exists() else ''
     return j(handler, {
-        'memory': memory, 'user': user,
+        'memory': _redact_text(memory), 'user': _redact_text(user),
        'memory_path': str(mem_file), 'user_path': str(user_file),
        'memory_mtime': mem_file.stat().st_mtime if mem_file.exists() else None,
        'user_mtime': user_file.stat().st_mtime if user_file.exists() else None,
api/startup.py
@@ -1,8 +1,36 @@
 """Hermes Web UI -- startup helpers."""
 from __future__ import annotations
-import os, subprocess, sys
+import os, stat, subprocess, sys
 from pathlib import Path
 
+# Credential files that should never be world-readable
+_SENSITIVE_FILES = (
+    '.env',
+    'google_token.json',
+    'google_client_secret.json',
+    '.signing_key',
+    'auth.json',
+)
+
+
+def fix_credential_permissions() -> None:
+    """Ensure sensitive files in HERMES_HOME are chmod 600 (owner-only)."""
+    hermes_home = Path(os.environ.get('HERMES_HOME', str(Path.home() / '.hermes')))
+    if not hermes_home.is_dir():
+        return
+    for name in _SENSITIVE_FILES:
+        fpath = hermes_home / name
+        if not fpath.exists():
+            continue
+        try:
+            current = stat.S_IMODE(fpath.stat().st_mode)
+            if current & 0o077:  # group or other bits set
+                fpath.chmod(0o600)
+                print(f'  [security] fixed permissions on {fpath.name} ({oct(current)} -> 0600)', flush=True)
+        except OSError:
+            pass  # best-effort; don't abort startup
+
+
 def _agent_dir() -> Path | None:
     hermes_home = Path(os.environ.get('HERMES_HOME', str(Path.home() / '.hermes')))
     for raw in [os.environ.get('HERMES_WEBUI_AGENT_DIR', '').strip(), str(hermes_home / 'hermes-agent')]:
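The `current & 0o077` test above relies on POSIX permission semantics: the mask covers every bit except the owner's. A quick sketch of the same check, assuming a POSIX filesystem:

```python
import os
import stat
import tempfile

def needs_tightening(path: str) -> bool:
    """True if any group/other permission bit is set (same test as above)."""
    return bool(stat.S_IMODE(os.stat(path).st_mode) & 0o077)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o644)   # group/other can read -> flagged
print(needs_tightening(path))
os.chmod(path, 0o600)   # owner-only -> fine
print(needs_tightening(path))
os.remove(path)
```

Checking the bits before calling `chmod` keeps the startup hook idempotent: files already at `0600` are left untouched and produce no log line.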
@@ -11,11 +11,12 @@ import traceback
|
|||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
|
|
||||||
from api.config import (
|
from api.config import (
|
||||||
STREAMS, STREAMS_LOCK, CANCEL_FLAGS, CLI_TOOLSETS,
|
STREAMS, STREAMS_LOCK, CANCEL_FLAGS, AGENT_INSTANCES, CLI_TOOLSETS,
|
||||||
LOCK, SESSIONS, SESSION_DIR,
|
LOCK, SESSIONS, SESSION_DIR,
|
||||||
_get_session_agent_lock, _set_thread_env, _clear_thread_env,
|
_get_session_agent_lock, _set_thread_env, _clear_thread_env,
|
||||||
resolve_model_provider,
|
resolve_model_provider,
|
||||||
)
|
)
|
||||||
|
from api.helpers import redact_session_data
|
||||||
|
|
||||||
# Global lock for os.environ writes. Per-session locks (_agent_lock) prevent
|
# Global lock for os.environ writes. Per-session locks (_agent_lock) prevent
|
||||||
# concurrent runs of the SAME session, but two DIFFERENT sessions can still
|
# concurrent runs of the SAME session, but two DIFFERENT sessions can still
|
||||||
@@ -28,6 +29,23 @@ try:
     from run_agent import AIAgent
 except ImportError:
     AIAgent = None
 
+
+def _get_ai_agent():
+    """Return AIAgent class, retrying the import if the initial attempt failed.
+
+    auto_install_agent_deps() in server.py may install missing packages after
+    this module is first imported (common in Docker with a volume-mounted agent).
+    Re-attempting the import here picks up the newly installed packages without
+    requiring a server restart.
+    """
+    global AIAgent
+    if AIAgent is None:
+        try:
+            from run_agent import AIAgent as _cls  # noqa: PLC0415
+            AIAgent = _cls
+        except ImportError:
+            pass
+    return AIAgent
+
 from api.models import get_session, title_from
 from api.workspace import set_last_workspace
@@ -111,15 +129,15 @@ def _run_agent_streaming(session_id, msg_text, model, workspace, stream_id, atta
     # The finally block re-acquires to restore — keeping critical sections short
     # and preventing a deadlock where the restore would re-enter the same lock.
     with _ENV_LOCK:
         old_cwd = os.environ.get('TERMINAL_CWD')
         old_exec_ask = os.environ.get('HERMES_EXEC_ASK')
         old_session_key = os.environ.get('HERMES_SESSION_KEY')
         old_hermes_home = os.environ.get('HERMES_HOME')
         os.environ['TERMINAL_CWD'] = str(s.workspace)
         os.environ['HERMES_EXEC_ASK'] = '1'
         os.environ['HERMES_SESSION_KEY'] = session_id
         if _profile_home:
             os.environ['HERMES_HOME'] = _profile_home
     # Lock released — agent runs without holding it
     # Register a gateway-style notify callback so the approval system can
     # push the `approval` SSE event the moment a dangerous command is
@@ -165,7 +183,8 @@ def _run_agent_streaming(session_id, msg_text, model, workspace, stream_id, atta
     except ImportError:
         pass
 
-    if AIAgent is None:
+    _AIAgent = _get_ai_agent()
+    if _AIAgent is None:
         raise ImportError("AIAgent not available -- check that hermes-agent is on sys.path")
     resolved_model, resolved_provider, resolved_base_url = resolve_model_provider(model)
@@ -206,7 +225,7 @@ def _run_agent_streaming(session_id, msg_text, model, workspace, stream_id, atta
     else:
         _fallback_resolved = None
 
-    agent = AIAgent(
+    agent = _AIAgent(
         model=resolved_model,
         provider=resolved_provider,
         base_url=resolved_base_url,
@@ -219,6 +238,20 @@ def _run_agent_streaming(session_id, msg_text, model, workspace, stream_id, atta
         stream_delta_callback=on_token,
         tool_progress_callback=on_tool,
     )
+
+    # Store agent instance for cancel/interrupt propagation
+    with STREAMS_LOCK:
+        AGENT_INSTANCES[stream_id] = agent
+        # Check if cancel was requested during agent initialization
+        if stream_id in CANCEL_FLAGS and CANCEL_FLAGS[stream_id].is_set():
+            # Cancel arrived during agent creation - interrupt immediately
+            try:
+                agent.interrupt("Cancelled before start")
+            except Exception:
+                pass
+            put('cancel', {'message': 'Cancelled by user'})
+            return
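The store-then-check step closes the window where a cancel request lands while the agent is still being constructed. A minimal sketch of that pattern (the names mirror the diff but `FakeAgent` and `register_agent` are illustrative, not the module's actual API):

```python
import threading

# Shared state, as in api/config.py: stream_id -> agent / cancel event.
AGENT_INSTANCES: dict[str, object] = {}
CANCEL_FLAGS: dict[str, threading.Event] = {}
STREAMS_LOCK = threading.Lock()

class FakeAgent:
    """Stand-in for AIAgent; records whether interrupt() was called."""
    def __init__(self) -> None:
        self.interrupted = False
    def interrupt(self, reason: str) -> None:
        self.interrupted = True

def register_agent(stream_id: str, agent: FakeAgent) -> bool:
    """Store the agent; return False if a pending cancel aborted the run.

    Storing and re-checking the flag happen under the same lock, so a cancel
    that arrived during agent construction cannot be missed.
    """
    with STREAMS_LOCK:
        AGENT_INSTANCES[stream_id] = agent
        flag = CANCEL_FLAGS.get(stream_id)
        if flag is not None and flag.is_set():
            agent.interrupt("Cancelled before start")
            return False
    return True

# Simulate: cancel arrives while the agent is "initializing" (before registration).
CANCEL_FLAGS['s1'] = threading.Event()
CANCEL_FLAGS['s1'].set()
agent = FakeAgent()
started = register_agent('s1', agent)
```

Without the re-check, the cancel flag set before registration would only stop the SSE stream while the freshly created agent kept executing tools, which is exactly the bug #244 fixes.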
 
     # Prepend workspace context so the agent always knows which directory
     # to use for file operations, regardless of session age or AGENTS.md defaults.
     workspace_ctx = f"[Workspace: {s.workspace}]\n"
@@ -404,7 +437,8 @@ def _run_agent_streaming(session_id, msg_text, model, workspace, stream_id, atta
             usage['context_length'] = getattr(_cc, 'context_length', 0) or 0
             usage['threshold_tokens'] = getattr(_cc, 'threshold_tokens', 0) or 0
             usage['last_prompt_tokens'] = getattr(_cc, 'last_prompt_tokens', 0) or 0
-        put('done', {'session': s.compact() | {'messages': s.messages, 'tool_calls': tool_calls}, 'usage': usage})
+        raw_session = s.compact() | {'messages': s.messages, 'tool_calls': tool_calls}
+        put('done', {'session': redact_session_data(raw_session), 'usage': usage})
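The real `redact_session_data()` lives in `api/helpers` and is not shown in this diff; the sketch below only illustrates the shape of a response-layer redactor (the `redact` helper and the token patterns are assumptions, not the project's actual rules): walk the session payload recursively and mask credential-shaped strings before serialization.

```python
import re

# Hypothetical credential patterns; the project's real rules may differ.
_SECRET_RE = re.compile(r'\b(sk-[A-Za-z0-9]{8,}|ghp_[A-Za-z0-9]{8,})\b')

def redact(value):
    """Recursively mask credential-shaped strings in a JSON-like structure."""
    if isinstance(value, str):
        return _SECRET_RE.sub('[REDACTED]', value)
    if isinstance(value, list):
        return [redact(v) for v in value]
    if isinstance(value, dict):
        return {k: redact(v) for k, v in value.items()}
    return value

session = {
    'title': 'debug key sk-abcdef123456',
    'messages': [{'content': 'use ghp_0123456789abcdef for the push'}],
}
clean = redact(session)
```

Redacting at the response layer means the on-disk session stays intact while every egress path (`/api/session`, the export download, the SSE `done` event, `/api/memory`) shares one sanitization point.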
     finally:
         # Unregister the gateway approval callback and unblock any threads
         # still waiting on approval (e.g. stream cancelled mid-approval).
@@ -442,6 +476,7 @@ def _run_agent_streaming(session_id, msg_text, model, workspace, stream_id, atta
         with STREAMS_LOCK:
             STREAMS.pop(stream_id, None)
             CANCEL_FLAGS.pop(stream_id, None)
+            AGENT_INSTANCES.pop(stream_id, None)  # Clean up agent instance reference
 
 # ============================================================
 # SECTION: HTTP Request Handler
@@ -456,9 +491,31 @@ def cancel_stream(stream_id: str) -> bool:
     with STREAMS_LOCK:
         if stream_id not in STREAMS:
             return False
+
+        # Set WebUI layer cancel flag
         flag = CANCEL_FLAGS.get(stream_id)
         if flag:
             flag.set()
+
+        # Interrupt the AIAgent instance to stop tool execution
+        agent = AGENT_INSTANCES.get(stream_id)
+        if agent:
+            try:
+                agent.interrupt("Cancelled by user")
+            except Exception as e:
+                # Log but don't block the cancel flow
+                import logging
+                logging.getLogger(__name__).debug(
+                    f"Failed to interrupt agent for stream {stream_id}: {e}"
+                )
+        else:
+            # Agent not yet stored - cancel_event flag will be checked by agent thread
+            import logging
+            logging.getLogger(__name__).debug(
+                f"Cancel requested for stream {stream_id} before agent ready - "
+                f"cancel_event flag set, will be checked on agent startup"
+            )
+
         # Put a cancel sentinel into the queue so the SSE handler wakes up
         q = STREAMS.get(stream_id)
         if q:
docker-compose.yml
@@ -4,19 +4,27 @@ services:
   hermes-webui:
     build: .
     ports:
+      # Select only one; use the 127.0.0.1 variant to expose to localhost only
       - "127.0.0.1:8787:8787"
+      # - "8787:8787"
     volumes:
-      # Persist session data, settings, and projects across restarts
-      - hermes-data:/data
+      # Inside the container the tool expects to find .hermes at /home/hermeswebui/.hermes, so we mount it there; this lets you manage agent profiles and other .hermes-based features from your host. Adapt the path if your HERMES_HOME differs.
       # Mount hermes home for agent features and profile management
-      - ${HERMES_HOME:-${HOME}/.hermes}:/root/.hermes
+      - ${HERMES_HOME:-${HOME}/.hermes}:/home/hermeswebui/.hermes
+      # Your workspace directory shown on first launch (adapt if yours differs; the container uses the mounted /workspace)
+      - ${HERMES_HOME:-${HOME}}/workspace:/workspace
     environment:
+      # Set the UID and GID to match your user; docker compose starts as root by default, but the container drops privileges to the specified UID/GID
+      - WANTED_UID=${UID:-1000}
+      - WANTED_GID=${GID:-1000}
+      # Required: bind address and port
       - HERMES_WEBUI_HOST=0.0.0.0
       - HERMES_WEBUI_PORT=8787
-      - HERMES_WEBUI_STATE_DIR=/data
+      # Where to store sessions, workspaces, and other state (default: ~/.hermes/webui-mvp)
+      - HERMES_WEBUI_STATE_DIR=/home/hermeswebui/.hermes/webui-mvp
+      # Default workspace directory shown on first launch
+      # - HERMES_WEBUI_DEFAULT_WORKSPACE=/workspace
       # Optional: set a password for remote access
       # - HERMES_WEBUI_PASSWORD=your-secret-password
     restart: unless-stopped
-
-volumes:
-  hermes-data:
docker_init.bash (new file, 228 lines)
@@ -0,0 +1,228 @@
#!/bin/bash

set -e

error_exit() {
  echo -n "!! ERROR: "
  echo "$*"
  echo "!! Exiting script (ID: $$)"
  exit 1
}

ok_exit() {
  echo "$*"
  echo "++ Exiting script (ID: $$)"
  exit 0
}
## Environment variables carried over when switching from one user to the other
# Ignore list: variables never copied across the user switch
export ENV_IGNORELIST="HOME PWD USER SHLVL TERM OLDPWD SHELL _ SUDO_COMMAND HOSTNAME LOGNAME MAIL SUDO_GID SUDO_UID SUDO_USER CHECK_NV_CUDNN_VERSION VIRTUAL_ENV VIRTUAL_ENV_PROMPT ENV_IGNORELIST ENV_OBFUSCATE_PART"
# Obfuscate part: substrings of a key whose value must be obfuscated in log output, e.g. HF_TOKEN, ...
export ENV_OBFUSCATE_PART="TOKEN API KEY"

# Check for ENV_IGNORELIST and ENV_OBFUSCATE_PART
if [ -z "${ENV_IGNORELIST+x}" ]; then error_exit "ENV_IGNORELIST not set"; fi
if [ -z "${ENV_OBFUSCATE_PART+x}" ]; then error_exit "ENV_OBFUSCATE_PART not set"; fi
whoami=$(whoami)
script_dir=$(dirname "$0")
script_name=$(basename "$0")
echo ""; echo ""
echo "======================================"
echo "=================== Starting script (ID: $$)"
echo "== Running ${script_name} in ${script_dir} as ${whoami}"
script_fullname=$0
echo " - script_fullname: ${script_fullname}"
ignore_value="VALUE_TO_IGNORE"

# everyone can read our files by default
umask 0022
# Write a world-writeable file (preferably inside /tmp -- i.e. within the container)
write_worldtmpfile() {
  tmpfile=$1
  if [ -z "${tmpfile}" ]; then error_exit "write_worldtmpfile: missing argument"; fi
  if [ -f "$tmpfile" ]; then rm -f "$tmpfile"; fi
  echo -n "$2" > "${tmpfile}"
  chmod 777 "${tmpfile}"
}

itdir=/tmp/hermeswebui_init
if [ ! -d $itdir ]; then mkdir $itdir; chmod 777 $itdir; fi
if [ ! -d $itdir ]; then error_exit "Failed to create $itdir"; fi
# Set user and group id
# Logic: if not set and the persistence file exists, use the file value, else use the default;
# write the file so the value persists when the container is re-run.
# Reasoning: needed with docker compose, since the file survives in the stopped container, and a
# changed value (from environment variables or configuration) must be propagated across the
# hermeswebuitoo -> hermeswebui transition (these values are the only ones loaded before the
# environment-variable dump file is loaded).
it=$itdir/hermeswebui_user_uid
if [ -z "${WANTED_UID+x}" ]; then
  if [ -f $it ]; then WANTED_UID=$(cat $it); fi
fi
WANTED_UID=${WANTED_UID:-1024}
write_worldtmpfile $it "$WANTED_UID"
echo "-- WANTED_UID: \"${WANTED_UID}\""

it=$itdir/hermeswebui_user_gid
if [ -z "${WANTED_GID+x}" ]; then
  if [ -f $it ]; then WANTED_GID=$(cat $it); fi
fi
WANTED_GID=${WANTED_GID:-1024}
write_worldtmpfile $it "$WANTED_GID"
echo "-- WANTED_GID: \"${WANTED_GID}\""
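The env-then-file-then-default resolution above, ported to Python purely for clarity (illustrative, not part of the repo; the `1024` fallback mirrors the script): prefer the environment, then the value persisted by a previous container run, then the default, and persist whichever wins.

```python
import os
import tempfile
from pathlib import Path

def resolve_wanted(name: str, state_dir: Path, default: str = '1024') -> str:
    """Resolve a value from env, then a persisted file, then a default; persist it."""
    state_file = state_dir / name
    value = os.environ.get(name)
    if value is None and state_file.is_file():
        value = state_file.read_text().strip()
    value = value or default
    state_file.write_text(value)  # persist for the next run
    return value

with tempfile.TemporaryDirectory() as d:
    state = Path(d)
    os.environ.pop('WANTED_UID', None)
    first = resolve_wanted('WANTED_UID', state)   # nothing set: default wins
    os.environ['WANTED_UID'] = '1000'
    second = resolve_wanted('WANTED_UID', state)  # env overrides the file
    os.environ.pop('WANTED_UID', None)
    third = resolve_wanted('WANTED_UID', state)   # persisted value wins
```

This is why a `WANTED_UID` changed in `docker-compose.yml` survives a container restart: the new env value overwrites the persistence file, and later runs without the env var still pick it up.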
echo "== Most environment variables set"

# Check user id and group id
new_gid=$(id -g)
new_uid=$(id -u)
echo "== user ($whoami)"
echo " uid: $new_uid / WANTED_UID: $WANTED_UID"
echo " gid: $new_gid / WANTED_GID: $WANTED_GID"
save_env() {
  tosave=$1
  echo "-- Saving environment variables to $tosave"
  env | sort > "$tosave"
}

load_env() {
  tocheck=$1
  overwrite_if_different=$2
  ignore_list="${ENV_IGNORELIST}"
  obfuscate_part="${ENV_OBFUSCATE_PART}"
  if [ -f "$tocheck" ]; then
    echo "-- Loading environment variables from $tocheck (overwrite existing: $overwrite_if_different) (ignorelist: $ignore_list) (obfuscate: $obfuscate_part)"
    while IFS='=' read -r key value; do
      doit=false
      # skip keys on the ignore list
      for i in $ignore_list; do
        if [[ "A$key" == "A$i" ]]; then doit=ignore; break; fi
      done
      if [[ "A$doit" == "Aignore" ]]; then continue; fi
      rvalue=$value
      # obfuscate the logged value if part of the key matches the obfuscate list
      doobs=false
      for i in $obfuscate_part; do
        if [[ "A$key" == *"$i"* ]]; then doobs=obfuscate; break; fi
      done
      if [[ "A$doobs" == "Aobfuscate" ]]; then rvalue="**OBFUSCATED**"; fi

      if [ -z "${!key}" ]; then
        echo " ++ Setting environment variable $key [$rvalue]"
        doit=true
      elif [ "A$overwrite_if_different" == "Atrue" ]; then
        cvalue="${!key}"
        if [[ "A${doobs}" == "Aobfuscate" ]]; then cvalue="**OBFUSCATED**"; fi
        if [[ "A${!key}" != "A${value}" ]]; then
          echo " @@ Overwriting environment variable $key [$cvalue] -> [$rvalue]"
          doit=true
        else
          echo " == Environment variable $key [$rvalue] already set and value is unchanged"
        fi
      fi
      if [[ "A$doit" == "Atrue" ]]; then
        export "$key=$value"
      fi
    done < "$tocheck"
  fi
}
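The `load_env` logic above, sketched in Python (an illustrative port, not part of the repo): ignored keys are skipped, keys containing an obfuscation substring are masked in the log output, and existing values are only overwritten when requested.

```python
# Hypothetical, trimmed-down lists; the script's real lists are longer.
IGNORE = {'HOME', 'PWD', 'USER'}
OBFUSCATE = ('TOKEN', 'API', 'KEY')

def load_env(dump: dict[str, str], env: dict[str, str], overwrite: bool) -> list[str]:
    """Merge dumped variables into env; return the log lines produced."""
    log = []
    for key, value in dump.items():
        if key in IGNORE:
            continue  # never carry these across the user switch
        shown = '**OBFUSCATED**' if any(p in key for p in OBFUSCATE) else value
        if key not in env:
            log.append(f'++ Setting {key} [{shown}]')
            env[key] = value
        elif overwrite and env[key] != value:
            log.append(f'@@ Overwriting {key} -> [{shown}]')
            env[key] = value
    return log

env = {'EXISTING': 'old'}
log = load_env(
    {'HOME': '/root', 'HF_TOKEN': 'secret123', 'EXISTING': 'new'},
    env,
    overwrite=True,
)
```

The key property, in both versions: the real secret value is exported into the environment, while only the masked form ever reaches the log.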

# hermeswebuitoo is a specific user that does not exist by default on Ubuntu, so we can check its whoami
if [ "A${whoami}" == "Ahermeswebuitoo" ]; then
  echo "-- Running as hermeswebuitoo, will switch hermeswebui to the desired UID/GID"
  # The script is started as hermeswebuitoo -- UID/GID 1025/1025

  # We alter the UID/GID of the hermeswebui user to the desired ones and restart as that user,
  # using usermod on the already created hermeswebui user, knowing it is not in use.
  # Per the usermod manual: "You must make certain that the named user is not executing any processes when this command is being executed"
  sudo groupmod -o -g ${WANTED_GID} hermeswebui || error_exit "Failed to set GID of hermeswebui user"
  sudo usermod -o -u ${WANTED_UID} hermeswebui || error_exit "Failed to set UID of hermeswebui user"
  sudo chown -R ${WANTED_UID}:${WANTED_GID} /home/hermeswebui || error_exit "Failed to set owner of /home/hermeswebui"
  save_env /tmp/hermeswebuitoo_env.txt
  # restart the script as hermeswebui, set with the correct UID/GID this time
  echo "-- Restarting as hermeswebui user with UID ${WANTED_UID} GID ${WANTED_GID}"
  sudo su hermeswebui $script_fullname || error_exit "subscript failed"
  ok_exit "Clean exit"
fi

# If we are here, the script was started as a user other than hermeswebuitoo.
# Because the whoami of the hermeswebui user can be any existing user, we cannot check against it;
# instead we check that the UID/GID are the expected ones.
if [ "$WANTED_GID" != "$new_gid" ]; then error_exit "hermeswebui MUST be running as UID ${WANTED_UID} GID ${WANTED_GID}, current UID ${new_uid} GID ${new_gid}"; fi
if [ "$WANTED_UID" != "$new_uid" ]; then error_exit "hermeswebui MUST be running as UID ${WANTED_UID} GID ${WANTED_GID}, current UID ${new_uid} GID ${new_gid}"; fi
########## 'hermeswebui' specific section below

# We are therefore running as hermeswebui
echo ""; echo "== Running as hermeswebui"

# Load environment variables one by one from /tmp/hermeswebuitoo_env.txt if they are not already set
it=/tmp/hermeswebuitoo_env.txt
if [ -f $it ]; then
  echo "-- Loading not already set environment variables from $it"
  load_env $it true
fi

##
echo ""; echo "-- Making sure /app is owned by the hermeswebui user to avoid permission issues when running the server"
sudo mkdir -p /app || error_exit "Failed to create /app directory"
sudo chown hermeswebui:hermeswebui /app || error_exit "Failed to set owner of /app to hermeswebui user"
sudo rsync -av --chown=hermeswebui:hermeswebui /apptoo/ /app/ || error_exit "Failed to sync /apptoo to /app with correct ownership"
it=/app/.testfile; touch $it || error_exit "Failed to verify /app directory"
rm -f $it || error_exit "Failed to delete test file in /app"
######## Environment variables (consume AFTER the load_env)

echo ""; echo "== Checking required environment variables for hermes-webui"

echo ""; echo "-- HERMES_WEBUI_STATE_DIR: Where to store sessions, workspaces, and other state (default: ~/.hermes/webui-mvp)"
if [ -z "${HERMES_WEBUI_STATE_DIR+x}" ]; then error_exit "HERMES_WEBUI_STATE_DIR not set"; fi
echo "-- HERMES_WEBUI_STATE_DIR: $HERMES_WEBUI_STATE_DIR"
if [ ! -d "$HERMES_WEBUI_STATE_DIR" ]; then mkdir -p "$HERMES_WEBUI_STATE_DIR" || error_exit "Failed to create state directory at $HERMES_WEBUI_STATE_DIR"; fi
if [ ! -d "$HERMES_WEBUI_STATE_DIR" ]; then error_exit "HERMES_WEBUI_STATE_DIR directory does not exist at $HERMES_WEBUI_STATE_DIR"; fi
it="$HERMES_WEBUI_STATE_DIR/.testfile"; touch "$it" || error_exit "Failed to verify state directory at $HERMES_WEBUI_STATE_DIR"
rm -f "$it" || error_exit "Failed to delete test file in $HERMES_WEBUI_STATE_DIR"

echo ""; echo "-- HERMES_WEBUI_DEFAULT_WORKSPACE: Default workspace directory shown on first launch"
if [ -z "${HERMES_WEBUI_DEFAULT_WORKSPACE+x}" ]; then echo "HERMES_WEBUI_DEFAULT_WORKSPACE not set, setting to /workspace"; export HERMES_WEBUI_DEFAULT_WORKSPACE="/workspace"; fi
echo "-- HERMES_WEBUI_DEFAULT_WORKSPACE: $HERMES_WEBUI_DEFAULT_WORKSPACE"
if [ ! -d "$HERMES_WEBUI_DEFAULT_WORKSPACE" ]; then mkdir -p "$HERMES_WEBUI_DEFAULT_WORKSPACE" || error_exit "Failed to create default workspace at $HERMES_WEBUI_DEFAULT_WORKSPACE"; fi
if [ ! -d "$HERMES_WEBUI_DEFAULT_WORKSPACE" ]; then error_exit "HERMES_WEBUI_DEFAULT_WORKSPACE directory does not exist at $HERMES_WEBUI_DEFAULT_WORKSPACE"; fi
it="$HERMES_WEBUI_DEFAULT_WORKSPACE/.testfile"; touch "$it" || error_exit "Failed to verify default workspace at $HERMES_WEBUI_DEFAULT_WORKSPACE"
rm -f "$it" || error_exit "Failed to delete test file in $HERMES_WEBUI_DEFAULT_WORKSPACE"
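The create-then-probe pattern the script uses for both directories can be expressed as one reusable check. A sketch in Python (illustrative helper, not part of the repo): a directory is only considered usable if a probe file can actually be created and removed in it.

```python
import tempfile
from pathlib import Path

def ensure_writable_dir(path: Path) -> bool:
    """Create the directory if needed and verify it is writable via a probe file."""
    path.mkdir(parents=True, exist_ok=True)
    probe = path / '.testfile'
    try:
        probe.touch()
        probe.unlink()
        return True
    except OSError:
        return False

with tempfile.TemporaryDirectory() as d:
    ok = ensure_writable_dir(Path(d) / 'webui-mvp')
```

Probing with a real file catches the case this PR is about: a volume mount that exists but is owned by root, which `[ -d ... ]` alone would not detect.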
echo ""; echo "==================="
echo ""; echo "== Installing uv and creating a new virtual environment for hermes-webui"

curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="/home/hermeswebui/.local/bin/:$PATH"
export UV_PROJECT_ENVIRONMENT=venv

export UV_CACHE_DIR=/uv_cache
sudo mkdir -p ${UV_CACHE_DIR} || error_exit "Failed to create /uv_cache directory"
sudo chown hermeswebui:hermeswebui ${UV_CACHE_DIR} || error_exit "Failed to set owner of ${UV_CACHE_DIR} to hermeswebui user"

cd /app
uv venv venv
export VIRTUAL_ENV=/app/venv
test -d /app/venv
test -f /app/venv/bin/activate

echo ""; echo "== Activating hermes webui's virtual environment"
source /app/venv/bin/activate || error_exit "Failed to activate hermeswebui virtual environment"
test -x /app/venv/bin/python3

echo ""; echo "== Installing hermes-webui dependencies"
uv pip install -r requirements.txt --trusted-host pypi.org --trusted-host files.pythonhosted.org
uv pip install -U pip setuptools --trusted-host pypi.org --trusted-host files.pythonhosted.org
test -x /app/venv/bin/pip

echo ""; echo "== Adding hermes-agent's pyproject.toml base dependencies to the virtual environment"
uv pip install /home/hermeswebui/.hermes/hermes-agent --trusted-host pypi.org --trusted-host files.pythonhosted.org || error_exit "Failed to install hermes-agent's requirements"

echo ""; echo "== Running hermes-webui"
cd /app; python server.py || error_exit "hermes-webui failed or exited with an error"

# We should never get here because the server runs indefinitely, but if we do, exit cleanly
ok_exit "Clean exit"
server.py
@@ -12,7 +12,7 @@ from api.auth import check_auth
 from api.config import HOST, PORT, STATE_DIR, SESSION_DIR, DEFAULT_WORKSPACE
 from api.helpers import j
 from api.routes import handle_get, handle_post
-from api.startup import auto_install_agent_deps
+from api.startup import auto_install_agent_deps, fix_credential_permissions
 
 
 class Handler(BaseHTTPRequestHandler):
@@ -63,6 +63,20 @@ def main() -> None:
 
     print_startup_config()
 
+    # Fix sensitive file permissions before doing anything else
+    fix_credential_permissions()
+
+    within_container = False
+    # Check for the "/.within_container" file (created in the Dockerfile) to determine if we're running inside a container
+    try:
+        with open('/.within_container', 'r'):
+            within_container = True
+    except FileNotFoundError:
+        pass
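The marker-file probe above can be demonstrated standalone; a minimal sketch ('/.within_container' is the real marker, but the demo probes a temporary path so it runs anywhere):

```python
import os
import tempfile

def detect_marker(path: str) -> bool:
    """Return True if the container marker file exists."""
    try:
        with open(path, 'r'):
            return True
    except FileNotFoundError:
        return False

# Create a stand-in marker, check it, delete it, check again.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    marker = tmp.name
inside = detect_marker(marker)
os.unlink(marker)
gone = detect_marker(marker)
```

Opening and catching `FileNotFoundError`, rather than a bare existence check, also fails cleanly if the marker is unreadable for any other reason than absence.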
+
+    if within_container:
+        print('[ok] Running within container.', flush=True)
+
     # Security: warn if binding non-loopback without authentication
     from api.auth import is_auth_enabled
     if HOST not in ('127.0.0.1', '::1', 'localhost') and not is_auth_enabled():
@@ -70,6 +84,12 @@ def main() -> None:
         print(f' Anyone on the network can access your filesystem and agent.', flush=True)
         print(f' Set a password via Settings or HERMES_WEBUI_PASSWORD env var.', flush=True)
         print(f' To suppress: bind to 127.0.0.1 or set a password.', flush=True)
+        if within_container:
+            print(f' Note: you are running within a container; you must bind to 0.0.0.0 to publish the port.', flush=True)
+    elif not is_auth_enabled():
+        print(f' [tip] No password set. Any process on this machine can read sessions', flush=True)
+        print(f' and memory via the local API. Set HERMES_WEBUI_PASSWORD to', flush=True)
+        print(f' enable authentication.', flush=True)
 
     ok, missing, errors = verify_hermes_imports()
     if not ok and _HERMES_FOUND:
@@ -108,7 +128,7 @@ def main() -> None:
     scheme = 'http'
 
     print(f' Hermes Web UI listening on {scheme}://{HOST}:{PORT}', flush=True)
-    if HOST == '127.0.0.1':
+    if HOST == '127.0.0.1' or within_container:
         print(f' Remote access: ssh -N -L {PORT}:127.0.0.1:{PORT} <user>@<your-server>', flush=True)
         print(f' Then open: {scheme}://localhost:{PORT}', flush=True)
     print('', flush=True)
static/boot.js
@@ -4,7 +4,7 @@ async function cancelStream(){
 try{
   await fetch(new URL(`/api/chat/cancel?stream_id=${encodeURIComponent(streamId)}`,location.origin).href,{credentials:'include'});
   const btn=$('btnCancel');if(btn)btn.style.display='none';
-  setStatus(t('cancelling'));
+  // Don't set status here - let the SSE cancel event handle UI cleanup
 }catch(e){setStatus(t('cancel_failed')+e.message);}
}
static/i18n.js
@@ -366,7 +366,7 @@ const LOCALES = {
 
   zh: {
     _lang: 'zh',
-    _label: '\u4e2d\u6587',
+    _label: '\u7b80\u4f53\u4e2d\u6587',
     _speech: 'zh-CN',
     // boot.js
     cancelling: '\u6b63\u5728\u53d6\u6d88...',
@@ -496,6 +496,246 @@ const LOCALES = {
|
|||||||
login_btn: '\u767b\u5f55',
|
login_btn: '\u767b\u5f55',
|
||||||
login_invalid_pw: '\u5bc6\u7801\u9519\u8bef',
|
login_invalid_pw: '\u5bc6\u7801\u9519\u8bef',
|
||||||
login_conn_failed: '\u8fde\u63a5\u5931\u8d25',
|
login_conn_failed: '\u8fde\u63a5\u5931\u8d25',
|
||||||
|
// missing keys from English
|
||||||
|
tab_chat: '\u804a\u5929',
|
||||||
|
tab_memory: '\u8a18\u61b6',
|
||||||
|
tab_skills: '\u6280\u80fd',
|
||||||
|
tab_tasks: '\u4efb\u52d9',
|
||||||
|
tab_todos: '\u5f85\u8e29',
|
||||||
|
tab_workspaces: '\u5de5\u4f5c\u5340',
|
||||||
|
new_conversation: '\u65b0\u5b58\u5c0d\u8a71',
|
||||||
|
filter_conversations: '\u7b5c\u9078\u5b58\u5c0d\u8a71',
|
||||||
|
scheduled_jobs: '\u5b58\u5287\u4efb\u52d9',
|
||||||
|
new_job: '\u65b0\u4efb\u52d9',
|
||||||
|
search_skills: '\u641c\u5c0b\u6280\u80fd',
|
||||||
|
new_skill: '\u65b0\u6280\u80fd',
|
||||||
|
save_skill: '\u5132\u5b58\u6280\u80fd',
|
||||||
|
personal_memory: '\u500b\u4eba\u8a18\u61b6',
|
||||||
|
current_task_list: '\u76ee\u524d\u4efb\u52d9\u6e05\u55ae',
|
||||||
|
new_profile: '\u65b0\u914d\u7f6e\u6a94',
|
||||||
|
transcript: '\u8a18\u9304',
|
||||||
|
download_transcript: '\u4e0b\u8f09\u8a18\u9304',
|
||||||
|
import: '\u5c0e\u5165',
|
||||||
|
editing: '\u7de8\u8f2f\u4e2d',
|
||||||
|
empty_title: '\u7a7a\u767c\u5b58\u7a7a\u9593',
|
||||||
|
empty_subtitle: '\u9ede\u64ca\u4e0a\u65b9\u6309\u9215\u958b\u59cb\u5c0d\u8a71',
|
||||||
|
cancel: '\u53d6\u6d88',
|
||||||
|
loading: '\u52a0\u8f09\u4e2d',
|
||||||
|
create_job: '\u5efa\u7acb\u4efb\u52d9',
|
||||||
|
suggest_plan: '\u5efa\u8b70\u8a08\u5287',
|
||||||
|
suggest_schedule: '\u5efa\u8b70\u6642\u7a0b',
|
||||||
|
suggest_files: '\u5efa\u8b70\u6a94\u6848',
|
||||||
|
sign_out: '\u767b\u51fa',
|
||||||
|
password_placeholder: '\u5bc6\u7801',
|
||||||
|
disable_auth: '\u505c\u7528\u9a57\u8b49',
|
||||||
|
settings_label_sound: '\u901a\u77e5\u8072\u97f3',
|
||||||
|
settings_label_notifications: '\u700f\u89bd\u901a\u77e5',
|
||||||
|
settings_desc_sound: '助手完成回答時播放聲音。',
settings_desc_notifications: '當分頁在後台時，回答完成會顯示系統通知。',
settings_desc_token_usage: '在助手每次回答下方顯示 Input/Output token 數量。也可以用 /usage 切換。',
settings_desc_cli_sessions: '將 Hermes CLI (state.db) 中的會話添加到會話清單。點擊一個 CLI 會話將導入它並繼續對話。',
settings_desc_sync_insights: '將 WebUI token 使用情況同步到 state.db，讓 hermes /insights 包含瀏覽器會話數據。預設已啟用。',
settings_desc_check_updates: '當有更新的 WebUI 或助手版本時顯示標記。將在後台定期執行 Git-Fetch。',
settings_desc_bot_name: '助手在 UI 中的顯示名稱。預設不用改。',
settings_desc_password: '設定 WebUI 登入密碼。假如已設置，每次加載都需要登入。',
},

// Traditional Chinese (zh-Hant)
'zh-Hant': {
_lang: 'zh-Hant',
_label: '繁體中文',
_speech: 'zh-TW',
// boot.js
cancelling: '正在取消...',
cancel_failed: '取消失敗：',
mic_denied: '麥克風訪問被拒絕，請檢查瀏覽器權限。',
mic_no_speech: '沒有檢測到語音，請再試一次。',
mic_network: '語音識別目前不可用。',
mic_error: '語音輸入出錯：',
session_imported: '會話已導入',
import_failed: '導入失敗：',
import_invalid_json: 'JSON 無效',
image_pasted: '已粘貼圖片：',
// messages.js
edit_message: '編輯訊息',
regenerate: '重新生成回覆',
copy: '複製',
copied: '已複製',
you: '你',
thinking: '思考過程',
expand_all: '全部展開',
collapse_all: '全部折疊',
edit_failed: '編輯失敗：',
regen_failed: '重新生成失敗：',
reconnect_active: '回覆仍在生成中，準備好後要重新加載嗎？',
reconnect_finished: '你離開時有回覆正在生成，訊息內容可能已經更新。',
// approval card
approval_heading: '需要審核',
approval_desc_prefix: '檢測到危險命令',
approval_btn_once: '允許一次',
approval_btn_once_title: '允許執行此命令一次（Enter）',
approval_btn_session: '本次允許',
approval_btn_session_title: '本次會話期間允許',
approval_btn_always: '始終允許',
approval_btn_always_title: '始終允許此命令模式',
approval_btn_deny: '拒絕',
approval_btn_deny_title: '拒絕 — 不執行此命令',
approval_responding: '處理中…',
untitled: '未命名',
n_messages: (n) => `${n} 條訊息`,
model_unavailable: '（不可用）',
model_unavailable_title: '此模型已經不在當前 provider 列表中',
// commands.js
cmd_help: '查看可用命令',
cmd_clear: '清空當前對話訊息',
cmd_compact: '壓縮對話上下文',
cmd_model: '切換模型（例如 /model gpt-4o）',
cmd_workspace: '按名稱切換工作區',
cmd_new: '新建聊天會話',
cmd_usage: '切換 token 用量顯示',
cmd_theme: '切換主題（dark/light/slate/solarized/monokai/nord/oled）',
cmd_personality: '切換 Agent 人設',
available_commands: '可用命令：',
type_slash: '輸入 / 可查看命令',
conversation_cleared: '對話已清空',
model_usage: '用法：/model <name>',
no_model_match: '沒有匹配“',
switched_to: '已切換到 ',
workspace_usage: '用法：/workspace <name>',
no_workspace_match: '沒有匹配“',
switched_workspace: '已切換工作區：',
workspace_switch_failed: '工作區切換失敗：',
new_session: '已新建會話',
compressing: '正在要求壓縮上下文...',
token_usage_on: 'Token 用量顯示已開啟',
token_usage_off: 'Token 用量顯示已關閉',
theme_usage: '用法：/theme ',
theme_set: '主題：',
no_active_session: '當前沒有活動會話',
no_personalities: '沒有找到人設（可添加到 ~/.hermes/personalities/）',
available_personalities: '可用人設：',
personality_switch_hint: '\n\n使用 `/personality <name>` 切換，或用 `/personality none` 清空。',
personalities_load_failed: '加載人設失敗',
personality_cleared: '人設已清空',
personality_set: '當前人設：',
failed_colon: '失敗：',
// ui.js
no_workspace: '未選擇工作區',
// workspace.js
unsaved_confirm: '預覽區有未儲存修改，要放棄更改並繼續跳轉嗎？',
save: '儲存',
edit: '編輯',
save_title: '儲存修改',
edit_title: '編輯此文件',
saved: '已儲存',
save_failed: '儲存失敗：',
image_load_failed: '圖片加載失敗',
file_open_failed: '無法打開文件',
downloading: (name) => `正在下載 ${name}...`,
double_click_rename: '雙擊重命名',
renamed_to: '已重命名為 ',
rename_failed: '重命名失敗：',
delete_title: '刪除',
delete_confirm: (name) => `要刪除 ${name} 嗎？`,
deleted: '已刪除 ',
delete_failed: '刪除失敗：',
new_file_prompt: '新文件名（例如 notes.md）：',
created: '已創建 ',
create_failed: '創建失敗：',
new_folder_prompt: '新文件夾名稱：',
folder_created: '已創建文件夾 ',
folder_create_failed: '創建文件夾失敗：',
remove_title: '移除',
empty_dir: '(空)',
upload_failed: '上傳失敗：',
all_uploads_failed: (n) => `${n} 個文件全部上傳失敗`,
// settings panel
settings_title: '設定',
settings_save_btn: '儲存設定',
settings_label_model: '默認模型',
settings_label_send_key: '發送快捷鍵',
settings_label_theme: '主題',
settings_label_language: '語言',
settings_label_token_usage: '顯示 token 用量',
settings_label_cli_sessions: '顯示 CLI 會話',
settings_label_sync_insights: '同步到 insights',
settings_label_check_updates: '檢查更新',
settings_label_bot_name: '助手名稱',
settings_label_password: '訪問密碼',
settings_saved: '設定已儲存',
settings_save_failed: '儲存失敗：',
settings_load_failed: '設定加載失敗：',
settings_saved_pw: '設定已儲存（密碼已設定—現在需要登入）',
// login page
login_title: '登入',
login_subtitle: '輸入密碼繼續使用',
login_placeholder: '密碼',
login_btn: '登入',
login_invalid_pw: '密碼錯誤',
login_conn_failed: '連接失敗',
// missing keys from English
tab_chat: '聊天',
tab_memory: '記憶',
tab_skills: '技能',
tab_tasks: '任務',
tab_todos: '待辦',
tab_workspaces: '工作區',
new_conversation: '新建對話',
filter_conversations: '篩選對話',
scheduled_jobs: '排程任務',
new_job: '新任務',
search_skills: '搜尋技能',
new_skill: '新技能',
save_skill: '儲存技能',
personal_memory: '個人記憶',
current_task_list: '目前任務清單',
new_profile: '新配置檔',
transcript: '記錄',
download_transcript: '下載記錄',
import: '導入',
editing: '編輯中',
empty_title: '空發存空間',
empty_subtitle: '點擊上方按鈕開始對話',
cancel: '取消',
loading: '加載中',
create_job: '建立任務',
suggest_plan: '建議計劃',
suggest_schedule: '建議時程',
suggest_files: '建議檔案',
sign_out: '登出',
password_placeholder: '密碼',
disable_auth: '停用驗證',
settings_label_sound: '通知聲音',
settings_label_notifications: '瀏覽通知',
settings_desc_sound: '助手完成回答時播放聲音。',
settings_desc_notifications: '當分頁在後台時，回答完成會顯示系統通知。',
settings_desc_token_usage: '在助手每次回答下方顯示 Input/Output token 數量。也可以用 /usage 切換。',
settings_desc_cli_sessions: '將 Hermes CLI (state.db) 中的會話添加到會話清單。點擊一個 CLI 會話將導入它並繼續對話。',
settings_desc_sync_insights: '將 WebUI token 使用情況同步到 state.db，讓 hermes /insights 包含瀏覽器會話數據。預設已啟用。',
settings_desc_check_updates: '當有更新的 WebUI 或助手版本時顯示標記。將在後台定期執行 Git-Fetch。',
settings_desc_bot_name: '助手在 UI 中的顯示名稱。預設不用改。',
settings_desc_password: '設定 WebUI 登入密碼。假如已設置，每次加載都需要登入。',
// ui.js
workspace_desc: '請選擇工作區，或輸入新名稱建立一個',
tab_profiles: '配置',
},
};
@@ -14,7 +14,7 @@
 <body>
 <div class="layout">
 <aside class="sidebar">
-<div class="sidebar-header"><div class="logo">H</div><div><h1 style="margin:0;font-size:15px;font-weight:700;letter-spacing:-.01em">Hermes</h1><div style="font-size:10px;color:var(--muted);opacity:.8;margin-top:1px">v0.45.0</div></div></div>
+<div class="sidebar-header"><div class="logo">H</div><div><h1 style="margin:0;font-size:15px;font-weight:700;letter-spacing:-.01em">Hermes</h1><div style="font-size:10px;color:var(--muted);opacity:.8;margin-top:1px">v0.46.0</div></div></div>
 <div class="sidebar-nav">
 <button class="nav-tab active" data-panel="chat" data-label="Chat" onclick="switchPanel('chat')" title="Chat" data-i18n-title="tab_chat"><svg width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true"><path d="M21 15a2 2 0 0 1-2 2H7l-4 4V5a2 2 0 0 1 2-2h14a2 2 0 0 1 2 2z"/></svg></button>
 <button class="nav-tab" data-panel="tasks" data-label="Tasks" onclick="switchPanel('tasks')" title="Tasks" data-i18n-title="tab_tasks"><svg width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true"><rect x="3" y="4" width="18" height="18" rx="2"/><line x1="16" y1="2" x2="16" y2="6"/><line x1="8" y1="2" x2="8" y2="6"/><line x1="3" y1="10" x2="21" y2="10"/></svg></button>
@@ -295,7 +295,8 @@ async function send(){
 S.messages.push({role:'assistant',content:'*Task cancelled.*'});renderMessages();
 }
 renderSessionList();
-if(!S.session||!INFLIGHT[S.session.session_id]){setBusy(false);setStatus('');}
+// Always clear busy state and status when cancel event is received
+setBusy(false);setStatus('');
 });
 }
@@ -135,6 +135,10 @@ function renderMd(raw){
 function renderMd(raw){
 let s=raw||'';
+// Pre-pass: decode HTML entities first so markdown processing works correctly.
+// This prevents double-escaping when LLM outputs entities like &lt; &gt; &amp;
+const decode=s=>s.replace(/&lt;/g,'<').replace(/&gt;/g,'>').replace(/&amp;/g,'&').replace(/&quot;/g,'"').replace(/&#39;/g,"'");
+s=decode(s);
 // Pre-pass: convert safe inline HTML tags the model may emit into their
 // markdown equivalents so the pipeline can render them correctly.
 // Only runs OUTSIDE fenced code blocks and backtick spans (stash + restore).
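The decode-before-escape ordering in the hunk above can be sketched in Python (an illustrative model of the logic only, not the shipped JS; `decode_entities` and `esc` are hypothetical names):

```python
# Minimal sketch, assuming the same five entity patterns as the JS helper.
def decode_entities(s: str) -> str:
    # One decode pass turns LLM-emitted entities back into characters
    for ent, ch in (("&lt;", "<"), ("&gt;", ">"), ("&amp;", "&"),
                    ("&quot;", '"'), ("&#39;", "'")):
        s = s.replace(ent, ch)
    return s

def esc(s: str) -> str:
    # Escape exactly once for safe HTML output (the XSS-safety step)
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

raw = "use &lt;code&gt; here"
print(esc(decode_entities(raw)))  # use &lt;code&gt; here (displays as <code>)
```

Without the decode pass, `esc(raw)` would emit `&amp;lt;code&amp;gt;`, which is the double-escaping the fix removes; decoding first and escaping once keeps the output XSS-safe.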
tests/test_cancel_interrupt.py (new file, 115 lines)
@@ -0,0 +1,115 @@
"""
Unit tests for cancel/interrupt functionality.
Tests the integration between cancel_stream() and agent.interrupt().
"""
import pytest
import queue
import threading
from unittest.mock import Mock

from api.streaming import cancel_stream
from api.config import AGENT_INSTANCES, STREAMS, CANCEL_FLAGS


class TestCancelInterrupt:
    """Test suite for cancel/interrupt functionality"""

    def setup_method(self):
        """Clean up before each test"""
        AGENT_INSTANCES.clear()
        STREAMS.clear()
        CANCEL_FLAGS.clear()

    def teardown_method(self):
        """Clean up after each test"""
        AGENT_INSTANCES.clear()
        STREAMS.clear()
        CANCEL_FLAGS.clear()

    def test_cancel_calls_agent_interrupt(self):
        """Verify that cancel_stream() calls agent.interrupt() when agent exists"""
        # Setup
        stream_id = "test_stream_123"
        mock_agent = Mock()
        mock_agent.interrupt = Mock()

        STREAMS[stream_id] = queue.Queue()
        CANCEL_FLAGS[stream_id] = threading.Event()
        AGENT_INSTANCES[stream_id] = mock_agent

        # Execute
        result = cancel_stream(stream_id)

        # Assert
        assert result is True
        mock_agent.interrupt.assert_called_once_with("Cancelled by user")
        assert CANCEL_FLAGS[stream_id].is_set()

    def test_cancel_handles_interrupt_exception(self):
        """Verify that cancel_stream() handles interrupt() exceptions gracefully"""
        stream_id = "test_stream_456"
        mock_agent = Mock()
        mock_agent.interrupt = Mock(side_effect=RuntimeError("Agent error"))

        STREAMS[stream_id] = queue.Queue()
        CANCEL_FLAGS[stream_id] = threading.Event()
        AGENT_INSTANCES[stream_id] = mock_agent

        # Should not raise exception
        result = cancel_stream(stream_id)

        # Assert
        assert result is True
        mock_agent.interrupt.assert_called_once()
        assert CANCEL_FLAGS[stream_id].is_set()

    def test_cancel_before_agent_ready(self):
        """Test cancel when agent not yet stored in AGENT_INSTANCES (race condition)"""
        stream_id = "test_stream_789"

        STREAMS[stream_id] = queue.Queue()
        CANCEL_FLAGS[stream_id] = threading.Event()
        # Note: AGENT_INSTANCES[stream_id] not set (simulating race condition)

        # Should succeed even without agent
        result = cancel_stream(stream_id)

        # Assert
        assert result is True
        assert CANCEL_FLAGS[stream_id].is_set()
        # Agent will check this flag when it starts

    def test_cancel_nonexistent_stream(self):
        """Test cancel for a stream that doesn't exist"""
        result = cancel_stream("nonexistent_stream")
        assert result is False

    def test_cancel_sets_cancel_event(self):
        """Verify that cancel_stream() sets the cancel_event flag"""
        stream_id = "test_stream_event"

        STREAMS[stream_id] = queue.Queue()
        cancel_event = threading.Event()
        CANCEL_FLAGS[stream_id] = cancel_event

        result = cancel_stream(stream_id)

        assert result is True
        assert cancel_event.is_set()

    def test_cancel_puts_sentinel_in_queue(self):
        """Verify that cancel_stream() puts cancel sentinel in queue"""
        stream_id = "test_stream_queue"
        q = queue.Queue()

        STREAMS[stream_id] = q
        CANCEL_FLAGS[stream_id] = threading.Event()

        result = cancel_stream(stream_id)

        assert result is True
        # Check that cancel message was queued
        assert not q.empty()
        event_type, data = q.get_nowait()
        assert event_type == 'cancel'
        assert data['message'] == 'Cancelled by user'
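The behavior these tests pin down can be sketched as follows (a hypothetical reconstruction from the test contract, not the shipped api/streaming.py):

```python
import queue
import threading

# Hypothetical reconstruction of cancel_stream(): set the cancel flag,
# interrupt the agent if one is registered (swallowing errors), and push a
# ('cancel', ...) sentinel so the SSE loop unblocks.
STREAMS = {}
CANCEL_FLAGS = {}
AGENT_INSTANCES = {}

def cancel_stream(stream_id):
    q = STREAMS.get(stream_id)
    if q is None:
        return False  # unknown stream: nothing to cancel
    flag = CANCEL_FLAGS.get(stream_id)
    if flag is not None:
        flag.set()  # the agent loop can poll this flag between tool calls
    agent = AGENT_INSTANCES.get(stream_id)
    if agent is not None:
        try:
            agent.interrupt("Cancelled by user")
        except Exception:
            pass  # cancel must still succeed even if interrupt raises
    q.put(('cancel', {'message': 'Cancelled by user'}))
    return True
```

Setting the flag before touching the agent is what makes the race-condition check work: a cancel that lands during agent init is visible the moment the agent is stored and re-checks CANCEL_FLAGS.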
@@ -326,3 +326,55 @@ def test_default_model_lands_under_active_provider_group(monkeypatch):
    assert 'gpt-5.4' not in groups.get('Anthropic', []), (
        f"gpt-5.4 leaked into Anthropic group via fallback: {groups.get('Anthropic')}"
    )


def test_custom_endpoint_uses_model_config_api_key_for_model_discovery(monkeypatch):
    """Custom endpoint model discovery must use model.api_key from config.yaml,
    not only environment variables, otherwise the dropdown collapses to the
    default model when /v1/models requires auth."""
    import json as _json
    import api.config as _cfg

    old_cfg = dict(_cfg.cfg)
    _cfg.cfg['model'] = {
        'provider': 'custom',
        'default': 'gpt-5.4',
        'base_url': 'https://example.test/v1',
        'api_key': 'sk-test-model-key',
    }
    _cfg.cfg.pop('providers', None)

    captured = {}

    class _Resp:
        def read(self):
            return _json.dumps({'data': [{'id': 'gpt-5.2', 'name': 'GPT-5.2'}]}).encode('utf-8')
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc, tb):
            return False

    def _fake_urlopen(req, timeout=10):
        captured['auth'] = req.get_header('Authorization')
        captured['ua'] = req.get_header('User-agent')
        return _Resp()

    monkeypatch.setattr('urllib.request.urlopen', _fake_urlopen)
    monkeypatch.setattr('socket.getaddrinfo', lambda *a, **k: [])
    monkeypatch.delenv('OPENAI_API_KEY', raising=False)
    monkeypatch.delenv('HERMES_API_KEY', raising=False)
    monkeypatch.delenv('HERMES_OPENAI_API_KEY', raising=False)
    monkeypatch.delenv('LOCAL_API_KEY', raising=False)
    monkeypatch.delenv('OPENROUTER_API_KEY', raising=False)
    monkeypatch.delenv('API_KEY', raising=False)
    try:
        result = _cfg.get_available_models()
    finally:
        _cfg.cfg.clear()
        _cfg.cfg.update(old_cfg)

    assert captured['auth'] == 'Bearer sk-test-model-key'
    assert captured['ua'] == 'OpenAI/Python 1.0'
    groups = {g['provider']: [m['id'] for m in g['models']] for g in result['groups']}
    assert 'Custom' in groups
    assert 'gpt-5.2' in groups['Custom']
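The lookup order this regression test exercises (model.api_key first, then the provider entry, then env vars) can be sketched as a small resolver (a hypothetical illustration; the function name and the exact env-var list are assumptions, not the real get_available_models() internals):

```python
import os

# Hypothetical sketch of the config-first api_key lookup:
# 1. model.api_key  2. providers.<active>/providers.custom.api_key  3. env vars
def resolve_discovery_api_key(cfg):
    model = cfg.get('model') or {}
    if model.get('api_key'):
        return model['api_key']                      # 1. model.api_key
    providers = cfg.get('providers') or {}
    active = model.get('provider', 'custom')
    for name in (active, 'custom'):
        entry = providers.get(name) or {}
        if entry.get('api_key'):
            return entry['api_key']                  # 2. provider entries
    for var in ('HERMES_API_KEY', 'OPENAI_API_KEY'):
        if os.environ.get(var):
            return os.environ[var]                   # 3. env var fallbacks
    return None
```

Reading config.yaml before the environment is what keeps an LM Studio / Ollama custom endpoint authenticated even when no API-key env vars are exported.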
tests/test_security_redaction.py (new file, 310 lines)
@@ -0,0 +1,310 @@
"""
Security tests: credential redaction in API responses.

Verifies that credentials (GitHub PATs, API keys, etc.) are masked in:
- GET /api/session (messages and tool_calls)
- GET /api/memory (MEMORY.md and USER.md content)
- GET /api/session/export (downloaded JSON)
- SSE done event (session payload in stream)

Tests run against the isolated test_server on port 8788.
"""

import json
import pathlib
import sys
import urllib.request
import urllib.error
import pytest

sys.path.insert(0, str(pathlib.Path(__file__).parent.parent.parent))


def _server_is_up(port: int = 8788) -> bool:
    """Return True if the test server is accepting connections."""
    try:
        urllib.request.urlopen(f"http://127.0.0.1:{port}/health", timeout=2)
        return True
    except Exception:
        return False


# _needs_server: these tests require the conftest test_server fixture (port 8788).
# The skipif is evaluated lazily via the fixture, not at collection time.
_needs_server = pytest.mark.usefixtures("test_server")

BASE = "http://127.0.0.1:8788"

# Sample credentials that should be masked in every API response
_FAKE_GITHUB_PAT = "ghp_TestFakeCredential1234567890ab"
_FAKE_SK_KEY = "sk-TestFakeOpenAIKey1234567890abcdef"
_FAKE_HF_TOKEN = "hf_TestFakeHuggingFaceToken12345"
_FAKE_AWS_KEY = "AKIATESTFAKEKEY12345"


# ── HTTP helpers ──────────────────────────────────────────────────────────────
def _get(path):
    with urllib.request.urlopen(BASE + path, timeout=10) as r:
        return json.loads(r.read())


def _post(path, body=None):
    data = json.dumps(body or {}).encode()
    req = urllib.request.Request(
        BASE + path, data=data,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as r:
            return json.loads(r.read()), r.status
    except urllib.error.HTTPError as e:
        return json.loads(e.read()), e.code


def _get_raw(path):
    """Return raw bytes (used for export endpoint)."""
    with urllib.request.urlopen(BASE + path, timeout=10) as r:
        return r.read()


def _assert_no_plaintext_credentials(text: str, label: str = ""):
    """Assert that none of the fake credential strings appear in text."""
    for cred in (_FAKE_GITHUB_PAT, _FAKE_SK_KEY, _FAKE_HF_TOKEN, _FAKE_AWS_KEY):
        assert cred not in text, (
            f"{label}: credential '{cred[:12]}...' found in plaintext. "
            "Redaction is not working."
        )


# ── helpers.py unit tests (import-level, no test_server needed) ───────────────
def test_redact_value_str():
    """_redact_value masks a plaintext GitHub PAT in a string."""
    from api.helpers import _redact_value
    result = _redact_value(f"my token is {_FAKE_GITHUB_PAT} bye")
    assert _FAKE_GITHUB_PAT not in result
    assert "ghp_Te" in result  # prefix preserved


def test_redact_value_dict():
    """_redact_value recurses into dicts."""
    from api.helpers import _redact_value
    d = {"content": f"key={_FAKE_SK_KEY}", "role": "user"}
    result = _redact_value(d)
    assert _FAKE_SK_KEY not in result["content"]
    assert result["role"] == "user"  # innocent values untouched


def test_redact_value_list():
    """_redact_value recurses into lists."""
    from api.helpers import _redact_value
    lst = [{"content": _FAKE_GITHUB_PAT}, {"content": "safe text"}]
    result = _redact_value(lst)
    assert _FAKE_GITHUB_PAT not in result[0]["content"]
    assert result[1]["content"] == "safe text"


def test_redact_session_data_messages():
    """redact_session_data masks credentials in messages[]."""
    from api.helpers import redact_session_data
    session = {
        "session_id": "abc123",
        "title": f"my token {_FAKE_GITHUB_PAT}",
        "messages": [
            {"role": "user", "content": f"token: {_FAKE_GITHUB_PAT}"},
            {"role": "assistant", "content": "sure"},
        ],
        "tool_calls": [
            {"name": "terminal", "args": {"command": f"gh auth login --token {_FAKE_GITHUB_PAT}"},
             "snippet": "ok"},
        ],
    }
    result = redact_session_data(session)
    dump = json.dumps(result)
    _assert_no_plaintext_credentials(dump, "redact_session_data")
    # Safe fields remain intact
    assert result["session_id"] == "abc123"
    assert result["messages"][1]["content"] == "sure"


def test_redact_session_data_multiple_cred_types():
    """redact_session_data handles sk-, ghp_, hf_, and AKIA keys."""
    from api.helpers import redact_session_data
    session = {
        "title": "test",
        "messages": [{"role": "user", "content": (
            f"openai={_FAKE_SK_KEY} "
            f"github={_FAKE_GITHUB_PAT} "
            f"hf={_FAKE_HF_TOKEN} "
            f"aws={_FAKE_AWS_KEY}"
        )}],
        "tool_calls": [],
    }
    result = redact_session_data(session)
    dump = json.dumps(result)
    _assert_no_plaintext_credentials(dump, "multi-type redaction")


def test_redact_session_data_non_sensitive_unchanged():
    """redact_session_data does not corrupt innocent content."""
    from api.helpers import redact_session_data
    session = {
        "title": "Hello world",
        "messages": [{"role": "user", "content": "What is 2+2?"}],
        "tool_calls": [{"name": "terminal", "snippet": "4"}],
    }
    result = redact_session_data(session)
    assert result["title"] == "Hello world"
    assert result["messages"][0]["content"] == "What is 2+2?"
    assert result["tool_calls"][0]["snippet"] == "4"
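The unit tests above pin down a recursive, prefix-preserving masking contract. A minimal sketch of such a helper (illustrative patterns and names only, not the exact set shipped in api/helpers.py):

```python
import re

# Hypothetical sketch matching the unit-test contract: recursive redaction
# that keeps a short prefix of each credential so logs stay diagnosable.
_CRED_RE = re.compile(
    r'\b(ghp_[A-Za-z0-9]{20,}|sk-[A-Za-z0-9]{20,}|hf_[A-Za-z0-9]{20,}|AKIA[A-Z0-9]{16})\b'
)

def redact_value(v):
    if isinstance(v, str):
        # Keep the first 6 chars (e.g. "ghp_Te") and mask the rest
        return _CRED_RE.sub(lambda m: m.group(0)[:6] + '***REDACTED***', v)
    if isinstance(v, dict):
        return {k: redact_value(x) for k, x in v.items()}
    if isinstance(v, list):
        return [redact_value(x) for x in v]
    return v
```

Applying this at the response layer (rather than only at write time) is what lets the same helper cover /api/session, the export download, the SSE done event, and /api/memory.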
# ── API-level tests (require running test server started by conftest.py) ─────
# Run via `start.sh && pytest tests/test_security_redaction.py -v`


def _create_session_with_credentials() -> str:
    """Write a session file with credential-containing messages directly to disk.

    Bypasses the server's in-memory cache so the GET endpoint is forced to read
    from disk, exercising the redaction code path on load.
    Uses TEST_STATE_DIR from conftest.py (the isolated test server state directory).
    """
    import time, uuid
    try:
        from conftest import TEST_STATE_DIR
        sessions_dir = TEST_STATE_DIR / "sessions"
    except ImportError:
        from api.config import SESSION_DIR as sessions_dir
        sessions_dir = pathlib.Path(sessions_dir)
    sessions_dir.mkdir(parents=True, exist_ok=True)

    # Use a unique session ID that is NOT in the server's LRU cache
    sid = "sec_test_" + uuid.uuid4().hex[:8]
    now = time.time()
    session_file = sessions_dir / f"{sid}.json"
    session_file.write_text(json.dumps({
        "session_id": sid,
        "title": f"session with {_FAKE_GITHUB_PAT}",
        "workspace": "/tmp",
        "model": "test",
        "created_at": now,
        "updated_at": now,
        "pinned": False, "archived": False, "project_id": None,
        "profile": "default", "input_tokens": 0, "output_tokens": 0,
        "estimated_cost": None, "personality": None,
        "messages": [
            {"role": "user", "content": f"my PAT is {_FAKE_GITHUB_PAT}"},
            {"role": "assistant", "content": f"sk key is {_FAKE_SK_KEY}"},
            {"role": "tool", "content": "result ok", "name": "terminal"},
        ],
        "tool_calls": [
            {"name": "terminal",
             "args": {"command": f"gh auth login --token {_FAKE_GITHUB_PAT}"},
             "snippet": "blocked"}
        ],
    }))
    return sid

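The core of the cache-bypass trick in `_create_session_with_credentials` is writing a fresh session file with an ID no in-memory cache can already hold. A reduced standalone sketch (`write_session_file` and the minimal field set are illustrative, not the helper's exact schema):

```python
import json
import time
import uuid
from pathlib import Path


def write_session_file(sessions_dir: Path, title: str) -> str:
    """Write a minimal session JSON under a freshly generated unique ID,
    guaranteeing the server's LRU cache has never seen it; return the ID."""
    sessions_dir.mkdir(parents=True, exist_ok=True)
    sid = "sec_test_" + uuid.uuid4().hex[:8]
    now = time.time()
    (sessions_dir / f"{sid}.json").write_text(json.dumps({
        "session_id": sid,
        "title": title,
        "created_at": now,
        "updated_at": now,
        "messages": [],
        "tool_calls": [],
    }))
    return sid
```

Because the ID is random per call, any subsequent GET for it must hit disk, which is exactly the load path the redaction tests need to exercise.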
def test_api_session_redacts_messages():
    """GET /api/session route must call redact_session_data() before returning."""
    import inspect
    import api.routes as routes
    src = inspect.getsource(routes.handle_get)
    # Verify redact_session_data is applied to the session payload
    assert "redact_session_data" in src, (
        "api/routes.py handle_get must call redact_session_data() on /api/session response"
    )


def test_api_session_redacts_title():
    """redact_session_data must redact credentials from the session title field."""
    from api.helpers import redact_session_data
    session = {
        "session_id": "abc123",
        "title": f"session with {_FAKE_GITHUB_PAT}",
        "messages": [],
        "tool_calls": [],
    }
    result = redact_session_data(session)
    assert _FAKE_GITHUB_PAT not in result["title"], (
        "redact_session_data must mask credentials in title field"
    )
    assert result["session_id"] == "abc123"  # safe fields preserved

@_needs_server
def test_api_sessions_list_redacts_titles(test_server):
    """GET /api/sessions must not return session titles containing credentials."""
    _create_session_with_credentials()
    data = _get("/api/sessions")
    dump = json.dumps(data)
    _assert_no_plaintext_credentials(dump, "GET /api/sessions titles")


def test_api_session_export_redacts():
    """GET /api/session/export must call redact_session_data() in _handle_session_export."""
    import inspect
    import api.routes as routes
    # The export handler is a separate function (_handle_session_export)
    src = inspect.getsource(routes._handle_session_export)
    assert "redact_session_data" in src, (
        "_handle_session_export must call redact_session_data() before serving download"
    )


@_needs_server
def test_api_memory_redacts_via_write_read(test_server):
    """Credential written to MEMORY.md must be masked in GET /api/memory response."""
    original = _get("/api/memory").get("memory", "")

    cred_content = f"GitHub PAT: {_FAKE_GITHUB_PAT}\nNormal note: hello world"
    data, status = _post("/api/memory/write", {"section": "memory", "content": cred_content})
    assert status == 200, f"memory/write failed: {data}"

    try:
        read_back = _get("/api/memory")
        dump = json.dumps(read_back)
        _assert_no_plaintext_credentials(dump, "GET /api/memory")
        assert "hello world" in read_back["memory"]  # non-sensitive content preserved
    finally:
        _post("/api/memory/write", {"section": "memory", "content": original})


# ── startup: fix_credential_permissions ──────────────────────────────────────

def test_fix_credential_permissions_corrects_loose_files(tmp_path, monkeypatch):
    """fix_credential_permissions() tightens group/other read bits."""
    from api.startup import fix_credential_permissions

    env_file = tmp_path / ".env"
    env_file.write_text("SECRET=abc")
    env_file.chmod(0o644)  # world-readable -- should be fixed

    google_file = tmp_path / "google_token.json"
    google_file.write_text("{}")
    google_file.chmod(0o664)  # group-readable -- should be fixed

    monkeypatch.setenv("HERMES_HOME", str(tmp_path))
    fix_credential_permissions()

    import stat
    assert stat.S_IMODE(env_file.stat().st_mode) == 0o600, ".env not fixed to 600"
    assert stat.S_IMODE(google_file.stat().st_mode) == 0o600, "google_token.json not fixed to 600"


def test_fix_credential_permissions_skips_correct_files(tmp_path, monkeypatch):
    """fix_credential_permissions() does not alter already-strict files."""
    env_file = tmp_path / ".env"
    env_file.write_text("SECRET=abc")
    env_file.chmod(0o600)

    monkeypatch.setenv("HERMES_HOME", str(tmp_path))

    from api.startup import fix_credential_permissions
    fix_credential_permissions()

    import stat
    assert stat.S_IMODE(env_file.stat().st_mode) == 0o600
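The two permission tests above pin down the contract of `fix_credential_permissions()`. A minimal sketch of logic that would satisfy them (the file list, env var fallback, and function name `fix_credential_permissions_sketch` are assumptions inferred from the tests, not the actual `api/startup.py` code):

```python
import os
import stat
from pathlib import Path

# Filenames assumed from the tests above; the real credential-file list
# in api/startup.py may be longer.
_CREDENTIAL_FILES = (".env", "google_token.json")


def fix_credential_permissions_sketch() -> None:
    """chmod credential files under $HERMES_HOME to 0600 when any
    group/other permission bits are set; leave strict files untouched
    (so mtimes and ctimes of already-correct files are not disturbed)."""
    home = Path(os.environ.get("HERMES_HOME", str(Path.home() / ".hermes")))
    for name in _CREDENTIAL_FILES:
        path = home / name
        if not path.is_file():
            continue
        mode = stat.S_IMODE(path.stat().st_mode)
        if mode & 0o077:  # any group/other read/write/execute bit set
            path.chmod(0o600)
```

Checking `mode & 0o077` rather than `mode != 0o600` is what lets an already-strict 0600 file pass through unchanged, matching `test_fix_credential_permissions_skips_correct_files`.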