fix(config): default model to empty string so non-OpenAI users no longer see an unavailable OpenAI model (closes #646) (PR #649)

DEFAULT_MODEL now defaults to "" instead of "openai/gpt-5.4-mini". Guards were added to the model-list builder so an empty default does not create blank model entries. Adds 3 tests in test_issue646.py. Independently reviewed by @nesquena.
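A minimal sketch of the guard described above, assuming illustrative names (`build_model_list` and the exact selection logic are hypothetical; only `DEFAULT_MODEL`, the `""` fallback, and the `HERMES_WEBUI_DEFAULT_MODEL` env var come from this commit):

```python
import os

# Per this commit, the hardcoded fallback is now "" rather than
# "openai/gpt-5.4-mini"; users may still override via the env var.
DEFAULT_MODEL = os.environ.get("HERMES_WEBUI_DEFAULT_MODEL", "")

def build_model_list(provider_models, default=DEFAULT_MODEL):
    """Build the selectable model list (hypothetical helper).

    Guard: an empty default must not create a blank entry, and the
    selection defers to the active provider's own first model.
    """
    models = list(provider_models)
    if default and default not in models:
        # An explicitly configured default still appears, even if the
        # active provider does not offer it.
        models.append(default)
    # Empty default -> fall back to the provider's first model.
    selected = default or (models[0] if models else None)
    return models, selected
```

With no default configured, `build_model_list(["llama3", "mistral"])` now selects the provider's own first model instead of injecting an OpenAI entry.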
nesquena-hermes
2026-04-17 23:46:43 -07:00
committed by GitHub
parent bded1cf906
commit ec48c482e2
5 changed files with 84 additions and 19 deletions


@@ -19,6 +19,10 @@
### Fixed
- **Gemma 4 thinking tokens no longer shown raw in chat** — added `<|turn|>thinking\n...<turn|>` to the streaming think-token parser in `static/messages.js` and `_strip_thinking_markup()` in `api/streaming.py`. Previously Gemma 4's reasoning output appeared as raw text prepended to the answer. (Closes #607)
## [v0.50.79] — 2026-04-17
### Fixed
- **Default model no longer shows as "(unavailable)" for non-OpenAI users** — changed the hardcoded fallback `DEFAULT_MODEL` from `openai/gpt-5.4-mini` to `""` (empty). When no default model is configured, the WebUI now defers to the active provider's own default instead of pre-selecting an OpenAI model that most providers don't have. Users who want a specific default can still set `HERMES_WEBUI_DEFAULT_MODEL` env var or pick a model in Preferences. (Closes #646)
## [v0.50.76] — 2026-04-17