fix: silent agent errors, stale model list, live model fetching (#377)
* fix: silent errors, stale models, live model fetching (#373, #374, #375)

  - api/streaming.py: detect empty agent response (_assistant_added check), emit apperror(type='no_response' or 'auth_mismatch') instead of a silent done
  - api/streaming.py: add _token_sent flag so the guard also works for streaming agents
  - static/messages.js: done handler gets a belt-and-suspenders guard for zero replies
  - static/messages.js: apperror handler labels the 'no_response' type distinctly
  - api/config.py: remove gpt-4o and o3 from _FALLBACK_MODELS and _PROVIDER_MODELS['openai'] (superseded by gpt-5.4-mini and o4-mini)
  - api/routes.py: new /api/models/live?provider= endpoint; fetches /v1/models from the provider API with a B310 scheme check and an SSRF guard
  - static/ui.js: _fetchLiveModels() background fetch after the static list loads; appends new models to the dropdown, caches per session, skips unsupported providers

  Other:
  - tests/test_issues_373_374_375.py: 25 new structural tests
  - tests/test_regressions.py: extend the done-handler window from 1500 to 2500 chars
  - CHANGELOG.md: v0.50.19 entry; 947 tests (up from 922)

* fix: SSRF hostname bypass + auth detection operator precedence

  1. routes.py: the SSRF guard used substring matching (any(k in hostname)), which allows a bypass via hostnames like evil-ollama.attacker.com. Changed to exact hostname matching against a fixed set of known local hostnames (localhost, 127.0.0.1, 0.0.0.0, ::1).

  2. streaming.py: the _is_auth detection had a Python operator-precedence bug in a conditional expression. The line:

         'AuthenticationError' in type(...).__name__ if _last_err else False

     parsed with the conditional expression absorbing the rest of the or-chain, so the whole check evaluated to False whenever _last_err was falsy. Fixed to:

         (_last_err and 'AuthenticationError' in ...)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: fix v0.50.20 CHANGELOG version number and test count (949 tests)

---------

Co-authored-by: Nathan Esquenazi <nesquena@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
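The empty-reply guard in api/streaming.py can be sketched roughly as below. Only the _assistant_added/_token_sent flags and the apperror types come from the commit message; the function name and the event shape are assumptions for illustration.

```python
def terminal_event(_assistant_added: bool, _token_sent: bool, auth_suspected: bool) -> dict:
    """Pick the stream's terminal event: 'done' only when the agent actually replied."""
    if _assistant_added or _token_sent:
        return {"type": "done"}
    # No assistant message and no streamed token: surface an explicit error
    # instead of emitting a silent 'done'.
    return {"type": "apperror",
            "error": "auth_mismatch" if auth_suspected else "no_response"}
```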
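A minimal sketch of the hardened URL guard in routes.py, combining the B310-style scheme check with the exact-hostname SSRF fix; the function name is assumed, and only the scheme restriction and the four local hostnames come from the commit message.

```python
from urllib.parse import urlparse

# Exact-match allow-list. Substring checks like any(k in hostname for k in ...)
# would also accept hostnames such as "evil-ollama.attacker.com".
_LOCAL_HOSTS = {"localhost", "127.0.0.1", "0.0.0.0", "::1"}

def is_allowed_local_url(url: str) -> bool:
    """Allow only http/https URLs whose hostname exactly matches a known local host."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https"):
        return False  # rejects file://, ftp://, etc. (B310-style scheme check)
    return (parts.hostname or "") in _LOCAL_HOSTS
```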
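The operator-precedence bug in streaming.py can be reproduced in isolation. The helper names and the explicit_flag operand are illustrative stand-ins for the real or-chain, not code from the repository.

```python
class AuthenticationError(Exception):
    """Stand-in for a provider SDK's auth error (illustrative only)."""

def is_auth_buggy(explicit_flag, _last_err):
    # BUG: the conditional expression binds looser than `or`, so this parses as
    #   (explicit_flag or 'AuthenticationError' in ...) if _last_err else False
    # and the whole check collapses to False whenever _last_err is falsy.
    return explicit_flag or "AuthenticationError" in type(_last_err).__name__ if _last_err else False

def is_auth_fixed(explicit_flag, _last_err):
    # FIX: short-circuit with `and`; no conditional expression involved.
    return explicit_flag or bool(_last_err and "AuthenticationError" in type(_last_err).__name__)
```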
@@ -406,8 +406,6 @@ CLI_TOOLSETS = get_config().get("platform_toolsets", {}).get("cli", _DEFAULT_TOO
 # Hardcoded fallback models (used when no config.yaml or agent is available)
 _FALLBACK_MODELS = [
     {"provider": "OpenAI", "id": "openai/gpt-5.4-mini", "label": "GPT-5.4 Mini"},
-    {"provider": "OpenAI", "id": "openai/gpt-4o", "label": "GPT-4o"},
-    {"provider": "OpenAI", "id": "openai/o3", "label": "o3"},
     {"provider": "OpenAI", "id": "openai/o4-mini", "label": "o4-mini"},
     {
         "provider": "Anthropic",
@@ -463,8 +461,6 @@ _PROVIDER_MODELS = {
     ],
     "openai": [
         {"id": "gpt-5.4-mini", "label": "GPT-5.4 Mini"},
-        {"id": "gpt-4o", "label": "GPT-4o"},
-        {"id": "o3", "label": "o3"},
         {"id": "o4-mini", "label": "o4-mini"},
     ],
     "openai-codex": [