- Update _PROVIDER_MODELS['minimax'] from stale ABAB 6.5 models to
current MiniMax-M2.7/M2.5/M2.1 lineup (matching hermes-agent upstream)
- Update _PROVIDER_MODELS['zai'] from GLM-4 to current GLM-5/4.7/4.5
lineup (matching hermes-agent upstream)
- Extend resolve_model_provider() to also return base_url from config.yaml,
so providers with custom endpoints (MiniMax, Z.AI) are routed correctly
- Pass base_url to AIAgent in both streaming and sync chat paths
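The resolution described above could be sketched roughly as follows. All names (`_PROVIDER_MODELS`, `_PROVIDER_BASE_URLS`, the return shape) and the endpoint URLs are illustrative assumptions, not the actual hermes-agent code; the real base URLs come from the user's config.yaml.

```python
# Sketch only: model lineups from the commit, placeholder endpoints.
_PROVIDER_MODELS = {
    "minimax": ["MiniMax-M2.7", "MiniMax-M2.5", "MiniMax-M2.1"],
    "zai": ["GLM-5", "GLM-4.7", "GLM-4.5"],
}

# Stand-ins for the base_url values read from config.yaml.
_PROVIDER_BASE_URLS = {
    "minimax": "https://api.minimax.example/v1",
    "zai": "https://api.zai.example/v1",
}

def resolve_model_provider(model_id):
    """Return (provider, model, base_url) for a possibly prefixed ID.

    base_url is None for providers that use their SDK's default endpoint.
    """
    for provider in _PROVIDER_MODELS:
        prefix = provider + "/"
        if model_id.startswith(prefix):
            return provider, model_id[len(prefix):], _PROVIDER_BASE_URLS.get(provider)
    return None, model_id, None
```

The triple return value is what lets both the streaming and sync chat paths hand the same `base_url` to `AIAgent` without re-deriving it.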
Fixes #6
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace duplicated inline provider resolution in routes.py and streaming.py
with a shared resolve_model_provider() helper in config.py.
Improvements over the original:
- If the model ID has a prefix matching any known direct-API provider
(not just the configured provider), strip it and route correctly.
This handles edge cases like localStorage restoring a model from
a different provider group.
- Single source of truth for the resolution logic.
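The generalized prefix handling could look something like the sketch below. `KNOWN_PROVIDERS` and the function signature are assumptions for illustration; the point is that any recognized `provider/` prefix wins over the configured provider, which is only the fallback.

```python
# Sketch: strip a "provider/" prefix for ANY known direct-API provider,
# so a model ID restored from localStorage under a different provider
# group still routes to the right API.
KNOWN_PROVIDERS = ("anthropic", "openai", "minimax", "zai")

def resolve_model_provider(model_id, configured_provider):
    """Return (provider, bare_model_id)."""
    for provider in KNOWN_PROVIDERS:
        prefix = provider + "/"
        if model_id.startswith(prefix):
            # Prefix identifies the provider; strip it for the direct API.
            return provider, model_id[len(prefix):]
    # No known prefix: fall back to the provider from config.yaml.
    return configured_provider, model_id
```

Keeping this in one helper in config.py is what gives routes.py and streaming.py a single source of truth.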
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When the model dropdown sends a prefixed ID like "anthropic/claude-xxx",
AIAgent interprets the provider/model format as an OpenRouter path and
routes through OpenRouter instead of the direct Anthropic API.
Fix: read the configured provider from the config.yaml model section. If
the model ID starts with the configured provider name followed by "/",
strip that prefix and pass the provider explicitly to AIAgent. This
ensures direct API providers (Anthropic, OpenAI, etc.) are used when
configured, regardless of the model ID format from the dropdown.
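The fix described above amounts to something like this sketch. The helper name is hypothetical; only the strip-and-pass-explicitly behavior comes from the commit text.

```python
# Sketch: if the dropdown sent "<configured_provider>/<model>", strip the
# prefix so AIAgent receives a bare model name plus an explicit provider,
# instead of misreading "provider/model" as an OpenRouter path.
def strip_configured_prefix(model_id, configured_provider):
    prefix = configured_provider + "/"
    if model_id.startswith(prefix):
        return model_id[len(prefix):]
    return model_id
```

With the bare model ID and the provider passed separately, the direct Anthropic/OpenAI clients are used whenever config.yaml selects them, whatever format the dropdown emits.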