feat: Add dynamic multi-model support for OpenAI-compatible APIs #1218
## TLDR

This PR adds dynamic multi-model support to Qwen Code, allowing users to fetch and switch between models from OpenAI-compatible API endpoints at runtime, without restarting. It removes hardcoded model configuration and adds a new `/model` command that dynamically discovers available models from any OpenAI-compatible service (LocalAI, Ollama, LM Studio, OpenRouter, Azure OpenAI, etc.). Selected models are now persisted to `settings.json`, so the user's latest model choice is remembered across sessions.

## Dive Deeper
**Problem:**

Previously, users could only use a single hardcoded model, set via the `OPENAI_MODEL` environment variable. This prevented:

**Solution:**

Implemented a new `ModelsService` that fetches models from OpenAI-compatible endpoints and integrated it with the CLI's model command system. Users can now:

- use the `/auth` command (no model selection needed)
- use the `/model` command to dynamically fetch available models

**Implementation Details:**
**New Service: `ModelsService`** (`packages/cli/src/services/ModelsService.ts`)

- Queries the standard `{OPENAI_BASE_URL}/v1/models` endpoint

**Updated Components:**
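A minimal sketch of what the core of such a service might look like, querying the standard `{OPENAI_BASE_URL}/v1/models` endpoint (names and structure here are illustrative, not the PR's actual code):

```typescript
// Illustrative sketch of a model-listing service for OpenAI-compatible APIs.
// Function and type names are hypothetical, not the PR's implementation.

interface ModelsResponse {
  data: Array<{ id: string }>;
}

// Pure helper: extract and sort model ids from an OpenAI-style
// /v1/models payload, so the result is easy to display and test.
export function parseModelsResponse(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id).sort();
}

// Fetch available models from an OpenAI-compatible endpoint.
// Works against any service exposing the standard models route
// (LocalAI, Ollama, LM Studio, OpenRouter, etc.).
export async function fetchModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl.replace(/\/$/, '')}/v1/models`);
  if (!res.ok) {
    throw new Error(`Model listing failed: HTTP ${res.status}`);
  }
  return parseModelsResponse((await res.json()) as ModelsResponse);
}
```

Keeping the response parsing separate from the network call makes the discovery logic unit-testable without a live endpoint.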
**Model Persistence:**

When users select a model via the `/model` command:

- the runtime configuration is updated via `config.setModel(model)`
- the choice is written to `~/.qwen/settings.json` via `settings.setValue('model.name', model)`

**Workflow Example:**
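The persistence step can be modeled as a plain settings update, roughly mirroring what `settings.setValue('model.name', model)` would do (the helper and types below are hypothetical, for illustration only):

```typescript
// Hypothetical sketch of persisting a selected model into settings.
// Mirrors the semantics of settings.setValue('model.name', model):
// set the nested model.name key while leaving other settings intact.

interface Settings {
  model?: { name?: string };
  [key: string]: unknown;
}

// Return a new settings object with model.name set to the chosen model.
export function withSelectedModel(settings: Settings, model: string): Settings {
  return { ...settings, model: { ...settings.model, name: model } };
}
```

On the next session, reading `model.name` back from `~/.qwen/settings.json` restores the user's last choice without requiring `OPENAI_MODEL` to be set.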
**Supported APIs:**
## Reviewer Test Plan

**Build and test:**

Expected: All tests pass ✅

**Test with local OpenAI-compatible service:**

**Test model persistence across sessions:**

**Test error handling:**

- Set `OPENAI_BASE_URL` to an invalid endpoint
- Run `/model`

**Test timeout protection:**

- Set `OPENAI_BASE_URL` to a slow/unresponsive endpoint
- Run `/model`

**Test backward compatibility:**

- Existing `OPENAI_MODEL` environment variable should still work

**Test UI/UX:**

- `/auth` command
- `/model` command appears in help (`/help`)

## Testing Matrix