[Gemma x Gemini CLI] Add an Experimental Gemma Router that uses a LiteRT-LM shim into the Composite Model Classifier Strategy (#17231)

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Allen Hutchison <adh@google.com>
Author: Siddharth Diwan
Date: 2026-02-26 15:43:43 -08:00
Committed by: GitHub
Parent: 6dc9d5ff11
Commit: 9b7852f11c
29 changed files with 1456 additions and 58 deletions


@@ -1014,6 +1014,23 @@ their corresponding top-level category object in your `settings.json` file.
- **Default:** `false`
- **Requires restart:** Yes
- **`experimental.gemmaModelRouter.enabled`** (boolean):
  - **Description:** Enable the Gemma Model Router. Requires a local endpoint
    serving Gemma via the Gemini API using the LiteRT-LM shim.
  - **Default:** `false`
  - **Requires restart:** Yes
- **`experimental.gemmaModelRouter.classifier.host`** (string):
  - **Description:** The host URL of the classifier endpoint.
  - **Default:** `"http://localhost:9379"`
  - **Requires restart:** Yes
- **`experimental.gemmaModelRouter.classifier.model`** (string):
  - **Description:** The model to use for the classifier. Only tested with
    `gemma3-1b-gpu-custom`.
  - **Default:** `"gemma3-1b-gpu-custom"`
  - **Requires restart:** Yes
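
Put together, enabling the experimental router in `settings.json` might look like the following sketch. The nesting mirrors the dotted setting names above; the host and model values shown are the documented defaults, so adjust them to match your local LiteRT-LM endpoint:

```json
{
  "experimental": {
    "gemmaModelRouter": {
      "enabled": true,
      "classifier": {
        "host": "http://localhost:9379",
        "model": "gemma3-1b-gpu-custom"
      }
    }
  }
}
```

Since all three settings require a restart, changes take effect only after relaunching the CLI.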
#### `skills`

- **`skills.enabled`** (boolean):