chore: clean up launched memory features (#26941)

Co-authored-by: Jenna Inouye <jinouye@google.com>
Sandy Tao
2026-05-13 14:22:56 -07:00
committed by GitHub
parent 0750b01fe4
commit 7504259d72
83 changed files with 617 additions and 4183 deletions
+4 -5
@@ -29,11 +29,10 @@ You'll use Auto Memory when you want to:
avoid them.
- **Bootstrap a skills library** without writing every `SKILL.md` by hand.
-Auto Memory complements—but does not replace—the
-[`save_memory` tool](../tools/memory.md), which captures single facts into
-`GEMINI.md` when the agent explicitly calls it. Auto Memory infers candidates
-from past sessions, writes reviewable patches or skill drafts, and never applies
-them without your approval.
+Auto Memory complements direct memory-file editing. The agent can still persist
+explicit user instructions by editing the appropriate Markdown memory file; Auto
+Memory infers candidates from past sessions, writes reviewable patches or skill
+drafts, and never applies them without your approval.
## Prerequisites
-2
@@ -65,8 +65,6 @@ You can interact with the loaded context files by using the `/memory` command.
being provided to the model.
- **`/memory reload`**: Forces a re-scan and reload of all `GEMINI.md` files
from all configured locations.
-- **`/memory add <text>`**: Appends your text to your global
-  `~/.gemini/GEMINI.md` file. This lets you add persistent memories on the fly.
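With the `/memory add` sub-command gone, the same effect can be achieved by editing the global context file directly. A minimal sketch, assuming the default `~/.gemini/GEMINI.md` location described in these docs:

```shell
# Append a persistent memory entry to the global context file.
# The path is the default described above; adjust if yours differs.
memory_file="${HOME}/.gemini/GEMINI.md"
mkdir -p "$(dirname "$memory_file")"
printf -- '- I prefer using const over let wherever possible.\n' >> "$memory_file"
```

The new entry is picked up by `/memory reload` or at the start of the next session.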
## Modularize context with imports
-1
@@ -138,7 +138,6 @@ These are the only allowed tools:
[`replace`](../tools/file-system.md#6-replace-edit) only allowed for `.md`
files in the `~/.gemini/tmp/<project>/<session-id>/plans/` directory or your
[custom plans directory](#custom-plan-directory-and-policies).
-- **Memory:** [`save_memory`](../tools/memory.md)
- **Skills:** [`activate_skill`](../cli/skills.md) (allows loading specialized
instructions and resources in a read-only manner)
+18 -19
@@ -162,25 +162,24 @@ they appear in the UI.
### Experimental
-| UI Label | Setting | Description | Default |
-| -------- | ------- | ----------- | ------- |
-| Gemma Models | `experimental.gemma` | Enable access to Gemma 4 models via Gemini API. | `true` |
-| Voice Mode | `experimental.voiceMode` | Enable experimental voice dictation and commands (/voice, /voice model). | `false` |
-| Voice Activation Mode | `experimental.voice.activationMode` | How to trigger voice recording with the Space key. | `"push-to-talk"` |
-| Voice Transcription Backend | `experimental.voice.backend` | The backend to use for voice transcription. Note: When using the Gemini Live backend, voice recordings are sent to Google Cloud for transcription. | `"gemini-live"` |
-| Whisper Model | `experimental.voice.whisperModel` | The Whisper model to use for local transcription. | `"ggml-base.en.bin"` |
-| Voice Stop Grace Period (ms) | `experimental.voice.stopGracePeriodMs` | How long to wait for final transcription after stopping recording. | `4000` |
-| Enable Git Worktrees | `experimental.worktrees` | Enable automated Git worktree management for parallel work. | `false` |
-| Use OSC 52 Paste | `experimental.useOSC52Paste` | Use OSC 52 for pasting. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it). | `false` |
-| Use OSC 52 Copy | `experimental.useOSC52Copy` | Use OSC 52 for copying. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it). | `false` |
-| Model Steering | `experimental.modelSteering` | Enable model steering (user hints) to guide the model during tool execution. | `false` |
-| Direct Web Fetch | `experimental.directWebFetch` | Enable web fetch behavior that bypasses LLM summarization. | `false` |
-| Enable Gemma Model Router | `experimental.gemmaModelRouter.enabled` | Enable the Gemma Model Router (experimental). Requires a local endpoint serving Gemma via the Gemini API using LiteRT-LM shim. | `false` |
-| Auto-start LiteRT Server | `experimental.gemmaModelRouter.autoStartServer` | Automatically start the LiteRT-LM server when Gemini CLI starts and the Gemma router is enabled. | `false` |
-| Memory v2 | `experimental.memoryV2` | Disable the built-in save_memory tool and let the main agent persist project context by editing markdown files directly with edit/write_file. Route facts across four tiers: team-shared conventions go to project GEMINI.md files, project-specific personal notes go to the per-project private memory folder (MEMORY.md as index + sibling .md files for detail), and cross-project personal preferences go to the global ~/.gemini/GEMINI.md (the only file under ~/.gemini/ that the agent can edit — settings, credentials, etc. remain off-limits). Set to false to fall back to the legacy save_memory tool. | `true` |
-| Auto Memory | `experimental.autoMemory` | Automatically extract memory patches and skills from past sessions in the background. Every change is written as a unified diff `.patch` file under `<projectMemoryDir>/.inbox/<kind>/` and held for review in /memory inbox; nothing is applied until you approve it. | `false` |
-| Use the generalist profile to manage agent contexts. | `experimental.generalistProfile` | Suitable for general coding and software development tasks. | `false` |
-| Enable Context Management | `experimental.contextManagement` | Enable logic for context management. | `false` |
+| UI Label | Setting | Description | Default |
+| -------- | ------- | ----------- | ------- |
+| Gemma Models | `experimental.gemma` | Enable access to Gemma 4 models via Gemini API. | `true` |
+| Voice Mode | `experimental.voiceMode` | Enable experimental voice dictation and commands (/voice, /voice model). | `false` |
+| Voice Activation Mode | `experimental.voice.activationMode` | How to trigger voice recording with the Space key. | `"push-to-talk"` |
+| Voice Transcription Backend | `experimental.voice.backend` | The backend to use for voice transcription. Note: When using the Gemini Live backend, voice recordings are sent to Google Cloud for transcription. | `"gemini-live"` |
+| Whisper Model | `experimental.voice.whisperModel` | The Whisper model to use for local transcription. | `"ggml-base.en.bin"` |
+| Voice Stop Grace Period (ms) | `experimental.voice.stopGracePeriodMs` | How long to wait for final transcription after stopping recording. | `4000` |
+| Enable Git Worktrees | `experimental.worktrees` | Enable automated Git worktree management for parallel work. | `false` |
+| Use OSC 52 Paste | `experimental.useOSC52Paste` | Use OSC 52 for pasting. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it). | `false` |
+| Use OSC 52 Copy | `experimental.useOSC52Copy` | Use OSC 52 for copying. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it). | `false` |
+| Model Steering | `experimental.modelSteering` | Enable model steering (user hints) to guide the model during tool execution. | `false` |
+| Direct Web Fetch | `experimental.directWebFetch` | Enable web fetch behavior that bypasses LLM summarization. | `false` |
+| Enable Gemma Model Router | `experimental.gemmaModelRouter.enabled` | Enable the Gemma Model Router (experimental). Requires a local endpoint serving Gemma via the Gemini API using LiteRT-LM shim. | `false` |
+| Auto-start LiteRT Server | `experimental.gemmaModelRouter.autoStartServer` | Automatically start the LiteRT-LM server when Gemini CLI starts and the Gemma router is enabled. | `false` |
+| Auto Memory | `experimental.autoMemory` | Automatically extract memory patches and skills from past sessions in the background. Every change is written as a unified diff `.patch` file under `<projectMemoryDir>/.inbox/<kind>/` and held for review in /memory inbox; nothing is applied until you approve it. | `false` |
+| Use the generalist profile to manage agent contexts. | `experimental.generalistProfile` | Suitable for general coding and software development tasks. | `false` |
+| Enable Context Management | `experimental.contextManagement` | Enable logic for context management. | `false` |
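For reference, the Setting column above maps onto `settings.json` keys. A minimal sketch that turns Auto Memory on, assuming the dotted names nest as objects under `experimental`:

```json
{
  "experimental": {
    "autoMemory": true
  }
}
```

With this enabled, extracted changes land as `.patch` files under `<projectMemoryDir>/.inbox/<kind>/` and wait in `/memory inbox` for your approval.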
### Skills
+2 -2
@@ -71,8 +71,8 @@ Just tell the agent to remember something.
**Prompt:** `Remember that I prefer using 'const' over 'let' wherever possible.`
-The agent will use the `save_memory` tool to store this fact in your global
-memory file.
+The agent will edit the appropriate memory Markdown file, so the fact is loaded
+in future sessions.
**Prompt:** `Save the fact that the staging server IP is 10.0.0.5.`
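As a hypothetical illustration of the result, the agent's edit might append entries like the following to the relevant memory file (the exact wording and placement are chosen by the agent, not fixed by these docs):

```markdown
- I prefer using 'const' over 'let' wherever possible.
- The staging server IP is 10.0.0.5.
```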
-3
@@ -265,9 +265,6 @@ Slash commands provide meta-level control over the CLI itself.
- **Description:** Manage the AI's instructional context (hierarchical memory
loaded from `GEMINI.md` files).
- **Sub-commands:**
-  - **`add`**:
-    - **Description:** Adds the following text to the AI's memory. Usage:
-      `/memory add <text to remember>`
- **`list`**:
- **Description:** Lists the paths of the GEMINI.md files in use for
hierarchical memory.
-20
@@ -1867,13 +1867,6 @@ their corresponding top-level category object in your `settings.json` file.
- **Default:** `false`
- **Requires restart:** Yes
-- **`experimental.jitContext`** (boolean):
-  - **Description:** Enable Just-In-Time (JIT) context loading. Defaults to
-    true; set to false to opt out and load all GEMINI.md files into the system
-    instruction up-front.
-  - **Default:** `true`
-  - **Requires restart:** Yes
- **`experimental.useOSC52Paste`** (boolean):
- **Description:** Use OSC 52 for pasting. This may be more robust than the
default system when using remote terminal sessions (if your terminal is
@@ -1936,19 +1929,6 @@ their corresponding top-level category object in your `settings.json` file.
- **Default:** `"gemma3-1b-gpu-custom"`
- **Requires restart:** Yes
-- **`experimental.memoryV2`** (boolean):
-  - **Description:** Disable the built-in save_memory tool and let the main
-    agent persist project context by editing markdown files directly with
-    edit/write_file. Route facts across four tiers: team-shared conventions go
-    to project GEMINI.md files, project-specific personal notes go to the
-    per-project private memory folder (MEMORY.md as index + sibling .md files
-    for detail), and cross-project personal preferences go to the global
-    ~/.gemini/GEMINI.md (the only file under ~/.gemini/ that the agent can edit
-    — settings, credentials, etc. remain off-limits). Set to false to fall back
-    to the legacy save_memory tool.
-  - **Default:** `true`
-  - **Requires restart:** Yes
- **`experimental.stressTestProfile`** (boolean):
- **Description:** Significantly lowers token limits to force early garbage
collection and distillation for testing purposes.
-2
@@ -120,7 +120,6 @@ each tool.
| :----------------------------------------------- | :------ | :----------------------------------------------------------------------------------- |
| [`activate_skill`](../tools/activate-skill.md) | `Other` | Loads specialized procedural expertise from the `.gemini/skills` directory. |
| [`get_internal_docs`](../tools/internal-docs.md) | `Think` | Accesses Gemini CLI's own documentation for accurate answers about its capabilities. |
-| [`save_memory`](../tools/memory.md) | `Think` | Persists specific facts and project details to your `GEMINI.md` file. |
### Planning
@@ -173,7 +172,6 @@ representation of each tool's arguments.
| `replace` | `file_path`, `old_string`, `new_string`, `instruction`, `allow_multiple` |
| `ask_user` | `questions` (array of `question`, `header`, `type`, `options`) |
| `write_todos` | `todos` (array of `description`, `status`) |
-| `save_memory` | `fact` |
| `activate_skill` | `name` |
| `get_internal_docs` | `path` |
| `enter_plan_mode` | `reason` |
+10 -13
@@ -1,25 +1,22 @@
-# Memory tool (`save_memory`)
+# Memory files
-The `save_memory` tool allows the Gemini agent to persist specific facts, user
-preferences, and project details across sessions.
+Gemini CLI persists durable facts, user preferences, and project details by
+editing Markdown memory files directly.
-## Technical reference
-This tool appends information to the `## Gemini Added Memories` section of your
-global `GEMINI.md` file (typically located at `~/.gemini/GEMINI.md`).
-### Arguments
-- `fact` (string, required): A clear, self-contained statement in natural
-  language.
+The agent routes memories to the appropriate Markdown file: shared project
+instructions go in repository `GEMINI.md` files, private project notes go in the
+per-project private memory folder, and cross-project personal preferences go in
+the global `~/.gemini/GEMINI.md` file.
## Technical behavior
-- **Storage:** Appends to the global context file in the user's home directory.
+- **Storage:** Edits Markdown files with `write_file` or `replace`.
- **Loading:** The stored facts are automatically included in the hierarchical
  context system for all future sessions.
-- **Format:** Saves data as a bulleted list item within a dedicated Markdown
-  section.
+- **Format:** Keeps durable instructions concise and avoids duplicating the same
+  fact across multiple memory tiers.
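The routing described above can be sketched as a file layout. Paths here are illustrative; only `~/.gemini/GEMINI.md` and repository `GEMINI.md` files are named explicitly in these docs, and the private memory folder location is configurable:

```text
<repo>/GEMINI.md               # team-shared project conventions (checked in)
<projectMemoryDir>/MEMORY.md   # private per-project index
<projectMemoryDir>/<topic>.md  # sibling detail files for longer notes
~/.gemini/GEMINI.md            # cross-project personal preferences
```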
## Use cases