Updated ToC on docs intro; updated title casing to match Google style (#13717)

David Huntsperger
2025-12-01 11:38:48 -08:00
committed by GitHub
parent bde8b78a88
commit 26f050ff10
58 changed files with 660 additions and 642 deletions
+1 -1
@@ -1,3 +1,3 @@
-# Authentication Setup
+# Authentication setup
See: [Getting Started - Authentication Setup](../get-started/authentication.md).
+8 -8
@@ -6,19 +6,19 @@ AI-powered tools. This allows you to safely experiment with and apply code
changes, knowing you can instantly revert back to the state before the tool was
run.
-## How It Works
+## How it works
When you approve a tool that modifies the file system (like `write_file` or
`replace`), the CLI automatically creates a "checkpoint." This checkpoint
includes:
-1. **A Git Snapshot:** A commit is made in a special, shadow Git repository
+1. **A Git snapshot:** A commit is made in a special, shadow Git repository
located in your home directory (`~/.gemini/history/<project_hash>`). This
snapshot captures the complete state of your project files at that moment.
It does **not** interfere with your own project's Git repository.
-2. **Conversation History:** The entire conversation you've had with the agent
+2. **Conversation history:** The entire conversation you've had with the agent
up to that point is saved.
-3. **The Tool Call:** The specific tool call that was about to be executed is
+3. **The tool call:** The specific tool call that was about to be executed is
also stored.
If you want to undo the change or simply go back, you can use the `/restore`
@@ -35,7 +35,7 @@ repository while the conversation history and tool calls are saved in a JSON
file in your project's temporary directory, typically located at
`~/.gemini/tmp/<project_hash>/checkpoints`.
-## Enabling the Feature
+## Enabling the feature
The Checkpointing feature is disabled by default. To enable it, you need to edit
your `settings.json` file.
@@ -56,12 +56,12 @@ Add the following key to your `settings.json`:
}
```
-## Using the `/restore` Command
+## Using the `/restore` command
Once enabled, checkpoints are created automatically. To manage them, you use the
`/restore` command.
-### List Available Checkpoints
+### List available checkpoints
To see a list of all saved checkpoints for the current project, simply run:
@@ -74,7 +74,7 @@ typically composed of a timestamp, the name of the file being modified, and the
name of the tool that was about to be run (e.g.,
`2025-06-22T10-00-00_000Z-my-file.txt-write_file`).
-### Restore a Specific Checkpoint
+### Restore a specific checkpoint
To restore your project to a specific checkpoint, use the checkpoint file from
the list:
+6 -6
@@ -1,4 +1,4 @@
-# CLI Commands
+# CLI commands
Gemini CLI supports several built-in commands to help you manage your session,
customize the interface, and control its behavior. These commands are prefixed
@@ -26,7 +26,7 @@ Slash commands provide meta-level control over the CLI itself.
- **Description:** Saves the current conversation history. You must add a
`<tag>` for identifying the conversation state.
- **Usage:** `/chat save <tag>`
-- **Details on Checkpoint Location:** The default locations for saved chat
+- **Details on checkpoint location:** The default locations for saved chat
checkpoints are:
- Linux/macOS: `~/.gemini/tmp/<project_hash>/`
- Windows: `C:\Users\<YourUsername>\.gemini\tmp\<project_hash>\`
@@ -256,13 +256,13 @@ Slash commands provide meta-level control over the CLI itself.
file, making it simpler for them to provide project-specific instructions to
the Gemini agent.
-### Custom Commands
+### Custom commands
Custom commands allow you to create personalized shortcuts for your most-used
prompts. For detailed instructions on how to create, manage, and use them,
please see the dedicated [Custom Commands documentation](./custom-commands.md).
-## Input Prompt Shortcuts
+## Input prompt shortcuts
These shortcuts apply directly to the input prompt for text manipulation.
@@ -320,7 +320,7 @@ your prompt to Gemini. These commands include git-aware filtering.
- If the `read_many_files` tool encounters an error (e.g., permission issues),
this will also be reported.
-## Shell mode & passthrough commands (`!`)
+## Shell mode and passthrough commands (`!`)
The `!` prefix lets you interact with your system's shell directly from within
Gemini CLI.
@@ -348,7 +348,7 @@ Gemini CLI.
- **Caution for all `!` usage:** Commands you execute in shell mode have the
same permissions and impact as if you ran them directly in your terminal.
-- **Environment Variable:** When a command is executed via `!` or in shell mode,
+- **Environment variable:** When a command is executed via `!` or in shell mode,
the `GEMINI_CLI=1` environment variable is set in the subprocess's
environment. This allows scripts or tools to detect if they are being run from
within the Gemini CLI.
+27 -27
@@ -1,4 +1,4 @@
-# Gemini CLI Configuration
+# Gemini CLI configuration
Gemini CLI offers several ways to configure its behavior, including environment
variables, command-line arguments, and settings files. This document outlines
@@ -144,7 +144,7 @@ contain other project-specific files related to Gemini CLI's operation, such as:
be ignored if `--allowed-mcp-server-names` is set.
- **Default**: No MCP servers excluded.
- **Example:** `"excludeMCPServers": ["myNodeServer"]`.
-- **Security Note:** This uses simple string matching on MCP server names,
+- **Security note:** This uses simple string matching on MCP server names,
which can be modified. If you're a system administrator looking to prevent
users from bypassing this, consider configuring the `mcpServers` at the
system settings level such that the user will not be able to configure any
@@ -423,7 +423,7 @@ contain other project-specific files related to Gemini CLI's operation, such as:
}
```
-## Shell History
+## Shell history
The CLI keeps a history of shell commands you run. To avoid conflicts between
different projects, this history is stored in a project-specific directory
@@ -434,7 +434,7 @@ within your user's home folder.
path.
- The history is stored in a file named `shell_history`.
-## Environment Variables & `.env` Files
+## Environment variables and `.env` files
Environment variables are a common way to configure applications, especially for
sensitive information like API keys or for settings that might change between
@@ -449,7 +449,7 @@ loading order is:
the home directory.
3. If still not found, it looks for `~/.env` (in the user's home directory).
-**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and
+**Environment variable exclusion:** Some environment variables (like `DEBUG` and
`DEBUG_MODE`) are automatically excluded from being loaded from project `.env`
files to prevent interference with gemini-cli behavior. Variables from
`.gemini/.env` files are never excluded. You can customize this behavior using
@@ -486,7 +486,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
- Required for using Code Assist or Vertex AI.
- If using Vertex AI, ensure you have the necessary permissions in this
project.
-- **Cloud Shell Note:** When running in a Cloud Shell environment, this
+- **Cloud shell note:** When running in a Cloud Shell environment, this
variable defaults to a special project allocated for Cloud Shell users. If
you have `GOOGLE_CLOUD_PROJECT` set in your global environment in Cloud
Shell, it will be overridden by this default. To use a different project in
@@ -547,7 +547,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
relative. `~` is supported for the home directory. **Note: This will
overwrite the file if it already exists.**
-## Command-Line Arguments
+## Command-line arguments
Arguments passed directly when running the CLI can override other configurations
for that specific session.
@@ -606,7 +606,7 @@ for that specific session.
- **`--version`**:
- Displays the version of the CLI.
-## Context Files (Hierarchical Instructional Context)
+## Context files (hierarchical instructional context)
While not strictly configuration for the CLI's _behavior_, context files
(defaulting to `GEMINI.md` but configurable via the `contextFileName` setting)
@@ -622,7 +622,7 @@ context.
that you want the Gemini model to be aware of during your interactions. The
system is designed to manage this instructional context hierarchically.
-### Example Context File Content (e.g., `GEMINI.md`)
+### Example context file content (e.g., `GEMINI.md`)
Here's a conceptual example of what a context file at the root of a TypeScript
project might contain:
@@ -663,23 +663,23 @@ more relevant and precise your context files are, the better the AI can assist
you. Project-specific context files are highly encouraged to establish
conventions and context.
-- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated
+- **Hierarchical loading and precedence:** The CLI implements a sophisticated
hierarchical memory system by loading context files (e.g., `GEMINI.md`) from
several locations. Content from files lower in this list (more specific)
typically overrides or supplements content from files higher up (more
general). The exact concatenation order and final context can be inspected
using the `/memory show` command. The typical loading order is:
-1. **Global Context File:**
+1. **Global context file:**
- Location: `~/.gemini/<contextFileName>` (e.g., `~/.gemini/GEMINI.md` in
your user home directory).
- Scope: Provides default instructions for all your projects.
-2. **Project Root & Ancestors Context Files:**
+2. **Project root and ancestors context files:**
- Location: The CLI searches for the configured context file in the
current working directory and then in each parent directory up to either
the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant
portion of it.
-3. **Sub-directory Context Files (Contextual/Local):**
+3. **Sub-directory context files (contextual/local):**
- Location: The CLI also scans for the configured context file in
subdirectories _below_ the current working directory (respecting common
ignore patterns like `node_modules`, `.git`, etc.). The breadth of this
@@ -687,15 +687,15 @@ conventions and context.
with a `memoryDiscoveryMaxDirs` field in your `settings.json` file.
- Scope: Allows for highly specific instructions relevant to a particular
component, module, or subsection of your project.
-- **Concatenation & UI Indication:** The contents of all found context files are
-concatenated (with separators indicating their origin and path) and provided
-as part of the system prompt to the Gemini model. The CLI footer displays the
-count of loaded context files, giving you a quick visual cue about the active
-instructional context.
-- **Importing Content:** You can modularize your context files by importing
+- **Concatenation and UI indication:** The contents of all found context files
+are concatenated (with separators indicating their origin and path) and
+provided as part of the system prompt to the Gemini model. The CLI footer
+displays the count of loaded context files, giving you a quick visual cue
+about the active instructional context.
+- **Importing content:** You can modularize your context files by importing
other Markdown files using the `@path/to/file.md` syntax. For more details,
see the [Memory Import Processor documentation](../core/memport.md).
-- **Commands for Memory Management:**
+- **Commands for memory management:**
- Use `/memory refresh` to force a re-scan and reload of all context files
from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently
@@ -742,7 +742,7 @@ sandbox image:
BUILD_SANDBOX=1 gemini -s
```
-## Usage Statistics
+## Usage statistics
To help us improve the Gemini CLI, we collect anonymized usage statistics. This
data helps us understand how the CLI is used, identify common issues, and
@@ -750,22 +750,22 @@ prioritize new features.
**What we collect:**
-- **Tool Calls:** We log the names of the tools that are called, whether they
+- **Tool calls:** We log the names of the tools that are called, whether they
succeed or fail, and how long they take to execute. We do not collect the
arguments passed to the tools or any data returned by them.
-- **API Requests:** We log the Gemini model used for each request, the duration
+- **API requests:** We log the Gemini model used for each request, the duration
of the request, and whether it was successful. We do not collect the content
of the prompts or responses.
-- **Session Information:** We collect information about the configuration of the
+- **Session information:** We collect information about the configuration of the
CLI, such as the enabled tools and the approval mode.
**What we DON'T collect:**
-- **Personally Identifiable Information (PII):** We do not collect any personal
+- **Personally identifiable information (PII):** We do not collect any personal
information, such as your name, email address, or API keys.
-- **Prompt and Response Content:** We do not log the content of your prompts or
+- **Prompt and response content:** We do not log the content of your prompts or
the responses from the Gemini model.
-- **File Content:** We do not log the content of any files that are read or
+- **File content:** We do not log the content of any files that are read or
written by the CLI.
**How to opt out:**
+8 -8
@@ -1,4 +1,4 @@
-# Custom Commands
+# Custom commands
Custom commands let you save and reuse your favorite or most frequently used
prompts as personal shortcuts within Gemini CLI. You can create commands that
@@ -9,9 +9,9 @@ all your projects, streamlining your workflow and ensuring consistency.
Gemini CLI discovers commands from two locations, loaded in a specific order:
-1. **User Commands (Global):** Located in `~/.gemini/commands/`. These commands
+1. **User commands (global):** Located in `~/.gemini/commands/`. These commands
are available in any project you are working on.
-2. **Project Commands (Local):** Located in
+2. **Project commands (local):** Located in
`<your-project-root>/.gemini/commands/`. These commands are specific to the
current project and can be checked into version control to be shared with
your team.
@@ -30,7 +30,7 @@ separator (`/` or `\`) being converted to a colon (`:`).
- A file at `<project>/.gemini/commands/git/commit.toml` becomes the namespaced
command `/git:commit`.
-## TOML File Format (v1)
+## TOML file format (v1)
Your command definition files must be written in the TOML format and use the
`.toml` file extension.
@@ -60,7 +60,7 @@ replace that placeholder with the text the user typed after the command name.
The behavior of this injection depends on where it is used:
-**A. Raw injection (outside Shell commands)**
+**A. Raw injection (outside shell commands)**
When used in the main body of the prompt, the arguments are injected exactly as
the user typed them.
@@ -77,7 +77,7 @@ prompt = "Please provide a code fix for the issue described here: {{args}}."
The model receives:
`Please provide a code fix for the issue described here: "Button is misaligned".`
-**B. Using arguments in Shell commands (inside `!{...}` blocks)**
+**B. Using arguments in shell commands (inside `!{...}` blocks)**
When you use `{{args}}` inside a shell injection block (`!{...}`), the arguments
are automatically **shell-escaped** before replacement. This allows you to
@@ -156,7 +156,7 @@ When you run `/changelog 1.2.0 added "New feature"`, the final text sent to the
model will be the original prompt followed by two newlines and the command you
typed.
-### 3. Executing Shell commands with `!{...}`
+### 3. Executing shell commands with `!{...}`
You can make your commands dynamic by executing shell commands directly within
your `prompt` and injecting their output. This is ideal for gathering context
@@ -302,7 +302,7 @@ Your response should include:
"""
```
-**3. Run the Command:**
+**3. Run the command:**
That's it! You can now run your command in the CLI. First, you might add a file
to the context, and then invoke your command:
+23 -23
@@ -1,11 +1,11 @@
-# Gemini CLI for the Enterprise
+# Gemini CLI for the enterprise
This document outlines configuration patterns and best practices for deploying
and managing Gemini CLI in an enterprise environment. By leveraging system-level
settings, administrators can enforce security policies, manage tool access, and
ensure a consistent experience for all users.
-> **A Note on Security:** The patterns described in this document are intended
+> **A note on security:** The patterns described in this document are intended
> to help administrators create a more controlled and secure environment for
> using Gemini CLI. However, they should not be considered a foolproof security
> boundary. A determined user with sufficient privileges on their local machine
@@ -14,7 +14,7 @@ ensure a consistent experience for all users.
> managed environment, not to defend against a malicious actor with local
> administrative rights.
-## Centralized Configuration: The System Settings File
+## Centralized configuration: The system settings file
The most powerful tools for enterprise administration are the system-wide
settings files. These files allow you to define a baseline configuration
@@ -33,11 +33,11 @@ settings (like `theme`) is:
This means the System Overrides file has the final say. For settings that are
arrays (`includeDirectories`) or objects (`mcpServers`), the values are merged.
-**Example of Merging and Precedence:**
+**Example of merging and precedence:**
Here is how settings from different levels are combined.
-- **System Defaults `system-defaults.json`:**
+- **System defaults `system-defaults.json`:**
```json
{
@@ -89,7 +89,7 @@ Here is how settings from different levels are combined.
}
```
-- **System Overrides `settings.json`:**
+- **System overrides `settings.json`:**
```json
{
"ui": {
@@ -108,7 +108,7 @@ Here is how settings from different levels are combined.
This results in the following merged configuration:
-- **Final Merged Configuration:**
+- **Final merged configuration:**
```json
{
"ui": {
@@ -159,7 +159,7 @@ This results in the following merged configuration:
By using the system settings file, you can enforce the security and
configuration patterns described below.
-## Restricting Tool Access
+## Restricting tool access
You can significantly enhance security by controlling which tools the Gemini
model can use. This is achieved through the `tools.core` and `tools.exclude`
@@ -197,12 +197,12 @@ environment to a blocklist.
}
```
-**Security Note:** Blocklisting with `excludeTools` is less secure than
+**Security note:** Blocklisting with `excludeTools` is less secure than
allowlisting with `coreTools`, as it relies on blocking known-bad commands, and
clever users may find ways to bypass simple string-based blocks. **Allowlisting
is the recommended approach.**
-### Disabling YOLO Mode
+### Disabling YOLO mode
To ensure that users cannot bypass the confirmation prompt for tool execution,
you can disable YOLO mode at the policy level. This adds a critical layer of
@@ -222,14 +222,14 @@ approval.
This setting is highly recommended in an enterprise environment to prevent
unintended tool execution.
-## Managing Custom Tools (MCP Servers)
+## Managing custom tools (MCP servers)
If your organization uses custom tools via
[Model-Context Protocol (MCP) servers](../core/tools-api.md), it is crucial to
understand how server configurations are managed to apply security policies
effectively.
-### How MCP Server Configurations are Merged
+### How MCP server configurations are merged
Gemini CLI loads `settings.json` files from three levels: System, Workspace, and
User. When it comes to the `mcpServers` object, these configurations are
@@ -246,12 +246,12 @@ This means a user **cannot** override the definition of a server that is already
defined in the system-level settings. However, they **can** add new servers with
unique names.
-### Enforcing a Catalog of Tools
+### Enforcing a catalog of tools
The security of your MCP tool ecosystem depends on a combination of defining the
canonical servers and adding their names to an allowlist.
-### Restricting Tools Within an MCP Server
+### Restricting tools within an MCP server
For even greater security, especially when dealing with third-party MCP servers,
you can restrict which specific tools from a server are exposed to the model.
@@ -280,7 +280,7 @@ third-party MCP server, even if the server offers other tools like
}
```
-#### More Secure Pattern: Define and Add to Allowlist in System Settings
+#### More secure pattern: Define and add to allowlist in system settings
To create a secure, centrally-managed catalog of tools, the system administrator
**must** do both of the following in the system-level `settings.json` file:
@@ -293,7 +293,7 @@ To create a secure, centrally-managed catalog of tools, the system administrator
any servers that are not on this list. If this setting is omitted, the CLI
will merge and allow any server defined by the user.
-**Example System `settings.json`:**
+**Example system `settings.json`:**
1. Add the _names_ of all approved servers to an allowlist. This will prevent
users from adding their own servers.
@@ -322,12 +322,12 @@ Any server a user defines will either be overridden by the system definition (if
it has the same name) or blocked because its name is not in the `mcp.allowed`
list.
-### Less Secure Pattern: Omitting the Allowlist
+### Less secure pattern: Omitting the allowlist
If the administrator defines the `mcpServers` object but fails to also specify
the `mcp.allowed` allowlist, users may add their own servers.
-**Example System `settings.json`:**
+**Example system `settings.json`:**
This configuration defines servers but does not enforce the allowlist. The
administrator has NOT included the "mcp.allowed" setting.
@@ -347,7 +347,7 @@ In this scenario, a user can add their own server in their local
results, the user's server will be added to the list of available tools and
allowed to run.
-## Enforcing Sandboxing for Security
+## Enforcing sandboxing for security
To mitigate the risk of potentially harmful operations, you can enforce the use
of sandboxing for all tool execution. The sandbox isolates tool execution in a
@@ -367,14 +367,14 @@ You can also specify a custom, hardened Docker image for the sandbox by building
a custom `sandbox.Dockerfile` as described in the
[Sandboxing documentation](./sandbox.md).
-## Controlling Network Access via Proxy
+## Controlling network access via proxy
In corporate environments with strict network policies, you can configure Gemini
CLI to route all outbound traffic through a corporate proxy. This can be set via
an environment variable, but it can also be enforced for custom tools via the
`mcpServers` configuration.
-**Example (for an MCP Server):**
+**Example (for an MCP server):**
```json
{
@@ -391,7 +391,7 @@ an environment variable, but it can also be enforced for custom tools via the
}
```
-## Telemetry and Auditing
+## Telemetry and auditing
For auditing and monitoring purposes, you can configure Gemini CLI to send
telemetry data to a central location. This allows you to track tool usage and
@@ -434,7 +434,7 @@ prompted to switch to the enforced method. In non-interactive mode, the CLI will
exit with an error if the configured authentication method does not match the
enforced one.
-## Putting It All Together: Example System `settings.json`
+## Putting it all together: Example system `settings.json`
Here is an example of a system `settings.json` file that combines several of the
patterns discussed above to create a secure, controlled environment for Gemini
+1 -1
@@ -1,4 +1,4 @@
-# Ignoring Files
+# Ignoring files
This document provides an overview of the Gemini Ignore (`.geminiignore`)
feature of the Gemini CLI.
+1 -1
@@ -1,4 +1,4 @@
-# Provide Context with GEMINI.md Files
+# Provide context with GEMINI.md files
Context files, which use the default name `GEMINI.md`, are a powerful feature
for providing instructional context to the Gemini model. You can use these files
+17 -17
@@ -1,4 +1,4 @@
-# Headless Mode
+# Headless mode
Headless mode allows you to run Gemini CLI programmatically from command line
scripts and automation tools without any interactive UI. This is ideal for
@@ -45,9 +45,9 @@ The headless mode provides a headless interface to Gemini CLI that:
- Enables automation and scripting workflows
- Provides consistent exit codes for error handling
-## Basic Usage
+## Basic usage
-### Direct Prompts
+### Direct prompts
Use the `--prompt` (or `-p`) flag to run in headless mode:
@@ -55,7 +55,7 @@ Use the `--prompt` (or `-p`) flag to run in headless mode:
gemini --prompt "What is machine learning?"
```
-### Stdin Input
+### Stdin input
Pipe input to Gemini CLI from your terminal:
@@ -63,7 +63,7 @@ Pipe input to Gemini CLI from your terminal:
echo "Explain this code" | gemini
```
-### Combining with File Input
+### Combining with file input
Read from files and process with Gemini:
@@ -71,9 +71,9 @@ Read from files and process with Gemini:
cat README.md | gemini --prompt "Summarize this documentation"
```
-## Output Formats
+## Output formats
-### Text Output (Default)
+### Text output (default)
Standard human-readable output:
@@ -87,12 +87,12 @@ Response format:
The capital of France is Paris.
```
-### JSON Output
+### JSON output
Returns structured data including response, statistics, and metadata. This
format is ideal for programmatic processing and automation scripts.
-#### Response Schema
+#### Response schema
The JSON output follows this high-level structure:
@@ -140,7 +140,7 @@ The JSON output follows this high-level structure:
}
```
-#### Example Usage
+#### Example usage
```bash
gemini -p "What is the capital of France?" --output-format json
@@ -218,14 +218,14 @@ Response:
}
```
-### Streaming JSON Output
+### Streaming JSON output
Returns real-time events as newline-delimited JSON (JSONL). Each significant
action (initialization, messages, tool calls, results) emits immediately as it
occurs. This format is ideal for monitoring long-running operations, building
UIs with live progress, and creating automation pipelines that react to events.
-#### When to Use Streaming JSON
+#### When to use streaming JSON
Use `--output-format stream-json` when you need:
@@ -237,7 +237,7 @@ Use `--output-format stream-json` when you need:
timestamps
- **Pipeline integration** - Stream events to logging/monitoring systems
-#### Event Types
+#### Event types
The streaming format emits 6 event types:
@@ -248,7 +248,7 @@ The streaming format emits 6 event types:
5. **`error`** - Non-fatal errors and warnings
6. **`result`** - Final session outcome with aggregated stats
-#### Basic Usage
+#### Basic usage
```bash
# Stream events to console
@@ -261,7 +261,7 @@ gemini --output-format stream-json --prompt "Analyze this code" > events.jsonl
gemini --output-format stream-json --prompt "List files" | jq -r '.type'
```
-#### Example Output
+#### Example output
Each line is a complete JSON event:
@@ -274,7 +274,7 @@ Each line is a complete JSON event:
{"type":"result","status":"success","stats":{"total_tokens":250,"input_tokens":50,"output_tokens":200,"duration_ms":3000,"tool_calls":1},"timestamp":"2025-10-10T12:00:05.000Z"}
```
-### File Redirection
+### File redirection
Save output to files or pipe to other commands:
@@ -292,7 +292,7 @@ gemini -p "Explain microservices" | wc -w
gemini -p "List programming languages" | grep -i "python"
```
-## Configuration Options
+## Configuration options
Key command-line options for headless usage:
+10 -10
@@ -7,17 +7,17 @@ overview of Gemini CLI, see the [main documentation page](../index.md).
## Basic features
- **[Commands](./commands.md):** A reference for all built-in slash commands
-- **[Custom Commands](./custom-commands.md):** Create your own commands and
+- **[Custom commands](./custom-commands.md):** Create your own commands and
shortcuts for frequently used prompts.
-- **[Headless Mode](./headless.md):** Use Gemini CLI programmatically for
+- **[Headless mode](./headless.md):** Use Gemini CLI programmatically for
scripting and automation.
-- **[Model Selection](./model.md):** Configure the Gemini AI model used by the
+- **[Model selection](./model.md):** Configure the Gemini AI model used by the
CLI.
- **[Settings](./settings.md):** Configure various aspects of the CLI's behavior
and appearance.
- **[Themes](./themes.md):** Customizing the CLI's appearance with different
themes.
-- **[Keyboard Shortcuts](./keyboard-shortcuts.md):** A reference for all
+- **[Keyboard shortcuts](./keyboard-shortcuts.md):** A reference for all
keyboard shortcuts to improve your workflow.
- **[Tutorials](./tutorials.md):** Step-by-step guides for common tasks.
@@ -25,18 +25,18 @@ overview of Gemini CLI, see the [main documentation page](../index.md).
- **[Checkpointing](./checkpointing.md):** Automatically save and restore
snapshots of your session and files.
-- **[Enterprise Configuration](./enterprise.md):** Deploying and manage Gemini
+- **[Enterprise configuration](./enterprise.md):** Deploying and manage Gemini
CLI in an enterprise environment.
- **[Sandboxing](./sandbox.md):** Isolate tool execution in a secure,
containerized environment.
- **[Telemetry](./telemetry.md):** Configure observability to monitor usage and
performance.
-- **[Token Caching](./token-caching.md):** Optimize API costs by caching tokens.
-- **[Trusted Folders](./trusted-folders.md):** A security feature to control
+- **[Token caching](./token-caching.md):** Optimize API costs by caching tokens.
+- **[Trusted folders](./trusted-folders.md):** A security feature to control
which projects can use the full capabilities of the CLI.
-- **[Ignoring Files (.geminiignore)](./gemini-ignore.md):** Exclude specific
+- **[Ignoring files (.geminiignore)](./gemini-ignore.md):** Exclude specific
files and directories from being accessed by tools.
-- **[Context Files (GEMINI.md)](./gemini-md.md):** Provide persistent,
+- **[Context files (GEMINI.md)](./gemini-md.md):** Provide persistent,
hierarchical context to the model.
## Non-interactive mode
@@ -58,4 +58,4 @@ gemini -p "What is fine tuning?"
```
For comprehensive documentation on headless usage, scripting, automation, and
-advanced examples, see the **[Headless Mode](./headless.md)** guide.
+advanced examples, see the **[Headless mode](./headless.md)** guide.
+2 -2
@@ -1,4 +1,4 @@
-# Gemini CLI Keyboard Shortcuts
+# Gemini CLI keyboard shortcuts
Gemini CLI ships with a set of default keyboard shortcuts for editing input,
navigating history, and controlling the UI. Use this reference to learn the
@@ -110,7 +110,7 @@ available combinations.
<!-- KEYBINDINGS-AUTOGEN:END -->
-## Additional Context-Specific Shortcuts
+## Additional context-specific shortcuts
- `Ctrl+Y`: Toggle YOLO (auto-approval) mode for tool calls.
- `Shift+Tab`: Toggle Auto Edit (auto-accept edits) mode.
+8 -8
@@ -1,31 +1,31 @@
-## Model Routing
+## Model routing
Gemini CLI includes a model routing feature that automatically switches to a
fallback model in case of a model failure. This feature is enabled by default
and provides resilience when the primary model is unavailable.
-## How it Works
+## How it works
Model routing is not based on prompt complexity, but is a fallback mechanism.
Here's how it works:
-1. **Model Failure:** If the currently selected model fails to respond (for
+1. **Model failure:** If the currently selected model fails to respond (for
example, due to a server error or other issue), the CLI will initiate the
fallback process.
-2. **User Consent:** The CLI will prompt you to ask if you want to switch to
+2. **User consent:** The CLI will prompt you to ask if you want to switch to
the fallback model. This is handled by the `fallbackModelHandler`.
-3. **Fallback Activation:** If you consent, the CLI will activate the fallback
+3. **Fallback activation:** If you consent, the CLI will activate the fallback
mode by calling `config.setFallbackMode(true)`.
4. **Model Switch:** On the next request, the CLI will use the
4. **Model switch:** On the next request, the CLI will use the
`DEFAULT_GEMINI_FLASH_MODEL` as the fallback model. This is handled by the
`resolveModel` function in
`packages/cli/src/zed-integration/zedIntegration.ts` which checks if
`isInFallbackMode()` is true.
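Based on the description above, the check in `resolveModel` can be sketched roughly as follows. This is a minimal sketch, not the actual implementation in `packages/cli/src/zed-integration/zedIntegration.ts`; the constant's value and the function signature are assumptions:

```typescript
// Assumed value; the real constant is defined in the CLI's source.
const DEFAULT_GEMINI_FLASH_MODEL = "gemini-2.5-flash";

// Sketch of the fallback check described above: once fallback mode is
// active, every subsequent request uses the Flash model instead of the
// originally selected one.
function resolveModel(selectedModel: string, isInFallbackMode: boolean): string {
  return isInFallbackMode ? DEFAULT_GEMINI_FLASH_MODEL : selectedModel;
}
```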
### Model Selection Precedence
### Model selection precedence
The model used by Gemini CLI is determined by the following order of precedence:
@@ -37,5 +37,5 @@ The model used by Gemini CLI is determined by the following order of precedence:
3. **`model.name` in `settings.json`:** If neither of the above are set, the
model specified in the `model.name` property of your `settings.json` file
will be used.
4. **Default Model:** If none of the above are set, the default model will be
4. **Default model:** If none of the above are set, the default model will be
   used. The default model is `auto`.
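The precedence order amounts to a first-match-wins chain, which can be illustrated as follows. This is only a sketch: the hunk above shows levels 3 and 4, so the earlier precedence sources passed as the first arguments here are assumptions:

```typescript
// Hypothetical sketch of first-match-wins model resolution. The first two
// sources (e.g. a CLI flag and an environment variable) are assumed; this
// hunk only shows the settings.json level and the default.
function pickModel(
  cliFlag: string | undefined,
  envVar: string | undefined,
  settingsModelName: string | undefined,
): string {
  // Level 4: if nothing else is set, the default model is "auto".
  return cliFlag ?? envVar ?? settingsModelName ?? "auto";
}
```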
+2 -2
View File
@@ -1,4 +1,4 @@
# Gemini CLI Model Selection (`/model` Command)
# Gemini CLI model selection (`/model` command)
Select your Gemini CLI model. The `/model` command opens a dialog where you can
configure the model used by Gemini CLI, giving you more control over your
@@ -21,7 +21,7 @@ Running this command will open a dialog with your model options:
| Flash | For tasks that need a balance of speed and reasoning. | gemini-2.5-flash |
| Flash-Lite | For simple tasks that need to be done quickly. | gemini-2.5-flash-lite |
### Gemini 3 Pro and Preview Features
### Gemini 3 Pro and preview features
Note: Gemini 3 is not currently available on all account types. To learn more
about Gemini 3 access, refer to
+1 -1
View File
@@ -87,7 +87,7 @@ Built-in profiles (set via `SEATBELT_PROFILE` env var):
- `restrictive-open`: Strict restrictions, network allowed
- `restrictive-closed`: Maximum restrictions
### Custom Sandbox Flags
### Custom sandbox flags
For container-based sandboxing, you can inject custom flags into the `docker` or
`podman` command using the `SANDBOX_FLAGS` environment variable. This is useful
+1 -1
View File
@@ -1,4 +1,4 @@
# Gemini CLI Settings (`/settings` Command)
# Gemini CLI settings (`/settings` command)
Control your Gemini CLI experience with the `/settings` command. The `/settings`
command opens a dialog to view and edit all your Gemini CLI settings, including
+44 -50
View File
@@ -3,27 +3,27 @@
Learn how to enable and set up OpenTelemetry for Gemini CLI.
- [Observability with OpenTelemetry](#observability-with-opentelemetry)
- [Key Benefits](#key-benefits)
- [OpenTelemetry Integration](#opentelemetry-integration)
- [Key benefits](#key-benefits)
- [OpenTelemetry integration](#opentelemetry-integration)
- [Configuration](#configuration)
- [Google Cloud Telemetry](#google-cloud-telemetry)
- [Google Cloud telemetry](#google-cloud-telemetry)
- [Prerequisites](#prerequisites)
- [Direct Export (Recommended)](#direct-export-recommended)
- [Collector-Based Export (Advanced)](#collector-based-export-advanced)
- [Local Telemetry](#local-telemetry)
- [File-based Output (Recommended)](#file-based-output-recommended)
- [Collector-Based Export (Advanced)](#collector-based-export-advanced-1)
- [Logs and Metrics](#logs-and-metrics)
- [Direct export (recommended)](#direct-export-recommended)
- [Collector-based export (advanced)](#collector-based-export-advanced)
- [Local telemetry](#local-telemetry)
- [File-based output (recommended)](#file-based-output-recommended)
- [Collector-based export (advanced)](#collector-based-export-advanced-1)
- [Logs and metrics](#logs-and-metrics)
- [Logs](#logs)
- [Sessions](#sessions)
- [Tools](#tools)
- [Files](#files)
- [API](#api)
- [Model Routing](#model-routing)
- [Chat and Streaming](#chat-and-streaming)
- [Model routing](#model-routing)
- [Chat and streaming](#chat-and-streaming)
- [Resilience](#resilience)
- [Extensions](#extensions)
- [Agent Runs](#agent-runs)
- [Agent runs](#agent-runs)
- [IDE](#ide)
- [UI](#ui)
- [Metrics](#metrics)
@@ -31,40 +31,40 @@ Learn how to enable and set up OpenTelemetry for Gemini CLI.
- [Sessions](#sessions-1)
- [Tools](#tools-1)
- [API](#api-1)
- [Token Usage](#token-usage)
- [Token usage](#token-usage)
- [Files](#files-1)
- [Chat and Streaming](#chat-and-streaming-1)
- [Model Routing](#model-routing-1)
- [Agent Runs](#agent-runs-1)
- [Chat and streaming](#chat-and-streaming-1)
- [Model routing](#model-routing-1)
- [Agent runs](#agent-runs-1)
- [UI](#ui-1)
- [Performance](#performance)
- [GenAI Semantic Convention](#genai-semantic-convention)
- [GenAI semantic convention](#genai-semantic-convention)
## Key Benefits
## Key benefits
- **🔍 Usage Analytics**: Understand interaction patterns and feature adoption
- **🔍 Usage analytics**: Understand interaction patterns and feature adoption
across your team
- **⚡ Performance Monitoring**: Track response times, token consumption, and
- **⚡ Performance monitoring**: Track response times, token consumption, and
resource utilization
- **🐛 Real-time Debugging**: Identify bottlenecks, failures, and error patterns
- **🐛 Real-time debugging**: Identify bottlenecks, failures, and error patterns
as they occur
- **📊 Workflow Optimization**: Make informed decisions to improve
- **📊 Workflow optimization**: Make informed decisions to improve
configurations and processes
- **🏢 Enterprise Governance**: Monitor usage across teams, track costs, ensure
- **🏢 Enterprise governance**: Monitor usage across teams, track costs, ensure
compliance, and integrate with existing monitoring infrastructure
## OpenTelemetry Integration
## OpenTelemetry integration
Built on **[OpenTelemetry]** — the vendor-neutral, industry-standard
observability framework — Gemini CLI's observability system provides:
- **Universal Compatibility**: Export to any OpenTelemetry backend (Google
- **Universal compatibility**: Export to any OpenTelemetry backend (Google
Cloud, Jaeger, Prometheus, Datadog, etc.)
- **Standardized Data**: Use consistent formats and collection methods across
- **Standardized data**: Use consistent formats and collection methods across
your toolchain
- **Future-Proof Integration**: Connect with existing and future observability
- **Future-proof integration**: Connect with existing and future observability
infrastructure
- **No Vendor Lock-in**: Switch between backends without changing your
- **No vendor lock-in**: Switch between backends without changing your
instrumentation
[OpenTelemetry]: https://opentelemetry.io/
@@ -89,9 +89,9 @@ Environment variables can be used to override the settings in the file.
`true` or `1` will enable the feature. Any other value will disable it.
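The enable/disable rule above amounts to a strict string check, roughly like the following. This is a sketch of the stated behavior, not the CLI's actual parsing code:

```typescript
// Sketch of the rule above: only the exact strings "true" or "1" enable
// the feature; any other value (including an unset variable) disables it.
function envFlagEnabled(value: string | undefined): boolean {
  return value === "true" || value === "1";
}
```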
For detailed information about all configuration options, see the
[Configuration Guide](../get-started/configuration.md).
[Configuration guide](../get-started/configuration.md).
## Google Cloud Telemetry
## Google Cloud telemetry
### Prerequisites
@@ -130,7 +130,7 @@ Before using either method below, complete these steps:
--project="$OTLP_GOOGLE_CLOUD_PROJECT"
```
### Direct Export (Recommended)
### Direct export (recommended)
Sends telemetry directly to Google Cloud services. No collector needed.
@@ -150,7 +150,7 @@ Sends telemetry directly to Google Cloud services. No collector needed.
- Metrics: https://console.cloud.google.com/monitoring/metrics-explorer
- Traces: https://console.cloud.google.com/traces/list
### Collector-Based Export (Advanced)
### Collector-based export (advanced)
For custom processing, filtering, or routing, use an OpenTelemetry collector to
forward data to Google Cloud.
@@ -184,11 +184,11 @@ forward data to Google Cloud.
- Open `~/.gemini/tmp/<projectHash>/otel/collector-gcp.log` to view local
collector logs.
## Local Telemetry
## Local telemetry
For local development and debugging, you can capture telemetry data locally:
### File-based Output (Recommended)
### File-based output (recommended)
1. Enable telemetry in your `.gemini/settings.json`:
```json
@@ -204,7 +204,7 @@ For local development and debugging, you can capture telemetry data locally:
2. Run Gemini CLI and send prompts.
3. View logs and metrics in the specified file (e.g., `.gemini/telemetry.log`).
### Collector-Based Export (Advanced)
### Collector-based export (advanced)
1. Run the automation script:
```bash
@@ -220,7 +220,7 @@ For local development and debugging, you can capture telemetry data locally:
3. View traces at http://localhost:16686 and logs/metrics in the collector log
file.
## Logs and Metrics
## Logs and metrics
The following section describes the structure of logs and metrics generated for
Gemini CLI.
@@ -378,9 +378,7 @@ Captures Gemini API requests, responses, and errors.
- **Attributes**:
- `model` (string)
#### Model Routing
Tracks model selections via slash commands and router decisions.
#### Model routing
- `gemini_cli.slash_command`: A slash command was executed.
- **Attributes**:
@@ -401,9 +399,7 @@ Tracks model selections via slash commands and router decisions.
- `failed` (boolean)
- `error_message` (string, optional)
#### Chat and Streaming
Observes streaming integrity, compression, and retry behavior.
#### Chat and streaming
- `gemini_cli.chat_compression`: Chat context was compressed.
- **Attributes**:
@@ -489,9 +485,7 @@ Tracks extension lifecycle and settings changes.
- `extension_source` (string)
- `status` (string)
#### Agent Runs
Tracks agent lifecycle and outcomes.
#### Agent runs
- `gemini_cli.agent.start`: Agent run started.
- **Attributes**:
@@ -567,7 +561,7 @@ Tracks API request volume and latency.
- `model`
- Note: Overlaps with `gen_ai.client.operation.duration` (GenAI conventions).
##### Token Usage
##### Token usage
Tracks tokens used by model and type.
@@ -595,7 +589,7 @@ Counts file operations with basic context.
- `function_name`
- `type` ("added" or "removed")
##### Chat and Streaming
##### Chat and streaming
Resilience counters for compression, invalid chunks, and retries.
@@ -614,7 +608,7 @@ Resilience counters for compression, invalid chunks, and retries.
- `gemini_cli.chat.content_retry_failure.count` (Counter, Int): Counts requests
where all content retries failed.
##### Model Routing
##### Model routing
Routing latency/failures and slash-command selections.
@@ -635,7 +629,7 @@ Routing latency/failures and slash-command selections.
- `routing.decision_source` (string)
- `routing.error_message` (string)
##### Agent Runs
##### Agent runs
Agent lifecycle metrics: runs, durations, and turns.
@@ -727,7 +721,7 @@ Optional performance monitoring for startup, CPU/memory, and phase timing.
- `current_value` (number)
- `baseline_value` (number)
#### GenAI Semantic Convention
#### GenAI semantic convention
The following metrics comply with [OpenTelemetry GenAI semantic conventions] for
standardized observability across GenAI applications:
+14 -14
View File
@@ -4,19 +4,19 @@ Gemini CLI supports a variety of themes to customize its color scheme and
appearance. You can change the theme to suit your preferences via the `/theme`
command or `"theme":` configuration setting.
## Available Themes
## Available themes
Gemini CLI comes with a selection of pre-defined themes, which you can list
using the `/theme` command within Gemini CLI:
- **Dark Themes:**
- **Dark themes:**
- `ANSI`
- `Atom One`
- `Ayu`
- `Default`
- `Dracula`
- `GitHub`
- **Light Themes:**
- **Light themes:**
- `ANSI Light`
- `Ayu Light`
- `Default Light`
@@ -24,7 +24,7 @@ using the `/theme` command within Gemini CLI:
- `Google Code`
- `Xcode`
### Changing Themes
### Changing themes
1. Enter `/theme` into Gemini CLI.
2. A dialog or selection prompt appears, listing the available themes.
@@ -36,7 +36,7 @@ using the `/theme` command within Gemini CLI:
by a file path), you must remove the `"theme"` setting from the file before you
can change the theme using the `/theme` command.
### Theme Persistence
### Theme persistence
Selected themes are saved in Gemini CLI's
[configuration](../get-started/configuration.md) so your preference is
@@ -44,13 +44,13 @@ remembered across sessions.
---
## Custom Color Themes
## Custom color themes
Gemini CLI allows you to create your own custom color themes by specifying them
in your `settings.json` file. This gives you full control over the color palette
used in the CLI.
### How to Define a Custom Theme
### How to define a custom theme
Add a `customThemes` block to your user, project, or system `settings.json`
file. Each custom theme is defined as an object with a unique name and a set of
@@ -93,7 +93,7 @@ This object supports the keys `primary`, `secondary`, `link`, `accent`, and
`response`. When `text.response` is provided it takes precedence over
`text.primary` for rendering model responses in chat.
**Required Properties:**
**Required properties:**
- `name` (must match the key in the `customThemes` object and be a string)
- `type` (must be the string `"custom"`)
@@ -117,7 +117,7 @@ for a full list of supported names.
You can define multiple custom themes by adding more entries to the
`customThemes` object.
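Putting the required properties together, a minimal `customThemes` entry might look like the following. The color values, and any keys beyond the required `name` and `type`, are illustrative assumptions:

```json
{
  "customThemes": {
    "MyTheme": {
      "name": "MyTheme",
      "type": "custom",
      "text": {
        "primary": "#E0E0E0",
        "secondary": "#A0A0A0",
        "link": "#4FC3F7",
        "accent": "#FFB74D",
        "response": "#FFFFFF"
      }
    }
  }
}
```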
### Loading Themes from a File
### Loading themes from a file
In addition to defining custom themes in `settings.json`, you can also load a
theme directly from a JSON file by specifying the file path in your
@@ -162,17 +162,17 @@ custom theme defined in `settings.json`.
}
```
**Security Note:** For your safety, Gemini CLI will only load theme files that
**Security note:** For your safety, Gemini CLI will only load theme files that
are located within your home directory. If you attempt to load a theme from
outside your home directory, a warning will be displayed and the theme will not
be loaded. This is to prevent loading potentially malicious theme files from
untrusted sources.
### Example Custom Theme
### Example custom theme
<img src="../assets/theme-custom.png" alt="Custom theme example" width="600" />
### Using Your Custom Theme
### Using your custom theme
- Select your custom theme using the `/theme` command in Gemini CLI. Your custom
theme will appear in the theme selection dialog.
@@ -184,7 +184,7 @@ untrusted sources.
---
## Dark Themes
## Dark themes
### ANSI
@@ -210,7 +210,7 @@ untrusted sources.
<img src="/assets/theme-github.png" alt="GitHub theme" width="600">
## Light Themes
## Light themes
### ANSI Light
+1 -1
View File
@@ -1,4 +1,4 @@
# Token Caching and Cost Optimization
# Token caching and cost optimization
Gemini CLI automatically optimizes API costs through token caching when using
API key authentication (Gemini API key or Vertex AI). This feature reuses
+16 -16
View File
@@ -5,7 +5,7 @@ which projects can use the full capabilities of the Gemini CLI. It prevents
potentially malicious code from running by asking you to approve a folder before
the CLI loads any project-specific configurations from it.
## Enabling the Feature
## Enabling the feature
The Trusted Folders feature is **disabled by default**. To use it, you must
first enable it in your settings.
@@ -22,7 +22,7 @@ Add the following to your user `settings.json` file:
}
```
## How It Works: The Trust Dialog
## How it works: The trust dialog
Once the feature is enabled, the first time you run the Gemini CLI from a
folder, a dialog will automatically appear, prompting you to make a choice:
@@ -38,58 +38,58 @@ folder, a dialog will automatically appear, prompting you to make a choice:
Your choice is saved in a central file (`~/.gemini/trustedFolders.json`), so you
will only be asked once per folder.
## Why Trust Matters: The Impact of an Untrusted Workspace
## Why trust matters: The impact of an untrusted workspace
When a folder is **untrusted**, the Gemini CLI runs in a restricted "safe mode"
to protect you. In this mode, the following features are disabled:
1. **Workspace Settings are Ignored**: The CLI will **not** load the
1. **Workspace settings are ignored**: The CLI will **not** load the
`.gemini/settings.json` file from the project. This prevents the loading of
custom tools and other potentially dangerous configurations.
2. **Environment Variables are Ignored**: The CLI will **not** load any `.env`
2. **Environment variables are ignored**: The CLI will **not** load any `.env`
files from the project.
3. **Extension Management is Restricted**: You **cannot install, update, or
3. **Extension management is restricted**: You **cannot install, update, or
uninstall** extensions.
4. **Tool Auto-Acceptance is Disabled**: You will always be prompted before any
4. **Tool auto-acceptance is disabled**: You will always be prompted before any
tool is run, even if you have auto-acceptance enabled globally.
5. **Automatic Memory Loading is Disabled**: The CLI will not automatically
5. **Automatic memory loading is disabled**: The CLI will not automatically
load files into context from directories specified in local settings.
6. **MCP Servers Do Not Connect**: The CLI will not attempt to connect to any
6. **MCP servers do not connect**: The CLI will not attempt to connect to any
[Model Context Protocol (MCP)](../tools/mcp-server.md) servers.
7. **Custom Commands are Not Loaded**: The CLI will not load any custom
7. **Custom commands are not loaded**: The CLI will not load any custom
   commands from `.toml` files, including both project-specific and global user
   commands.
Granting trust to a folder unlocks the full functionality of the Gemini CLI for
that workspace.
## Managing Your Trust Settings
## Managing your trust settings
If you need to change a decision or see all your settings, you have a couple of
options:
- **Change the Current Folder's Trust**: Run the `/permissions` command from
- **Change the current folder's trust**: Run the `/permissions` command from
within the CLI. This will bring up the same interactive dialog, allowing you
to change the trust level for the current folder.
- **View All Trust Rules**: To see a complete list of all your trusted and
- **View all trust rules**: To see a complete list of all your trusted and
untrusted folder rules, you can inspect the contents of the
`~/.gemini/trustedFolders.json` file in your home directory.
## The Trust Check Process (Advanced)
## The trust check process (advanced)
For advanced users, it's helpful to know the exact order of operations for how
trust is determined:
1. **IDE Trust Signal**: If you are using the
1. **IDE trust signal**: If you are using the
[IDE Integration](../ide-integration/index.md), the CLI first asks the IDE
if the workspace is trusted. The IDE's response takes highest priority.
2. **Local Trust File**: If the IDE is not connected, the CLI checks the
2. **Local trust file**: If the IDE is not connected, the CLI checks the
central `~/.gemini/trustedFolders.json` file.
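The two-step check above can be sketched as a simple precedence function. The signature is hypothetical; the real logic lives in the CLI:

```typescript
// Sketch of the trust-resolution order described above: the IDE's trust
// signal, when present, takes highest priority; otherwise fall back to the
// rule recorded in ~/.gemini/trustedFolders.json (undefined = no rule yet,
// which is when the trust dialog would be shown).
function resolveTrust(
  ideTrustSignal: boolean | undefined,
  localFileRule: boolean | undefined,
): boolean | undefined {
  return ideTrustSignal ?? localFileRule;
}
```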
+1 -1
View File
@@ -35,7 +35,7 @@ _PowerShell_
Remove-Item -Path (Join-Path $env:LocalAppData "npm-cache\_npx") -Recurse -Force
```
## Method 2: Using npm (Global Install)
## Method 2: Using npm (global install)
If you installed the CLI globally (e.g., `npm install -g @google/gemini-cli`),
use the `npm uninstall` command with the `-g` flag to remove it.