Updated ToC on docs intro; updated title casing to match Google style (#13717)

This commit is contained in:
David Huntsperger
2025-12-01 11:38:48 -08:00
committed by GitHub
parent bde8b78a88
commit 26f050ff10
58 changed files with 660 additions and 642 deletions

View File

@@ -1,4 +1,4 @@
# Gemini CLI Authentication Setup
# Gemini CLI authentication setup
Gemini CLI requires authentication using Google's services. Before using Gemini
CLI, configure **one** of the following authentication methods:
@@ -10,12 +10,12 @@ CLI, configure **one** of the following authentication methods:
- Headless (non-interactive) mode
- Google Cloud Environments (Cloud Shell, Compute Engine, etc.)
## Quick Check: Running in Google Cloud Shell?
## Quick check: Running in Google Cloud Shell?
If you are running the Gemini CLI within a Google Cloud Shell environment,
authentication is typically automatic using your Cloud Shell credentials.
### Other Google Cloud Environments (e.g., Compute Engine)
### Other Google Cloud environments (e.g., Compute Engine)
Some other Google Cloud environments, such as Compute Engine VMs, might also
support automatic authentication. In these environments, Gemini CLI can
@@ -25,7 +25,7 @@ environment's metadata server.
If automatic authentication does not occur in your environment, you will need to
use one of the interactive methods described below.
## Authenticate in Interactive mode
## Authenticate in interactive mode
When you run Gemini CLI from the command line, it will provide the
following options:
@@ -61,7 +61,7 @@ logging in with your Google account.
> The browser will be redirected to a `localhost` URL that the CLI listens on
> during setup.
#### (Optional) Set your Google Cloud Project
#### (Optional) Set your Google Cloud project
When you log in using a Google account, you may be prompted to select a
`GOOGLE_CLOUD_PROJECT`.
@@ -98,7 +98,7 @@ export GOOGLE_CLOUD_PROJECT_ID="YOUR_PROJECT_ID"
To make this setting persistent, see
[Persisting Environment Variables](#persisting-environment-variables).
### Use Gemini API Key
### Use Gemini API key
If you don't want to authenticate using your Google account, you can use an API
key from Google AI Studio.
@@ -143,7 +143,7 @@ export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"
```
#### A. Vertex AI - Application Default Credentials (ADC) using `gcloud`
#### A. Vertex AI - application default credentials (ADC) using `gcloud`
Consider this method of authentication if you have the Google Cloud CLI installed.
@@ -168,7 +168,7 @@ unset GOOGLE_API_KEY GEMINI_API_KEY
3. Ensure `GOOGLE_CLOUD_PROJECT` (or `GOOGLE_CLOUD_PROJECT_ID`) and
`GOOGLE_CLOUD_LOCATION` are set.
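Taken together, the ADC steps above reduce to a short shell session. This is a sketch: the project ID and location are placeholder values, and the `gcloud` login is shown as a comment because it opens a browser.

```shell
# Create Application Default Credentials (opens a browser):
#   gcloud auth application-default login

# Make sure no API keys shadow the ADC credentials:
unset GOOGLE_API_KEY GEMINI_API_KEY

# Point the CLI at your project and region (placeholder values):
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="us-central1"
```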
#### B. Vertex AI - Service Account JSON key
#### B. Vertex AI - service account JSON key
Consider this method of authentication in non-interactive environments, CI/CD,
or if your organization restricts user-based ADC or API key creation.
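In a CI/CD job, this flow typically reduces to pointing ADC at a key file. A sketch, with placeholder values; note that `GOOGLE_APPLICATION_CREDENTIALS` is the standard Application Default Credentials variable, not something specific to Gemini CLI.

```shell
# Use a downloaded service account key as Application Default Credentials:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"

# Route requests through Vertex AI with your project settings (placeholders):
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="us-central1"
```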
@@ -218,7 +218,7 @@ unset GOOGLE_API_KEY GEMINI_API_KEY
To make any of these Vertex AI environment variable settings persistent, see
[Persisting Environment Variables](#persisting-environment-variables).
## Persisting Environment Variables
## Persisting environment variables
To avoid setting environment variables in every terminal session, you can:
@@ -263,7 +263,7 @@ If you have not already logged in with an authentication credential (such as a
Google account), you **must** configure authentication using environment
variables:
1. **Gemini API Key:** Set `GEMINI_API_KEY`.
1. **Gemini API key:** Set `GEMINI_API_KEY`.
2. **Vertex AI:**
- Set `GOOGLE_GENAI_USE_VERTEXAI=true`.
- **With Google Cloud API Key:** Set `GOOGLE_API_KEY`.
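For headless use, the minimal setup is a single exported key. A sketch (the key value is a placeholder, and the `gemini` invocation is shown as a comment because it requires a valid key):

```shell
# Authenticate non-interactively with a Gemini API key (placeholder value):
export GEMINI_API_KEY="YOUR_API_KEY"

# The CLI can then be scripted, for example:
#   gemini -p "Summarize the TODOs in this repository"
```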

View File

@@ -1,6 +1,6 @@
# Gemini CLI Configuration
# Gemini CLI configuration
**Note on Deprecated Configuration Format**
**Note on deprecated configuration format**
This document describes the legacy v1 format for the `settings.json` file. This
format is now deprecated.
@@ -132,7 +132,7 @@ contain other project-specific files related to Gemini CLI's operation, such as:
}
```
### Troubleshooting File Search Performance
### Troubleshooting file search performance
If you are experiencing performance issues with file searching (e.g., with `@`
completions), especially in projects with a very large number of files, here are
@@ -144,12 +144,12 @@ a few things you can try in order of recommendation:
the total number of files crawled is the most effective way to improve
performance.
2. **Disable Fuzzy Search:** If ignoring files is not enough, you can disable
2. **Disable fuzzy search:** If ignoring files is not enough, you can disable
fuzzy search by setting `disableFuzzySearch` to `true` in your
`settings.json` file. This will use a simpler, non-fuzzy matching algorithm,
which can be faster.
3. **Disable Recursive File Search:** As a last resort, you can disable
3. **Disable recursive file search:** As a last resort, you can disable
recursive file search entirely by setting `enableRecursiveFileSearch` to
`false`. This will be the fastest option as it avoids a recursive crawl of
your project. However, it means you will need to type the full path to files
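As a sketch, options 2 and 3 above correspond to the following `settings.json` entries (shown together only for illustration; in practice you would enable just the one you need):

```json
{
  "disableFuzzySearch": true,
  "enableRecursiveFileSearch": false
}
```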
@@ -194,7 +194,7 @@ a few things you can try in order of recommendation:
`--allowed-mcp-server-names` is set.
- **Default:** All MCP servers are available for use by the Gemini model.
- **Example:** `"allowMCPServers": ["myPythonServer"]`.
- **Security Note:** This uses simple string matching on MCP server names,
- **Security note:** This uses simple string matching on MCP server names,
which can be modified. If you're a system administrator looking to prevent
users from bypassing this, consider configuring the `mcpServers` at the
system settings level such that the user will not be able to configure any
@@ -208,7 +208,7 @@ a few things you can try in order of recommendation:
be ignored if `--allowed-mcp-server-names` is set.
- **Default**: No MCP servers excluded.
- **Example:** `"excludeMCPServers": ["myNodeServer"]`.
- **Security Note:** This uses simple string matching on MCP server names,
- **Security note:** This uses simple string matching on MCP server names,
which can be modified. If you're a system administrator looking to prevent
users from bypassing this, consider configuring the `mcpServers` at the
system settings level such that the user will not be able to configure any
@@ -538,7 +538,7 @@ a few things you can try in order of recommendation:
}
```
## Shell History
## Shell history
The CLI keeps a history of shell commands you run. To avoid conflicts between
different projects, this history is stored in a project-specific directory
@@ -549,7 +549,7 @@ within your user's home folder.
path.
- The history is stored in a file named `shell_history`.
## Environment Variables & `.env` Files
## Environment variables and `.env` files
Environment variables are a common way to configure applications, especially for
sensitive information like API keys or for settings that might change between
@@ -566,7 +566,7 @@ loading order is:
the home directory.
3. If still not found, it looks for `~/.env` (in the user's home directory).
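Because the search starts in the current working directory, a project-scoped key can be dropped into `.gemini/.env` so it is found first. A sketch (the key value is a placeholder):

```shell
# Create a project-scoped .env that Gemini CLI will pick up first:
mkdir -p .gemini
printf 'GEMINI_API_KEY=YOUR_API_KEY\n' > .gemini/.env
```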
**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and
**Environment variable exclusion:** Some environment variables (like `DEBUG` and
`DEBUG_MODE`) are automatically excluded from being loaded from project `.env`
files to prevent interference with gemini-cli behavior. Variables from
`.gemini/.env` files are never excluded. You can customize this behavior using
@@ -591,7 +591,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
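A `settings.json` sketch of that setting; `DEBUG` and `DEBUG_MODE` are the defaults described above, while `NODE_ENV` is added here purely as an illustrative extra:

```json
{
  "excludedProjectEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"]
}
```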
- Required for using Code Assist or Vertex AI.
- If using Vertex AI, ensure you have the necessary permissions in this
project.
- **Cloud Shell Note:** When running in a Cloud Shell environment, this
- **Cloud Shell note:** When running in a Cloud Shell environment, this
variable defaults to a special project allocated for Cloud Shell users. If
you have `GOOGLE_CLOUD_PROJECT` set in your global environment in Cloud
Shell, it will be overridden by this default. To use a different project in
@@ -639,7 +639,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
- Specifies the endpoint for the code assist server.
- This is useful for development and testing.
## Command-Line Arguments
## Command-line arguments
Arguments passed directly when running the CLI can override other configurations
for that specific session.
@@ -714,7 +714,7 @@ for that specific session.
- **`--version`**:
- Displays the version of the CLI.
## Context Files (Hierarchical Instructional Context)
## Context files (hierarchical instructional context)
While not strictly configuration for the CLI's _behavior_, context files
(defaulting to `GEMINI.md` but configurable via the `contextFileName` setting)
@@ -730,7 +730,7 @@ context.
that you want the Gemini model to be aware of during your interactions. The
system is designed to manage this instructional context hierarchically.
### Example Context File Content (e.g., `GEMINI.md`)
### Example context file content (e.g., `GEMINI.md`)
Here's a conceptual example of what a context file at the root of a TypeScript
project might contain:
@@ -771,23 +771,23 @@ more relevant and precise your context files are, the better the AI can assist
you. Project-specific context files are highly encouraged to establish
conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated
- **Hierarchical loading and precedence:** The CLI implements a sophisticated
hierarchical memory system by loading context files (e.g., `GEMINI.md`) from
several locations. Content from files lower in this list (more specific)
typically overrides or supplements content from files higher up (more
general). The exact concatenation order and final context can be inspected
using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
1. **Global context file:**
- Location: `~/.gemini/<contextFileName>` (e.g., `~/.gemini/GEMINI.md` in
your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
2. **Project root and ancestors context files:**
- Location: The CLI searches for the configured context file in the
current working directory and then in each parent directory up to either
the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant
portion of it.
3. **Sub-directory Context Files (Contextual/Local):**
3. **Sub-directory context files (contextual/local):**
- Location: The CLI also scans for the configured context file in
subdirectories _below_ the current working directory (respecting common
ignore patterns like `node_modules`, `.git`, etc.). The breadth of this
@@ -795,15 +795,15 @@ conventions and context.
with a `memoryDiscoveryMaxDirs` field in your `settings.json` file.
- Scope: Allows for highly specific instructions relevant to a particular
component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are
concatenated (with separators indicating their origin and path) and provided
as part of the system prompt to the Gemini model. The CLI footer displays the
count of loaded context files, giving you a quick visual cue about the active
instructional context.
- **Importing Content:** You can modularize your context files by importing
- **Concatenation and UI indication:** The contents of all found context files
are concatenated (with separators indicating their origin and path) and
provided as part of the system prompt to the Gemini model. The CLI footer
displays the count of loaded context files, giving you a quick visual cue
about the active instructional context.
- **Importing content:** You can modularize your context files by importing
other Markdown files using the `@path/to/file.md` syntax. For more details,
see the [Memory Import Processor documentation](../core/memport.md).
- **Commands for Memory Management:**
- **Commands for memory management:**
- Use `/memory refresh` to force a re-scan and reload of all context files
from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently
@@ -850,7 +850,7 @@ sandbox image:
BUILD_SANDBOX=1 gemini -s
```
## Usage Statistics
## Usage statistics
To help us improve the Gemini CLI, we collect anonymized usage statistics. This
data helps us understand how the CLI is used, identify common issues, and
@@ -858,22 +858,22 @@ prioritize new features.
**What we collect:**
- **Tool Calls:** We log the names of the tools that are called, whether they
- **Tool calls:** We log the names of the tools that are called, whether they
succeed or fail, and how long they take to execute. We do not collect the
arguments passed to the tools or any data returned by them.
- **API Requests:** We log the Gemini model used for each request, the duration
- **API requests:** We log the Gemini model used for each request, the duration
of the request, and whether it was successful. We do not collect the content
of the prompts or responses.
- **Session Information:** We collect information about the configuration of the
- **Session information:** We collect information about the configuration of the
CLI, such as the enabled tools and the approval mode.
**What we DON'T collect:**
- **Personally Identifiable Information (PII):** We do not collect any personal
- **Personally identifiable information (PII):** We do not collect any personal
information, such as your name, email address, or API keys.
- **Prompt and Response Content:** We do not log the content of your prompts or
- **Prompt and response content:** We do not log the content of your prompts or
the responses from the Gemini model.
- **File Content:** We do not log the content of any files that are read or
- **File content:** We do not log the content of any files that are read or
written by the CLI.
**How to opt out:**

View File

@@ -1,6 +1,6 @@
# Gemini CLI Configuration
# Gemini CLI configuration
> **Note on Configuration Format, 9/17/25:** The format of the `settings.json`
> **Note on configuration format, 9/17/25:** The format of the `settings.json`
> file has been updated to a new, more organized structure.
>
> - The new format will be supported in the stable release starting
@@ -950,7 +950,7 @@ of v0.3.0:
}
```
## Shell History
## Shell history
The CLI keeps a history of shell commands you run. To avoid conflicts between
different projects, this history is stored in a project-specific directory
@@ -961,7 +961,7 @@ within your user's home folder.
path.
- The history is stored in a file named `shell_history`.
## Environment Variables & `.env` Files
## Environment variables and `.env` files
Environment variables are a common way to configure applications, especially for
sensitive information like API keys or for settings that might change between
@@ -978,7 +978,7 @@ loading order is:
the home directory.
3. If still not found, it looks for `~/.env` (in the user's home directory).
**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and
**Environment variable exclusion:** Some environment variables (like `DEBUG` and
`DEBUG_MODE`) are automatically excluded from being loaded from project `.env`
files to prevent interference with gemini-cli behavior. Variables from
`.gemini/.env` files are never excluded. You can customize this behavior using
@@ -1003,7 +1003,7 @@ the `advanced.excludedEnvVars` setting in your `settings.json` file.
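In the v2 format, a sketch of the equivalent configuration nests the list under `advanced` (the nesting is inferred from the setting name `advanced.excludedEnvVars`):

```json
{
  "advanced": {
    "excludedEnvVars": ["DEBUG", "DEBUG_MODE"]
  }
}
```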
- Required for using Code Assist or Vertex AI.
- If using Vertex AI, ensure you have the necessary permissions in this
project.
- **Cloud Shell Note:** When running in a Cloud Shell environment, this
- **Cloud Shell note:** When running in a Cloud Shell environment, this
variable defaults to a special project allocated for Cloud Shell users. If
you have `GOOGLE_CLOUD_PROJECT` set in your global environment in Cloud
Shell, it will be overridden by this default. To use a different project in
@@ -1072,7 +1072,7 @@ the `advanced.excludedEnvVars` setting in your `settings.json` file.
- Specifies the endpoint for the code assist server.
- This is useful for development and testing.
## Command-Line Arguments
## Command-line arguments
Arguments passed directly when running the CLI can override other configurations
for that specific session.
@@ -1167,7 +1167,7 @@ for that specific session.
- **`--record-responses`**:
- Path to a file to record model responses for testing.
## Context Files (Hierarchical Instructional Context)
## Context files (hierarchical instructional context)
While not strictly configuration for the CLI's _behavior_, context files
(defaulting to `GEMINI.md` but configurable via the `context.fileName` setting)
@@ -1183,7 +1183,7 @@ context.
that you want the Gemini model to be aware of during your interactions. The
system is designed to manage this instructional context hierarchically.
### Example Context File Content (e.g., `GEMINI.md`)
### Example context file content (e.g., `GEMINI.md`)
Here's a conceptual example of what a context file at the root of a TypeScript
project might contain:
@@ -1224,23 +1224,23 @@ more relevant and precise your context files are, the better the AI can assist
you. Project-specific context files are highly encouraged to establish
conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated
- **Hierarchical loading and precedence:** The CLI implements a sophisticated
hierarchical memory system by loading context files (e.g., `GEMINI.md`) from
several locations. Content from files lower in this list (more specific)
typically overrides or supplements content from files higher up (more
general). The exact concatenation order and final context can be inspected
using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
1. **Global context file:**
- Location: `~/.gemini/<configured-context-filename>` (e.g.,
`~/.gemini/GEMINI.md` in your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
2. **Project root and ancestors context files:**
- Location: The CLI searches for the configured context file in the
current working directory and then in each parent directory up to either
the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant
portion of it.
3. **Sub-directory Context Files (Contextual/Local):**
3. **Sub-directory context files (contextual/local):**
- Location: The CLI also scans for the configured context file in
subdirectories _below_ the current working directory (respecting common
ignore patterns like `node_modules`, `.git`, etc.). The breadth of this
@@ -1249,15 +1249,15 @@ conventions and context.
file.
- Scope: Allows for highly specific instructions relevant to a particular
component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are
concatenated (with separators indicating their origin and path) and provided
as part of the system prompt to the Gemini model. The CLI footer displays the
count of loaded context files, giving you a quick visual cue about the active
instructional context.
- **Importing Content:** You can modularize your context files by importing
- **Concatenation and UI indication:** The contents of all found context files
are concatenated (with separators indicating their origin and path) and
provided as part of the system prompt to the Gemini model. The CLI footer
displays the count of loaded context files, giving you a quick visual cue
about the active instructional context.
- **Importing content:** You can modularize your context files by importing
other Markdown files using the `@path/to/file.md` syntax. For more details,
see the [Memory Import Processor documentation](../core/memport.md).
- **Commands for Memory Management:**
- **Commands for memory management:**
- Use `/memory refresh` to force a re-scan and reload of all context files
from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently
@@ -1304,7 +1304,7 @@ sandbox image:
BUILD_SANDBOX=1 gemini -s
```
## Usage Statistics
## Usage statistics
To help us improve the Gemini CLI, we collect anonymized usage statistics. This
data helps us understand how the CLI is used, identify common issues, and
@@ -1312,22 +1312,22 @@ prioritize new features.
**What we collect:**
- **Tool Calls:** We log the names of the tools that are called, whether they
- **Tool calls:** We log the names of the tools that are called, whether they
succeed or fail, and how long they take to execute. We do not collect the
arguments passed to the tools or any data returned by them.
- **API Requests:** We log the Gemini model used for each request, the duration
- **API requests:** We log the Gemini model used for each request, the duration
of the request, and whether it was successful. We do not collect the content
of the prompts or responses.
- **Session Information:** We collect information about the configuration of the
- **Session information:** We collect information about the configuration of the
CLI, such as the enabled tools and the approval mode.
**What we DON'T collect:**
- **Personally Identifiable Information (PII):** We do not collect any personal
- **Personally identifiable information (PII):** We do not collect any personal
information, such as your name, email address, or API keys.
- **Prompt and Response Content:** We do not log the content of your prompts or
- **Prompt and response content:** We do not log the content of your prompts or
the responses from the Gemini model.
- **File Content:** We do not log the content of any files that are read or
- **File content:** We do not log the content of any files that are read or
written by the CLI.
**How to opt out:**

View File

@@ -1,6 +1,6 @@
Note: This page will be replaced by [installation.md](installation.md).
# Gemini CLI Installation, Execution, and Deployment
# Gemini CLI installation, execution, and deployment
Install and run Gemini CLI. This document provides an overview of Gemini CLI's
installation methods and deployment architecture.
@@ -44,7 +44,7 @@ downloading the Gemini CLI package from the NPM registry.
For security and isolation, Gemini CLI can be run inside a container. This is
the default way that the CLI executes tools that might have side effects.
- **Directly from the Registry:** You can run the published sandbox image
- **Directly from the registry:** You can run the published sandbox image
directly. This is useful for environments where you only have Docker and want
to run the CLI.
```bash
@@ -63,7 +63,7 @@ the default way that the CLI executes tools that might have side effects.
Contributors to the project will want to run the CLI directly from the source
code.
- **Development Mode:** This method provides hot-reloading and is useful for
- **Development mode:** This method provides hot-reloading and is useful for
active development.
```bash
# From the root of the repository

View File

@@ -1,4 +1,4 @@
# Gemini CLI Examples
# Gemini CLI examples
Not sure where to get started with Gemini CLI? This document covers examples on
how to use Gemini CLI for a variety of tasks.
@@ -57,7 +57,7 @@ Gemini CLI will return an explanation based on the actual source code:
The `chalk` library is a popular npm package for styling terminal output with
colors. After analyzing the source code, here's how it works:
- **Core Functionality:** The main file sets up a chainable API. Each color or
- **Core functionality:** The main file sets up a chainable API. Each color or
modifier (like `bold` or `italic`) is a getter that appends the corresponding
ANSI escape code to an internal stack.
@@ -65,7 +65,7 @@ colors. After analyzing the source code, here's how it works:
getters. The `red` getter adds the red color code, and the `bold` getter adds
the bold code.
- **Output Generation:** When the chain is treated as a string (e.g., in
- **Output generation:** When the chain is treated as a string (e.g., in
`console.log`), a final `toString()` method is called. This method joins all
the stored ANSI codes, wraps them around the input string ('Hello'), and adds
a reset code at the end. This produces the final, styled string that the
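The chainable getter pattern described above can be sketched in a few lines. This is a hypothetical miniature, not chalk's actual source; it hard-codes just two ANSI codes to show the stack-then-wrap mechanics:

```javascript
// Hypothetical miniature of the chainable-styler pattern (not chalk's source).
const CODES = { red: '\u001b[31m', bold: '\u001b[1m' };
const RESET = '\u001b[0m';

function makeChain(stack = []) {
  // The builder is a function, so the chain can end in a call...
  const builder = (text) => stack.join('') + text + RESET;
  // ...and each style name is a getter that returns a new chain with
  // the corresponding ANSI code pushed onto the internal stack.
  for (const name of Object.keys(CODES)) {
    Object.defineProperty(builder, name, {
      get: () => makeChain([...stack, CODES[name]]),
    });
  }
  return builder;
}

const style = makeChain();
console.log(style.red.bold('Hello')); // "Hello" wrapped in red + bold codes
```

Each getter access extends the stack immutably, and the final call joins the codes, appends the text, and closes with the reset code, mirroring the behavior described above.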

View File

@@ -1,4 +1,4 @@
# Gemini 3 Pro on Gemini CLI (Join the Waitlist)
# Gemini 3 Pro on Gemini CLI (join the waitlist)
We're excited to bring Gemini 3 Pro to Gemini CLI. For Google AI Ultra users
(Google AI Ultra for Business is not currently supported) and paid Gemini and
@@ -8,7 +8,7 @@ For everyone else, we're gradually expanding access
waitlist now to access Gemini 3 Pro once approved.
**Note:** Please wait until you have been approved to use Gemini 3 Pro to enable
**Preview Features**. If enabled early, the CLI will fallback to Gemini 2.5 Pro.
**preview features**. If enabled early, the CLI will fall back to Gemini 2.5 Pro.
## Do I need to join the waitlist?
@@ -81,7 +81,7 @@ CLI waits longer between each retry, when the system is busy. If the retry
doesn't happen immediately, please wait a few minutes for the request to
process.
## Model selection & routing types
## Model selection and routing types
When using Gemini CLI, you may want to control how your requests are routed
between models. By default, Gemini CLI uses **Auto** routing.

View File

@@ -1,4 +1,4 @@
# Get Started with Gemini CLI
# Get started with Gemini CLI
Welcome to Gemini CLI! This guide will help you install, configure, and start
using the Gemini CLI to enhance your workflow right from your terminal.

View File

@@ -1,4 +1,4 @@
# Gemini CLI Installation, Execution, and Deployment
# Gemini CLI installation, execution, and deployment
Install and run Gemini CLI. This document provides an overview of Gemini CLI's
installation methods and deployment architecture.
@@ -42,7 +42,7 @@ downloading the Gemini CLI package from the NPM registry.
For security and isolation, Gemini CLI can be run inside a container. This is
the default way that the CLI executes tools that might have side effects.
- **Directly from the Registry:** You can run the published sandbox image
- **Directly from the registry:** You can run the published sandbox image
directly. This is useful for environments where you only have Docker and want
to run the CLI.
```bash
@@ -61,13 +61,13 @@ the default way that the CLI executes tools that might have side effects.
Contributors to the project will want to run the CLI directly from the source
code.
- **Development Mode:** This method provides hot-reloading and is useful for
- **Development mode:** This method provides hot-reloading and is useful for
active development.
```bash
# From the root of the repository
npm run start
```
- **Production-like mode (Linked package):** This method simulates a global
- **Production-like mode (linked package):** This method simulates a global
installation by linking your local package. It's useful for testing a local
build in a production workflow.