The Gemini CLI ships with five subcommand groups, more than 30 flags, and an interactive REPL with its own set of slash commands. The official docs split that surface across a dozen pages. This cheat sheet collapses everything into one place: every command, every flag, every slash command, and the config file schema, with real outputs captured on Ubuntu 26.04 LTS.
If you came from the Gemini CLI install guide and want a single reference page to keep open while you work, this is it.
Verified working: May 2026 with Gemini CLI 0.40.1 and Gemini 2.5 Pro on Ubuntu 26.04 LTS (Linux kernel 7.0).
Quick install reminder
Gemini CLI is an npm-distributed package. Install it once and the gemini binary becomes available on your $PATH:
sudo npm install -g @google/gemini-cli
gemini --version
0.40.1
For full prerequisites (Node 20+, browser auth, free-tier quota notes) check the dedicated Gemini CLI install walkthrough.
Top-level command map
Running gemini --help lists the five subcommand groups plus the default interactive launcher. The table below is what you should keep bookmarked. Every command links down to its dedicated section.
| Command | Purpose |
|---|---|
| `gemini` | Launch the interactive TUI agent (default) |
| `gemini [query]` | Open the TUI seeded with an initial prompt |
| `gemini -p "..."` | Headless one-shot mode, prints to stdout |
| `gemini mcp` | Manage MCP servers (add, remove, list, enable, disable) |
| `gemini extensions` | Install, list, link, validate Gemini CLI extensions |
| `gemini skills` | Manage agent skills (install, link, list, enable, disable) |
| `gemini hooks` | Manage hooks (currently only the migrate-from-Claude subcommand) |
| `gemini gemma` | Local Gemma model routing (setup, start, stop, status, logs) |
Authentication and config
On first launch the CLI looks for one of three auth methods. Pick the one that matches your account:
- Google OAuth (free tier): launch `gemini` in a desktop session and follow the browser flow. Best for laptops and workstations.
- API key: set `GEMINI_API_KEY` from a Google AI Studio key. Best for servers, CI, and headless boxes.
- Vertex AI: set `GOOGLE_GENAI_USE_VERTEXAI=true` with a service account. Best for enterprise or GCP-billed teams.
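A quick pre-flight check in shell can tell you which of those sources a CI box will pick up. This is only a sketch that mirrors the environment variables the CLI reports in its own error message (the CLI decides actual precedence; `GOOGLE_GENAI_USE_GCA` is omitted for brevity):

```shell
# Report which auth source is present before invoking gemini.
if [ -n "${GEMINI_API_KEY:-}" ]; then
  auth_mode="api-key"
elif [ "${GOOGLE_GENAI_USE_VERTEXAI:-}" = "true" ]; then
  auth_mode="vertex-ai"
else
  auth_mode="none"   # expect the browser OAuth flow, or the error below
fi
echo "auth source: $auth_mode"
```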
If none of those is configured the CLI prints a clear error rather than silently falling back. Real output from a fresh box without keys:
$ gemini -p 'echo /help' --output-format text
Please set an Auth method in your /root/.gemini/settings.json
or specify one of the following environment variables before running:
GEMINI_API_KEY, GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_GENAI_USE_GCA
API key on a Linux server
export GEMINI_API_KEY="aiza..."
echo 'export GEMINI_API_KEY="aiza..."' >> ~/.bashrc
gemini --list-extensions
Where the CLI stores state
Gemini keeps every per-user file under ~/.gemini/. Knowing the layout makes scripting, backups, and CI debugging straightforward:
$ ls -la ~/.gemini/
drwxr-xr-x history/ # interactive REPL transcripts
-rw-r--r-- installation_id # anonymized telemetry id
-rw-r--r-- projects.json # known projects map
drwxr-xr-x tmp/ # tool sandbox scratch
# settings.json is created on first auth choice
A minimal ~/.gemini/settings.json for API-key auth looks like this:
{
"selectedAuthType": "USE_GEMINI",
"theme": "Default Dark",
"preferredEditor": "vim",
"telemetry": { "enabled": false }
}
Project-scoped overrides live in ./.gemini/settings.json at the root of the repo you are working in. The same schema applies, and project values override user values.
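For example, a project-scoped file that only pins a different editor and theme for this repo might look like the sketch below. The field names match the user-level example above; the theme name is illustrative, so check your release's settings schema before relying on it:

```json
{
  "theme": "Default Light",
  "preferredEditor": "code"
}
```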
GEMINI.md project memory
A GEMINI.md file at the root of a repository is auto-loaded into context every session. Use it to record build commands, test commands, deployment paths, and any rule the agent should respect. The pattern mirrors CLAUDE.md in Claude Code.
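A minimal sketch of what that file might contain, written as a shell heredoc so it is easy to bootstrap in a script. The build and test commands inside are placeholders for your project's real ones:

```shell
# Seed a GEMINI.md at the repo root (contents are illustrative placeholders).
cat > GEMINI.md <<'EOF'
# Project notes for the agent

- Build: npm run build
- Test: npm test -- --runInBand
- Never edit files under dist/ (generated output)
EOF
grep -c '^-' GEMINI.md   # sanity check: counts the bullet rules
```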
Global flags reference
The following flags work with the default launcher and most subcommands. Useful in scripts, CI jobs, and shell aliases.
| Flag | Purpose |
|---|---|
| `-m, --model` | Override the model for one invocation. Example: `-m gemini-2.5-pro`. |
| `-p, --prompt` | Run headless. The prompt prints, the agent answers, the process exits. |
| `-i, --prompt-interactive` | Seed the TUI with a prompt and stay interactive afterwards. |
| `-y, --yolo` | Auto-approve every action. Equivalent to `--approval-mode yolo`. |
| `--approval-mode` | Choose `default`, `auto_edit`, `yolo`, or `plan` (read-only). |
| `-w, --worktree` | Run in a fresh git worktree. Pass a name or let the CLI generate one. |
| `-s, --sandbox` | Run tool calls in the configured sandbox. |
| `--skip-trust` | Trust the current workspace for this session only. |
| `-r, --resume` | Resume a previous session. Pass `latest` or an index from `--list-sessions`. |
| `--list-sessions` | Print previous session IDs for the current project. |
| `--delete-session` | Delete a session by index. |
| `--include-directories` | Add extra directories to the workspace map. |
| `-e, --extensions` | Restrict the active extension set for this session. |
| `-l, --list-extensions` | List installed extensions and exit. |
| `-o, --output-format` | `text`, `json`, or `stream-json`. Use `json` for scripting. |
| `--policy` / `--admin-policy` | Load policy files for the Policy Engine, used to lock down tools per team. |
| `--allowed-mcp-server-names` | Only enable the named MCP servers for this run. |
| `--screen-reader` | Optimize TUI output for screen readers. |
| `-d, --debug` | Open the F12 debug console with raw event logs. |
| `-v, --version` | Print the version and exit. |
Headless mode for scripting
Headless invocation is what makes Gemini CLI useful in CI, cron, and pipelines. The -p flag prints the answer and exits, suitable for piping into another tool:
# Quick code review of a diff
git diff main | gemini -p "Review this diff for security issues. Format: bullets."
# JSON output for downstream parsing
gemini -p "List the top 3 risks in this Dockerfile" --output-format json < Dockerfile
# Stream JSON events for tail-friendly log capture
gemini -p "Summarise this log" --output-format stream-json < /var/log/syslog
One gotcha on servers: trust. Gemini refuses to run inside an untrusted directory unless you opt in. Real error from a fresh checkout:
Gemini CLI is not running in a trusted directory.
To proceed, either use --skip-trust, set GEMINI_CLI_TRUST_WORKSPACE=true,
or trust this directory in interactive mode.
Pass --skip-trust for one-shot calls, or set GEMINI_CLI_TRUST_WORKSPACE=true in the environment for a CI runner.
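Put together, a minimal CI review step might look like this sketch. The prompt text and diff range are placeholders, and the `gemini` call is guarded so the script degrades gracefully on a box where the CLI is not installed:

```shell
#!/bin/sh
set -eu
export GEMINI_CLI_TRUST_WORKSPACE=true   # CI runner: workspace is disposable

prompt="Review this diff for security issues. Format: bullets."
if command -v gemini >/dev/null 2>&1; then
  git diff origin/main...HEAD | gemini -p "$prompt" --output-format text
else
  echo "gemini not installed; skipping review" >&2
fi
```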
Interactive REPL slash commands
Once inside the TUI, slash commands control session state, tools, and memory. The list below covers everything in the current 0.40.x release.
| Slash command | What it does |
|---|---|
| `/help` | Show the in-app command list. |
| `/auth` | Switch the active auth method without restarting. |
| `/chat list` | List saved chats in the current project. |
| `/chat save <tag>` | Snapshot the current conversation under a tag. |
| `/chat resume <tag>` | Reload a saved chat into the active session. |
| `/chat delete <tag>` | Remove a saved chat. |
| `/clear` | Wipe the visible conversation, keep the model state. |
| `/compress` | Summarise the conversation in place. Frees tokens without losing context. |
| `/copy` | Copy the last reply to the system clipboard. |
| `/corgi` | Toggle the corgi mascot. Yes, really. |
| `/mcp` | Show the active MCP servers and their tools. |
| `/memory show` | Print the loaded GEMINI.md and per-session notes. |
| `/memory add <text>` | Append a note that survives the rest of the session. |
| `/memory refresh` | Re-read GEMINI.md from disk after edits. |
| `/quit` | Exit the TUI cleanly. |
| `/restore` | Restore a checkpoint after a tool call. |
| `/stats` | Print token spend, cache hits, latency. |
| `/theme` | Cycle through built-in colour themes. |
| `/tools` | List active tools (built-in plus MCP plus extensions). |
| `@` (no slash) | Use `@path/to/file` inline to attach files to the next prompt. |
| `!` (no slash) | Prefix any input with `!` to run it as a shell command, e.g. `!ls -la`. |
| `/bug` | Open a pre-filled GitHub issue with anonymised diagnostics. |
| `/editor` | Open the configured editor for a long-form prompt. |
File and shell context patterns
Two patterns make the REPL ten times faster. Use them constantly:
# Attach a file to the next prompt
@src/auth.ts review this for SQL injection
# Attach multiple files
@src/auth.ts @src/db.ts find shared validation logic
# Run a shell command without leaving the agent
!git status
!docker compose ps
Managing MCP servers
MCP (Model Context Protocol) servers extend the agent with external tools: GitHub, Postgres, Filesystem, Sentry, Playwright, and hundreds more from the public registry. The gemini mcp subcommand manages them.
$ gemini mcp --help
Commands:
gemini mcp add <name> <commandOrUrl> [args...] Add a server
gemini mcp remove <name> Remove a server
gemini mcp list List all configured MCP servers
gemini mcp enable <name> Enable an MCP server
gemini mcp disable <name> Disable an MCP server
Common MCP servers and the exact add command for each:
# Filesystem (scoped to one directory)
gemini mcp add filesystem npx -y @modelcontextprotocol/server-filesystem /home/user/projects
# GitHub (needs GITHUB_TOKEN)
gemini mcp add github npx -y @modelcontextprotocol/server-github
# Postgres (needs POSTGRES_CONNECTION_STRING)
gemini mcp add postgres npx -y @modelcontextprotocol/server-postgres "postgresql://user:pass@localhost:5432/app"
# Context7 for live library docs
gemini mcp add context7 npx -y @upstash/context7-mcp@latest
List the active set with gemini mcp list and toggle individuals with enable / disable so you can keep a long-tail registry installed without paying token cost on every invocation.
Extensions, skills, and hooks
The Gemini CLI splits agent customisation across three concepts. Knowing what each one is for saves a lot of trial-and-error.
| Concept | What it is | When to use |
|---|---|---|
| Extension | A bundled package that can ship custom commands, themes, MCP servers, hook handlers, and policies. | Distributing a full toolkit to a team. Versioned, auto-updateable. |
| Skill | A self-contained agent role with its own prompt, tools, and metadata. | Specialised behaviour like “release notes writer” or “security reviewer”. |
| Hook | Event handler that fires on tool calls, prompts, and lifecycle events. | Guardrails (block terraform destroy), automation (auto-format on edit). |
Extension commands
# Install from a github repo
gemini extensions install https://github.com/your-org/your-extension --auto-update
# Install from a local checkout (live-linked, edits reflect immediately)
gemini extensions link ./my-extension
# List installed extensions
gemini extensions list
# Update everything
gemini extensions update --all
# Disable temporarily
gemini extensions disable my-extension
# Validate a local extension before publishing
gemini extensions validate ./my-extension
# Scaffold a new extension from a template
gemini extensions new ./my-new-ext mcp-server
The available templates for extensions new are custom-commands, exclude-tools, hooks, mcp-server, policies, skills, and themes-example. Pick the one closest to what you need and edit from there.
Skills commands
# Install a skill from a git repo
gemini skills install https://github.com/your-org/skill-release-notes --scope user
# Install only a sub-path inside a monorepo of skills
gemini skills install https://github.com/some/monorepo --path skills/security-review
# Link a local skill while you build it
gemini skills link ./security-review
# List discovered skills
gemini skills list --all
# Toggle a skill
gemini skills enable security-review
gemini skills disable security-review
Migrating Claude Code hooks to Gemini
If you already wrote hooks for Claude Code, the migrate command rewrites them into Gemini’s format:
gemini hooks migrate --from-claude
Run it from the project root that contains .claude/settings.json. The CLI writes the converted file under .gemini/ and prints a summary of any hooks that need manual review.
Local Gemma model routing
Gemma is Google’s open-weights model family. The gemini gemma subcommand provisions a local LiteRT-LM server so the CLI can route prompts to a CPU- or GPU-hosted Gemma model instead of the cloud Gemini API. Useful for offline work, sensitive code, or quick lookups that should not burn your free-tier quota.
gemini gemma setup # downloads the model + LiteRT-LM runtime
gemini gemma start # starts the local server in the background
gemini gemma status # prints health + current model
gemini gemma logs # tails the LiteRT-LM log
gemini gemma stop # stops the server
Once the server is running, set the model on a per-invocation basis with -m gemma-3-12b (or whichever variant you downloaded). Combine with --approval-mode plan if you want strictly read-only local exploration.
Sessions, history, and resume
gemini --list-sessions # what is recoverable
gemini --resume latest # pick up the most recent
gemini --resume 3 # pick up index 3
gemini --delete-session 5 # tidy up
Sessions live under ~/.gemini/history/. They are JSON, so you can grep, archive, or feed them into another tool.
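Because the transcripts are plain JSON, a search across old sessions is one grep away. The sketch below runs against a throwaway directory so it is safe to copy; real filenames under ~/.gemini/history/ follow the CLI's own naming, which may differ from these stand-ins:

```shell
# Simulate a history directory with two transcript files.
hist=/tmp/demo-gemini-history
mkdir -p "$hist"
printf '%s\n' '{"role":"user","text":"refactor the auth module"}' > "$hist/session-1.json"
printf '%s\n' '{"role":"user","text":"write release notes"}'      > "$hist/session-2.json"

# Which past sessions mentioned "refactor"?
grep -l refactor "$hist"/*.json
```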
Approval modes explained
The four approval modes control how much trust you give the agent for a single run. Pick deliberately based on what the agent is about to touch.
| Mode | Behaviour | When to use |
|---|---|---|
| `default` | Prompts on every tool call. | First time on a new repo. |
| `auto_edit` | Auto-approves edits, prompts on shell commands and writes outside the workspace. | Routine refactor work after you trust the agent. |
| `yolo` | Auto-approves everything. Same as `--yolo`. | Disposable VMs, sandboxed worktrees, never on production. |
| `plan` | Read-only. Refuses edits, refuses shell. Only reads and reasons. | Code reviews, audits, “explore-don’t-touch” sessions. |
Worktree workflow
The -w flag spins the agent up inside a fresh git worktree. Useful for parallel sessions on the same repo, or when you want the agent’s edits isolated from your in-flight branch.
# Auto-named worktree
gemini -w -p "refactor src/auth/* into smaller modules"
# Named worktree (created if missing, reused if it exists)
gemini -w fix-flaky-tests
When the agent finishes, review the diff with git diff inside the worktree, then merge or discard with the usual git worktree commands.
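The review-then-discard loop is plain git. The sketch below runs against a throwaway repo in /tmp so the commands are safe to copy; the worktree path stands in for one the agent created:

```shell
set -e
repo=/tmp/wt-demo
rm -rf "$repo" "${repo}-fix"
git init -q "$repo"
git -C "$repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"

# Stand-in for a worktree the agent created via: gemini -w fix-flaky-tests
git -C "$repo" worktree add -q "${repo}-fix"
git -C "$repo" worktree list

# Review the branch's diff, then discard the worktree when done.
git -C "$repo" diff wt-demo-fix     # branch name git derived from the path
git -C "$repo" worktree remove "${repo}-fix"
```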
Output formats and JSON parsing
Three output formats fit different scripting needs:
# Plain text. Default. Best for shell pipes and humans.
gemini -p "summarise" --output-format text < report.md
# JSON. One object on stdout. Easiest to parse with jq.
gemini -p "extract action items" --output-format json < meeting.txt | jq .
# Stream JSON. One event per line. Best for long-running prompts where you want progress.
gemini -p "review the entire repo" --output-format stream-json | tee events.ndjson
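When consuming the json format downstream, extract the field you need and ignore the rest. The envelope below is a stand-in written to a file instead of a live call: the real field names vary by release, so inspect an actual run before relying on them. A jq-free fallback is included for minimal CI images:

```shell
# Stand-in for: gemini -p "..." --output-format json > /tmp/gemini_out.json
printf '%s\n' '{"response":"LGTM","stats":{"totalTokens":123}}' > /tmp/gemini_out.json

if command -v jq >/dev/null 2>&1; then
  answer=$(jq -r '.response' /tmp/gemini_out.json)
else
  # Crude extraction when jq is unavailable; assumes no escaped quotes.
  answer=$(sed -n 's/.*"response":"\([^"]*\)".*/\1/p' /tmp/gemini_out.json)
fi
echo "model said: $answer"
```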
Use --raw-output only when you specifically need ANSI escapes preserved (terminal recordings, fancy formatters). Pair it with --accept-raw-output-risk to suppress the security warning. Untrusted model output piped to a terminal can issue control sequences, so leave the warning on whenever the prompt sources are not yours.
Common errors and fixes
Error: “Please set an Auth method”
No env var, no settings.json. Either export GEMINI_API_KEY or run gemini interactively once and choose an auth method.
Error: “Gemini CLI is not running in a trusted directory”
Pass --skip-trust for one run, set GEMINI_CLI_TRUST_WORKSPACE=true in the environment for CI, or trust the directory once from the interactive UI for normal desktop work.
Error: “Quota exceeded”
Free-tier OAuth has a daily request limit. Switch to API key auth (paid AI Studio key) or wait for the quota window to reset. Run /stats inside the TUI to see live usage.
Issue: WSL2 path mismatches
If gemini launches under Windows but tries to read a WSL path, configure the PATH and HOME consistently. Easiest fix: install Gemini CLI inside the WSL distro using the Linux instructions, not the Windows installer.
Gemini CLI vs Claude Code vs Codex CLI
Picking between the three terminal AI coding agents comes down to budget, ecosystem, and the model behaviour you prefer. Quick orientation if you are choosing today:
- Gemini CLI: free OAuth tier, 1M-token Gemini 2.5 Pro context, strong at long-document reasoning, native Google Search tool.
- Claude Code: paid Pro/Max, deepest agent ecosystem (skills, hooks, MCP), best Sonnet/Opus models. See the Claude Code cheat sheet.
- Codex CLI: pay-per-use OpenAI, sharpest at short focused refactors. See the Codex CLI cheat sheet.
- OpenCode: open-source TUI that fronts any provider. See the OpenCode setup guide and the OpenCode vs Claude Code vs Cursor comparison.
Frequently asked questions
Is Gemini CLI free?
Yes, on the OAuth tier. Sign in with a Google account and you get a daily quota of Gemini 2.5 Pro requests at no cost. Heavier usage routes through an AI Studio API key, which is paid per token.
Where is the Gemini CLI config file?
The user-level config is ~/.gemini/settings.json. Project-level overrides go in ./.gemini/settings.json at the root of the repo. Project values override user values.
Can Gemini CLI run offline?
Yes, route prompts to a local Gemma model with gemini gemma setup and gemini gemma start. The cloud Gemini API is unreachable offline, so configure Gemma as the default model in settings.json.
How do I use MCP servers with Gemini CLI?
Add a server with gemini mcp add <name> <command> [args], list active servers with gemini mcp list, and inspect what tools each server exposes from inside the TUI with /mcp.
How does Gemini CLI compare to Claude Code?
Gemini CLI has a free OAuth tier and a 1M-token context, which Claude Code does not. Claude Code has the deeper ecosystem (skills, hooks, plugins, MCP) and stronger code-editing models. Most teams pick Gemini for long-document reasoning and Claude Code for production refactor work.
Keep this open while you work
Gemini CLI’s surface keeps growing. Bookmark this page, and when a new release ships check the freshness block at the top to confirm the version you are running matches what was tested. Pair this with the Gemini CLI install guide for first-time setup, and with the Ollama commands cheat sheet if you also run local models alongside.