
ChatGPT CLI


ChatGPT CLI is a powerful, multi-provider command-line interface for working with modern LLMs. It supports OpenAI, Azure, Perplexity, LLaMA, and more, and includes streaming, interactive chat, prompt files, image/audio I/O, MCP tool calls, and an experimental agent mode for multi-step tasks with safety and budget controls.

a screenshot


Features

  • Streaming mode: Real-time interaction with the GPT model.
  • Query mode: Single input-output interactions with the GPT model.
  • Interactive mode: The interactive mode allows for a more conversational experience with the model. Prints the token usage when combined with query mode.
  • Thread-based context management: Enjoy seamless conversations with the GPT model with individualized context for each thread, much like your experience on the OpenAI website. Each unique thread has its own history, ensuring relevant and coherent responses across different chat instances.
  • Sliding window history: To stay within token limits, the chat history automatically trims while still preserving the necessary context. The size of this window can be adjusted through the context-window setting.
  • Custom context from any source: You can provide the GPT model with a custom context during conversation. This context can be piped in from any source, such as local files, standard input, or even another program. This flexibility allows the model to adapt to a wide range of conversational scenarios.
  • Agent mode (ReAct + Plan/Execute): Run multi-step tasks that can think, act, and observe using tools like shell, file operations, and LLM reasoning. Supports both iterative ReAct loops and Plan/Execute workflows, with built-in budget limits (time, steps, tokens) and policy enforcement (allowed tools, denied commands, workdir sandboxing) for safe-by-default automation.
  • Web search: Allow compatible models (e.g. gpt-5+) to fetch live web data during a query. Enable with the web setting and tune results using web_context_size.
  • MCP (Model Context Protocol) support: Call external MCP tools via HTTP(S) or STDIO, inject their results into the conversation context, and continue the prompt seamlessly.
    • MCP session management: Built-in support for stateful MCP servers. The CLI automatically initializes sessions, attaches session identifiers, and renews them when they become invalid.
  • Support for images: Upload an image or provide an image URL using the --image flag. Note that image support may not be available for all models. You can also pipe an image directly: pngpaste - | chatgpt "What is this photo?"
  • Generate images: Use the --draw and --output flags to generate an image from a prompt (requires image-capable models like gpt-image-1).
  • Edit images: Use the --draw flag with --image and --output to modify an existing image using a prompt (e.g. "add sunglasses to the cat"). Supported formats: PNG, JPEG, and WebP.
  • Audio support: You can upload audio files using the --audio flag to ask questions about spoken content. This feature is compatible only with audio-capable models like gpt-4o-audio-preview. Currently, only .mp3 and .wav formats are supported.
  • Transcription support: You can also use the --transcribe flag to generate a transcript of the uploaded audio. This uses OpenAI’s transcription endpoint (compatible with models like gpt-4o-transcribe) and supports a wider range of formats, including .mp3, .mp4, .mpeg, .mpga, .m4a, .wav, and .webm.
  • Text-to-speech support: Use the --speak and --output flags to convert text to speech (works with models like gpt-4o-mini-tts). If you have afplay installed (macOS), you can even chain playback like this:
    chatgpt --speak "convert this to audio" --output test.mp3 && afplay test.mp3
  • Model listing: Access a list of available models using the -l or --list-models flag.
  • Advanced configuration options: The CLI supports a layered configuration system where settings can be specified through default values, a config.yaml file, and environment variables. For quick adjustments, various --set-<value> flags are provided. To verify your current settings, use the --config or -c flag.

Prompt Support

We’re excited to introduce support for prompt files with the --prompt flag in version 1.7.1! This feature allows you to provide a rich and detailed context for your conversations directly from a file.

Using the --prompt Flag

The --prompt flag lets you specify a file containing the initial context or instructions for your ChatGPT conversation. This is especially useful when you have detailed instructions or context that you want to reuse across different conversations.

To use the --prompt flag, pass the path of your prompt file like this:

chatgpt --prompt path/to/your/prompt.md "Use a pipe or provide a query here"

The contents of prompt.md will be read and used as the initial context for the conversation, while the query you provide directly will serve as the specific question or task you want to address.

Example

Here’s a fun example where you can use the output of a git diff command as a prompt:

git diff | chatgpt --prompt ../prompts/write_pull-request.md

In this example, the content from the write_pull-request.md prompt file is used to guide the model's response based on the diff data from git diff.
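
To give a sense of what such a file can contain, here is a hypothetical sketch of a write_pull-request.md prompt (illustrative only, not the actual file from the prompts repository):

You are reviewing a git diff. Write a pull request description with:
- a one-line summary of the change
- a bulleted list of the notable changes
- any risks or suggested follow-up work
Keep the tone concise and factual.

Any plain-text or markdown instructions work; the file is simply read and used as the initial context before your query.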

Explore More Prompts

For a variety of ready-to-use prompts, check out this awesome prompts repository. These can serve as great starting points or inspiration for your own custom prompts!

Agent Mode (ReAct + Plan/Execute)

a screenshot

ChatGPT CLI includes an experimental agent mode that can plan and run multi-step tasks using tools (shell, file ops, and LLM reasoning), while enforcing budget and policy constraints.

There are two agent modes:

  • ReAct (--agent-mode react): iterative “think → act → observe” loop
  • Plan/Execute (--agent-mode plan): generates a plan first, then executes it step by step

Quick Start

ReAct mode (default):

chatgpt why is my test failing? --agent

Plan/Execute mode:

chatgpt what is the weather like in brooklyn --agent --agent-mode plan

Workdir Safety

Agent file access can be restricted to a working directory. This is useful to prevent accidental reads/writes outside a project.

chatgpt "what files are in the /tmp directory" \
  --agent \
  --agent-work-dir .

If a step tries to read/write outside the workdir, it will be denied by policy (e.g. kind=path_escape).

Budgets and Policy

Agent execution is governed by:

  • Budget limits (iterations, steps, tool calls, wall-time, token usage)
  • Policy rules (allowed tools, denied shell commands, file op allowlist, and workdir path restrictions)

This keeps the agent useful while still being safe-by-default.
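
If you prefer to set these limits once rather than per run, they can live in config.yaml. A minimal sketch, assuming the agent settings nest under an agent: key as suggested by the dotted names in the Agent Configuration table further down (verify against your version):

agent:
  mode: react
  work_dir: .
  max_iterations: 10
  max_wall_time: 300
  denied_shell_commands: [rm, sudo, dd, mkfs, shutdown, reboot]
  restrict_files_to_work_dir: true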

Logs

When running in agent mode, ChatGPT CLI automatically writes detailed execution logs to the cache directory, under:

$OPENAI_CACHE_HOME/agent/

These logs include:

  • Planner output (for Plan/Execute mode)
  • Tool calls and their results
  • Timing and budget usage
  • Debug-level traces when debug logging is enabled

Each agent run gets its own timestamped log directory, making it easy to inspect what happened after the fact or debug unexpected behavior.
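
To see what was recorded for recent runs, you can list the agent log location directly. Assuming the default cache directory (see the custom directory section below), something like:

ls -lt "${OPENAI_CACHE_HOME:-$HOME/.chatgpt-cli/cache}/agent/" | head

The exact file layout inside each run directory may vary between versions.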

This is especially useful when:

  • An agent run fails due to budget or policy limits
  • You want to understand why the agent chose certain steps
  • You’re developing or tuning agent policies and budgets

MCP Support

ChatGPT CLI supports the Model Context Protocol (MCP) over HTTP(S) or STDIO. This allows the CLI to call an MCP tool, inject the tool’s result into the current thread as context, and then run your prompt, all in one command. The integration is provider-agnostic.

You provide:

  • MCP endpoint URL (--mcp)
  • Tool name (--mcp-tool)
  • Optional HTTP headers (--mcp-header)
  • Tool arguments (--mcp-param or --mcp-params)

Overview

When --mcp is set, the CLI will:

  1. POST a JSON-RPC tools/call request to your MCP server (a sketch of this request appears after the list)
  2. Automatically initialize and manage an MCP session if required
  3. Extract the tool output
  4. Store it as an assistant message in the active thread (prefixed with [MCP: <tool>])
  5. Submit your query to the model (if you provided one)
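
For reference, the tools/call request from step 1 follows the standard MCP JSON-RPC shape. For the echo example below, the request body would look roughly like this (illustrative, not a byte-for-byte capture):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "echo",
    "arguments": {"payload": {"foo": "bar", "count": 3, "enabled": true}}
  }
}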

Examples

Local FastMCP echo server (minimal MCP HTTP example):

chatgpt \
  --mcp "http://127.0.0.1:8000/mcp" \
  --mcp-tool echo \
  --mcp-param 'payload={"foo":"bar","count":3,"enabled":true}' \
  "What did the MCP server receive?"

Apify MCP example (production MCP server):

chatgpt \
  --mcp "https://mcp.apify.com/?tools=epctex/weather-scraper" \
  --mcp-tool "epctex-slash-weather-scraper" \
  --mcp-header "Authorization: Bearer $APIFY_API_KEY" \
  --mcp-param locations='["Brooklyn, NY"]' \
  --mcp-param timeFrame=today \
  --mcp-param units=imperial \
  --mcp-param proxyConfiguration='{"useApifyProxy":true}' \
  --mcp-param maxItems=1 \
  "what should I wear today"

Using --mcp-params (raw JSON) instead of multiple --mcp-param flags:

chatgpt \
  --mcp "https://your-mcp-server.example.com" \
  --mcp-tool "some-tool-name" \
  --mcp-params '{"locations":["BrooklynNY"],"timeFrame":"today"}' \
  "what should I wear today"

Local MCP server over stdio (no HTTP, runs as a subprocess):

chatgpt \
  --mcp "stdio:python test/mcp/stdio/mcp_stdio_server.py" \
  --mcp-tool echo \
  --mcp-param 'payload={"foo":"bar","count":3}' \
  "What did the MCP server receive?"

Headers and Authentication

MCP does not mandate a specific authentication mechanism. Some servers use Bearer tokens, others use API keys, cookies, or no auth at all. Use --mcp-header to pass whatever your MCP server requires:

--mcp-header "Authorization: Bearer $TOKEN"
--mcp-header "X-Api-Key: $API_KEY"

MCP Session Management

Some MCP servers require a session identifier (commonly mcp-session-id) to be established before tool calls are accepted. The ChatGPT CLI automatically manages MCP sessions for HTTP(S) servers that require them:

  • Initializes a session when needed
  • Caches the session identifier per endpoint
  • Attaches it to subsequent requests
  • Automatically re-initializes the session if the server invalidates it

You can explicitly pass a session header yourself using --mcp-header. If you do, the CLI will respect it and skip automatic session handling.
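
For example, to pin a session you established out of band (the exact header name depends on the server; mcp-session-id is common):

chatgpt \
  --mcp "https://your-mcp-server.example.com" \
  --mcp-tool "some-tool-name" \
  --mcp-header "mcp-session-id: $MCP_SESSION_ID" \
  "your query here"

Here $MCP_SESSION_ID is a placeholder for a session identifier you obtained yourself.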

How MCP Results Are Used

Tool results are injected into the conversation thread as context before your query runs. The injected message is stored as an assistant message and prefixed like this:

[MCP: <tool-name>] ...

If you run MCP without providing a query, the CLI will inject the context and exit:

chatgpt \
  --mcp "https://your-mcp-server.example.com" \
  --mcp-tool "some-tool-name" \
  --mcp-params '{"foo":"bar"}'

Installation

Using Homebrew (macOS)

You can install chatgpt-cli using Homebrew:

brew tap kardolus/chatgpt-cli && brew install chatgpt-cli

Direct Download

For a quick and easy installation without compiling, you can directly download the pre-built binary for your operating system and architecture:

Apple Silicon

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-darwin-arm64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

macOS Intel chips

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-darwin-amd64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Linux (amd64)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-amd64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Linux (arm64)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-arm64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Linux (386)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-386 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

FreeBSD (amd64)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-freebsd-amd64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

FreeBSD (arm64)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-freebsd-arm64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Windows (amd64)

Download the binary from this link and add it to your PATH.

Choose the appropriate command for your system, which will download the binary, make it executable, and move it to your /usr/local/bin directory (or %PATH% on Windows) for easy access.

Getting Started

  1. Set the OPENAI_API_KEY environment variable to your ChatGPT secret key. To set the environment variable, you can add the following line to your shell profile (e.g. ~/.bashrc, ~/.zshrc, or ~/.bash_profile), replacing your_api_key with your actual key:

    export OPENAI_API_KEY="your_api_key"
  2. To enable history tracking across CLI calls, create a ~/.chatgpt-cli directory using the command:

    mkdir -p ~/.chatgpt-cli

    Once this directory is in place, the CLI automatically manages the message history for each "thread" you converse with. The history operates like a sliding window, maintaining context up to a configurable token maximum. This ensures a balance between maintaining conversation context and achieving optimal performance.

    By default, if a specific thread is not provided by the user, the CLI uses the default thread and stores the history at ~/.chatgpt-cli/history/default.json. You can find more details about how to configure the thread parameter in the Configuration section of this document.

  3. Try it out:

    chatgpt what is the capital of the Netherlands
  4. To start interactive mode, use the -i or --interactive flag:

    chatgpt --interactive

    If you want the CLI to automatically create a new thread for each session, ensure that the auto_create_new_thread configuration variable is set to true. This will create a unique thread identifier for each interactive session.

  5. To use the pipe feature, create a text file containing some context. For example, create a file named context.txt with the following content:

    Kya is a playful dog who loves swimming and playing fetch.

    Then, use the pipe feature to provide this context to ChatGPT:

    cat context.txt | chatgpt "What kind of toy would Kya enjoy?"
  6. To list all available models, use the -l or --list-models flag:

    chatgpt --list-models
  7. For more options, see:

    chatgpt --help

Configuration

The ChatGPT CLI adopts a four-tier configuration strategy, with different levels of precedence assigned to flags, environment variables, a config.yaml file, and default values, in that respective order (a short example follows the list):

  1. Flags: Command-line flags have the highest precedence. Any value provided through a flag will override other configurations.
  2. Environment Variables: If a setting is not specified by a flag, the corresponding environment variable (prefixed with the name field from the config) will be checked.
  3. Config file (config.yaml): If neither a flag nor an environment variable is set, the value from the config.yaml file will be used.
  4. Default Values: If no value is specified through flags, config.yaml, or environment variables, the CLI will fall back to its built-in default values.
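
As a concrete illustration of the precedence order, suppose config.yaml contains model: gpt-4o. An environment variable then wins for a single run (the model names here are just examples):

OPENAI_MODEL=gpt-4o-mini chatgpt "what is the capital of Spain?"

A flag for the same setting, where one exists, would in turn take precedence over the environment variable.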

General Configuration

| Variable | Description | Default |
|---|---|---|
| name | The prefix for environment variable overrides. | 'openai' |
| thread | The name of the current chat thread. Each unique thread name has its own context. | 'default' |
| target | Load configuration from config.<target>.yaml. | '' |
| omit_history | If true, the chat history will not be used to provide context for the GPT model. | false |
| command_prompt | The command prompt in interactive mode. Should be single-quoted. | '[%datetime] [Q%counter]' |
| output_prompt | The output prompt in interactive mode. Should be single-quoted. | '' |
| command_prompt_color | The color of the command_prompt in interactive mode. Supported colors: "red", "green", "blue", "yellow", "magenta". | '' |
| output_prompt_color | The color of the output_prompt in interactive mode. Supported colors: "red", "green", "blue", "yellow", "magenta". | '' |
| auto_create_new_thread | If set to true, a new thread with a unique identifier (e.g. int_a1b2) will be created for each interactive session. If false, the CLI will use the thread specified by the thread parameter. | false |
| auto_shell_title | If set to true, sets the title of the shell to the name of the current thread. | false |
| track_token_usage | If set to true, displays the total token usage after each query in --query mode, helping you monitor API usage. | false |
| debug | If set to true, prints the raw request and response data during API calls, useful for debugging. | false |
| custom_headers | Add a map of custom headers to each HTTP request. | {} |
| skip_tls_verify | If set to true, skips TLS certificate verification, allowing insecure HTTPS requests. | false |
| http_timeout | HTTP client timeout in seconds. Set to 0 for no timeout, useful for slow or local models. | 60 |
| multiline | If set to true, enables multiline input mode in interactive sessions. | false |
| role_file | Path to a file that overrides the system role (role). | '' |
| prompt | Path to a file that provides additional context before the query. | '' |
| image | Local path or URL to an image used in the query. | '' |
| audio | Path to an audio file (MP3/WAV) used as part of the query. | '' |
| output | Path where synthesized audio is saved when using --speak. | '' |
| transcribe | Enables transcription mode. This flag takes the path of an audio file. | false |
| speak | If true, enables text-to-speech synthesis for the input query. | false |
| draw | If true, generates an image from a prompt and saves it to the path specified by output. Requires image-capable models. | false |
| web | Enable web search for supported models (e.g. gpt-5+). | false |
| web_context_size | Controls how much context is retrieved during web search (low, medium, high). | low |
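
A small config.yaml that combines a few of the settings above might look like this (values are illustrative):

thread: work
auto_create_new_thread: false
track_token_usage: true
http_timeout: 120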

LLM-Specific Configuration

| Variable | Description | Default |
|---|---|---|
| api_key | Your API key. | '' |
| api_key_file | Load the API key from a file instead of the environment. Takes precedence over the environment variable. | '' |
| auth_header | The header used for authorization in API requests. | 'Authorization' |
| auth_token_prefix | The prefix to be added before the token in the auth_header. | 'Bearer ' |
| completions_path | The API endpoint for completions. | '/v1/chat/completions' |
| context_window | The memory limit for how much of the conversation can be remembered at one time. | 8192 |
| effort | Sets the reasoning effort. Used by gpt-5 and o1-pro models. | 'low' |
| frequency_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. | 0.0 |
| image_edits_path | The API endpoint for image editing. | '/v1/images/edits' |
| image_generations_path | The API endpoint for image generation. | '/v1/images/generations' |
| max_tokens | The maximum number of tokens that can be used in a single API call. | 4096 |
| model | The GPT model used by the application. | 'gpt-4o' |
| models_path | The API endpoint for accessing model information. | '/v1/models' |
| presence_penalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. | 0.0 |
| responses_path | The API endpoint for responses. Used by o1-pro models. | '/v1/responses' |
| role | The system role. | 'You are a helpful assistant.' |
| seed | Sets the seed for deterministic sampling (Beta). Repeated requests with the same seed and parameters aim to return the same result. | 0 |
| speech_path | The API endpoint for text-to-speech synthesis. | '/v1/audio/speech' |
| temperature | What sampling temperature to use, between 0 and 2. Higher values make the output more random; lower values make it more focused and deterministic. | 1.0 |
| top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. | 1.0 |
| transcriptions_path | The API endpoint for audio transcription requests. | '/v1/audio/transcriptions' |
| url | The base URL for the OpenAI API. | 'https://api.openai.com' |
| user_agent | The header used for the user agent in API requests. | 'chatgpt-cli' |
| voice | The voice to use when generating audio with TTS models like gpt-4o-mini-tts. | 'nova' |

Agent Configuration

| Variable | Description | Default |
|---|---|---|
| agent | Enable agent mode | false |
| agent.mode | Strategy (react or plan) | react |
| agent.work_dir | Working directory | . |
| agent.max_iterations | Max ReAct iterations | 10 |
| agent.max_steps | Max plan steps | 10 |
| agent.max_wall_time | Max wall time (0 = unlimited) | 0 |
| agent.max_shell_calls | Max shell calls (0 = unlimited) | 0 |
| agent.max_llm_calls | Max LLM calls (0 = unlimited) | 10 |
| agent.max_file_ops | Max file ops (0 = unlimited) | 0 |
| agent.max_llm_tokens | Max LLM tokens (0 = unlimited) | 0 |
| agent.allowed_tools | Allowed tools | see below |
| agent.denied_shell_commands | Denied shell commands | see below |
| agent.allowed_file_ops | Allowed file ops | see below |
| agent.restrict_files_to_work_dir | Sandbox to workdir | true |
| agent.write_plan_on | Write the plan file in plan mode | true |
| agent.plan_on_path | Override the plan file path | "" |
| agent.dry_run | No side effects | false |

You can also use flags, for example:

chatgpt "what files are here?" --agent --agent-work-dir /tmp

Default Policy

allowed_tools: [shell, llm, files]
denied_shell_commands: [rm, sudo, dd, mkfs, shutdown, reboot]
allowed_file_ops: [read, write]
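
To tighten or loosen the defaults, you can override the same lists in your config. A hedged sketch, assuming the nesting implied by the dotted names in the table above:

agent:
  allowed_tools: [llm, files]
  denied_shell_commands: [rm, sudo, dd, mkfs, shutdown, reboot, curl, wget]

This example drops the shell tool entirely and adds curl and wget to the denied commands; both lists are illustrative, not recommendations.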

Custom Config, Cache, and Data Directory

By default, ChatGPT CLI stores configuration and history files in the ~/.chatgpt-cli directory. However, you can easily override these locations by setting environment variables, allowing you to store configuration and history in custom directories.

| Environment Variable | Description | Default Location |
|---|---|---|
| OPENAI_CONFIG_HOME | Overrides the default config directory path. | ~/.chatgpt-cli |
| OPENAI_DATA_HOME | Overrides the default data directory path. | ~/.chatgpt-cli/history |
| OPENAI_CACHE_HOME | Overrides the default cache directory path. | ~/.chatgpt-cli/cache |

Example for Custom Directories

To change the default configuration or data directories, set the appropriate environment variables:

export OPENAI_CONFIG_HOME="/custom/config/path"
export OPENAI_DATA_HOME="/custom/data/path"
export OPENAI_CACHE_HOME="/custom/cache/path"

If these environment variables are not set, the application defaults to ~/.chatgpt-cli for configuration files and ~/.chatgpt-cli/history for history.

Switching Between Configurations with --target

You can maintain multiple configuration files side by side and switch between them using the --target flag. This is especially useful if you use multiple LLM providers (like OpenAI, Perplexity, Azure, etc.) or have different contexts or workflows that require distinct settings.

How it Works

When you use the --target flag, the CLI loads a config file named:

config.<target>.yaml

For example:

chatgpt --target perplexity --config

This will load:

~/.chatgpt-cli/config.perplexity.yaml

If the --target flag is not provided, the CLI falls back to:

~/.chatgpt-cli/config.yaml

Example Setup

You can maintain the following structure:

~/.chatgpt-cli/
├── config.yaml # Default (e.g. OpenAI)
├── config.perplexity.yaml # Perplexity setup
├── config.azure.yaml # Azure-specific config
└── config.llama.yaml # LLaMA setup

Then switch between them like so:

chatgpt --target azure "Explain Azure's GPT model differences"
chatgpt --target perplexity "What are some good restaurants in the Red Hook area"

Or just use the default:

chatgpt "What's the capital of Sweden?"

CLI and Environment Interaction

  • The value of --target is never persisted — it must be explicitly passed for each run.
  • The config file corresponding to the target is loaded before any environment variable overrides are applied.
  • Environment variables still follow the name: field inside the loaded config, so name: perplexity enables PERPLEXITY_API_KEY.

Variables for interactive mode (an example follows the list):

  • %date: The current date in the format YYYY-MM-DD.
  • %time: The current time in the format HH:MM:SS.
  • %datetime: The current date and time in the format YYYY-MM-DD HH:MM:SS.
  • %counter: The total number of queries in the current session.
  • %usage: The total number of tokens used (only works in query mode).
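
For example, to show the date, time, and query counter in the interactive prompt, you could configure (values are illustrative):

command_prompt: '[%date %time] [Q%counter]'
output_prompt: '>> '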

The defaults can be overridden by providing your own values in the user configuration file. The structure of this file mirrors that of the default configuration. For instance, to override the model and max_tokens parameters, your file might look like this:

model: gpt-3.5-turbo-16k
max_tokens: 4096

This alters the model to gpt-3.5-turbo-16k and adjusts max_tokens to 4096. All other options, such as url, completions_path, and models_path, can similarly be modified.

You can also add custom HTTP headers to all API requests. This is useful when working with proxies, API gateways, or services that require additional headers:

custom_headers:
  X-Custom-Header: "custom-value"
  X-API-Version: "v2"
  X-Client-ID: "my-client-id"

If the user configuration file cannot be accessed or is missing, the application will resort to the default configuration.

Another way to adjust values without manually editing the configuration file is by using environment variables. The name attribute forms the prefix for these variables. As an example, the model can be modified using the OPENAI_MODEL environment variable. Similarly, to disable history during the execution of a command, use:

OPENAI_OMIT_HISTORY=true chatgpt what is the capital of Denmark?

This approach is especially beneficial for temporary changes or for testing varying configurations.

Moreover, you can use the --config or -c flag to view the present configuration. This handy feature allows users to swiftly verify their current settings without the need to manually inspect the configuration files.

chatgpt --config

Executing this command will display the active configuration, including any overrides instituted by environment variables or the user configuration file.

To facilitate convenient adjustments, the ChatGPT CLI provides flags for swiftly modifying the model, thread, context-window, and max_tokens parameters in your user-configured config.yaml. These flags are --set-model, --set-thread, --set-context-window, and --set-max-tokens.

For instance, to update the model, use the following command:

chatgpt --set-model gpt-3.5-turbo-16k

This feature allows for rapid changes to key configuration parameters, optimizing your experience with the ChatGPT CLI.

Azure Configuration

For Azure, you need to configure these, or similar, values:

name: azure
api_key: <your azure api key>
url: https://<your_resource>.openai.azure.com
completions_path: /openai/deployments/<your_deployment>/chat/completions?api-version=<your_api>
auth_header: api-key
auth_token_prefix: " "

You can set the API key either in the config.yaml file as shown above or export it as an environment variable:

export AZURE_API_KEY=<your_key>

Perplexity Configuration

For Perplexity, you will need something equivalent to the following values:

name: perplexity
api_key: <your perplexity api key>
model: sonar
url: https://api.perplexity.ai

You can set the API key either in the config.yaml file as shown above or export it as an environment variable:

export PERPLEXITY_API_KEY=<your_key>


302.AI Configuration

I successfully tested 302.AI with the following values:

name: ai302 # environment variables cannot start with numbers
api_key: <your 302.AI api key>
url: https://api.302.ai

You can set the API key either in the config.yaml file as shown above or export it as an environment variable:

export AI302_API_KEY=<your_key>

Command-Line Autocompletion

Enhance your CLI experience with our new autocompletion feature for command flags!

Enabling Autocompletion

Autocompletion is currently supported for the following shells: Bash, Zsh, Fish, and PowerShell. To activate flag completion in your current shell session, execute the appropriate command based on your shell:

  • Bash
    . <(chatgpt --set-completions bash)
  • Zsh
    . <(chatgpt --set-completions zsh)
  • Fish
    chatgpt --set-completions fish | source
  • PowerShell
    chatgpt --set-completions powershell | Out-String | Invoke-Expression

Persistent Autocompletion

For added convenience, you can make autocompletion persist across all new shell sessions by adding the appropriate sourcing command to your shell's startup file. Here are the files typically used for each shell:

  • Bash: Add to .bashrc or .bash_profile
  • Zsh: Add to .zshrc
  • Fish: Add to config.fish
  • PowerShell: Add to your PowerShell profile script

For example, for Bash, you would add the following line to your .bashrc file:

. <(chatgpt --set-completions bash)

This ensures that command flag autocompletion is enabled automatically every time you open a new terminal window.

Markdown Rendering

You can render markdown in real-time using the mdrender.sh script, located here. You'll first need to install glow.

Example:

chatgpt write a hello world program in Java | ./scripts/mdrender.sh

Development

To start developing, set the OPENAI_API_KEY environment variable to your ChatGPT secret key.

Using the Makefile

The Makefile simplifies development tasks by providing several targets for testing, building, and deployment.

  • all-tests: Run all tests, including linting, formatting, and go mod tidy.
    make all-tests
  • binaries: Build binaries for multiple platforms.
    make binaries
  • shipit: Run the release process, create binaries, and generate release notes.
    make shipit
  • updatedeps: Update dependencies and commit any changes.
    make updatedeps

For more available commands, use:

make help

Windows build script

.\scripts\install.ps1

Testing the CLI

  1. After a successful build, test the application with the following command:

    ./bin/chatgpt what type of dog is a Jack Russel?
  2. As mentioned previously, the ChatGPT CLI supports tracking conversation history across CLI calls. This feature creates a seamless and conversational experience with the GPT model, as the history is utilized as context in subsequent interactions.

    To enable this feature, you need to create a ~/.chatgpt-cli directory using the command:

    mkdir -p ~/.chatgpt-cli

Reporting Issues and Contributing

If you encounter any issues or have suggestions for improvements, please submit an issue on GitHub. We appreciate your feedback and contributions to help make this project better.

Uninstallation

If for any reason you wish to uninstall the ChatGPT CLI application from your system, you can do so by following these steps:

Using Homebrew (macOS)

If you installed the CLI using Homebrew, you can run:

brew uninstall chatgpt-cli

And to remove the tap:

brew untap kardolus/chatgpt-cli

MacOS / Linux

If you installed the binary directly, follow these steps:

  1. Remove the binary:

    sudo rm /usr/local/bin/chatgpt
  2. Optionally, if you wish to remove the history tracking directory, you can also delete the ~/.chatgpt-cli directory:

    rm -rf ~/.chatgpt-cli

Windows

  1. Navigate to the location of the chatgpt binary in your system, which should be in your PATH.

  2. Delete the chatgpt binary.

  3. Optionally, if you wish to remove the history tracking, navigate to the ~/.chatgpt-cli directory (where ~ refers to your user's home directory) and delete it.

Please note that the history tracking directory ~/.chatgpt-cli only contains conversation history and no personal data. If you have any concerns about this, please feel free to delete this directory during uninstallation.

Useful Links

Additional Resources

Thank you for using ChatGPT CLI!
