
How to Install OpenAI MCP in Gemini CLI

Official OpenAI integration — access GPT-4o, DALL-E, Whisper, and Embeddings from any MCP client

What OpenAI MCP does

The official OpenAI MCP server lets you use OpenAI's full model suite as tools within any MCP-compatible AI client — including Claude, Cursor, and Windsurf.

Features:

- Chat completions with any GPT model (GPT-4o, o1, o3-mini)
- DALL-E 3 image generation from text prompts
- Whisper audio transcription
- Text embeddings for semantic search
- Moderation API for content safety checks
- Function/tool calling passthrough
- Streaming response support
- Batch API for high-volume requests

Cross-model use cases:

- Use Claude as orchestrator while delegating image generation to DALL-E
- Use GPT-4o for tasks where it outperforms other models
- Use Whisper for voice transcription within a Claude workflow
- Generate embeddings for RAG pipelines directly from MCP

Full OpenAI API compatibility — all models and endpoints supported.

Tags: openai, gpt, dall-e, whisper, embeddings

Installation steps

  1. Copy the config below
  2. Open ~/.gemini/settings.json
  3. Paste inside "mcpServers": {}
  4. Restart Gemini CLI
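After step 3, the file should look something like the sketch below (assuming you had no other MCP servers configured; the `openaiMcp` key is just a label you choose, and any other top-level settings already in your file should be left in place):

```json
{
  "mcpServers": {
    "openaiMcp": {
      "command": "npx",
      "args": ["-y", "@openai/mcp"],
      "env": {
        "OPENAI_API_KEY": "sk-proj-..."
      }
    }
  }
}
```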

Configuration

File location: ~/.gemini/settings.json

```json
{
  "openaiMcp": {
    "command": "npx",
    "args": [
      "-y",
      "@openai/mcp"
    ],
    "env": {
      "OPENAI_API_KEY": "sk-proj-..."
    }
  }
}
```
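Hardcoding an API key in a config file is easy to leak. If your version of Gemini CLI resolves `$VAR_NAME` references in settings.json env values (recent releases do; check your CLI's configuration docs), you can point at a shell environment variable instead — a sketch, assuming `OPENAI_API_KEY` is exported in the shell that launches Gemini CLI:

```json
{
  "openaiMcp": {
    "command": "npx",
    "args": ["-y", "@openai/mcp"],
    "env": {
      "OPENAI_API_KEY": "$OPENAI_API_KEY"
    }
  }
}
```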


For maintainers

If you maintain OpenAI MCP, add this badge to your README to show it's verified on CuratedMCP:

[![CuratedMCP Verified](https://curatedmcp.com/api/badge/openai-mcp)](https://curatedmcp.com/marketplace/openai-mcp)