Official Sentry integration — bring live error context, stack traces, and performance data into your AI
The official Sentry MCP server pulls live error data directly into your AI agent. Instead of screenshotting stack traces and pasting them into chat, point Claude at a Sentry issue — it gets full context, correlates with releases, and suggests fixes.

Features:

- Query issues by project, environment, and time range
- Get full stack traces with local variables
- Access performance transaction data and bottlenecks
- Correlate errors with recent deployments and releases
- Search events by user, URL, tag, or error type
- Manage issue status (resolve, ignore, merge)
- Access release health metrics
- Set up alert rules and notification policies

The typical workflow:

1. Claude sees an error in your code
2. Claude queries Sentry for matching recent issues
3. Claude pulls the full stack trace, breadcrumbs, and affected users
4. Claude suggests a fix with full context — no manual copy-paste

Built and maintained by Sentry. The hosted remote server is OAuth-based; the local stdio setup below uses a Sentry auth token instead.
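Under the hood, issue queries map onto Sentry's public REST API. As a rough sketch of the request behind the "query issues" feature — the endpoint is Sentry's documented project-issues API, but the helper name, defaults, and base URL parameter are illustrative, not part of the MCP server:

```python
from urllib.parse import urlencode


def issues_url(org: str, project: str,
               query: str = "is:unresolved",
               base: str = "https://sentry.io") -> str:
    """Build the Sentry issue-search URL the MCP server queries on your
    behalf (it also handles auth headers and pagination for you)."""
    params = urlencode({"query": query})
    return f"{base}/api/0/projects/{org}/{project}/issues/?{params}"


# e.g. issues_url("your-org", "frontend", "is:unresolved release:1.2.0")
```

The point is that the agent never needs these URLs — it calls the server's tools, and the server translates them into authenticated API requests like this one.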
File location: `~/.gemini/settings.json`

```json
{
  "mcpServers": {
    "sentryMcp": {
      "command": "npx",
      "args": ["-y", "@sentry/mcp-server"],
      "env": {
        "SENTRY_AUTH_TOKEN": "sntrys_...",
        "SENTRY_ORG": "your-org"
      }
    }
  }
}
```

If you maintain Sentry MCP, add this badge to your README to show it's verified on CuratedMCP:
[](https://curatedmcp.com/marketplace/sentry-mcp)
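If you script your setup, the server entry can be merged into an existing `settings.json` rather than pasted over it. A minimal sketch, assuming Gemini CLI's `mcpServers` layout — the function name is illustrative, and you would pass your real org slug and auth token:

```python
import json
import pathlib


def add_sentry_server(settings_path: str, org: str, token: str) -> dict:
    """Merge the Sentry MCP entry into a Gemini settings.json without
    clobbering any other servers already configured there."""
    path = pathlib.Path(settings_path).expanduser()
    settings = json.loads(path.read_text()) if path.exists() else {}
    servers = settings.setdefault("mcpServers", {})
    servers["sentryMcp"] = {
        "command": "npx",
        "args": ["-y", "@sentry/mcp-server"],
        "env": {"SENTRY_AUTH_TOKEN": token, "SENTRY_ORG": org},
    }
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

Restart the Gemini CLI after editing the file so the new server is picked up.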