Official Datadog integration — query metrics, logs, traces, and incidents for full-stack observability
The official Datadog MCP server gives AI agents access to your full observability stack: metrics, logs, distributed traces, and incidents, enabling autonomous debugging and root cause analysis.

Features:

- Query metrics with full PromQL/DQL expression support
- Search and tail logs in real time
- Inspect distributed traces and service maps
- View and manage monitors and alert policies
- Access incident timelines and postmortems
- Query APM service performance data
- Inspect infrastructure maps and host health
- Analyze synthetic test results
- Create dashboards and widgets programmatically

Real-world workflow:

1. A deployment triggers an alert
2. Claude queries Datadog for the specific metric spike
3. Pulls correlated logs from the same time window
4. Identifies the root cause from trace data
5. Proposes a fix, all within the same conversation

The fastest path from "something is wrong" to "here's why and how to fix it."
```python
from agents import Agent
from agents.mcp import MCPServerStdio

# Launch the Datadog MCP server as a subprocess over stdio.
mcp_server = MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@datadog/mcp-server"],
        "env": {
            "DATADOG_API_KEY": "your_api_key",
            "DATADOG_APP_KEY": "your_app_key",
            "DATADOG_SITE": "datadoghq.com",
        },
    }
)

agent = Agent(
    name="My Agent",
    model="gpt-4o",
    mcp_servers=[mcp_server],
)
```

If you maintain Datadog MCP, add this badge to your README to show it's verified on CuratedMCP:
[](https://curatedmcp.com/marketplace/datadog-mcp)
Grafana MCP
Official Grafana integration — query dashboards, explore metrics, and manage alerts via AI
Sentry MCP
Official Sentry integration — bring live error context, stack traces, and performance data into your AI
Elastic Cloud MCP
Official Elastic Cloud integration — search, observability, and analytics across Elasticsearch and Kibana