Real-Time Telemetry in Development: Using MCP and Datadog for AI-Assisted Coding
Whitney demonstrates an innovative approach to AI-assisted development: integrating real-time telemetry data into Claude Code through the Datadog MCP server. The session showcases Commit Story, a demo application that automatically generates journal entries from Git commits and Claude Code conversations. The key innovation is using OpenTelemetry instrumentation throughout development (not just production) to give AI agents accurate, real-world data about system behavior.

Whitney shares compelling debugging scenarios where telemetry data helped identify root causes after Claude Code made incorrect assumptions, such as comparing data flow from three weeks ago with current behavior to discover a bypassed filter function. The demo includes a look at a heavily instrumented helper function in which the instrumentation code significantly outweighs the business logic, highlighting the need for auto-instrumentation tooling. To that end, Whitney is building an auto-instrumentation agent that uses OpenTelemetry Weaver for semantic conventions, automatically adds instrumentation to code changes, ensures proper correlation between logs, metrics, and traces, and validates instrumentation by checking that the data actually appears in Datadog.

The conversation also explores how telemetry helps maintain a clean, efficient codebase by preventing AI agents from duplicating existing functionality, and how complete visibility into system behavior enables more informed development decisions about data handling, filtering strategies, and resource allocation.
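The kind of heavily instrumented helper Whitney walks through can be pictured with a minimal sketch like the one below. The function name, attribute keys, and data shapes are hypothetical (they are not taken from the Commit Story codebase), and the sketch assumes an OpenTelemetry SDK is already configured elsewhere in the app to export spans (for example, to the Datadog Agent). The point is simply that the single line of business logic ends up surrounded by many lines of instrumentation.

```typescript
import { trace, SpanStatusCode, type Span } from '@opentelemetry/api';

const tracer = trace.getTracer('commit-story');

// Hypothetical helper: keep only the chat messages that fall inside the
// commit's time window. The business logic is the single filter() call;
// everything else exists to describe what happened to the telemetry backend.
export function filterMessagesByTimeWindow(
  messages: { timestamp: number; text: string }[],
  windowStart: number,
  windowEnd: number,
): { timestamp: number; text: string }[] {
  return tracer.startActiveSpan('filterMessagesByTimeWindow', (span: Span) => {
    try {
      // Record the inputs so past runs can be compared with current ones.
      span.setAttribute('filter.window_start', windowStart);
      span.setAttribute('filter.window_end', windowEnd);
      span.setAttribute('filter.input_count', messages.length);

      // Business logic: one line.
      const kept = messages.filter(
        (m) => m.timestamp >= windowStart && m.timestamp <= windowEnd,
      );

      // Record the outputs, mark success, and hand the result back.
      span.setAttribute('filter.output_count', kept.length);
      span.setAttribute('filter.dropped_count', messages.length - kept.length);
      span.setStatus({ code: SpanStatusCode.OK });
      return kept;
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```

Even in a sketch this small, most of the lines exist only to report what happened, which is exactly the overhead Whitney's auto-instrumentation agent is meant to absorb.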
Key Takeaways
- Real-time telemetry data delivered through MCP servers can inform development choices and catch an AI agent's bad assumptions before they lead to suboptimal solutions
- Instrumenting code during development (not just production) enables a kind of time-travel debugging - comparing what the data looked like weeks ago with how it looks now to identify root causes
- Heavily instrumenting even small helper functions provides complete visibility for AI agents, though this requires auto-instrumentation tooling to remain practical
- Correlated logs, metrics, and traces let an AI agent understand the full system flow from a single trace ID instead of relying solely on static code analysis (see the sketch after this list)
- Using telemetry to validate AI agent proposals prevents bloated code - ensuring agents reuse existing functionality rather than duplicating it
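As a rough illustration of the correlation point above, the sketch below stamps each structured log line with the IDs of the currently active span. The logger and field names are illustrative only; Datadog's actual log-trace correlation may expect specific field names or ID formats, so treat this as the general shape rather than a drop-in setup.

```typescript
import { trace } from '@opentelemetry/api';

// Minimal structured logger that attaches the active span's IDs to every
// entry, so logs and spans can be joined on the same trace_id later.
function logWithTraceContext(message: string, fields: Record<string, unknown> = {}): void {
  const spanContext = trace.getActiveSpan()?.spanContext();
  console.log(
    JSON.stringify({
      message,
      ...fields,
      trace_id: spanContext?.traceId,
      span_id: spanContext?.spanId,
      timestamp: new Date().toISOString(),
    }),
  );
}

// Usage inside any instrumented code path: the log line carries the same
// trace_id as the surrounding span.
logWithTraceContext('journal entry generated', { 'entry.section_count': 4 });
```

With something like this in place, a single trace ID is enough to pull up both the spans and the log lines for one run, which is what lets an agent reason about the live system instead of only the static code.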
Resources
- Datadog MCP Server: MCP server for integrating Datadog telemetry data with AI coding assistants
- OpenTelemetry Weaver: Tool for managing semantic conventions and auto-instrumentation standards