
Claude Code findings with Rust and TypeScript through ray-tracing projects
In this candid engineering chat, Datadog developer advocate Scott Gerring walks Jason Hand through his latest experiments with Anthropic's Claude Code assistant.

Gerring shows off a ray-traced teapot scene that Claude generated almost entirely on its own. By conversationally prescribing the high-level architecture and letting the model write the Rust, he ended up with clean, compiler-checked interfaces for materials, impressive Lambertian shading and glass refraction, and a working multithreaded renderer, all things that would have sounded like science fiction two years ago. Yet the demo also exposes Claude's more comedic failure modes: a 'random' axis picker implemented as a global static counter wrapped in an unsafe block, a crippling global mutex that turns a progress bar into a five-fold performance penalty, and subtle rendering artifacts caused by omitting the classic ray-origin epsilon offset.

Beyond graphics, Gerring describes using Claude Code as a pragmatic coding mule. It rapidly scaffolded a TypeScript ETL pipeline that ingests Datadog integration metadata into the new DevHub site powered by Astro, complete with strongly typed schemas and a bespoke GitHub API client. The assistant excelled at the mechanical mapping and refactoring chores that normally burn hours of human focus, but it stumbled when asked to wire up Jest unit tests, introducing brittle module-system changes that broke the entire build.

At the bleeding edge, Claude still falters on Rust compiler internals for architectural linting, a reminder that engineers must understand the success criteria and domain pitfalls before blindly shipping AI-generated code. The pair close by musing about shared organisational context windows, multi-modal workflows (handwriting drafts on a Supernote, then piping the PDFs through Claude for flawless transcription), and the exciting potential of Datadog's new MCP for autonomous incident remediation.
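The ray-origin epsilon offset mentioned above is the classic fix for "shadow acne": a secondary ray spawned exactly at a hit point can immediately re-intersect the surface it just left due to floating-point error. A minimal sketch of the idea in Rust; the `Vec3` type, `EPSILON` value, and function names here are illustrative, not taken from the episode's actual code.

```rust
// "Shadow acne" fix: nudge a secondary ray's origin off the surface it
// just hit, so floating-point error cannot make it re-intersect that
// same surface. All names and constants below are illustrative.

#[derive(Clone, Copy, Debug)]
struct Vec3 {
    x: f64,
    y: f64,
    z: f64,
}

impl Vec3 {
    fn add(self, o: Vec3) -> Vec3 {
        Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z }
    }
    fn scale(self, s: f64) -> Vec3 {
        Vec3 { x: self.x * s, y: self.y * s, z: self.z * s }
    }
}

const EPSILON: f64 = 1e-4;

// Offset the hit point along the surface normal before spawning the
// reflected or refracted ray. Omitting this offset is the source of the
// subtle rendering artifacts described above.
fn secondary_ray_origin(hit_point: Vec3, normal: Vec3) -> Vec3 {
    hit_point.add(normal.scale(EPSILON))
}

fn main() {
    let hit = Vec3 { x: 0.0, y: 1.0, z: 0.0 }; // point on a horizontal surface
    let n = Vec3 { x: 0.0, y: 1.0, z: 0.0 };   // upward-facing unit normal
    let origin = secondary_ray_origin(hit, n);
    assert!(origin.y > hit.y); // the new origin sits just above the surface
    println!("offset origin: {:?}", origin);
}
```

An equivalent and common alternative is to leave the origin alone and instead start intersection tests at `t_min = EPSILON` rather than `t_min = 0.0`; both approaches reject the spurious self-intersection.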
The through-line: lean on generative AI for the repetitive drudgery, but keep seasoned intuition—plus good observability—at the helm.
Key Takeaways
- Generative coding assistants like Claude Code can produce surprisingly correct, compiler-validated architecture when given clear conversational guidance
- AI tools also invent highly 'creative' bugs—unsafe globals, mutex bottlenecks, missing epsilon offsets—that require domain expertise to detect
- For rote tasks (type definitions, data mapping, API clients, refactors) Claude saves hours, especially in TypeScript and Astro projects
- Complex build or tooling changes (e.g., JavaScript module systems, Rust compiler internals) remain fragile; human review and testing are essential
- Multi-modal workflows—hand-written drafts, observability data, shared context stores—hint at a future where AI augments the entire software lifecycle, not just code editing
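The mutex-bottleneck bug called out above has a straightforward remedy in Rust: track progress with an atomic counter instead of a shared `Mutex`, so worker threads never block on a bookkeeping update. A minimal sketch under assumed names (`render_rows`, rows striped across threads); the episode's actual renderer structure may differ.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// A global Mutex<usize> around a progress counter forces every worker to
// serialize on each update, the "five-fold penalty" failure mode. An atomic
// counter records the same progress without any blocking.
fn render_rows(rows: usize, threads: usize) -> usize {
    let progress = Arc::new(AtomicUsize::new(0));
    let mut handles = Vec::new();
    for t in 0..threads {
        let progress = Arc::clone(&progress);
        handles.push(thread::spawn(move || {
            // Each thread takes every `threads`-th row (striped layout).
            for row in (t..rows).step_by(threads) {
                // ... trace this row's pixels here ...
                let _ = row;
                // Relaxed ordering suffices for a simple progress tally:
                // no other memory depends on the counter's value.
                progress.fetch_add(1, Ordering::Relaxed);
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    progress.load(Ordering::Relaxed)
}

fn main() {
    let done = render_rows(100, 4);
    assert_eq!(done, 100); // every row accounted for, no lock contention
    println!("rendered {} rows", done);
}
```

The design point is that the progress bar only needs an eventually consistent count, so `Ordering::Relaxed` is enough; reaching for a mutex here buys nothing but contention.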