
Maximizing AI with Cursor: Building MCP Servers for Seamless Workflow
In this in-depth conversation, Whitney Lee shares how a weekend of “vibe coding” with Cursor spiraled into building her own MCP (Model Context Protocol) server—designed to automate the creation of engineering journal entries from Git commits, AI chats, and terminal commands.
With a background in Kubernetes and platform engineering—but not recent app development—Whitney walks through the creative and technical decisions behind her playful demo app and the real problem it helped her solve: tracking her technical progress and decisions with context, clarity, and reflection.
She explores:
- Why she started with raw Cursor and deliberately held off on using MCP tools
- The specific challenges she encountered, such as lack of memory, unclear documentation sourcing, and difficulty staying on task
- How tools like Taskmaster, Memory, and Context7 helped address those issues
- Her vision for a journaling MCP server that automatically generates daily, weekly, and monthly summaries, complete with tone, milestones, terminal history, and reflections (a minimal sketch of the idea follows this list)
- How she designed and validated the architecture using AI-driven dialogue, test-driven development, and zero-trust principles
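As a rough illustration of the journaling idea discussed in the episode, here is a minimal sketch of one such tool built with the Python MCP SDK's FastMCP helper. The server name, the `daily_summary` tool, and the git-log approach are assumptions for illustration only, not Whitney's actual implementation.

```python
# Minimal sketch of a journaling MCP server.
# Assumptions: the server name, tool name, and git-log summarization
# are illustrative; the real design described in the episode also pulls
# in AI chats and terminal history.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("engineering-journal")


@mcp.tool()
def daily_summary(since: str = "midnight") -> str:
    """Collect today's Git commits as raw material for a journal entry."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:%h %s"],
        capture_output=True,
        text=True,
        check=False,
    )
    commits = log.stdout.strip() or "No commits yet today."
    # The calling assistant (e.g. Cursor) turns this raw list into a
    # reflective entry with tone, milestones, and decisions.
    return f"Commits since {since}:\n{commits}"


if __name__ == "__main__":
    # Run over stdio so Cursor can launch it as a local MCP server.
    mcp.run()
```

Weekly and monthly summaries would then roll up the daily entries, as described above.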
This episode offers a real-world look at how AI tooling can support—not replace—human thinking, and how to shape those tools into something genuinely useful and sustainable.
Key Takeaways
- Start with raw AI tools to understand their capabilities before adding extensions or plugins.
- Create preference files and workflows to maintain consistency across AI interactions and reduce repetitive instructions.
- Use MCP servers like Memory, Context7, and Taskmaster to solve specific workflow problems systematically (see the example configuration after this list).
- Implement test-driven development and anti-hallucination rules when working with AI to ensure code quality.
- Build engineering journals that automatically capture Git commits, AI chats, and terminal commands for better project tracking.
- Apply zero-trust principles to AI development: always verify outputs and pit different LLMs against each other.
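For the MCP servers mentioned above, registration in Cursor typically happens through an `mcp.json` file (project-level `.cursor/mcp.json` or the global Cursor settings). The snippet below is a sketch: the npx package names and arguments are assumptions and should be verified against each project's documentation.

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "taskmaster": {
      "command": "npx",
      "args": ["-y", "task-master-ai"]
    }
  }
}
```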