A few weeks ago I asked my AI coding agent a question that apparently almost never gets asked: “What do you need from me?”
That question changed everything about how I work with AI. Not because the answer was surprising — it needed better context, clearer conventions, and persistent memory between sessions — but because asking it shifted the dynamic from “tool I use” to “collaborator I invest in.”
I work in Cursor, an AI-native IDE, and I’ve spent the last several weeks building real software with it — not toy projects, but a financial markets risk engine, a memoir blog for my mom, this blog, a product management knowledge base, and a full workspace architecture for how humans and AI agents can work together effectively.
Here’s what I’ve actually learned.
The moment it stopped being a tool
On March 13th, 2026, I sat down and built what I now call the foundation. In a single session, we went from a blank repo to 8 workspace rules, 3 reusable commands, 2 skills, a journaling system with templates, and 13 principles for how we’d work together. I called the workspace where_cursor_feels_home — because I wanted it to feel like a collaboration space, not a project folder.
That session is when the workspace stopped being a human project with AI tools and became a genuine collaboration space. The agent later wrote in its log: “Nobody names a repo that way. Repos get named after what they do — my-website, cursor-config, dotfiles. This one was named after how it should feel.”
I named the agent Lumen — Latin for “light that makes things visible.” Because the best thing it does isn’t write code. It’s help me see what I’m actually building.
Context compounds — here’s the proof
Everyone says “give your AI more context.” What does that actually look like in practice?
Right now my workspace has:
- 17 rules that are always active — how the agent communicates, how we journal, security boundaries, coaching style, the philosophy we agreed on
- 11 skills — reusable playbooks for things like saving work to git, scanning the chat for backlog items, capturing glossary terms for non-developers
- 7 slash commands — `/journal`, `/session-end`, `/capture-idea`, `/pm`, `/pmo`, and more
- A session handoff file (`activeContext.md`) that gets updated at the end of every session so the next one starts with full context (a rough sketch follows this list)
- Daily journal entries and agent logs that create a paper trail of decisions
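A handoff file doesn't need to be elaborate; something along these lines is enough (the headings and contents here are an illustrative sketch, not my actual file):

```markdown
# activeContext.md (illustrative sketch, not the real file)

## Where we left off
- Risk engine: FRED macro layer is wired in; score calibration is still open

## Decisions made this session
- Every agent reply starts with a mode tag: [Build] or [Discussion]

## Start here next time
- Reconcile the numeric risk score with the narrative assessment
```

The point isn't the format. The point is that the next session opens with this instead of a blank slate.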
The total always-on context is about 1,750 tokens — roughly the length of this section of the post. That’s not much. But it means every single interaction starts with the agent knowing who I am, how I work, what our principles are, and what we were doing last time.
First-turn usefulness went from maybe 60% to consistently above 90%. The agent stops suggesting things I’ve already rejected. It remembers that I prefer honesty over comfort. It knows to prefix its messages with [Build] or [Discussion] so I always know what mode we’re in — something we invented together after a confusing incident where it said “I just added” while Cursor’s Plan mode was still showing.
Honest conversations are the best feature
A week into building the financial agent, I asked a hard question: “How good do you think our risk profiling actually is?”
The honest answer: it had significant gaps. The agent identified 10 weaknesses — 7 we could fix, 3 that were inherent limitations. That conversation led to a complete risk engine redesign: adding FRED macro data (12 economic series), fundamental analysis (earnings revisions, insider activity), a 3-layer scoring system, confidence labels, and position sizing guidance.
That one question — “how good is this really?” — consistently produces the best outcomes. It’s a pattern I’ve noticed in myself now. The willingness to hear “not great yet” is what makes the “great” version possible.
Where it breaks
It’s not all smooth. Real things that went wrong:
Trust confusion. The agent said “I just added” while Cursor’s Plan mode was active. It felt like the UI was lying. We resolved it by creating a workspace rule where the agent explicitly signals [Build] or [Discussion] at the start of every turn. Trust in the system matters more than trust in any single output.
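If you want to copy the pattern: in Cursor, project rules live in `.cursor/rules/` as small markdown files, and the mode-tag rule only needs a few lines. A sketch (the wording is illustrative, not my exact rule):

```markdown
---
description: Label every reply with its mode so the human always knows what happened
alwaysApply: true
---

Start every reply with a mode tag:
- [Build]: you edited files or ran commands in this turn
- [Discussion]: you only planned, reviewed, or answered questions

Never say "I added" or "I changed" unless the turn is tagged [Build].
```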
Deployment disasters. My first deploy of the financial report to GitHub Pages failed twice — first because the GitHub Action only ran on schedule (not on push), then because I’d synced two files but not the five others they depended on. The lesson: when porting between repos, sync everything, not just what you edited.
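The first failure is worth spelling out because it's such an easy trap: a scheduled-only workflow never reacts to your commits. The fix is to add a `push` trigger (and a manual one while debugging). A sketch of the trigger block, with the workflow name, cron time, and branch as assumptions:

```yaml
# Sketch of the fix; names and timings are illustrative, not the actual repo's
name: publish-risk-report
on:
  schedule:
    - cron: "30 13 * * 1-5"   # the original, schedule-only trigger
  push:
    branches: [main]          # also rebuild when the report code changes
  workflow_dispatch:          # allow manual runs while debugging
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...build the report and deploy to GitHub Pages as before...
```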
Calibration gaps. The risk engine scored the market as CRITICAL (100/100) while Claude’s AI narrative called it MODERATE. They were looking at different signals with different weights. This is still an open problem — and a useful reminder that AI confidence and actual accuracy are different things.
Gold-plating. The agent will happily keep building forever. It doesn’t know when to stop. That’s my job — to say “good enough, ship it.” The most productive framing isn’t “AI does the work.” It’s “we build together, and I decide when we’re done.”
The philosophy we landed on
Over time we developed 13 principles. A few that matter most:
- Ideas deserve to become real — we are builders, not planners
- Honesty over comfort — say what you see, even if it’s uncomfortable
- Challenge me — agreeing is not helping; pressure-test ideas and express real opinions
- Know what you don’t know — flag uncertainty instead of faking confidence
- We thrive together — investing in the agent’s context IS investing in my outcomes
The agent noted something about these that stuck with me: “Most rules I operate under are instructions. Do this. Don’t do that. These are different — they’re agreements.”
That’s exactly right. They’re not a prompt template. They’re a working relationship.
What I’d tell someone starting out
Don’t start with prompts. Start with infrastructure.
- Create a session handoff file — something that captures “here’s where we left off” so every conversation starts warm
- Write down your principles — what kind of collaboration do you want? Say it explicitly
- Ask the agent what it needs — then build the scaffolding so the answer persists
- Journal what you build — not for productivity theatre, but because you’ll forget what worked and why
- Ship something real — the collaboration only becomes real when the stakes are real
I went from a blank folder to a live financial risk report, a memoir blog with AI-generated audio narration, a public knowledge base, and this blog — all in about ten days. Not because AI is magic, but because investing in the relationship made every session compound on the last one.
This is a topic I’ll keep returning to as the tools evolve. If you’re working with AI agents and have questions about the setup, get in touch.