I Built a Weekly Review That Triangulates Five Sources. Here's What It Found

Diagram showing five data sources — voice notes, ChatGPT chats, git commits, Slack messages, and session reflections — flowing into an orchestrator that produces a weekly review report with cross-referenced threads

I talk to myself in voice notes. I research in ChatGPT. I code in git. I coordinate with my team on Slack. I reflect on AI-assisted sessions in structured post-mortems.

Every week, each of these sources tells a partial story. My voice notes say I was thinking about catalog sorting. My ChatGPT history says I was researching it. My git log says I shipped something. Slack says the team reacted. The reflection says the AI got it wrong twice before landing on the right approach.

No single source captures what actually happened. But all five together do.

The problem with single-source reviews

Most weekly reviews are built from one source. You open your task manager and check off what got done. Or you scroll through your git log. Or you skim Slack. Each source has its own bias: git shows output without the thinking behind it, voice notes show intent without execution, Slack shows coordination without the work itself.

The interesting signal is in the overlaps.

When the same topic appears in three or more sources, that's real work. When it appears in only one, it's probably noise — or an idea that hasn't landed yet.
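The overlap rule is simple enough to sketch directly: count how many sources mention each topic and label it accordingly. A minimal version, with hypothetical topic names standing in for real agent output:

```python
from collections import defaultdict

# Topics extracted by each source agent (hypothetical example data).
mentions = {
    "voice":       {"catalog-sorting", "call-center-ai"},
    "chatgpt":     {"catalog-sorting", "call-center-ai"},
    "git":         {"catalog-sorting", "pkm-infra"},
    "slack":       {"catalog-sorting"},
    "reflections": {"pkm-infra"},
}

def classify(mentions: dict[str, set[str]]) -> dict[str, str]:
    """Label each topic by how many sources it appears in."""
    counts: dict[str, int] = defaultdict(int)
    for topics in mentions.values():
        for topic in topics:
            counts[topic] += 1
    return {
        topic: "real work" if n >= 3 else "noise or early idea"
        for topic, n in counts.items()
    }

result = classify(mentions)
# catalog-sorting appears in 4 sources -> "real work"
# call-center-ai appears in 2 sources -> "noise or early idea"
```

The threshold of three is the judgment call; everything else is bookkeeping.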

The architecture

I built this as a set of Claude Code skills — modular, composable, each responsible for one source. An orchestrator spawns all five as sub-agents in parallel, collects their summaries, and does the cross-referencing.

Five source agents, three model tiers:

| Agent | What it reads | Model | Why |
| --- | --- | --- | --- |
| Voice notes | Tana transcriptions (daily-notes/) | Haiku | Short files, classification only |
| ChatGPT chats | Frontmatter + first 50 lines of each chat | Haiku | Topic extraction, not deep reading |
| Git commits | All repos, commit messages + GitHub issues | Sonnet | Thematic narrative requires judgment |
| Slack messages | Full export via API, ~350 messages/week | Sonnet | Ukrainian context, open loop extraction |
| Session reflections | Structured post-mortems | Haiku | Already analyzed, just summarize |

The Haiku agents cost almost nothing and finish in under 30 seconds. The Sonnet agents do the heavy lifting. The orchestrator (Opus, my main session) doesn't read any raw data — it works only with the five summaries, keeping the context window clean.

Each agent writes its section to a temp file. The orchestrator reads all five, identifies topics that span multiple sources, and writes a "Threads of the Week" section that tells the cross-source story.
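The control flow is a fan-out/fan-in: spawn all agents, wait, read summaries. A sketch of that shape, with `run_agent` as a hypothetical stand-in for a real sub-agent invocation:

```python
import json
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

AGENTS = ["voice", "chatgpt", "git", "slack", "reflections"]

def run_agent(name: str) -> Path:
    """Hypothetical source agent: in the real system this spawns a
    sub-agent that reads raw data and writes a summary to a temp file."""
    summary = {"source": name, "topics": []}  # model call would fill this in
    path = Path(tempfile.gettempdir()) / f"weekly-{name}.json"
    path.write_text(json.dumps(summary))
    return path

def orchestrate() -> list[dict]:
    # Fan out: run all five agents in parallel.
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        paths = list(pool.map(run_agent, AGENTS))
    # Fan in: the orchestrator reads only the compact summaries,
    # never the raw data, keeping its own context window clean.
    return [json.loads(p.read_text()) for p in paths]

summaries = orchestrate()
```

The important property is the second comment: the orchestrator's inputs are five small files, regardless of how much raw data the agents chewed through.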

What triangulation actually reveals

The first time I ran this on a real week, six threads emerged. Here's what stood out:

Catalog sorting. Appeared in all five sources. I voiced the hypothesis on Monday ("test sorting algorithms before hiding products — poor sales might reflect poor discoverability"). Researched KPIs in ChatGPT on Tuesday. Committed the sort migration on Wednesday. Announced it in Slack on Thursday. The reflection captured that synthesizing voice notes with ChatGPT exports into structured notes was the session's key output.

No single source tells this story. Git says "switched 72 collections to Most Relevant sort." Voice notes say "maybe we shouldn't hide products yet." ChatGPT says "here's how to measure sorting effectiveness." Slack says "the team noticed Jasmin dominates page one now." Only the combination shows a hypothesis moving from thought to research to code to team reaction in four days.

Call center AI supervisor. Appeared in voice and ChatGPT only. Two long voice notes designing a three-tier architecture (open-model transcription, cheap analysis, expensive recommendations). A ChatGPT session exploring the same architecture. A separate ChatGPT session ranking it as the best business opportunity in a portfolio of ideas.

Zero git commits. Zero Slack messages. This is an idea that hasn't left the thinking phase. Without the multi-source view, I'd either forget it existed (if reviewing only git/Slack) or overestimate its progress (if reviewing only voice notes). The triangulation gives it the correct label: still thinking, not yet acting.

PKM infrastructure. Four of five session reflections were about building data pipelines — ChatGPT sync extension, Tana voice webhook, Slack export CLI. Git showed 43 commits across three new repos. Voice notes captured the vision: "consolidated weekly review connecting all these pipelines into one system."

This is the meta-thread: the infrastructure week that enabled this very review to exist. The tools I built last week are what the orchestrator is now running.

The cost model

Three Haiku sub-agents at ~40K tokens each: negligible. Two Sonnet sub-agents at ~100-150K tokens each: moderate. The Slack agent is the most expensive because it runs the export CLI and then spawns its own sub-agent to analyze ~350 messages with thread expansion.

Total cost for a full five-source weekly review: roughly equivalent to a 15-minute Claude conversation. The cross-referencing pass in the main context adds maybe 5 minutes of Opus time, working with five compact summaries rather than raw data.
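The arithmetic behind those cost labels is straightforward. A back-of-envelope sketch; the per-million-token prices below are illustrative assumptions, not quoted rates:

```python
# Illustrative per-million-token input prices (assumptions, not published rates).
HAIKU_PER_MTOK = 0.80
SONNET_PER_MTOK = 3.00

# Three Haiku agents at ~40K tokens each.
haiku_cost = 3 * 40_000 / 1_000_000 * HAIKU_PER_MTOK
# Two Sonnet agents at ~100-150K tokens each; take the 125K midpoint.
sonnet_cost = 2 * 125_000 / 1_000_000 * SONNET_PER_MTOK

total = haiku_cost + sonnet_cost
print(f"Haiku agents:  ${haiku_cost:.3f}")   # the "negligible" tier
print(f"Sonnet agents: ${sonnet_cost:.3f}")  # the "moderate" tier
```

Even with generous token estimates, the Haiku tier stays an order of magnitude cheaper than the Sonnet tier, which is why the model assignment in the table above matters more than any other optimization.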

The key optimization is that Haiku reads the ChatGPT conversations. My W16 had 23 chats. Reading them fully would cost a fortune; reading only frontmatter + the first 50 lines costs almost nothing. The title alone is usually enough to classify the topic — "Shopify Catalog Sorting KPIs" doesn't need deep reading to know it belongs in the catalog sorting thread.

What I'd do differently

The current architecture is good enough to ship, but I already see two improvements:

Temporal threading. The current "Threads of the Week" clusters by topic but doesn't show the timeline — which source came first, which followed. The call center supervisor idea started in voice notes on Wednesday and hit ChatGPT on Thursday. That temporal signal (thinking precedes research) is interesting and currently not captured.
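Capturing that signal would only require each agent to attach a date to its topic mentions; the orchestrator can then sort within each thread. A sketch with hypothetical dates (the real pattern from the text: voice first, ChatGPT a day later):

```python
from datetime import date

# Hypothetical thread data: (source, date) pairs per topic.
thread = {
    "call-center-ai": [
        ("chatgpt", date(2025, 4, 17)),  # Thursday: research session
        ("voice",   date(2025, 4, 16)),  # Wednesday: first voice note
    ],
}

def timeline(mentions: list[tuple[str, date]]) -> list[str]:
    """Order a thread's mentions chronologically, showing which
    source came first (thinking precedes research precedes code)."""
    return [src for src, when in sorted(mentions, key=lambda m: m[1])]

print(timeline(thread["call-center-ai"]))  # → ['voice', 'chatgpt']
```

The output ordering itself is the signal: a thread that starts in git and only later hits voice notes tells a different story than one that starts in voice notes.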

Carry-forward from previous weeks. The orchestrator should diff open loops between weeks and surface stale items. The W15 report had 43 open action items from Slack. W16 has 42. How many are the same items? Right now it treats each week independently.
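The diff itself is just set operations over the two weeks' open loops. A minimal sketch, with made-up item labels:

```python
def carry_forward(last_week: set[str], this_week: set[str]) -> dict[str, set[str]]:
    """Diff open loops between consecutive weekly reports."""
    return {
        "stale":  last_week & this_week,  # carried over: possibly stuck
        "closed": last_week - this_week,  # resolved since last week
        "new":    this_week - last_week,  # appeared this week
    }

# Hypothetical open loops extracted from two weekly Slack reports.
w15 = {"reply to supplier", "fix checkout bug", "draft Q2 plan"}
w16 = {"reply to supplier", "draft Q2 plan", "review ad copy"}

diff = carry_forward(w15, w16)
print(sorted(diff["stale"]))  # → ['draft Q2 plan', 'reply to supplier']
```

The hard part isn't the diff but the matching: open loops are free text, so "reply to supplier" and "answer supplier email" would need fuzzy or model-assisted matching to count as the same item.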

The broader point

The reason weekly reviews feel hollow is that they're usually single-source. You look at what you shipped and feel productive, or you look at your task list and feel behind. Both are distortions.

What actually happened in my week is a network of connections between thinking, researching, building, and coordinating. The catalog sorting thread touched five tools across five days. The call center idea touched two tools and zero execution. Both are true, and both are important — but for different reasons.

The triangulation doesn't make the review better in the sense of more polished. It makes it honest. It shows where attention actually went, which ideas moved through the pipeline, and which ones stalled. That's worth more than a bullet list of completed tasks.