Kirk & Blackbeard

Media Agency Operations —
What the Command Center can do

Day-to-day agency tasks mapped to the 10-agent system. What gets automated, what gets assisted, what stays human.
AUTOMATE: System handles end-to-end
ASSIST: System drafts, human reviews
HUMAN: Stays with the team
AGENT: Which K&B agent handles it
01 · Operational Mapping
Agency Task | What it involves | Frequency | Current time | Command Center | Agent

Media Planning
Brief intake & structuring | Gather client objectives, KPIs, audience, channel preferences into structured doc | Per campaign | 2–4 hrs | AUTOMATE · Orchestrator parses brief, validates required fields, routes to Phase 1 | Captain's Table
Audience research & segmentation | Analyse historical data, identify segments, model reach/frequency | Per campaign | 4–8 hrs | ASSIST · Produces ICP validation, segment maps, opportunity sizing. Human validates. | Lookout + Spyglass
Budget allocation modelling | Distribute budget across channels using CPM assumptions and historical benchmarks | Per campaign | 4–8 hrs | ASSIST · Generates allocation model with rationale. Planner adjusts at PAUSE checkpoint. | Cartographer
Competitive analysis | Track competitor spend, messaging, creative, positioning gaps | Monthly | 6–12 hrs | AUTOMATE · Continuous competitor monitoring, positioning gap analysis, white space report | Spyglass
Content calendar creation | Build 4-week publishing schedule with platform-specific timing | Monthly | 3–6 hrs | AUTOMATE · Generates full calendar with TOF/MOF/BOF distribution, platform cadence | Cartographer

Media Buying & Optimisation
Campaign pacing review | Check spend rate vs. plan; adjust daily budget caps | Daily | 30–60 min/day | HUMAN · Requires platform access. System can set pacing rules and flag deviations. | —
Bid management | Monitor win rates, adjust bids per audience/placement | Daily | 1–2 hrs/day | HUMAN · In-platform execution. System writes optimisation rules (hard pause/scale thresholds). | Quartermaster (rules only)
Optimisation rules & thresholds | Define when to pause, scale, or test based on CPA/ROAS targets | Per campaign | 2–4 hrs | AUTOMATE · Writes hard rules: "if CPA > €X for 3 days, pause placement." Human approves at PAUSE 2. | Quartermaster
Channel budget reallocation | Shift budget between social, display, video based on performance | Weekly | 1–2 hrs | ASSIST · Recommends reallocation with evidence. Buyer executes in platform. | Quartermaster + Chronometer

Campaign Management & QA
Creative spec gathering | Collect ad formats, sizes, file types per placement | Per campaign | 2–3 hrs | AUTOMATE · Generates spec sheet from campaign plan + platform requirements | Cartographer
QA checklist execution | Verify naming, budgets, dates, targeting, frequency caps, tracking pixels | Per launch | 4–8 hrs | ASSIST · Generates pre-flight checklist from plan; flags mismatches. Human signs off. | Captain's Table
Trafficking & tag management | Upload creatives to ad server, configure line items, test tags | Per launch | 4–8 hrs | HUMAN · Requires ad server access. System produces the trafficking sheet; ops team executes. | —
Daily campaign health check | Monitor impressions, CTR, frequency; flag anomalies | Daily | 1–2 hrs/day | ASSIST · Writes trend narratives, anomaly flags, hypothesis log. Human decides action. | Chronometer

Creative & Ad Copy
Ad copy writing | Headlines, body copy, CTAs per platform and audience segment | Per campaign | 6–12 hrs | AUTOMATE · Generates 15–25 variants per brief, on-brand (style memory from Navigator). Human curates. | Scribe
Email sequence writing | Multi-email drip: subject lines, body, CTAs, timing logic | Per campaign | 8–16 hrs | AUTOMATE · Full sequence with branching logic, timing, exit conditions. Human reviews. | Scribe
Landing page copy | Hero, value props, social proof, CTA | Per campaign | 4–8 hrs | AUTOMATE · Produces full page copy with conversion structure. Human + designer finish. | Scribe
Visual creative production | Design banners, video, rich media assets | Per campaign | 10–40 hrs | HUMAN · Creative production stays with designers. System provides briefs + Canva handoff. | Scribe (brief to Canva)
A/B test design | Define hypothesis, variants, success criteria, duration | Ongoing | 2–4 hrs/test | AUTOMATE · ICE-scored test backlog, hypothesis definition, stat sig criteria. Human approves. | Helmsman

Analytics & Reporting
Weekly performance report | Pull metrics across platforms, summarise trends, flag anomalies | Weekly | 4–8 hrs | AUTOMATE · Trend narrative, anomaly log, hypothesis register. Analyst reviews, doesn't build. | Chronometer
Monthly performance deck | 5–10 slides with channel breakdown, insights, recommendations | Monthly | 8–12 hrs | AUTOMATE · Full narrative report with business impact framing. Human edits for client tone. | Ship's Log
QBR preparation | 25–30 slide deck: performance vs. objectives, learnings, recommendations | Quarterly | 20–30 hrs | ASSIST · Generates narrative structure, data synthesis, recommendations. Human adds context + presents. | Ship's Log + Chronometer
Attribution & data reconciliation | Cross-check platform data (GA, Meta, ad server); flag discrepancies | Monthly | 2–5 hrs | ASSIST · Flags mismatches between sources, labels as Hypothesis or Confirmed. Analyst investigates. | Chronometer
Experiment results & scaling decisions | Evaluate A/B test results; recommend scale/kill/hold | Ongoing | 2–3 hrs/week | AUTOMATE · Statistical evaluation, scaling recommendation, learning logged to Data Vault | Helmsman

Strategy & Account Management
Strategy synthesis | Merge internal data + market intel into positioning and messaging framework | Per campaign | 8–16 hrs | AUTOMATE · Falsifiable positioning, contra-strategy, messaging hierarchy. Strategist reviews at PAUSE 1. | Navigator
Client status updates | Weekly call prep: pacing, optimisation actions, next steps | Weekly | 2–3 hrs | AUTOMATE · Generates status summary from latest agent outputs. AM personalises and sends. | Ship's Log
Client relationship & stakeholder mgmt | Calls, negotiations, expectation management, scope discussions | Ongoing | 10–20 hrs/week | HUMAN · Relationships don't automate. System frees AM time to focus here instead of reports. | —
Brief writing for client approval | Turn strategy into structured brief document for sign-off | Per campaign | 3–5 hrs | AUTOMATE · Generates client-facing brief from Phase 1 strategy output. AM reviews and presents. | Navigator + Scribe
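A hard rule like the Quartermaster's "if CPA > €X for 3 days, pause placement" reduces to a small, testable predicate. A minimal sketch, assuming daily CPA readings are available as a list; the class and field names are illustrative, not part of the actual system:

```python
from dataclasses import dataclass

@dataclass
class PauseRule:
    """Hard optimisation rule: flag a placement for pause when CPA stays
    above target for N consecutive days (illustrative sketch)."""
    cpa_target: float      # campaign CPA goal, e.g. in EUR
    breach_days: int = 3   # consecutive days above target before pausing

    def should_pause(self, daily_cpa: list[float]) -> bool:
        # Not enough history yet -> never trigger on partial data.
        if len(daily_cpa) < self.breach_days:
            return False
        # Trigger only if every day in the trailing window breached target.
        return all(cpa > self.cpa_target for cpa in daily_cpa[-self.breach_days:])

rule = PauseRule(cpa_target=40.0)
print(rule.should_pause([38.0, 42.5, 44.1, 41.9]))  # last 3 days > 40 -> True
print(rule.should_pause([45.0, 39.0, 44.1, 41.9]))  # one day under target -> False
```

The system only evaluates and flags; per the mapping above, a human approves the rule set at PAUSE 2 and the buyer executes in-platform.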
02 · What Stays Human

Five things the system doesn't touch

These require platform access, client trust, or creative judgment that the system shouldn't own.

Task | Why it stays human
Ad platform execution | Bid changes, budget adjustments, campaign pauses happen inside DSPs/ad managers. The system writes the rules and recommendations; the buyer executes. No API access to client ad accounts needed during POC.
Creative design & production | Visual assets (banners, video, rich media) need designers. The system produces the briefs, copy, and spec sheets that feed the design process — but Photoshop/Premiere stays with humans.
Trafficking & tag management | Uploading tags to ad servers, configuring line items, and QA-testing live tags requires ad server access and ops expertise. System generates the trafficking sheet; ops team clicks the buttons.
Client relationships | Calls, negotiations, trust-building, scope discussions. The system frees account managers from report-building so they can spend more time here — which is what retains clients.
Final approval at checkpoints | Every PAUSE gate requires a human decision: approve, revise, or reject. The system never ships anything without sign-off. That's the design.
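The checkpoint contract in that last row can be sketched as a tiny state transition: approve, revise, or reject, with no path that ships output by default. Names here are illustrative assumptions, not the real system's API:

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REVISE = "revise"
    REJECT = "reject"

def pause_gate(phase_output: dict, decision: Decision) -> dict:
    """Attach a status to phase output based on an explicit human decision.
    There is deliberately no default branch that marks output as shippable."""
    if decision is Decision.APPROVE:
        return {**phase_output, "status": "approved"}
    if decision is Decision.REVISE:
        return {**phase_output, "status": "needs_revision"}
    return {**phase_output, "status": "rejected"}

print(pause_gate({"phase": 1}, Decision.APPROVE)["status"])  # approved
```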
03 · Time Savings Per Campaign Cycle

Where the hours go back

Phase | Current labour | With Command Center | Saved
Planning & Strategy | 20–40 hrs | 4–8 hrs (review + approval) | ~75% reduction
Creative & Copy | 20–40 hrs | 6–12 hrs (curation + design) | ~65% reduction
Reporting & Analysis | 15–25 hrs/month | 3–5 hrs/month (review only) | ~80% reduction
QBR Preparation | 20–30 hrs | 4–6 hrs (editing + presenting) | ~80% reduction
Account Status Updates | 8–12 hrs/month | 2–3 hrs/month (personalise + send) | ~75% reduction
Total per client/month | 80–140 hrs | 20–35 hrs | ~75% reduction

Translation for a 10-client agency: Instead of 6–8 full-time people on production, reporting, and strategy drafting — you need 2–3 people doing review, client relationships, and platform execution. The rest is system output.
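The headcount translation can be sanity-checked with quick arithmetic, assuming roughly 160 productive hours per person per month (that assumption is mine, not stated above):

```python
clients = 10
hours_per_fte = 160  # assumed productive hours per person per month

current = (80 * clients, 140 * clients)   # monthly hours today: 800-1400
with_cc = (20 * clients, 35 * clients)    # with Command Center: 200-350

fte_current = tuple(h / hours_per_fte for h in current)
fte_with_cc = tuple(h / hours_per_fte for h in with_cc)
print(fte_current)  # (5.0, 8.75) -> roughly the 6-8 person range
print(fte_with_cc)  # (1.25, 2.1875) -> roughly the 2-3 person range
```

The ranges bracket the 6–8 vs. 2–3 people claim, which is where the "~75% reduction" total comes from.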

04 · POC Design — How to Run It

4-week pilot on one real client

Pick one existing client where the agency is already running campaigns. Run the Command Center in parallel — same brief, same timeline. Compare output quality, speed, and variant volume at the end.

Week 1 — Setup & Intelligence

Load the client, run Phase 1

K&B does: Configure global_brand.json with client voice, ICP, and banned vocabulary. Wire connectors (Notion, Slack). Run The Lookout (internal data analysis) + The Spyglass (competitor/market audit) in parallel. Feed both into The Navigator for strategy synthesis.

Agency provides: Client brief, brand guidelines, 6 months of campaign performance data, audience segment definitions.

Output: Falsifiable positioning statement, competitor audit, audience opportunity map, messaging hierarchy.
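The brand guardrail from Week 1 setup can be pictured as a config plus a banned-word check. The global_brand.json field names below are assumptions for illustration; the real schema isn't specified here:

```python
import json
import re

# Hypothetical global_brand.json contents -- field names are illustrative.
brand = json.loads("""
{
  "voice": "direct, evidence-led, no hype",
  "icp": "B2B marketing directors at mid-size agencies",
  "banned_vocabulary": ["synergy", "game-changer", "revolutionary"]
}
""")

def banned_word_violations(text: str, banned: list[str]) -> list[str]:
    """Return banned words found in a draft (whole-word, case-insensitive)."""
    return [w for w in banned
            if re.search(rf"\b{re.escape(w)}\b", text, re.IGNORECASE)]

draft = "A revolutionary approach to media planning."
print(banned_word_violations(draft, brand["banned_vocabulary"]))  # ['revolutionary']
```

A check like this is what makes the "zero banned-word violations" success criterion in section 05 mechanically verifiable rather than a matter of opinion.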

Week 2 — PAUSE 1 Review & Execution Start

Strategy review, then Phase 2

PAUSE 1 checkpoint: Agency strategist + account manager review Phase 1 output. Approve, revise, or reject. Nothing proceeds without sign-off.

Then: The Cartographer produces funnel map, content calendar, KPI targets. Feeds into The Scribe (ad copy, email sequences, landing page copy) + The Quartermaster (budget allocation, optimisation rules) in parallel.

Output: Campaign plan, 15–25 ad copy variants, email sequence, media plan with hard pause/scale rules.

Week 3 — PAUSE 2 Review & Performance

Creative review, then Phase 3

PAUSE 2 checkpoint: Creative director + media buyer review all Phase 2 output. Select best variants. Approve media plan. Agency team executes in-platform (trafficking, launching).

Then: The Chronometer reads early performance data (if campaign is live) or analyses historical data to produce trend narratives. The Helmsman designs experiment backlog.

Output: Trend report, anomaly log, ICE-scored experiment backlog, scaling recommendations.
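ICE scoring, which the Helmsman uses to rank the experiment backlog, is the standard Impact × Confidence × Ease product. A minimal sketch, assuming a 1–10 scale and with illustrative hypotheses of my own:

```python
experiments = [
    # (hypothesis, impact, confidence, ease) -- entries are illustrative, 1-10 scale
    ("Shorter headlines lift CTR on Meta", 7, 6, 9),
    ("Video outperforms static on MOF retargeting", 8, 4, 3),
    ("New landing hero cuts bounce rate", 6, 7, 5),
]

# Rank by ICE score, highest first: the top of the backlog is what gets tested next.
backlog = sorted(experiments, key=lambda x: x[1] * x[2] * x[3], reverse=True)
for hypothesis, i, c, e in backlog:
    print(f"ICE {i * c * e:>3}  {hypothesis}")
```

Here the headline test (7 × 6 × 9 = 378) outranks the landing hero (210) and the video test (96): high-confidence, easy tests surface ahead of ambitious but costly ones.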

Week 4 — Report & Comparison

Final report + side-by-side evaluation

The Ship's Log produces the executive summary — business impact narrative, not just metrics.

Comparison report: Side-by-side of what the agency team produced vs. what the Command Center produced for the same brief. Cover: time taken, number of variants, depth of analysis, quality of strategic rationale.

Decision meeting: Present results to agency leadership. If the delta is meaningful → discuss engagement model (Managed, Embedded, or White-Label).

What the agency risks: €2,500 and one brief. What they see: A full campaign cycle — strategy through performance — produced in 4 weeks with dramatically less labour. The output speaks for itself.

05 · POC Success Criteria

How both sides know it worked

Metric | What "good" looks like
Time to strategy | Phase 1 output in <48 hours (vs. 1–2 weeks agency-side)
Copy variant volume | 15–25 on-brand variants per brief (vs. 3–5 manually)
Strategy quality | Agency strategist rates positioning as "equal or better" vs. what they'd produce
Report depth | Hypothesis register + anomaly log included (vs. standard metrics-only report)
Labour hours | Agency team spends <15 hours total on review across 4 weeks (vs. 60–100 to produce equivalent)
Experiment backlog | ICE-scored backlog of 8–12 testable hypotheses delivered (vs. typically 0–2 ad hoc tests)
Brand consistency | Zero banned-word violations. Tone matches brand guidelines across all outputs.
06 · How to Open the Conversation

The agency's problem, not your technology

Don't lead with agents, AI, or Claude. Lead with their margin problem.

Their reality: Clients want more variants, faster turnaround, and deeper analysis — for the same budget. Team is stretched. Reporting eats 30% of analyst time. Strategy gets squeezed into half-day workshops instead of proper research cycles. Junior staff write copy that needs 3 rounds of revision.

Your opening line: "What if your team could focus on client relationships and creative judgment — and everything else was drafted before they sit down?"

Then show them this document. Walk through the mapping table. Let them point at the tasks that hurt most. That's your POC scope.