
InterviewBee — Performance Marketing Manager Premium Question Bank

FAANG-Level Interview Preparation | Senior · Staff · Principal


Question 1: Paid Search Strategy and Budget Allocation — Restructuring a £500K Google Ads Account Haemorrhaging Spend

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Booking.com, Deliveroo, Monzo, GoCardless, Wise


The Question

You are a Senior Performance Marketing Manager hired by a UK-based B2B SaaS company selling HR software to SMBs. The company spends £500,000/year on Google Ads across four campaigns: Brand (£50K), Generic Keywords (£280K), Competitor Terms (£100K), and Retargeting (£70K). Current performance: £2.1M in pipeline generated from paid search, a blended ROAS of 4.2x, and a Cost Per Opportunity (CPO) of £238. The head of marketing believes performance is "good enough" but has asked you to audit the account because organic leads have been declining and they want paid to compensate. Your audit finds: (1) the Generic Keywords campaign burns 60% of its budget on broad match keywords that trigger irrelevant queries (e.g., "HR software" triggering for "how to do HR" and "free HR advice"), generating clicks at £4.20 CPC but almost zero conversions; (2) the Competitor Terms campaign targets 35 competitors but 28 of them are tiny players with combined search volume under 200 searches/month — the campaign spends £85K chasing searchers who will never convert; (3) Smart Bidding is set to Maximise Conversions on all campaigns with no target CPA set, causing Google's algorithm to optimise for volume rather than quality; (4) Quality Scores across the Generic campaign average 4/10, driving CPCs 40% higher than they should be (industry average QS for this category is 7/10); (5) there are no negative keyword lists — 3,400 irrelevant search terms have triggered ads in the last 90 days without a single conversion. Redesign the account structure, bidding strategy, and keyword management to cut CPO from £238 to £150 within 90 days while maintaining or growing pipeline.


1. What Is This Question Testing?

  • Campaign architecture and match type discipline — understanding that broad match keywords without sculpted negative keyword lists are the single most common source of wasted spend in Google Ads; "HR software" on broad match triggers for "free HR advice," "HR courses," and "what does HR stand for" — none of which are buyers; knowing the match type hierarchy (Exact → Phrase → Broad; the broad match modifier was retired by Google in 2021) and the correct use case for each: Exact match for bottom-funnel, high-intent, proven converters; Phrase match for capturing intent variants with controlled relevance; Broad match only when paired with Smart Bidding and a well-established conversion history (1,000+ conversions/month); for an account converting at this volume, broad match is premature and should be replaced with Phrase and Exact
  • Smart Bidding configuration and conversion quality — understanding that "Maximise Conversions" without a Target CPA instructs Google to generate as many conversions as possible at any cost, which inflates volume at the expense of quality; the correct configuration is Target CPA bidding set to the desired CPO (£150) with a conversion value assigned that reflects actual pipeline quality (not all form fills are equal — a demo request from a 200-person company is worth more than a whitepaper download from a solo freelancer); knowing that Smart Bidding requires a minimum of 30–50 conversions/month per campaign for the algorithm to optimise effectively
  • Quality Score optimisation — understanding the three components of Quality Score: Expected Click-Through Rate (eCTR), Ad Relevance, and Landing Page Experience; a QS of 4/10 (vs. industry average 7/10) means CPCs are approximately 40% higher than they need to be (QS 4 → bid modifier ~+40% vs. QS 7); the fix requires aligning the keyword → ad copy → landing page chain (a keyword for "HR onboarding software" should trigger an ad about HR onboarding specifically, not a generic "HR software" ad, and should land on a page specifically about onboarding — not the homepage); a QS improvement from 4 to 7 would reduce CPC from £4.20 to approximately £2.80, a 33% cost reduction
  • Competitor campaign ROI analysis — understanding that competitor term campaigns have structurally lower conversion rates than brand or generic campaigns (a searcher looking for "BambooHR" has already decided they want BambooHR; converting them to your product requires overcoming that intent); knowing which competitors to target: only those with significant search volume (500+ searches/month), where you have a credible differentiation story (price, feature, use case), and where historic conversion data shows a CPO below your target; the 28 small competitors collectively consuming £85K with near-zero conversions is pure waste
  • Negative keyword strategy and search term mining — understanding that a negative keyword list is as important as the positive keyword list; 3,400 irrelevant search terms triggering ads in 90 days (without a single conversion) indicates zero keyword hygiene; the remediation process: export all search terms from the last 90 days, filter for zero-conversion terms with >1 click, batch-add them as negatives, and create a recurring weekly process (every Monday, review the previous week's zero-conversion search terms and add them as negatives before they compound)
  • Budget reallocation logic — understanding that freeing up wasted spend (from broad match waste + ineffective competitor terms) and reinvesting it into proven performers (brand keywords, high-QS exact match terms, retargeting) is the highest-ROI action available; a budget reallocation from underperforming campaigns to high-performing ones does not require additional budget — it requires disciplined analysis of which campaigns generate pipeline at acceptable CPO

2. Framework: Paid Search Account Turnaround and CPO Reduction Model (PSATRM)

  1. Assumption Documentation — Confirm what counts as a conversion: is a "demo request" the primary conversion, or are whitepaper downloads, free trial signups, and contact form submissions all tracked equally? If all form fills are tracked as equal conversions, Smart Bidding is optimising toward the easiest-to-generate conversion (whitepaper downloads at £12 each) rather than the most valuable (demo requests at £300 each); the first step is separating conversion types and assigning conversion values before changing bidding strategy
  2. Constraint Analysis — A 90-day CPO reduction target from £238 to £150 (a 37% reduction) is aggressive; Smart Bidding algorithms need 2–4 weeks to adjust to a new Target CPA setting; Quality Score improvements take 4–8 weeks to propagate as Google re-assesses the keyword-ad-landing-page chain; the first 30 days are mostly diagnosis and restructuring, with CPO improvement concentrated in days 45–90
  3. Tradeoff Evaluation — Reduce spend aggressively (cut broad match entirely, pause all 28 small competitors immediately) vs. transition gradually (shift broad match to phrase match over 4 weeks, pause competitor campaigns one by one with performance validation); gradual transition is correct because aggressive cuts risk losing conversion volume that the algorithm needs to maintain Smart Bidding performance (below 30 conversions/campaign/month, Smart Bidding degrades)
  4. Hidden Cost Identification — Quality Score improvement requires landing page updates (new ad group-specific landing pages for each keyword theme), which requires design and engineering time; creating 10–15 new landing pages at £500–1,500 per page (design + dev) costs £5K–22K — a worthwhile investment given the CPC reduction it enables, but one that needs to be scoped and approved before work begins
  5. Risk Signals / Early Warning Metrics — Weekly CPO trend (alert if CPO rises above £280 in the first 30 days — indicates the Smart Bidding algorithm is destabilised by the changes); conversion volume per campaign per week (alert if any campaign drops below 20 conversions/week — below this threshold, Smart Bidding becomes erratic); impression share on brand keywords (alert if impression share drops below 90% — indicates competitors are bidding aggressively on your brand and stealing branded traffic)
  6. Pivot Triggers — If after 45 days the CPO has not moved below £200 (despite the match type restructure and negative keyword additions), the issue may be landing page quality rather than keyword quality; pivot to running A/B tests on landing pages (testing headline, offer, form length) to improve conversion rate rather than continuing to optimise keyword-level settings
  7. Long-Term Evolution Plan — Days 1–14: full account audit, conversion tracking validation, negative keyword bulk upload; Days 15–30: restructure Generic campaign into tightly themed ad groups, shift broad match to phrase/exact, set Target CPA; Days 31–60: monitor Smart Bidding algorithm stabilisation, pause 28 small competitor terms, redeploy budget to proven performers; Days 61–90: landing page QS improvements go live, measure CPO improvement, report to leadership

3. The Answer

Step 1: Fix Conversion Tracking Before Touching Bidding (Days 1–5)

Before any bidding or structural changes, validate what is being measured:

Audit conversion actions in Google Ads → Tools → Conversions:

Current setup (broken):
- "Form Submit" (all forms): Primary conversion, value = £0
- "Page View: /thank-you": Primary conversion, value = £0

Problem: All form fills counted equally. A "Download Whitepaper" form
and a "Request a Demo" form are both counted as 1 conversion — but
demo requests are worth 10–20x more to the business.

Fixed setup:
- "Demo Request Submitted": Primary conversion, value = £500
  (based on: avg deal size £3,000 × 25% close rate × 67% qualified)
- "Free Trial Started": Primary conversion, value = £200
- "Contact Sales Form": Primary conversion, value = £350
- "Whitepaper Download": Secondary conversion (not used for bidding), value = £0
- "Pricing Page View": Secondary conversion (not used for bidding), value = £0

Assign conversion values based on actual pipeline data. Import CRM data (HubSpot, Salesforce) via Google Ads offline conversion import: when a demo request converts to a closed deal, pass the actual revenue value back to Google Ads so Smart Bidding optimises toward revenue, not form fills.

This single change — separating conversion types and assigning values — will shift Smart Bidding's optimisation from "get as many form fills as possible" to "get as many high-value demo requests as possible."
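The value assignment above is just an expected-value calculation. A minimal sketch using the figures stated in the fixed setup (`conversion_value` is a hypothetical helper name, not a Google Ads API call):

```python
def conversion_value(avg_deal_size: float, close_rate: float, qualified_rate: float) -> float:
    """Expected pipeline value of one conversion:
    deal size x probability of closing x probability the lead is qualified."""
    return avg_deal_size * close_rate * qualified_rate

# Demo request, using the figures from the fixed setup above:
value = conversion_value(avg_deal_size=3_000, close_rate=0.25, qualified_rate=0.67)
print(f"£{value:.2f}")  # → £502.50, rounded to £500 in the conversion settings
```

Repeat the same calculation per conversion type so every value entered in Google Ads traces back to CRM data rather than a guess.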

Step 2: Restructure the Generic Keywords Campaign (Days 5–20)

Current structure (broken):

Campaign: Generic Keywords (£280K/year)
Ad Group: HR Software — Broad Match
Keywords: hr software, human resources software, hr system
Match Type: Broad (all)
Negative keywords: None
QS: 4/10 average
Ad: "HR Software for Growing Teams | [Brand]" → Homepage

This structure causes every "HR"-adjacent query to trigger ads (free HR advice, HR courses, HR job postings). The single ad group pointing to the homepage means QS is low for every keyword because the ad and landing page are not specifically relevant to any particular search intent.

Restructured approach — theme-based ad groups:

Campaign: Generic Keywords — Restructured (Budget: £160K/year, down from £280K)

Ad Group 1: HR Software Core (Exact + Phrase)
Keywords:
  [hr software for small business] (exact)
  [hr software smb] (exact)
  "hr software" (phrase)
  "best hr software" (phrase)
Ad headline 1: "HR Software Built for SMBs"
Ad headline 2: "From £8/Employee/Month — No Setup Fees"
Ad headline 3: "Trusted by 2,400+ UK Businesses"
Landing page: /hr-software-smb/ (dedicated, not homepage)
Expected QS: 7–8/10

Ad Group 2: HR Onboarding Software (Exact + Phrase)
Keywords:
  [hr onboarding software] (exact)
  [employee onboarding system] (exact)
  "onboarding software hr" (phrase)
Ad headline 1: "Automate Employee Onboarding"
Ad headline 2: "New Hire Ready in 48 Hours — Not 2 Weeks"
Ad headline 3: "Free Trial — No Credit Card Required"
Landing page: /hr-onboarding-software/
Expected QS: 7–9/10

Ad Group 3: HR Payroll Integration (Exact + Phrase)
Keywords:
  [hr software with payroll] (exact)
  [hr and payroll software uk] (exact)
Ad headline 1: "HR + Payroll in One Platform"
Landing page: /hr-payroll-integration/
Expected QS: 7–8/10

[Create 8–12 similar ad groups for each keyword theme]

QS improvement from 4 → 7 reduces effective CPC:

  • Current CPC: £4.20 (at QS 4, paying +40% penalty)
  • Post-restructure CPC: £2.75 (at QS 7, paying fair market rate)
  • CPC saving: £1.45/click × estimated 40,000 clicks/year = £58,000/year saved
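The saving above can be checked directly from the campaign's own figures (the click volume is the estimate stated above, not measured data):

```python
# Figures from the QS analysis above; clicks/year is an estimate, not measured.
current_cpc = 4.20       # £ per click at QS 4 (≈40% penalty)
projected_cpc = 2.75     # £ per click at QS 7 (estimated fair market rate)
annual_clicks = 40_000   # estimated Generic campaign clicks per year

saving_per_click = current_cpc - projected_cpc
annual_saving = saving_per_click * annual_clicks
print(f"£{saving_per_click:.2f}/click × {annual_clicks:,} clicks = £{annual_saving:,.0f}/year")
# → £1.45/click × 40,000 clicks = £58,000/year
```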

Step 3: Deploy Negative Keywords — Emergency First Pass (Days 1–3)

Before touching campaign structure, immediately upload a bulk negative keyword list to stop the bleeding:

Negative Keywords — Cross-Campaign List (add as Campaign-level negatives):

Informational intent (not buyers):
- free
- how to
- what is
- definition
- tutorial
- course
- training
- certification
- careers
- jobs
- salary
- template
- examples
- meaning
- vs (unless in a competitor ad group)

Wrong audience:
- enterprise (if targeting SMB)
- large company
- freelancer (if minimum contract size applies)
- student

Wrong market:
- usa (if UK-only)
- australia
- canada
- [any non-UK geography]

Upload method:
Google Ads Editor → Negative Keywords → paste list →
Apply to: All Generic + Competitor campaigns as Campaign-level negatives

Then add as shared negative list (Tools → Shared Library → Negative Keyword Lists)
so the list applies automatically to any new campaigns created in future.

Then establish a weekly search term mining process:

Every Monday (30-minute task):
1. Google Ads → Search Terms report → Last 7 days
2. Filter: Conversions = 0, Clicks ≥ 2
3. Export to CSV
4. Sort by cost descending
5. Review top 50 terms manually — are any of them actually relevant?
6. Add irrelevant terms as negatives to the shared list
7. Repeat weekly. The list grows over time; wasted spend shrinks.

Target: Within 90 days, reduce zero-conversion search terms from 3,400 (90-day period) to <200.
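The Monday filter (zero conversions, two or more clicks, sorted by cost) can be sketched against an exported search-terms CSV. A minimal example, assuming column headers `Search term`, `Clicks`, `Conversions`, `Cost`; real Google Ads exports vary by report version, so adjust the headers to match yours:

```python
import csv
import io

def mine_negatives(csv_text: str, min_clicks: int = 2) -> list[dict]:
    """Return zero-conversion search terms with >= min_clicks, highest cost first."""
    rows = csv.DictReader(io.StringIO(csv_text))
    candidates = [
        r for r in rows
        if float(r["Conversions"]) == 0 and int(r["Clicks"]) >= min_clicks
    ]
    return sorted(candidates, key=lambda r: float(r["Cost"]), reverse=True)

# Toy export for illustration:
export = """Search term,Clicks,Conversions,Cost
free hr advice,14,0,58.80
hr software for smb,9,3,37.80
what does hr stand for,5,0,21.00
"""

for row in mine_negatives(export):
    print(row["Search term"], "£" + row["Cost"])
# → free hr advice £58.80
# → what does hr stand for £21.00
```

Review the output manually before uploading, then paste the confirmed terms into the shared negative keyword list.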

Step 4: Competitor Campaign — Cut to the Profitable 7 (Days 10–20)

Current: 35 competitors targeted, £100K spent, £85K on 28 tiny competitors.

Analysis of all 35 competitor terms:

Priority matrix:

Competitor | Monthly Search Volume | CPO (90-day data) | Action
-----------|----------------------|------------------|-------
BambooHR   | 8,200                | £185             | KEEP — below £150 target, scale
HiBob      | 5,400                | £210             | KEEP with CPA cap — monitor
Personio   | 4,100                | £195             | KEEP — close to target
Sage HR    | 3,800                | £225             | REDUCE budget — above target, test
Cezanne HR | 1,200                | £310             | PAUSE — too expensive, too small
CharlieHR  | 900                  | £280             | PAUSE — marginal, above target
[28 others] | <200 combined       | No conversions   | PAUSE ALL immediately

Budget reallocation:
- PAUSE 28 small competitors: free up £85,000
- Reduce Sage HR budget by 50%: free up £12,000
→ Total freed: £97,000
→ Reallocate: £40K to Generic restructured campaign, £35K to Retargeting, £22K to Brand
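A quick arithmetic check that the reallocation balances (same figures as above; a sound plan frees exactly what it redeploys, with no net budget increase):

```python
# Budget reallocation from the competitor campaign analysis above.
freed = {"28 small competitors paused": 85_000, "Sage HR budget cut 50%": 12_000}
redeployed = {"Generic restructured": 40_000, "Retargeting": 35_000, "Brand": 22_000}

total_freed = sum(freed.values())
total_redeployed = sum(redeployed.values())
assert total_freed == total_redeployed == 97_000
print(f"Freed £{total_freed:,} → redeployed £{total_redeployed:,}")
# → Freed £97,000 → redeployed £97,000
```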

Step 5: Set Target CPA and Stabilise Smart Bidding (Days 15–25)

Change bidding strategy from Maximise Conversions to Target CPA:

Current: Maximise Conversions (optimises for volume, no CPA constraint)

New setup:
- Brand campaign: Target CPA = £80 (brand converts well, set low CPA)
- Generic campaign: Target CPA = £160 (10% above £150 target — gives algorithm room)
- Top competitor terms (BambooHR, HiBob, Personio): Target CPA = £200 (harder to convert)
- Retargeting: Target CPA = £120 (warm audience, should convert cheaply)

Important: Do not set Target CPA exactly at £150 immediately.
The algorithm needs 2–4 weeks to learn at a slightly higher CPA before
it can hit the tighter target. Set at 10–15% above target, then
tighten by 10% every 2 weeks once conversion volume is stable.

Timeline:
Week 1: Set Generic campaign Target CPA = £200
Week 3: Reduce to £175 (if ≥30 conversions/week in that campaign)
Week 5: Reduce to £160
Week 7: Reduce to £150 (target reached)
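The ramp-down can be generated mechanically. A sketch, assuming roughly 10% tightening per review and never undershooting the final target (`tcpa_ramp` is a hypothetical helper, not a Google Ads feature):

```python
def tcpa_ramp(start: float, target: float, step: float = 0.10) -> list[float]:
    """Successive Target CPA settings: tighten by `step` at each review,
    never undershooting the final target."""
    schedule = [start]
    while schedule[-1] > target:
        schedule.append(max(target, round(schedule[-1] * (1 - step))))
    return schedule

print(tcpa_ramp(start=200, target=150))
# → [200, 180, 162, 150]
```

The exact steps differ slightly from the hand-picked £200 → £175 → £160 → £150 schedule; the invariant that matters is tightening only while weekly conversion volume stays above ~30.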

Step 6: 90-Day CPO Trajectory Model

Day    | Action                                      | Expected CPO Impact
-------|---------------------------------------------|-------------------------------------
1–5    | Fix conversion tracking                     | Baseline: accurate measurement
5–15   | Bulk negative upload                        | CPO: £238 → £210 (waste eliminated)
15–25  | Pause 28 small competitors, redirect budget | CPO: £210 → £185
20–35  | Set Target CPA, restructure ad groups       | CPO: £185 → £165
35–60  | Smart Bidding algorithm stabilises          | CPO: £165 → £155
60–90  | QS improvements live (new landing pages)    | CPO: £155 → £145
Day 90 | Final state                                 | CPO: £145 — 39% below starting point

Early Warning Metrics:

  • Weekly conversion volume per campaign (alert if any campaign drops below 25 conversions/week — Smart Bidding will degrade)
  • Weekly CPO trend (alert if CPO rises above £260 during restructure — indicates Smart Bidding is confused by the changes; slow down the transition)
  • Quality Score trend on top 20 keywords (track weekly in Google Ads; alert if QS does not begin improving within 3 weeks of new landing pages going live)

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The sequencing — fix conversion tracking first, then add negatives, then restructure campaigns, then adjust bidding — shows the operational discipline that prevents the classic mistake of changing bidding strategy before the conversion data it relies on is accurate; optimising toward low-quality conversions with Smart Bidding is worse than not using Smart Bidding at all. The competitor campaign ROI matrix (kill the 28 tiny competitors based on search volume + CPO data, keep the 7 that have sufficient volume and acceptable CPO) demonstrates the data-driven prioritisation that frees £97K of wasted spend without guessing. The Target CPA ramp-down schedule (£200 → £175 → £160 → £150 over 7 weeks) shows the understanding that Smart Bidding algorithms need time to learn a new CPA constraint and that forcing the final target immediately causes conversion volume collapse.

What differentiates it from mid-level thinking: A mid-level performance marketer would say "add more negative keywords," "improve Quality Score," and "restructure ad groups" — correct direction, but without the sequencing logic, conversion value assignment, or the quantified budget reallocation that makes those actions produce measurable results. They would not calculate the CPC saving from QS improvement (£1.45/click × 40K clicks = £58K/year) or design the weekly search term mining process that prevents keyword waste from recurring.

What would make it a 10/10: A 10/10 response would include a Google Ads Editor bulk upload template showing the exact negative keyword list format, a specific landing page brief for the 3 highest-priority ad groups (showing above-the-fold content, headline, subheading, form, and CTA), and a Looker Studio dashboard showing the weekly CPO metric, conversion volume by campaign, and QS trend — giving the head of marketing real-time visibility into the recovery.



Question 2: Paid Social Strategy and Creative Testing — Scaling Meta Ads from £50K to £200K/Month Without ROAS Collapse

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: ASOS, Gymshark, Bloom & Wild, Huel, Grind Coffee


The Question

You are a Senior Performance Marketing Manager at a DTC consumer brand selling premium skincare products. You have successfully managed a £50,000/month Meta Ads budget with a consistent ROAS of 4.5x (generating £225,000 in revenue monthly). The CMO wants to scale to £200,000/month in the next 6 months to support an aggressive growth target. Your challenge: the CMO assumes you can simply increase the budget 4x and maintain the same ROAS. You know from experience that Meta Ads have significant diminishing returns at higher spend levels — you cannot simply "turn up the budget" and maintain efficiency. Your specific concerns: (1) at £50K/month your current audiences (prospecting via Lookalike Audiences from purchasers + retargeting existing website visitors) are already heavily saturated — your frequency on the retargeting audience is 9.8 impressions per person per month, well above the 3–5 recommended threshold; (2) your creative library has 6 active ads, and three of them are showing significant creative fatigue (CTR declining week-over-week despite no changes); (3) your current account structure is a single prospecting campaign and a single retargeting campaign — insufficient segmentation for a 4x budget increase; (4) iOS 14.5+ privacy changes have made Meta's pixel attribution less reliable, causing reported ROAS to overstate actual revenue (your Meta-reported ROAS is 4.5x but your blended ROAS from bank/analytics is closer to 3.1x). Design a scaling strategy for Meta Ads that grows spend to £200K/month while managing diminishing returns, creative fatigue, audience saturation, and attribution discrepancy.


1. What Is This Question Testing?

  • Diminishing returns and audience expansion — understanding that Meta's ad auction has a finite supply of high-value users in any given audience; as you spend more, Meta must reach progressively less-engaged users (because the most-engaged users are already being served your ads), which drives down conversion rates and ROAS; the solution to saturation is not more budget on the same audiences but expanding to new audience layers (broader Lookalikes, interest-based prospecting, Advantage+ audiences, new geographic markets, new demographic segments) — each new layer provides fresh inventory at better efficiency
  • Creative velocity and systematic testing — understanding that creative fatigue is the primary driver of ROAS decline in Meta Ads at scale; an ad with declining CTR is increasingly expensive per click because Meta's algorithm penalises low-engagement ads with higher CPMs; the solution is a systematic creative testing programme with a defined cadence (new creative concepts tested weekly), a defined "winner" criteria (statistical significance on ROAS or CPA vs. control), and a defined "kill" criteria (CTR decline >30% week-over-week triggers creative retirement); at £200K/month, you need 15–25 active creatives in rotation to avoid frequency fatigue
  • Campaign architecture for scale — understanding that a single prospecting campaign and a single retargeting campaign is insufficient at £200K/month because it creates budget competition between audience types (prospecting and retargeting should not compete for the same budget line), blurs optimisation signals (Meta's algorithm cannot distinguish between a high-intent retargeting conversion and a cold prospecting conversion if they are in the same campaign), and limits testing (you cannot test audience vs. creative independently in a monolithic structure); the correct architecture at scale uses a full-funnel structure with dedicated campaigns for each funnel stage
  • iOS 14.5+ attribution and measurement strategy — understanding that iOS 14.5 introduced App Tracking Transparency (ATT) which limits Meta's pixel tracking to only users who have opted in to tracking (approximately 30–40% of iOS users); this means Meta's reported ROAS systematically overstates actual performance because Meta only credits revenue from the trackable fraction of its audience but serves ads to 100% of its audience; the blended ROAS (from first-party analytics or revenue from bank statements) divided by Meta spend is the true efficiency measure; knowing how to build a measurement framework that triangulates Meta-reported data, first-party analytics data, and incremental lift tests to arrive at a defensible "true ROAS"
  • Audience layering and Advantage+ Audiences — understanding Meta's different audience targeting methods: Custom Audiences (based on pixel events, customer lists, video views), Lookalike Audiences (based on similarity to a source audience — typically purchasers, 1%–10% similarity scale), Interest-based targeting (based on Facebook/Instagram interests and behaviours), and Advantage+ Audiences (Meta's AI-driven broad targeting that has shown 15–25% CPP improvement in many accounts since 2023); knowing when to use each: bottom-funnel (Custom Audience retargeting), mid-funnel (Lookalike 1–3%), top-funnel (Lookalike 4–10%, Interest, Advantage+)
  • Budget pacing and spending efficiency — understanding that rapidly increasing a Meta Ads campaign budget (doubling or tripling overnight) triggers Meta's algorithm to re-enter a "learning phase" (a period of 7–14 days where the algorithm experiments with new placements, times, and users to find efficient delivery at the new budget level — during which ROAS typically drops 20–40%); the correct approach is to increase budgets by no more than 20–30% per week to avoid triggering the learning phase, or to use Campaign Budget Optimisation (CBO) at campaign level and let Meta allocate budget across ad sets dynamically

2. Framework: Meta Ads Scaling Without ROAS Collapse Model (MASRM)

  1. Assumption Documentation — Confirm the revenue attribution source: when the CMO says "4.5x ROAS," are they using Meta-reported ROAS (which overstates due to iOS 14.5 attribution loss) or blended ROAS from actual revenue? The business should align on blended ROAS as the primary success metric (not Meta-reported ROAS) before setting the scale target; if the true blended ROAS is 3.1x at £50K/month, then the target at £200K/month should be 2.5–2.8x (some ROAS degradation is expected at 4x spend) — not the optimistic 4.5x the CMO may be assuming
  2. Constraint Analysis — Scaling from £50K to £200K in 6 months requires adding £150K of incremental spend; this spend cannot go into saturated existing audiences (frequency already 9.8x on retargeting); new spend must go into genuinely new audience layers that do not cannibalise existing performance; the creative library must scale proportionally (from 6 creatives to 20–25 active creatives by month 6) to sustain engagement rates
  3. Tradeoff Evaluation — Scale spend aggressively (increase budget 30% per week for 6 months, reaching £200K quickly) vs. scale creatives first, then spend (spend 2 months building creative library + testing infrastructure, then scale budget); the correct approach is parallel scaling: increase budget 20–40% per month while simultaneously scaling creative production; rushing budget scale without creative infrastructure causes ROAS collapse, which is exactly what the CMO wants to avoid
  4. Hidden Cost Identification — Creative production at scale is expensive: at £200K/month spend, you need 4–6 new creative concepts tested per week; at £500–2,000 per creative concept (video production, UGC sourcing, graphic design), creative production costs £8K–48K/month — a significant line item that must be budgeted alongside the media spend
  5. Risk Signals / Early Warning Metrics — Weekly frequency per audience (alert if any audience exceeds 5.0 average frequency — above this threshold, creative fatigue and CPM inflation accelerate); weekly ROAS by campaign stage (alert if prospecting ROAS drops below 2.0x or retargeting ROAS drops below 6.0x — indicates audience or creative exhaustion); creative CTR week-over-week trend (alert if any active creative's CTR declines >25% week-over-week for 2 consecutive weeks — trigger creative retirement); Meta Learning Phase status (alert if any campaign re-enters Learning Phase — pause budget changes on that campaign until it exits Learning)
  6. Pivot Triggers — If by month 3 (at £100K/month target) blended ROAS has dropped below 2.0x: the audience expansion strategy is not providing fresh inventory efficiently; pivot to investigating upper-funnel activity (YouTube video ads, TikTok prospecting) to build new retargeting pools rather than continuing to push cold Meta audiences that are showing diminishing returns
  7. Long-Term Evolution Plan — Month 1: fix measurement (blended ROAS framework), launch creative testing system, increase budget to £65K; Month 2: scale creative library to 15 active ads, expand audience to Lookalike 4–6%, budget to £90K; Month 3: launch Advantage+ campaign, retargeting segmentation, budget to £120K; Month 4: introduce UGC-led creative at scale, budget to £150K; Month 5: geographic expansion if UK is saturating, budget to £175K; Month 6: review full-funnel efficiency, budget to £200K
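The month-by-month ramp above can be sanity-checked against a pacing cap. A sketch, where the 40% month-over-month ceiling is an assumption derived from the 20–30%-per-week learning-phase guidance, not an official Meta threshold:

```python
# Monthly budgets from the six-month plan above, starting from the current £50K.
budgets = [50_000, 65_000, 90_000, 120_000, 150_000, 175_000, 200_000]

# Assumed cap: no month grows spend more than 40% (illustrative threshold only).
for prev, nxt in zip(budgets, budgets[1:]):
    growth = nxt / prev - 1
    assert growth <= 0.40, f"£{prev:,} → £{nxt:,} risks re-triggering the learning phase"
    print(f"£{prev:,} → £{nxt:,}: +{growth:.0%}")
```

Note that the steepest steps come early (months 1–3), when audience expansion is also adding the most fresh inventory; the later steps flatten as audiences mature.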

3. The Answer

Step 1: Fix Measurement — Define True ROAS Before Scaling

You cannot scale toward a target ROAS if you do not agree on which ROAS is "real." Establish a blended measurement framework:

Blended ROAS calculation:

Step 1: Pull actual monthly revenue from Shopify/your e-commerce platform
Step 2: Pull Meta Ads spend from Meta Ads Manager
Step 3: Blended ROAS = Revenue / Meta Spend

Example (current state):
- Meta Ads spend: £50K
- Meta-reported ROAS: 4.5x (£225K revenue — what Meta's attribution claims)
- Blended Meta contribution estimate:
  Use a media mix attribution model or last-click GA4 ROAS as a cross-reference:
  GA4 attributed: £162K revenue from Meta (last-click, slight overcount)
  MTA estimate: £145K revenue from Meta (multi-touch)
  Triangulated estimate: ~£155K revenue from Meta
  Blended ROAS estimate: £155K / £50K = 3.1x

Agreed target at £200K/month spend:
- Minimum blended ROAS: 2.5x (generating £500K revenue)
- Stretch target: 3.0x (£600K revenue)
- NOT: 4.5x (Meta-reported, overstated due to iOS 14.5)

For ongoing measurement, set up a "North Star Dashboard" that shows both metrics:

  • Meta-reported ROAS (for trend tracking — useful for relative performance over time)
  • Blended ROAS (for business truth — used for budget decisions)

Run a geo-based incrementality test (Meta's "Conversion Lift" or a manual holdout test) to validate the true incremental contribution of Meta Ads before scaling (invest £5K in testing to confirm the blended ROAS attribution before investing £150K in scale).
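The triangulation described above, using this question's illustrative figures; averaging the two cross-references is a simplifying assumption that an incrementality test would replace with a measured answer:

```python
# Illustrative figures from this question; the simple average of the two
# cross-references is an assumption, not a measured incrementality result.
meta_spend = 50_000
meta_reported_revenue = 225_000   # Meta pixel attribution (overstated post-iOS 14.5)
ga4_last_click = 162_000          # first-party analytics, last-click (slight overcount)
mta_estimate = 145_000            # multi-touch attribution model

meta_reported_roas = meta_reported_revenue / meta_spend
blended_revenue = (ga4_last_click + mta_estimate) / 2
blended_roas = blended_revenue / meta_spend
print(f"Meta-reported {meta_reported_roas:.1f}x vs blended ~{blended_roas:.1f}x")
# → Meta-reported 4.5x vs blended ~3.1x
```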

Step 2: Restructure Campaign Architecture for Full-Funnel

Expand from 2 campaigns to a full-funnel structure:

FULL-FUNNEL ACCOUNT STRUCTURE (£200K/month target)

Campaign 1: Prospecting — Cold (Budget: £80K/month)
Purpose: Reach new audiences who have never engaged with the brand
Audience options:
  Ad Set 1A: Lookalike 1–3% of Purchasers (£25K)
  Ad Set 1B: Lookalike 3–6% of Purchasers (£20K)
  Ad Set 1C: Advantage+ Audience (broad, AI-driven) (£25K)
  Ad Set 1D: Interest-based (skincare, beauty, wellness) (£10K)
Creative: Top-of-funnel (brand awareness, problem-aware, story-driven)
Exclusion: Website visitors last 30 days (prevent overlap with retargeting)

Campaign 2: Prospecting — Warm (Budget: £40K/month)
Purpose: Re-engage users who have shown interest but not visited the site
Audience:
  Ad Set 2A: Video viewers (50%+ of brand video) (£15K)
  Ad Set 2B: Instagram/Facebook page engagers last 90 days (£15K)
  Ad Set 2C: Lookalike of engaged website visitors (not purchasers) (£10K)
Creative: Mid-funnel (social proof, before/after, ingredient education)
Exclusion: Website visitors last 30 days

Campaign 3: Retargeting — Site Visitors (Budget: £40K/month)
Purpose: Convert users who have visited the site
Audience:
  Ad Set 3A: All website visitors last 7 days (£15K)  ← highest intent
  Ad Set 3B: Website visitors 8–30 days (£10K)
  Ad Set 3C: Add-to-cart abandoners last 30 days (£15K) ← highest intent
Creative: Bottom-of-funnel (discount, urgency, testimonials, guarantee)
Exclusion: All purchasers last 180 days

Campaign 4: Retention + LTV (Budget: £20K/month)
Purpose: Upsell/cross-sell to existing customers
Audience:
  Ad Set 4A: Customers who purchased once (£10K) ← win second purchase
  Ad Set 4B: Customers who purchased 2+ times (£10K) ← upsell premium line
Creative: Loyalty-oriented (subscription offer, exclusive access, VIP)

Campaign 5: Advantage+ Shopping (Budget: £20K/month)
Purpose: Automated product discovery (Meta's AI-driven shopping campaign)
No manual audience definition — Meta controls targeting
Creative: Product catalogue (automatic from Shopify product feed)

This structure prevents budget competition between funnel stages, allows Meta's algorithm to optimise each campaign independently, and makes it easy to diagnose which stage has a performance issue (if cold prospecting ROAS is low but retargeting is strong, the issue is top-of-funnel creative or audience selection — not the account overall).

Step 3: Creative Testing System — Systematic, Not Sporadic

At £200K/month, creative is the most important variable. Build a structured testing system:

Creative Testing Cadence:

Every Monday:
- Review last 7 days of creative performance
- Kill any creative with: CTR decline >25% for 2 consecutive weeks OR frequency >4.0
- Launch 2–3 new creative concepts into the testing ad set

Testing framework (per new creative):
- Launch in a dedicated "testing" ad set with £50/day budget
- Run for 7 days minimum (to accumulate data)
- Evaluate on: CTR, CPC, ROAS, CPP (Cost Per Purchase)
- Threshold for promotion to main ad set: ROAS ≥ control × 1.1 (10% better than current winner)
- Threshold for kill: ROAS < control × 0.8 for 7 days (20% worse than control)

Creative portfolio target at £200K/month:
- Active creatives: 20–25 total
- Creative mix: 40% video (15–30s), 30% UGC (user-generated), 20% static image, 10% carousel
- Creative themes: Product-focused (30%), Social proof/testimonials (25%), Education/ingredients (20%),
  Offer/urgency (15%), Brand story (10%)

Creative brief template for each test:

Hypothesis: "We believe that [showing before/after skin photos] will
outperform [product-only shots] because [customers respond to visible results
rather than product aesthetics]"

Format: 15-second video
Headline: "See the difference in 4 weeks"
Body: Before-after photos with voiceover testimonial
CTA: "Shop Now" → Product page
Audience: All prospecting campaigns

Success criteria: ROAS ≥ 10% above current best performer for 7 days
Kill criteria: ROAS < 20% below current best performer for 7 days
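The promote/kill thresholds above reduce to a simple decision rule. A minimal sketch (the function name and example ROAS figures are illustrative, not from the source):

```python
# Hypothetical sketch of the promote/kill thresholds described above.
# Promote at >= 1.1x control ROAS; kill at < 0.8x; otherwise keep testing.

def creative_decision(test_roas: float, control_roas: float) -> str:
    """Classify a test creative against the current control winner."""
    if test_roas >= control_roas * 1.1:
        return "promote"       # >= 10% better than control after 7 days
    if test_roas < control_roas * 0.8:
        return "kill"          # > 20% worse than control after 7 days
    return "keep testing"      # inconclusive: leave in the testing ad set

print(creative_decision(3.8, 3.2))  # 3.8 >= 3.52 → promote
print(creative_decision(2.4, 3.2))  # 2.4 < 2.56 → kill
print(creative_decision(3.0, 3.2))  # between thresholds → keep testing
```

The value of codifying the rule is consistency: every Monday review applies the same thresholds, so creative decisions are not swayed by whoever made the ad.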

Step 4: Audience Saturation — Solve Retargeting Frequency Crisis

Current retargeting frequency: 9.8 impressions/person/month. Target: 3–5.

Solutions:

Option A: Rebalance retargeting budget (lower impressions per person)
- Retargeting's share of spend falls from ~40% today (estimated £20K+ of £50K) to 20% in the new structure (£40K of £200K), while prospecting grows the retargeting pool
- Segment retargeting into 3 windows (0–7 days, 8–30 days, cart abandoners)
- Each window runs at a lower individual frequency (2–3x each) rather than 9.8x combined

Option B: Rotate creative faster (same impressions, different ad each time)
- Rotate 6–8 retargeting creatives (different offers, different social proof)
- User sees a different message on each impression → reduces fatigue
- This is a stopgap; the structural fix is budget segmentation

Option C: Shorten retargeting window
- Current window: All website visitors 90 days → extremely broad, low intent
- Restructure: 0–7 days (highest intent, highest bid), 8–30 days (medium), 31–90 days (low, limit spend)
- The 31–90 day window is where frequency is highest and intent is lowest — reduce or eliminate

Immediate action:
1. Export current retargeting campaign audience size from Meta Audience Manager
2. Divide total retargeting impressions by audience size = current frequency per person
3. Reduce budget on 31–90 day window by 70% immediately
4. Segment into 3 separate ad sets with frequency caps (Meta's frequency cap: set per ad set)
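The frequency diagnostic in steps 1–2 is a single division per window. A minimal sketch with invented figures (the window sizes and impression counts are illustrative, not account data):

```python
# Sketch of the frequency diagnostic above: impressions / audience size
# per retargeting window. All numbers are illustrative.

def monthly_frequency(impressions: int, audience_size: int) -> float:
    """Average monthly impressions per person in a retargeting window."""
    return impressions / audience_size

windows = {
    # window: (monthly impressions, audience size) — hypothetical figures
    "0-7 days":   (90_000,  30_000),
    "8-30 days":  (150_000, 60_000),
    "31-90 days": (540_000, 80_000),
}

TARGET_MAX = 5.0  # target band is 3–5 impressions/person/month

for window, (imps, size) in windows.items():
    freq = monthly_frequency(imps, size)
    flag = "REDUCE BUDGET" if freq > TARGET_MAX else "ok"
    print(f"{window}: {freq:.1f} impressions/person/month — {flag}")
```

In this illustration the 31–90 day window runs at 6.75 impressions/person/month, exactly the pattern Option C predicts: the stale window is where frequency concentrates.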

Step 5: Budget Scale Ramp — Gradual Increases, Never Sudden Jumps

Safe budget scaling schedule (avoid triggering Learning Phase; each monthly total is reached through small daily increases, so month-on-month growth can exceed 20% without any single large jump):

Month 1: £50K → £65K (+30%: initial push, reached via small daily steps)
Month 2: £65K → £90K (+38%: the most aggressive month, monitor for Learning Phase)
Month 3: £90K → £115K (+28%)
Month 4: £115K → £145K (+26%)
Month 5: £145K → £175K (+21%)
Month 6: £175K → £200K (+14%)

Rule: If any campaign enters Learning Phase during a budget increase:
→ Do not increase budget further on that campaign for 14 days
→ Allow algorithm to exit Learning Phase naturally
→ Resume scaling once the campaign shows stable delivery for 7 consecutive days

Budget increase method: Use daily budget increases of 15–20% rather than
one-time jumps (e.g., to go from £2,000/day to £3,000/day, increase by £200/day
over 5 days rather than jumping all at once)
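The daily ramp method above can be sketched as a loop that steps the budget toward the target with each increase capped. This sketch uses a percentage cap per step rather than the fixed £200/day increment in the example; both achieve the same goal of avoiding one large jump (the function name and cap are illustrative):

```python
# Illustrative sketch of the gradual ramp: step the daily budget toward a
# target in increments capped at 20% per step, instead of one jump.

def ramp_schedule(current: float, target: float, max_step_pct: float = 0.20):
    """Daily budgets from current to target; each increase <= max_step_pct."""
    schedule = [current]
    while schedule[-1] < target:
        nxt = min(schedule[-1] * (1 + max_step_pct), target)
        schedule.append(round(nxt, 2))
    return schedule

# £2,000/day -> £3,000/day reached in a few capped steps, not one +50% jump
print(ramp_schedule(2000, 3000))  # [2000, 2400.0, 2880.0, 3000]
```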

Step 6: Blended ROAS Projection at £200K/Month

Month   | Spend | Meta-Reported ROAS | Estimated Blended ROAS | Revenue Estimate
--------|-------|--------------------|------------------------|-----------------
Current | £50K  | 4.5x               | 3.1x                   | £155K
Month 2 | £90K  | 4.0x               | 2.8x                   | £252K
Month 4 | £145K | 3.6x               | 2.6x                   | £377K
Month 6 | £200K | 3.2x               | 2.5x                   | £500K

Expected ROAS degradation as spend scales is natural and should be set as an expectation with the CMO before scaling begins. The business question is not "can we maintain 4.5x Meta-reported ROAS at £200K?" (no, we cannot) but "is 2.5x blended ROAS at £200K more profitable than 3.1x blended ROAS at £50K?" (yes — £500K revenue vs. £155K: far more absolute revenue, and more absolute contribution, despite the lower ROAS).
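The arithmetic behind this CMO framing can be made explicit. A minimal sketch; the 60% gross margin is an assumption introduced for illustration, not a figure from the scenario:

```python
# Worked arithmetic: lower blended ROAS at higher spend can still produce
# more absolute contribution. The 60% gross margin is an assumed figure.

def contribution(spend: float, blended_roas: float, gross_margin: float = 0.6):
    """Revenue, and contribution after media cost, at an assumed margin."""
    revenue = spend * blended_roas
    return revenue, revenue * gross_margin - spend

for label, spend, roas in [("Current £50K/mo", 50_000, 3.1),
                           ("Scaled £200K/mo", 200_000, 2.5)]:
    rev, contrib = contribution(spend, roas)
    print(f"{label}: revenue £{rev:,.0f}, contribution £{contrib:,.0f}")
```

Under these assumptions the scaled plan generates £100K/month contribution versus £43K/month today, which is the number the CMO conversation should centre on, not the ROAS multiple.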


4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The proactive identification of the iOS 14.5 attribution discrepancy — and the reframing of the success target from "maintain 4.5x ROAS" to "achieve 2.5x blended ROAS" — shows the commercial honesty required to set achievable targets rather than telling the CMO what they want to hear. This prevents the classic failure mode of scaling Meta spend toward an unachievable ROAS target and being blamed when performance "collapses" (it was always going to decline at scale — the question is by how much). The five-campaign full-funnel architecture with explicit budget allocation per funnel stage demonstrates the operational structure that allows Meta's algorithm to optimise each stage independently, preventing the budget competition and signal blurring that degrade performance in monolithic campaigns.

What differentiates it from mid-level thinking: A mid-level performance marketer would increase the existing two campaigns' budgets by 4x and report back when ROAS collapsed. They would not anticipate the audience saturation problem, would not design the creative testing system with specific kill criteria, and would not understand the iOS 14.5 attribution gap or how to establish a blended ROAS framework. They would not know about the Learning Phase sensitivity to budget changes or the 20% weekly increase rule.

What would make it a 10/10: A 10/10 response would include a geo-based incrementality test design (showing which regions are holdout markets and how to calculate true incremental ROAS from the lift test), a detailed creative brief with specific copy angles for each funnel stage, and a Looker Studio template showing the seven metrics the team reviews in the weekly performance meeting.



Question 3: Marketing Attribution and Measurement — Building a Multi-Touch Attribution Model When Last-Click Is Lying

Difficulty: Elite | Role: Performance Marketing Manager | Level: Senior / Staff | Company Examples: Airbnb, Monzo, Wise, Deliveroo, Farfetch


The Question

You are a Senior Performance Marketing Manager at a B2B SaaS company with a £1.2M annual paid media budget across Google Ads, LinkedIn Ads, Meta Ads, and programmatic display. The CFO has asked you to justify the marketing budget in a board meeting and you are relying on last-click attribution from Google Analytics 4. You pull the data and notice something alarming: according to last-click attribution, Google Ads drives 72% of all conversions, LinkedIn drives 8%, Meta drives 4%, and programmatic display drives 1% — the remaining 15% are "organic" (no attribution). But your media buyer tells you that LinkedIn and Meta ad campaigns are performing "visibly better" than the data suggests (they see lift when campaigns are active and drops when they are paused). You investigate further and find: (1) your B2B buyer journey involves an average of 14 touchpoints over 6 weeks before someone converts (B2B SaaS purchase is research-heavy), and last-click gives 100% credit to whichever touchpoint occurred immediately before the form fill — which is almost always a branded Google search; (2) LinkedIn Ads are shown exclusively on desktop, but your LinkedIn landing pages have a 68% mobile bounce rate, suggesting some users click on LinkedIn from work, switch to mobile later in the day, and convert on mobile — creating a cross-device attribution gap; (3) your programmatic display campaigns are running on a CPM (impression) model — they generate no clicks and therefore get zero attribution from click-based models, but you suspect they are building awareness and assisting conversions; (4) the sales team reports that 35% of new customers mention LinkedIn when asked "where did you first hear about us?" — a signal that LinkedIn's actual awareness contribution is far larger than its 8% last-click credit; (5) your current Google Analytics 4 data is affected by cookie consent — 40% of UK visitors decline cookie consent, meaning 40% of their touchpoints are invisible to GA4. 
Design a multi-touch attribution framework that accounts for these limitations and helps you make defensible budget allocation decisions.


1. What Is This Question Testing?

  • Attribution model theory and practical limitations — understanding the differences between last-click (100% credit to the final touchpoint — simple, systematically biases toward bottom-funnel channels like branded search), first-click (100% credit to the first touchpoint — systematically biases toward top-of-funnel channels like display), linear (equal credit to all touchpoints — does not reflect that some touchpoints drive more intent than others), time-decay (more credit to recent touchpoints — biases toward bottom-funnel), position-based (40-20-40: 40% first touch, 20% middle, 40% last — acknowledges both discovery and conversion), and data-driven attribution (uses machine learning to assign credit based on actual conversion path data — most accurate but requires high conversion volume to build a reliable model)
  • Cross-device attribution challenges — understanding that a user who clicks a LinkedIn ad on work desktop, then later searches your brand on their personal iPhone and converts is counted as a "branded Google search conversion" in last-click attribution — LinkedIn gets no credit; solving cross-device attribution requires either (a) first-party login data (if users log in on both devices, you can stitch the journey), (b) probabilistic device matching (matching devices by IP, time-of-day patterns, and browser fingerprint — less accurate but no login required), or (c) platform-reported cross-device attribution (Meta and LinkedIn both report cross-device attribution within their own platforms using their logged-in identity graph)
  • Incrementality testing and causality — understanding the distinction between correlation (LinkedIn campaigns are running when conversions spike) and causation (LinkedIn campaigns are causing the conversion spike); incrementality testing (holdout experiments where a control group does not see the ads and a treatment group does, with conversion rates compared) is the only method that establishes true causal attribution; Media Mix Modelling (MMM) provides similar causal estimates without requiring holdout groups by using historical time-series regression
  • Cookie consent and first-party data gaps — understanding that GDPR cookie consent (required in the UK) means users who decline cookies are completely invisible to GA4 — their visits, touchpoints, and conversions are not tracked; this creates systematic underreporting of total marketing contribution; the solution is a combination of server-side tagging (which captures some data without relying on browser cookies), modelled conversions (GA4 uses machine learning to estimate the behaviour of consent-declining users based on similar consenting users), and first-party data collection at conversion (asking "how did you hear about us?" in the form) to supplement the tracking gaps
  • CFO-facing attribution — the "business truth" requirement — understanding that attribution models are not marketing-internal debates — they directly determine budget allocation decisions and, therefore, revenue generation; a CFO who sees "Google Ads drives 72% of conversions" based on last-click will cut LinkedIn and Meta budgets, which may reduce top-of-funnel awareness and ultimately hurt long-term revenue; the performance marketer's job is to present attribution data with appropriate caveats and to recommend budget allocation based on the most defensible evidence, not the most flattering attribution model
  • Media Mix Modelling (MMM) fundamentals — knowing that MMM is a statistical regression technique that uses historical marketing spend data and historical revenue/conversion data to estimate the contribution of each marketing channel to outcomes; MMM does not require cookies or individual user tracking, making it the most GDPR-compatible attribution method; it is also the most expensive and time-consuming (requires 2+ years of historical data and an econometrics specialist to build); for a £1.2M budget, MMM is justified

2. Framework: Multi-Source Attribution Triangulation and Defensible Budget Allocation Model (MSATDBAM)

  1. Assumption Documentation — Confirm the conversion definition: is the primary conversion a form fill, a demo booked, a trial started, or a closed deal? Last-click attribution on form fills overstates channels that appear at the moment of form submission; if the primary business outcome is closed revenue, the attribution must trace from the first touchpoint to the closed deal (not just to the form fill)
  2. Constraint Analysis — A 40% cookie consent decline rate means your GA4 data represents only 60% of actual traffic behaviour; any attribution model built on GA4 data alone systematically undercounts all channels proportionally to their reliance on cookie tracking; channels without click data (programmatic display on CPM) are completely invisible to GA4 regardless of consent status
  3. Tradeoff Evaluation — Full Media Mix Model (expensive: £20K–80K to build, takes 12 weeks, requires econometrics expertise) vs. a pragmatic multi-touch triangulation approach (free with existing tools, can be built in 4 weeks by the in-house performance team, less statistically rigorous but sufficient for CFO-level budget decisions); for a £1.2M budget, the pragmatic triangulation approach is correct first; if budget grows to £5M+, invest in a full MMM
  4. Hidden Cost Identification — "How Did You Hear About Us" (HDYHAU) survey data requires someone to manually review and code the free-text responses; at 500 new customers/year, this is a 20–30 hour annual analysis task; it should be automated with a dropdown or structured multiple-choice option in the form
  5. Risk Signals / Early Warning Metrics — Attribution model divergence (if last-click says LinkedIn drives 8% of conversions but self-reported HDYHAU says LinkedIn is mentioned by 35% of new customers, that 4.4x discrepancy signals that last-click is severely undercounting LinkedIn); incrementality test significance (alert if the LinkedIn holdout test shows no statistically significant lift — this would mean LinkedIn is genuinely not driving incremental conversions, not just being missed by last-click attribution)
  6. Pivot Triggers — If after running a LinkedIn incrementality holdout test (4 weeks, 50% holdout control), the conversion rate in the holdout group is identical to the treatment group: LinkedIn is providing zero incremental value and the sales team's self-reported attribution is not reflecting actual causal impact; in this case, reduce LinkedIn budget and reallocate to channels with demonstrated incremental lift
  7. Long-Term Evolution Plan — Month 1: implement blended attribution triangulation (GA4 data-driven + HDYHAU + platform-reported); Month 2: design and launch LinkedIn incrementality holdout test; Month 3: run programmatic display assisted-conversion analysis; Month 4: present multi-source attribution findings to CFO with budget recommendation; Month 6: evaluate full MMM build if budget justifies

3. The Answer

Step 1: Build a Multi-Source Attribution Triangulation Framework

No single attribution model is "correct." The solution is to triangulate across three independent data sources and present a range of estimates:

SOURCE 1: GA4 Data-Driven Attribution (what's measurable)
- Use GA4's Data-Driven Attribution model (not last-click):
  GA4 DDA uses machine learning to assign partial credit to each touchpoint
  in the conversion path based on which touchpoints are statistically
  associated with higher conversion probability
- Limitation: Only captures 60% of users (consent-declining users excluded);
  does not capture cross-device journeys without login stitching

SOURCE 2: Platform Self-Reported Attribution (cross-platform picture)
- LinkedIn Campaign Manager: reports "View-through conversions" (conversions
  within 30 days of seeing a LinkedIn ad, even without clicking) + cross-device
- Meta Ads Manager: reports "Click + View Through" conversions using Meta's
  identity graph (which includes cross-device via Facebook/Instagram login)
- Google Ads: reports "Google-attributed" conversions (GA4-linked)
- Note: Each platform's self-reported attribution is self-serving
  (LinkedIn credits all conversions that happened after a LinkedIn impression);
  use as an upper bound, not as truth

SOURCE 3: First-Party Self-Reported Data (the human signal)
- Add "How did you first hear about us?" as a required field in the demo request form
- Options: LinkedIn, Google Search, Google Ad, Friend/Colleague, Industry Event,
  Other Social Media, Industry Publication/Blog, Email, Other
- Analyse quarterly: what percentage of new customers report each channel?
- This is the "ground truth" of awareness attribution — the moment they first
  encountered the brand

TRIANGULATION TABLE (current state):

Channel        | Last-Click (GA4) | GA4 Data-Driven | Platform Self-Reported | HDYHAU Survey | Best Estimate |
---------------|-----------------|-----------------|------------------------|---------------|---------------|
Google (Brand) | 72%             | 45%             | 48%                    | 12%           | 35–48%       |
Google (Gen.)  | 12%             | 14%             | 15%                    | 8%            | 12–14%       |
LinkedIn       | 8%              | 18%             | 35%                    | 35%           | 25–35%       |
Meta           | 4%              | 8%              | 12%                    | 5%            | 7–10%        |
Programmatic   | 1%              | 4%              | N/A (no click)         | 3%            | 3–5%         |
Organic/other  | 3%              | 11%             | N/A                    | 37%           | 10–20%       |

The triangulation reveals: LinkedIn's true contribution is likely 25–35% of conversions (not 8% as last-click suggests). This reframing changes the budget recommendation significantly.
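The mechanical core of the triangulation can be sketched as follows. Note the table's published ranges also apply judgment weighting; this sketch shows only the simplest version, which discards the known-biased last-click figure and reports the min/max of the remaining sources (the function name and dictionary keys are illustrative):

```python
# Hedged sketch of the triangulation logic: treat each source as one
# imperfect estimate and report a range rather than false precision.
# Figures mirror the LinkedIn row of the table above.

def best_estimate_range(estimates: dict) -> tuple:
    """Range across available sources, excluding last-click, which the
    text identifies as a systematic undercount for awareness channels."""
    usable = [share for source, share in estimates.items()
              if share is not None and source != "last_click"]
    return min(usable), max(usable)

linkedin = {"last_click": 0.08, "ga4_dda": 0.18,
            "platform": 0.35, "hdyhau": 0.35}
lo, hi = best_estimate_range(linkedin)
print(f"LinkedIn best estimate: {lo:.0%}-{hi:.0%}")  # 18%-35%
```

The design choice is deliberate: a range forces the CFO conversation toward "LinkedIn contributes somewhere between X and Y" rather than defending a single contested number.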

Step 2: Design a LinkedIn Incrementality Test

The most defensible data for the CFO is a causal test. Design a geo-based holdout experiment:

LinkedIn Incrementality Test Design:

Duration: 4 weeks (minimum for statistical significance)
Budget: £40K LinkedIn spend (continue running for the test period)

Treatment regions (70% of LinkedIn budget):
- All UK regions EXCEPT the holdout regions
- Run LinkedIn campaigns as normal

Control/Holdout regions (30% of LinkedIn budget):
- Scotland + Wales (similar business demographics to England)
- LinkedIn campaigns completely PAUSED for these regions
- Matched on: company size distribution, industry mix, historical conversion rate

Measurement:
- Week 0: Establish baseline conversion rate for both regions (from CRM, not GA4)
- Weeks 1–4: Run campaigns in treatment, pause in holdout
- Week 4: Compare conversion rates:
  Treatment conversion rate: [X%]
  Holdout conversion rate: [Y%]
  Incremental lift: (X - Y) / Y × 100 = LinkedIn's true causal impact

Statistical significance: Minimum 100 conversions per group for 80% power

Reporting to CFO:
"LinkedIn drives [Z%] incremental conversions above the baseline.
At our current LinkedIn CPO of £X, this represents £Y in incremental pipeline
per £1 of LinkedIn spend."
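The lift calculation and significance check above follow a standard two-proportion z-test. A self-contained sketch with invented conversion counts (the function name and example numbers are illustrative, not test results):

```python
import math

# Hedged sketch of the holdout lift calculation: incremental lift plus a
# two-sided two-proportion z-test p-value. Counts below are illustrative.

def incremental_lift(conv_t: int, n_t: int, conv_c: int, n_c: int):
    """Lift of treatment over holdout (%), plus a two-sided p-value."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = (p_t - p_c) / p_c * 100
    # pooled standard error for the difference in proportions
    p_pool = (conv_t + conv_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, p_value

# e.g. 140 conversions from 10,000 exposed vs. 100 from 10,000 held out
lift, p = incremental_lift(140, 10_000, 100, 10_000)
print(f"Lift: {lift:.0f}%, p-value: {p:.3f}")
```

A p-value below 0.05 supports the claim that the lift is real rather than noise, which is the sentence the CFO report needs; the "minimum 100 conversions per group" rule above exists precisely so this test has the power to detect a lift of this size.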

Step 3: Solve the Cookie Consent Gap

40% of UK visitors declining cookies means 40% of their journeys are invisible. Mitigate with:

Server-side tagging (implementation, 2 weeks of engineering time):

Current setup: Client-side GA4 tag (runs in browser, blocked by cookie decline)
Server-side setup: Events sent to your server first, then forwarded to GA4
Benefit: Captures data for users who decline cookies (aggregated, not individual tracking)
Limitation: Still requires some consent signals; GDPR compliance depends on implementation
Engineering cost: ~£5K–10K to implement a server-side tagging container (GTM Server-Side)

GA4 Modelled Conversions:

GA4 automatically applies machine learning to model the behaviour of
consent-declining users based on similar consenting users.

Enable in: GA4 → Admin → Reporting Identity → Include modelled data
GA4 will add estimated conversions from non-consenting users to your reports

Benefit: Fills ~30–50% of the measurement gap automatically
Limitation: These are statistical estimates, not actual user data —
  caveat when presenting to CFO

Step 4: Quantify the Programmatic Display Assist

Programmatic display campaigns generate zero clicks (CPM model) and therefore zero credit in click-based attribution. But they may drive awareness that enables other channels to convert. Measure this:

Method 1: Time-series analysis
- Plot weekly: programmatic display spend vs. branded Google search volume (Google Trends or Search Console)
- Hypothesis: weeks with higher display spend should show higher branded search volume
  (if display is building awareness, people search for the brand more)
- If correlation is positive and lagged by 5–10 days: display is contributing to brand search lift
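Method 1 can be sketched as a lagged Pearson correlation between the two weekly series. The series below are invented for demonstration, and a real analysis would want far more than eight weeks of data:

```python
# Illustrative sketch of the lagged time-series check in Method 1:
# correlate weekly display spend with branded search volume shifted by a lag.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def lagged_correlation(spend, searches, lag_weeks):
    """Correlation between spend and branded searches lag_weeks later."""
    return pearson(spend[:-lag_weeks or None], searches[lag_weeks:])

display_spend = [10, 12, 18, 25, 22, 15, 11, 9]          # £K per week (invented)
brand_search = [400, 410, 430, 520, 640, 600, 480, 420]  # weekly volume (invented)

for lag in (0, 1, 2):
    r = lagged_correlation(display_spend, brand_search, lag)
    print(f"lag {lag} weeks: r = {r:.2f}")
```

If the correlation peaks at a positive lag (spend leading search by a week or two), that is consistent with display building awareness that later surfaces as branded search; it remains correlational evidence, which is why the holdout methods elsewhere in this answer carry more weight.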

Method 2: Audience overlap analysis
- Export the programmatic display "reached users" list (available from some DSPs)
- Compare to your converters' IP ranges, geo, job titles (for B2B)
- Are your converters over-represented in the display-reached audience?
- If yes: display is reaching the right people, even if it doesn't get attribution credit

Method 3: Path analysis in GA4
- GA4 → Advertising → Attribution → Conversion paths
- Filter: Conversions where programmatic (or Display Network) appeared in the path
- What % of all conversions had at least one display touchpoint earlier in the path?
- If 30%+ of conversions have a display touchpoint: display is assisting conversions
  even if it never gets last-click credit

Step 5: Present to the CFO — Defensible Budget Recommendations

BOARD PRESENTATION — Marketing Attribution and Budget Allocation

FRAMING:
"Our last-click GA4 data shows Google Ads driving 72% of conversions.
However, last-click attribution systematically undercounts channels that
generate awareness and assist conversions. Here is our multi-source assessment:"

EVIDENCE:
1. GA4 Data-Driven Attribution: LinkedIn = 18% of conversions (vs. 8% last-click)
2. LinkedIn Platform-Reported: LinkedIn = 35% of conversions (upper bound)
3. Customer Self-Report (HDYHAU, 312 responses): 35% mention LinkedIn as first awareness touchpoint
4. Incrementality Test Result: LinkedIn drives [X%] incremental lift in holdout regions
   (to be filled in after test completes)

CONCLUSION:
"LinkedIn's true contribution is likely 25–35% of conversions — not 8%.
Our current LinkedIn budget (£240K/year = 20% of total) is proportionate
to its actual contribution. The last-click model was under-valuing LinkedIn
and would have led to a budget cut that reduced pipeline."

BUDGET RECOMMENDATION (based on triangulated attribution):
Channel          | Current Budget | Current True ROAS | Recommended Budget | Expected Impact
Google Brand     | £150K          | 12x (brand)       | £160K (+7%)        | Protect brand share
Google Generic   | £420K          | 2.8x              | £380K (-10%)       | Reduce low-intent broad
LinkedIn         | £240K          | 3.2x (adjusted)   | £300K (+25%)       | Proven mid-funnel impact
Meta             | £180K          | 2.4x              | £200K (+11%)       | Scale after creative fix
Programmatic     | £60K           | 1.8x (assisted)   | £60K (hold)        | Awareness, monitor assist
Retargeting      | £150K          | 6.5x              | £100K (-33%)       | Reduce audience saturation

4. Interview Score: 10 / 10

Why this demonstrates staff-level maturity: The explicit identification that last-click attribution gives 100% credit to branded Google search (because B2B buyers always end their 6-week research journey with a branded search) — and the consequence that cutting LinkedIn (which is the awareness driver that causes the branded searches) would reduce revenue, not improve efficiency — is the strategic insight that separates performance marketers who understand the full funnel from those who optimise individual channels in isolation. The triangulation framework (GA4 Data-Driven + Platform Self-Reported + Customer HDYHAU Survey) is the practical synthesis of three imperfect data sources that produces a defensible range estimate rather than false precision from a single model. The incrementality test design (geo-based holdout with matched regions, minimum 100 conversions for statistical significance) is the gold standard of causal attribution measurement.

What differentiates it from senior-level thinking: A senior performance marketer would identify the last-click problem and recommend switching to data-driven attribution — correct but insufficient. They would not design the geo-based holdout experiment, would not connect the 35% HDYHAU LinkedIn mention to the case for maintaining the LinkedIn budget, and would not identify the server-side tagging solution for the cookie consent gap. They would not present the board-level budget recommendation with the multi-source attribution evidence table.

What would make it perfect: This response already merits 10/10. The only possible enhancement would be a specific Python/R script for the media mix model regression showing how to estimate channel contributions from time-series data — but for a board-level CFO presentation, the incrementality test and triangulation framework are more credible than a regression model.



Question 4: Programmatic Advertising and DSP Strategy — Building a Full-Funnel Programmatic Stack for a £3M Brand

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Deliveroo, Cazoo, Gousto, Moonpig, Auto Trader


The Question

You are a Senior Performance Marketing Manager at a scale-up consumer brand with a £3M annual media budget. Until now the company has run only Google Ads and Meta Ads (managed in-house) with no programmatic display or video advertising. The CEO wants to build brand awareness with consumers aged 25–45 across the UK who have shown interest in your product category but have not yet purchased. The growth team estimates that reaching this audience at scale could unlock 25–30% incremental growth in new customer acquisition over 18 months. Your challenge: design a programmatic advertising strategy that uses a DSP (Demand-Side Platform) to purchase display, video, and native inventory across premium publisher networks. Specific requirements: (1) the audience must be reached across premium placements (The Guardian, BBC Online, Sky News, Daily Mail Online), not just long-tail ad networks; (2) the targeting should use a combination of first-party data (existing customer email list of 85,000 records), third-party audience segments (interest-based), and contextual targeting (appearing alongside relevant content); (3) you need a measurement framework that proves incremental value given that programmatic display does not generate direct click-through conversions in a last-click model; (4) brand safety controls must prevent ads appearing alongside inappropriate content (a controversy about competitor advertising appearing on extremist content has made the CEO particularly sensitive to this); (5) you must choose between a managed-service DSP (where the DSP manages buying on your behalf) vs. a self-serve DSP (where you manage buying directly) given your team has no programmatic expertise. Walk through your platform selection, targeting strategy, creative requirements, measurement plan, and brand safety setup.


1. What Is This Question Testing?

  • DSP selection and managed vs. self-serve tradeoff — understanding the ecosystem: a DSP (Demand-Side Platform) is software that automates the purchase of digital ad inventory across multiple publishers and ad exchanges through real-time bidding (RTB); knowing the major DSPs and their positioning: The Trade Desk (premium, self-serve, excellent data partnerships, best for large in-house teams with expertise), DV360 (Google's DSP, strong YouTube + Google Display Network integration, managed-service option available, easiest integration with Google Analytics), Xandr/Microsoft Advertising Display (strong premium publisher relationships in the UK), Amazon DSP (best for e-commerce with first-party purchase data); knowing when managed service is correct (no in-house programmatic expertise, budget <£500K/year on programmatic, need guidance on strategy) vs. self-serve (in-house expertise, budget >£1M/year on programmatic, need maximum control and data visibility)
  • Audience targeting types and privacy implications — understanding three targeting approaches in programmatic: (a) first-party data targeting (uploading your customer email list, hashing it, and matching it to publisher cookie IDs — most accurate, most privacy-compliant because it is your own data); (b) third-party audience segments (buying pre-built audience segments from data providers like Nielsen, Oracle Audience Science, LiveRamp — these are being phased out by Google's deprecation of third-party cookies in Chrome, making them less reliable); (c) contextual targeting (targeting based on the content of the page being displayed, not the user's identity — no cookies required, fully GDPR-compliant, proven to perform well for brand awareness); knowing the industry shift toward contextual targeting in a post-cookie world
  • Premium inventory buying and PMPs (Private Marketplace) — understanding the difference between open exchange buying (bidding in the general ad auction for any available inventory — high volume, lower quality, more brand safety risk) vs. private marketplace deals (PMP — a direct deal between you and a specific publisher like The Guardian, where you get first-access to their premium inventory at a negotiated floor price); for a brand-safety-sensitive CEO and premium placement requirement, PMPs are mandatory — you cannot guarantee BBC Online or The Guardian placement through open exchange
  • Viewability, brand safety, and ad fraud controls — knowing the industry standards: viewability (an ad is "viewable" per IAB/MRC standards if 50% of the display ad's pixels are in view for at least 1 second; for video, 50% in view for 2 continuous seconds); brand safety (using category-level exclusions — exclude News/Extremism, Adult, Violence categories — plus keyword-level blocking — exclude specific keywords from appearing on pages with those words); IAS (Integral Ad Science) and DV (DoubleVerify) are third-party verification tools that layer onto the DSP to provide pre-bid filtering (block non-viewable or unsafe inventory before bidding on it) and post-bid reporting (confirm what was actually served)
  • Programmatic creative specifications and DCO — understanding that programmatic display requires multiple creative sizes (standard IAB sizes: 300×250, 728×90, 160×600, 300×600, 320×50 mobile) and that dynamic creative optimisation (DCO) allows a single creative template to be personalised in real time based on the user's audience segment (e.g., users who have purchased before see a "try our new product" message; new prospects see a "discover [brand]" message — without creating separate creatives for each variant)
  • Programmatic measurement and brand lift — understanding that programmatic display cannot be measured by last-click attribution because display generates impressions (not necessarily clicks) that build awareness over time; the correct measurement framework includes: (a) brand lift surveys (measuring brand recall, ad recall, purchase intent with a test vs. control group), (b) search lift studies (measuring whether branded search volume increases in regions where programmatic campaigns are active vs. holdout regions), (c) custom attribution windows (assigning post-view conversion credit within a defined window, e.g., any purchase within 7 days of viewing the ad gets partial credit)
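The DCO idea in the penultimate bullet reduces to one creative template plus a message slot keyed by audience segment. A toy sketch — the segment names and copy come from the bullet's own example, but the function and its defaults are hypothetical:

```python
def dco_message(segment: str) -> str:
    """Pick the personalised copy slot for a single DCO creative template."""
    messages = {
        "existing_customer": "Try our new product",
        "new_prospect": "Discover [Brand]",
    }
    # Unknown segments fall back to the generic prospect message.
    return messages.get(segment, "Discover [Brand]")

print(dco_message("existing_customer"))  # Try our new product
print(dco_message("new_prospect"))       # Discover [Brand]
```

The point of DCO is exactly this: one approved template, many personalised variants, no per-variant creative production.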

2. Framework: Full-Funnel Programmatic Strategy and Measurement Model (FFPSMM)

  1. Assumption Documentation — Confirm the CEO's definition of "premium placements": does "The Guardian" mean any Guardian page, or specific sections (News, Lifestyle, Technology)? Premium placement requirements dramatically change the inventory procurement approach (open exchange vs. PMP deals); confirm whether first-party data can be used for targeting (GDPR requires that the customer email list was collected with explicit consent for marketing — confirm the consent language in your sign-up forms covers programmatic retargeting)
  2. Constraint Analysis — The team has no programmatic expertise; self-serve DSP management requires 6–12 months of learning before it becomes efficient; a managed service partner is correct for the first 12 months, with a plan to bring expertise in-house for year 2 if programmatic proves effective
  3. Tradeoff Evaluation — DV360 managed service (strong Google/YouTube integration, familiar reporting in GA4, easiest onboarding) vs. The Trade Desk managed service (better independent publisher relationships, stronger UK premium publisher PMPs, richer first-party data matching via LiveRamp); for UK premium publishers (Guardian, BBC, Sky) and brand safety sensitivity, The Trade Desk has better relationships and more granular brand safety controls; DV360 is correct if YouTube video is a priority
  4. Hidden Cost Identification — Managed-service DSP fees are typically 15–25% of media spend (charged as a management fee on top of media); on £500K programmatic spend, this is £75K–125K in fees; net-net, the effective CPM and CPV are higher than the gross media rate; this must be factored into the ROI calculation
  5. Risk Signals / Early Warning Metrics — Weekly viewability rate (target >70% viewable impressions; if below 60%, bid adjustments are needed to prioritise viewable inventory); brand safety incident rate (monitor weekly in IAS/DV dashboard; any ad appearing alongside blocked content categories must trigger an immediate investigation of the exclusion list); brand lift survey interim results (4 weeks into campaign, check interim lift scores; if ad recall is below 15% after 4 weeks, creative is not breaking through and should be refreshed)
  6. Pivot Triggers — If after 12 weeks of programmatic at £500K spend, branded search volume in targeted regions shows no meaningful lift vs. holdout regions (the search lift study shows <5% incremental branded search), the campaign is not building awareness at sufficient scale or frequency; pivot to higher-frequency creative rotation or larger creative formats (300×600 or full-page units) that demand more attention
  7. Long-Term Evolution Plan — Months 1–2: RFP and DSP partner selection; negotiate PMP deals with target publishers; build creative assets; Month 3: campaign launch (£40K test budget); Months 4–6: scale to £100K/month; Month 6: brand lift survey results; Month 7–12: full scale (£150K–200K/month); Month 13: bring programmatic in-house with a dedicated programmatic buyer hire

3. The Answer

Step 1: DSP Platform and Managed Service Selection

RECOMMENDATION: The Trade Desk — Managed Service (first 12 months)

Rationale:
✓ UK premium publisher PMP relationships: The Trade Desk has direct PMP deals
  with Guardian Media Group, Sky Media, Reach PLC (Mirror/Express), and
  BBC Studios — enabling guaranteed access to premium inventory
✓ Brand safety: DV (DoubleVerify) pre-bid integration is the industry standard
  for brand safety in the UK; The Trade Desk's native integration is mature
✓ First-party data onboarding: LiveRamp partnership allows customer email list
  upload with privacy-safe hashing; matched to publisher user graphs without
  exposing email addresses to the DSP
✓ Managed service: For a team with no programmatic expertise, managed service
  means the DSP assigns a dedicated trading desk team to manage buying,
  optimisation, and reporting

Alternative considered: DV360 (Google)
✓ Stronger YouTube integration — but YouTube video is not a priority in this brief
✗ Weaker UK premium publisher PMP relationships vs. The Trade Desk
→ Reconsider in Year 2 if YouTube video becomes part of the programmatic stack

Contract structure:
- Managed service fee: 20% of media spend
- Minimum spend commitment: £40K/month (to justify dedicated trading desk attention)
- Measurement: Monthly reporting cadence + access to The Trade Desk platform UI for transparency
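The fee economics are worth making concrete: a managed-service fee is charged on top of media, so the effective CPM the business pays is the media CPM grossed up by the fee. A minimal sketch — the 20% fee and £8 CPM are this plan's illustrative figures, not quoted Trade Desk rates:

```python
def effective_cpm(media_cpm: float, fee_rate: float) -> float:
    """Cost per 1,000 impressions once the managed-service fee is added on top."""
    return media_cpm * (1 + fee_rate)

def annual_fee(annual_media_spend: float, fee_rate: float) -> float:
    """Total managed-service fee charged on a given annual media spend."""
    return annual_media_spend * fee_rate

# An £8 media CPM at a 20% fee really costs £9.60 per 1,000 impressions.
print(effective_cpm(8.0, 0.20))   # 9.6
# On £400K of media, the 20% fee adds £80K, taking total outlay to ~£480K.
print(annual_fee(400_000, 0.20))
```

This is the "hidden cost" from the framework: every publisher-quoted CPM should be grossed up by the fee before it enters the ROI model.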

Step 2: Inventory and PMP Deal Structure

For premium placement requirements, negotiate Private Marketplace (PMP) deals with each publisher:

PMP Deal Negotiations (target completion: Months 1–2):

Publisher: The Guardian
- Sections: News, Lifestyle, Technology, Environment
- Format: 300×250 display + 300×600 half page
- Floor CPM: £8–12 (premium UK publisher CPM range)
- Minimum commitment: £10K/month
- Brand safety: Guardian curates its own brand safety categories (inherently safer than open exchange)

Publisher: Sky News (via Sky Media)
- Sections: News, Sports
- Format: 728×90 leaderboard + 300×250 + pre-roll video (15s)
- Floor CPM display: £7–10; CPM video: £15–20
- Minimum commitment: £8K/month

Publisher: Mail Online (via DMG Media / Mail Metro Media)
- Sections: News, Femail, Money (lower-risk sections — DMG Media applies its own editorial brand safety controls)
- Format: 300×250 + 160×600 skyscraper
- Floor CPM: £5–8 (higher volume, slightly lower CPM than Guardian)
- Minimum commitment: £8K/month

Open Exchange (supplementary, lower priority):
- Buy via The Trade Desk's marketplace with IAS/DV pre-bid filters applied
- Target: Premium publishers not covered by PMPs; exclude long-tail ad networks
- CPM target: £3–6

Budget allocation (of ~£400K net media — the £500K programmatic line less managed-service fees):
- PMP (guaranteed premium inventory): 60% = £240K/year
- Open Exchange (managed quality): 30% = £120K/year
- CTV via The Trade Desk, plus YouTube via DV360 in Year 2: 10% = £40K/year
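A quick sanity check that the split adds back to the media budget (treating ~£400K as the net media portion of the £500K programmatic line after managed-service fees — an assumption made explicit here, not a contract term):

```python
def allocate(media_budget: float, shares: dict[str, float]) -> dict[str, float]:
    """Split a media budget across buying routes; shares must sum to 100%."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {route: round(media_budget * share, 2) for route, share in shares.items()}

plan = allocate(400_000, {"PMP": 0.60, "Open Exchange": 0.30, "CTV/YouTube": 0.10})
print(plan)  # {'PMP': 240000.0, 'Open Exchange': 120000.0, 'CTV/YouTube': 40000.0}
```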

Step 3: Audience Targeting Strategy

Layer three targeting approaches for maximum precision and reach:

Layer 1: First-Party Data Suppression and Lookalike (Segment: Existing Customers)

Upload: 85,000 customer email list → hashed via LiveRamp → matched to publisher user graph
Use case A (Suppression): Exclude existing customers from acquisition targeting
  (no point paying to advertise to people who already purchased)
Use case B (Lookalike): Create a "lookalike" audience of users who resemble
  existing customers in demographic + behavioural characteristics
  → Estimated lookalike audience: 2–5M UK adults (depending on similarity threshold)

Layer 2: Third-Party Interest Segments (Segment: In-Market Prospects)

Source: third-party data marketplace segments (e.g., Nielsen or LiveRamp Data Marketplace UK segments)
Categories relevant to the product category:
  - "Home Improvement Intent" (past 30 days)
  - "Premium Product Buyers" (historical purchase behaviour)
  - "25–44, HHI £40K+" (demographic overlay)
  - "Urban/Suburban UK Adults" (geographic + lifestyle)
Note: Third-party segments are declining in reliability post-cookie deprecation;
use as supplementary layer, not primary targeting

Layer 3: Contextual Targeting (Segment: Relevant Content Readers)

Target by page content topic, not by user identity:
- Include: Pages about home furnishing, interior design, home improvement, lifestyle, wellness
- Include: Review content about products in your category
- Exclude: Extremist content, violence, adult content (see brand safety section)
Tool: Peer39 or IAS contextual targeting classification (integrates with The Trade Desk)
Advantage: No cookies required, fully GDPR-compliant, works for cookie-declined users

Audience priority:
Tier A (highest bid): First-party lookalike + contextual match (double qualification)
Tier B: Third-party interest segment + contextual match
Tier C: Contextual match only (broadest reach)
Tier D: Open exchange with interest segment only (most volume, lowest CPM)
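The tiering can be operationalised as bid-price multipliers so spend concentrates where qualification is strongest. The base bid and multiplier values below are illustrative placeholders, not Trade Desk settings:

```python
BASE_CPM_BID = 6.0  # £ per 1,000 impressions — hypothetical campaign base bid

TIER_MULTIPLIERS = {
    "A": 1.5,  # first-party lookalike + contextual match (double qualification)
    "B": 1.2,  # third-party segment + contextual match
    "C": 1.0,  # contextual match only
    "D": 0.7,  # open exchange with interest segment only
}

def tier_bid(tier: str, base_cpm: float = BASE_CPM_BID) -> float:
    """CPM bid for an impression that qualified into the given audience tier."""
    return round(base_cpm * TIER_MULTIPLIERS[tier], 2)

print(tier_bid("A"))  # 9.0 — bid hardest on double-qualified users
print(tier_bid("D"))  # 4.2 — cheapest, broadest reach
```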

Step 4: Brand Safety Setup — Non-Negotiable Controls

Given CEO sensitivity to brand safety after the competitor controversy:

Pre-bid brand safety (prevent buying unsafe inventory before the auction):

IAS Pre-bid Integration (apply to ALL buying, including open exchange):
- Category exclusions: Adult Content, Alcohol, Gambling, Hate Speech,
  Illegal Content, Malware, Violence/Extreme Violence, Weapons, Fake News
- Viewability threshold: Only bid on placements with predicted viewability >60%
- Fraud filter: Only bid on inventory classified as "low risk" for ad fraud (IAS Fraud Shield)
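The pre-bid rules above amount to a predicate evaluated on every bid request before any money is committed. A minimal sketch of the decision logic — the field names and category labels are illustrative, not the IAS integration's actual schema:

```python
BLOCKED_CATEGORIES = {
    "adult", "alcohol", "gambling", "hate_speech", "illegal_content",
    "malware", "violence", "weapons", "fake_news",
}

def should_bid(predicted_viewability: float, fraud_risk: str,
               page_categories: set[str]) -> bool:
    """True only if the impression clears all three pre-bid gates."""
    if predicted_viewability <= 0.60:           # viewability threshold
        return False
    if fraud_risk != "low":                     # fraud filter (low risk only)
        return False
    if page_categories & BLOCKED_CATEGORIES:    # category exclusions
        return False
    return True

print(should_bid(0.75, "low", {"news", "sport"}))     # True  — clean, viewable, low risk
print(should_bid(0.75, "low", {"news", "gambling"}))  # False — blocked category
print(should_bid(0.55, "low", {"news"}))              # False — below viewability floor
```

The real filters run inside the DSP at auction speed; the value of writing them down like this is that the exclusion logic becomes reviewable by the brand team.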

In-platform The Trade Desk controls:
- Domain-level exclusions: Maintain a custom "exclusion domain list"
  (block specific domains known for controversial content)
- Keyword exclusions: Block pages containing specific keywords
  (e.g., "extremism," "terrorism," "hate" — platform-level keyword blocking)

PMP deals (premium placements):
- PMP inventory from Guardian, Sky, and Mail Online inherently carries lower brand safety risk
  because these publishers apply their own editorial standards;
  brand safety controls here are a backup, not the primary protection

Post-bid verification (confirm what actually ran):
- DV (DoubleVerify) post-impression reporting: shows what content each ad
  actually appeared alongside, categorised by IAS/DV taxonomy
- Review weekly: if any ads appeared in "questionable" content categories,
  add those domains/categories to the exclusion list

CEO reporting:
- Weekly brand safety report:
  "% of impressions in premium placements: 72%"
  "% of impressions flagged as brand-unsafe: 0.3% (auto-blocked pre-bid)"
  "Domains blocked this week: 0 new incidents"

Step 5: Measurement Framework — Proving Programmatic Value

METRIC 1: Brand Lift Survey (primary)

Setup:
- The Trade Desk runs a built-in brand lift study (or partner with Kantar/Dynata)
- Survey sent to two groups:
  - Test group: users who saw 3+ programmatic impressions
  - Control group: matched users who did NOT see the programmatic ads
- Survey questions:
  1. "Have you heard of [Brand]?" (Brand Awareness)
  2. "Have you seen any advertising from [Brand] recently?" (Ad Recall)
  3. "How likely are you to purchase from [Brand] in the next 3 months?" (Purchase Intent)
- Measure: lift in each metric (test vs. control)
- Timeline: Run for 8 weeks, measure at midpoint and endpoint

Target lifts:
  Brand Awareness: +5–10 percentage points
  Ad Recall: +15–25 percentage points
  Purchase Intent: +3–8 percentage points
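Lift is simply the percentage-point gap between the exposed and control groups on the same question. A minimal sketch with made-up survey counts (not real study results):

```python
def lift_pp(test_yes: int, test_n: int, ctrl_yes: int, ctrl_n: int) -> float:
    """Brand lift in percentage points: exposed 'yes' rate minus control rate."""
    return round(100 * (test_yes / test_n - ctrl_yes / ctrl_n), 1)

# e.g. ad recall: 31% among exposed respondents vs. 12% in the control group
print(lift_pp(310, 1000, 120, 1000))  # 19.0 — inside the +15–25pp target band
```

In practice the study vendor also reports statistical significance; a lift inside the target band on a tiny sample is not a result.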

METRIC 2: Branded Search Lift (secondary)

Setup:
- Run programmatic in only half of the UK TV regions (regional geo-targeting on TTD)
- Track branded Google search volume week-over-week in programmatic vs. non-programmatic regions
- Data source: Google Search Console (branded impressions by region) or Google Trends
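The geo split reduces to a test-vs-holdout comparison on branded search volume. Weekly volumes below are illustrative, not real Search Console data, and the comparison assumes the two region groups are matched on baseline volume:

```python
def search_lift(test_volume: float, holdout_volume: float) -> float:
    """Incremental branded search in test regions, as a % of the holdout volume."""
    return round(100 * (test_volume - holdout_volume) / holdout_volume, 1)

# Framework pivot trigger: <5% incremental branded search after 12 weeks
lift = search_lift(test_volume=11_800, holdout_volume=10_900)
print(lift, "pivot" if lift < 5 else "continue")  # 8.3 continue
```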

METRIC 3: View-Through Attribution (reporting only, not primary)

Setup:
- Set a 7-day view-through window in The Trade Desk
- Any customer who purchased within 7 days of seeing the ad gets partial attribution credit
- Use as a directional signal, not as a ROAS metric (VTA can overcount)
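Post-view credit is a timestamp comparison: did the conversion land within the window after an impression? A minimal sketch of the 7-day window (the 0.3 partial-credit weight is an illustrative choice, not a Trade Desk default):

```python
from datetime import datetime, timedelta

VIEW_WINDOW = timedelta(days=7)
PARTIAL_CREDIT = 0.3  # hypothetical weight for a view-through conversion

def view_through_credit(impression_at: datetime, purchased_at: datetime) -> float:
    """Partial attribution credit if the purchase falls inside the view window."""
    if timedelta(0) <= purchased_at - impression_at <= VIEW_WINDOW:
        return PARTIAL_CREDIT
    return 0.0

seen = datetime(2024, 3, 1, 12, 0)
print(view_through_credit(seen, datetime(2024, 3, 5)))   # 0.3 — inside the window
print(view_through_credit(seen, datetime(2024, 3, 12)))  # 0.0 — outside the window
```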

METRIC 4: Revenue Impact Estimate (for CFO)

Method:
- If brand lift shows +6% purchase intent lift
- Existing new customer acquisition rate: 500/month
- Assume the +6% intent lift converts one-to-one into acquired customers (a deliberate
  simplification) → estimated incremental customers: 500 × 1.06 = 530 (+30 customers/month)
- Average order value: £85
- Incremental monthly revenue: 30 × £85 = £2,550/month
- Annualised: £30,600
- Against programmatic spend: £500K/year
- Note to CFO: "These are directional estimates; brand investment has long-term value
  beyond immediate revenue that is not captured in short-term attribution"
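The working behind those numbers is worth showing to the CFO explicitly, with the one big assumption flagged in code as it is in the note:

```python
def incremental_revenue(base_customers_per_month: int, intent_lift: float,
                        avg_order_value: float) -> tuple[float, float]:
    """Monthly and annualised incremental revenue from a purchase-intent lift.

    Assumes the intent lift converts one-to-one into acquired customers —
    a generous simplification, hence 'directional estimate'.
    """
    extra_customers = base_customers_per_month * intent_lift  # 500 × 0.06 = 30
    monthly = extra_customers * avg_order_value               # 30 × £85 = £2,550
    return monthly, monthly * 12                              # £30,600 annualised

monthly, annual = incremental_revenue(500, 0.06, 85)
print(monthly, annual)  # 2550.0 30600.0
```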

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The explicit recommendation of managed service over self-serve — with the specific rationale that a team without programmatic expertise should not attempt self-serve on a £500K budget — shows the operational honesty that prevents the classic mistake of attempting to learn programmatic on a large live budget and paying for the learning curve with wasted media spend. The PMP deal strategy (negotiating directly with Guardian, Sky, and Reach rather than hoping open exchange delivers "premium" inventory) is the correct answer to the CEO's premium placement requirement; open exchange cannot guarantee specific publisher placements. The brand safety setup (IAS pre-bid + DV post-bid + PMP editorial standards as primary protection) shows the layered defence-in-depth that addresses the CEO's sensitivity without over-engineering a solution that blocks too much inventory and limits reach.

What differentiates it from mid-level thinking: A mid-level performance marketer would say "use Google Display Network" (wrong — GDN buys broad network inventory and cannot guarantee premium publisher placements), "set up brand safety keywords" (correct but insufficient without IAS/DV pre-bid filtering), and "measure by click-through conversions" (wrong — display at scale should be measured by brand lift, not CPA). They would not know about PMPs, managed-service DSP economics, or the distinction between pre-bid and post-bid brand safety verification.

What would make it a 10/10: A 10/10 response would include a specific PMP deal term sheet showing CPM floor prices, minimum spend commitments, and measurement pixels negotiated into the publisher contract, plus a DCO creative brief showing how the first-party customer list and the prospect audience would receive dynamically personalised ad messages from the same creative template.



Question 5: Performance Marketing Leadership and Team Building — Building a High-Output Performance Marketing Team from Scratch

Difficulty: Elite | Role: Performance Marketing Manager | Level: Senior / Staff | Company Examples: Deliveroo, Monzo, Babylon Health, Cazoo, Bulb Energy


The Question

You are a newly hired Head of Performance Marketing at a Series B scale-up with a £4M annual paid media budget and zero existing performance marketing team. The previous performance marketing function was entirely outsourced to an agency that has just been terminated due to underperformance (blended ROAS of 1.8x against a 3.0x target, significant overspend against budget, and an attribution model that showed misleading results to the board). Your mandate: build an in-house performance marketing team over 12 months, take ownership of all paid channels (Google Ads, Meta Ads, LinkedIn Ads, programmatic display), implement rigorous measurement and attribution, and hit a blended ROAS of 3.0x by month 12. Constraints: (1) you have headcount approval for 4 FTEs (yourself + 3 hires), (2) the £4M media budget is fixed — you cannot increase it, (3) during the transition from agency to in-house, there will be a 60–90 day "learning phase" where ROAS is expected to decline (the agency holds institutional knowledge and access that must be rebuilt), (4) the CEO is nervous about the transition risk and wants weekly performance updates, (5) the existing paid media accounts (Google Ads, Meta Ads Manager) are owned by the agency and must be transferred to you — the agency is not cooperating with the transfer. Walk through your hiring plan, the agency transition protocol, the measurement framework you implement on day 1, and the 12-month roadmap to ROAS 3.0x.


1. What Is This Question Testing?

  • Organisational design for a performance team — understanding which roles are essential in a high-performing in-house performance marketing team: a Paid Search Specialist (owns Google Ads + Bing), a Paid Social Specialist (owns Meta + LinkedIn + TikTok if relevant), a Data/Analytics Analyst (owns measurement, attribution, reporting, and experimentation), and a Programmatic/Display Specialist (owns DSP, display, video); knowing the sequencing of which roles to hire first (data analyst first — because without measurement infrastructure, all other hires are flying blind); knowing the difference between a generalist performance marketer (can run all channels at junior to mid level) vs. a specialist (expert in one channel, needed at senior level when that channel spends >£1M/year)
  • Agency transition management — understanding the operational risks of transitioning from an agency to in-house: (a) account access (if the agency owns the Google Ads account, you may not have access to historical conversion data, negative keyword lists, audiences, and campaign history — all of which must be transferred or rebuilt); (b) institutional knowledge (the agency knows which keywords perform, which creatives are fatigued, which audiences are saturated — this knowledge must be documented before the agency is terminated); (c) knowledge transfer obligation (most agency contracts include a 30–90 day transition period with knowledge transfer obligations — if the agency is "not cooperating," review the contract for breach of contract and legal remedies); knowing that the correct protocol is to create new accounts owned by the company (not the agency) and request account linking/historical data export before terminating the agency
  • Day 1 measurement infrastructure — understanding that the first action on Day 1 of an in-house transition is not to optimise campaigns — it is to ensure that you have accurate measurement of what is currently happening; without baseline measurement, you cannot tell whether your changes are helping or hurting; Day 1 requires: confirming Google Ads conversion tracking is firing correctly, confirming GA4 is installed and events are configured, confirming that the account has a clean conversion history, and setting up a weekly reporting dashboard that the CEO can access
  • ROAS target setting and expectation management — understanding the 60–90 day ROAS dip during transition: when an in-house team takes over from an agency, ROAS typically declines because the new team needs time to learn the account, rebuild institutional knowledge, and implement improvements; this dip must be communicated to the CEO proactively (not explained after it happens) with a specific timeline (ROAS will decline to approximately X in months 1–3, recover to baseline by month 4–6, and reach target by month 12) and a clear decision framework (if ROAS has not recovered to X by month 6, reassess the in-house strategy)
  • Channel ownership and prioritisation — understanding that with 4 people managing a £4M budget across 4 channels, prioritisation is critical; the most impactful initial focus is the highest-spend, worst-performing channel (if Google Ads represents 60% of spend at 1.5x ROAS while LinkedIn at 20% of spend runs at 3.5x ROAS, fixing Google Ads is the highest-leverage action even though LinkedIn is technically underperforming its potential); knowing how to calculate the revenue impact of improving each channel's ROAS
  • Performance marketing culture and experimentation cadence — understanding that a high-output in-house team is distinguished not just by technical skills but by a testing culture; the team should run 4–8 controlled experiments per month (creative tests, audience tests, bidding strategy tests, landing page tests), each with a defined hypothesis, control group, measurement plan, and success criteria; this testing cadence is what compounds performance improvement over time
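The Google-vs-LinkedIn example in the prioritisation bullet is easy to put in numbers: the revenue impact of a ROAS improvement scales with the channel's spend, which is why the biggest-spend channel usually wins even when a smaller channel looks "more broken". The figures below apply the bullet's illustrative spend shares to the £4M budget:

```python
TOTAL_BUDGET = 4_000_000  # £, annual paid media budget from the brief

def revenue_gain(spend_share: float, roas_uplift: float) -> float:
    """Extra annual revenue from lifting one channel's ROAS by `roas_uplift`."""
    return TOTAL_BUDGET * spend_share * roas_uplift

# The same +0.5x ROAS improvement applied to each channel:
print(revenue_gain(0.60, 0.5))  # Google Ads (60% of spend) → £1.2M
print(revenue_gain(0.20, 0.5))  # LinkedIn (20% of spend)   → £0.4M
```

Fixing Google Ads is worth three times the revenue of the same proportional fix on LinkedIn, purely because of spend weight.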

2. Framework: In-House Performance Marketing Build and ROAS Recovery Model (IHPMBRRM)

  1. Assumption Documentation — Confirm what assets are owned by the agency vs. the company: are the Google Ads and Meta Ads accounts owned by the company (the company is "admin" with "owner" access) or by the agency (the agency is the account owner and the company has "standard" access)? This distinction is critical — if the agency owns the accounts, termination means losing all historical data and having to start fresh; if the company owns the accounts, the agency's access can be revoked without data loss
  1. Constraint Analysis — 4 FTEs managing a £4M budget is approximately £1M per person in media responsibility; this is manageable with strong automation and tool support (smart bidding, creative management tools, automated reporting) but requires each person to be highly skilled and to operate with minimal management overhead; the Head of Performance (yourself) should spend 40% of time on strategy/leadership and 60% on hands-on execution in the first 6 months until the team is fully ramped
  1. Tradeoff Evaluation — Hire experienced senior specialists immediately (fast ramp-up, higher salary, less management required) vs. hire mid-level generalists and train them (lower salary, slower ramp-up, more management required, higher culture-fit risk); for a 4-person team managing £4M with a 12-month ROAS target, experienced specialists are correct — the business cannot afford a 12-month learning curve on a £4M budget
  1. Hidden Cost Identification — In-house performance marketing has hidden costs beyond salary: tools (Google Ads Editor, SEMrush, Meta Business Suite, SA360, The Trade Desk licence, Looker Studio or Tableau for reporting, Supermetrics for data aggregation) cost £30K–80K/year; this must be budgeted in addition to headcount costs; do not confuse "in-house is cheaper than agency" with "in-house has no costs" — the saving is in margin capture, not in elimination of costs
  1. Risk Signals / Early Warning Metrics — Weekly blended ROAS (alert if below 1.5x in months 1–3 — indicates the transition is going worse than expected and intervention is needed); account transfer completeness (alert if any channel's historical data has not been fully exported before agency termination — historical data is irreplaceable); team ramp velocity (alert if any hire cannot independently manage their channel within 8 weeks of starting — may indicate a wrong hire or insufficient onboarding)
  1. Pivot Triggers — If at month 6, blended ROAS is below 2.5x (instead of the expected 2.8x recovery): either the agency was doing more than the team can replicate (in which case, the in-house model needs a reassessment), or there is a measurement error (in which case, fix the measurement before concluding performance is bad); run a 2-week measurement audit before concluding that ROAS is genuinely bad
  1. Long-Term Evolution Plan — Month 1: account transfer + measurement infrastructure + first hire (data analyst); Month 2: second and third hires (paid search, paid social); Month 3: fourth hire (programmatic); Month 4–6: all channels stabilised, ROAS recovering; Month 7–9: first optimisation cycle (creative refresh, bidding strategy overhaul, audience expansion); Month 10–12: compound improvements deliver ROAS 3.0x target

3. The Answer

Step 1: Resolve the Account Transfer Crisis (Days 1–14)

The most urgent priority is not campaign performance — it is account ownership. Without the accounts, you cannot measure, let alone optimise.

Account Transfer Protocol:

Step 1: Audit current account ownership (Day 1)
Go to:
- Google Ads: Admin → Access and security → confirm whether the company
  holds "Admin" access and the agency only "Standard" access (company-controlled)
  OR the account sits under the agency's manager (MCC) account with the company
  on "Read only" access (agency-controlled)
- Meta Business Manager: Business Settings → People → confirm whether the
  company's Business Manager owns the ad accounts

Step 2: If company-owned accounts (best case):
- Revoke agency access immediately (Google Ads: Admin → Access and security → remove the agency users)
- Change all account passwords and 2FA to company-controlled email
- Export all historical data BEFORE revoking agency access:
  ☐ All search terms reports (last 24 months)
  ☐ Negative keyword lists (download from Shared Library)
  ☐ Audience lists and custom combinations
  ☐ Conversion action definitions and values
  ☐ All campaign settings, ad copy, and keyword data (Google Ads Editor bulk export)

Step 3: If agency-owned accounts (worst case):
- Review agency contract: does it include a "transition" clause requiring
  account transfer and data export on termination?
- If yes: Send formal written notice citing the contract clause;
  give 14-day deadline for account transfer or data export
- If no cooperation by day 14: Seek legal advice (breach of contract);
  the agency may be in violation of their data handling obligations under GDPR
  (your customer data in the account is your data, not the agency's)
- Parallel track: Create NEW company-owned Google Ads accounts immediately;
  begin rebuilding campaigns from scratch using any data you can recover from
  GA4, CRM, and historical invoices (yes — this is painful; it is the cost of
  not securing account ownership before the agency relationship started)

Step 4: Account security post-transfer
- Add all new team members as account users (give appropriate access levels)
- Enable Google Ads API access for SA360 or Supermetrics reporting integration
- Set up Google Ads account-level monthly budget cap (prevents overspend)

Step 2: Implement Day 1 Measurement Infrastructure

Before making any campaign changes, ensure you can accurately measure what is happening:

Day 1 Measurement Checklist:

GA4:
☐ Confirm GA4 is installed on all website pages (check with GA4 Debugger Chrome extension)
☐ Confirm key events are tracking: demo_request, free_trial_start, purchase, page_view
☐ Confirm GA4 is linked to Google Ads account
☐ Confirm consent mode is configured (UK GDPR/PECR require consent before non-essential cookies fire)
☐ Set up GA4 → BigQuery export (for advanced analysis and data retention)

Google Ads:
☐ Confirm conversion actions are defined with correct counting method
  (One conversion per click for leads; Every conversion for e-commerce)
☐ Confirm conversion values are assigned (£X per demo request, £Y per trial)
☐ Confirm auto-tagging is enabled (GCLID parameter appended to all clicks)
☐ Confirm Google Ads ↔ GA4 bidirectional link is active (Linked → GA4)

Meta Ads:
☐ Confirm Meta Pixel is installed (verify with Meta Pixel Helper Chrome extension)
☐ Confirm Conversions API (CAPI) is implemented (server-side tracking for iOS 14.5)
☐ Confirm standard events are tracking: Lead, StartTrial, Purchase
☐ Confirm Pixel and CAPI events share event IDs for deduplication (no double counting)
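Pixel/CAPI deduplication works by tagging the browser event and the server event with the same event ID so only one copy is counted (Meta matches on event name plus event ID). A minimal sketch of the dedup logic itself — the event dicts are simplified, not the Conversions API payload format:

```python
def dedupe(events: list[dict]) -> list[dict]:
    """Keep one event per (event_name, event_id) pair; first occurrence wins."""
    seen: set[tuple[str, str]] = set()
    kept = []
    for event in events:
        key = (event["event_name"], event["event_id"])
        if key not in seen:
            seen.add(key)
            kept.append(event)
    return kept

events = [
    {"event_name": "Purchase", "event_id": "ord-1001", "source": "pixel"},
    {"event_name": "Purchase", "event_id": "ord-1001", "source": "capi"},  # duplicate
    {"event_name": "Lead",     "event_id": "ld-2002",  "source": "capi"},
]
print(len(dedupe(events)))  # 2 — the CAPI copy of ord-1001 is dropped
```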

LinkedIn Ads:
☐ Confirm LinkedIn Insight Tag is installed
☐ Confirm conversion tracking events are configured
☐ Confirm matched audiences are set up (email list upload for retargeting)

Weekly CEO Dashboard (set up in Looker Studio, Day 1):
- Blended ROAS (total revenue / total media spend)
- Cost per opportunity by channel
- Total opportunities generated by channel
- Week-over-week spend trend
- Learning Phase status alerts (any campaign in learning = flagged red)
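The dashboard's two headline numbers are one-line calculations once spend, revenue, and opportunity counts are pulled per channel. The weekly figures below are illustrative; the £161.7K revenue figure is back-derived from a 2.1x ROAS on £77K spend:

```python
def blended_roas(revenue: float, spend: float) -> float:
    """Total revenue divided by total media spend."""
    return round(revenue / spend, 2)

def cost_per_opportunity(spend: float, opportunities: int) -> float:
    """Media spend divided by opportunities generated."""
    return round(spend / opportunities, 2)

# e.g. a £77K week generating £161.7K in revenue and 324 opportunities:
print(blended_roas(161_700, 77_000))      # 2.1
print(cost_per_opportunity(77_000, 324))  # 237.65 — roughly the £238 CPO
```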

Step 3: Hiring Plan — Sequence and Criteria

HIRE 1 (Month 1): Marketing Data Analyst
Why first: Without measurement, all other hires are optimising blind.
Responsibilities: Attribution model, weekly/monthly reporting, A/B test design,
  data pipeline (GA4 → BigQuery → Looker Studio), conversion tracking audit
Skills required: SQL (advanced), Python or R (data analysis), GA4 (advanced),
  Looker Studio, statistics/experimentation
Salary range: £45K–65K (London)
Interview test: "Given this GA4 export, identify which channels have attribution
  discrepancy between last-click and data-driven attribution models"

HIRE 2 (Month 2): Senior Paid Search Specialist
Why second: Google Ads represents the largest spend (estimated 50–60% of £4M);
  fixing it has the highest ROAS impact
Responsibilities: Google Ads strategy, keyword management, bidding strategy,
  Quality Score optimisation, search term mining, landing page brief writing
Skills required: Google Ads (advanced), Google Ads Editor, SA360 (preferred),
  Smart Bidding strategy, conversion rate optimisation principles
Salary range: £50K–70K (London)
Interview test: "Given this Google Ads account with a 4/10 average QS and
  3,400 irrelevant search terms, walk me through your first 30-day action plan"

HIRE 3 (Month 2): Senior Paid Social Specialist
Why third: Meta and LinkedIn represent the second and third largest spend pools
Responsibilities: Meta Ads strategy (prospecting + retargeting), LinkedIn Ads
  strategy, creative testing system, audience expansion, UGC brief writing
Skills required: Meta Ads Manager (advanced), Meta Advantage+ knowledge,
  LinkedIn Campaign Manager, creative strategy, iOS 14.5 attribution workarounds
Salary range: £48K–68K (London)
Interview test: "Walk me through how you would structure a Meta Ads account
  scaling from £50K/month to £200K/month without ROAS collapse"

HIRE 4 (Month 3): Programmatic and Display Specialist
Why fourth: Programmatic is the most specialised channel and can be covered
  by managed service DSP until month 3 without a dedicated hire
Responsibilities: The Trade Desk DSP management (or managed service oversight),
  PMP deal management, brand safety monitoring, programmatic measurement
Skills required: DSP experience (The Trade Desk preferred), RTB mechanics,
  brand safety tools (IAS/DV), programmatic creative specifications
Salary range: £50K–70K (London)
Interview test: "Explain how you would design a Private Marketplace deal
  structure for a CEO who is sensitive to brand safety"

Step 4: 12-Month ROAS Recovery Roadmap

PHASE 1: Stabilise (Months 1–3)
Target ROAS: 1.8x → 2.0x (absorb the expected transition dip, then recover past the 1.8x baseline by quarter-end)

Actions:
- Complete account transfers and measurement infrastructure (Month 1)
- Implement emergency negative keyword lists across all Google Ads campaigns (Month 1)
- Fix conversion tracking errors (Month 1)
- Hire data analyst; begin weekly reporting (Month 1)
- Hire paid search + paid social specialists (Month 2)
- Pause obvious waste (broad match keywords with zero conversions, tiny competitor campaigns)
- Set Target CPA on Smart Bidding to stabilise algorithm (Month 2–3)

Expected ROAS: 1.8x → 2.0x (small recovery from fixing obvious waste)
Expected CPO: £278 → £250

PHASE 2: Optimise (Months 4–6)
Target ROAS: 2.0x → 2.5x

Actions:
- Complete campaign restructure (tightly themed ad groups, improved QS)
- Launch creative testing system (2–3 new creative concepts per week)
- Implement first-party data audiences (customer email list upload)
- Build full-funnel Meta campaign structure
- Launch LinkedIn incrementality test (to validate LinkedIn attribution)
- Hire programmatic specialist (Month 3); launch managed-service programmatic
- First brand lift study launch (programmatic)

Expected ROAS: 2.0x → 2.5x (structural improvements compound)
Expected CPO: £250 → £200

PHASE 3: Scale (Months 7–9)
Target ROAS: 2.5x → 2.8x

Actions:
- Scale winning creative formats across all paid social
- Expand audience layers (Advantage+ campaigns, new Lookalike sizes)
- Geographic expansion to new UK regions if core markets saturating
- Implement advanced attribution (triangulated model: GA4 DDA + self-reported + holdout)
- Reallocate budget toward highest-performing channels based on incrementality data

Expected ROAS: 2.5x → 2.8x

PHASE 4: Excel (Months 10–12)
Target ROAS: 2.8x → 3.0x+

Actions:
- Compound all optimisations: account structure + creative quality + audience precision + bidding efficiency
- A/B test landing pages on top-traffic terms (targeting 20–30% conversion rate improvement)
- Negotiate performance-based PMP deals with publishers
- Build 12-month look-back on which channels drive the highest LTV customers
  (not just CPO — but also retention rates and average order value)
- Present attribution insights to CFO; recommend Year 2 budget allocation

Target: Blended ROAS 3.0x by end of Month 12

Step 5: CEO Weekly Communication Protocol

Every Monday, send a 1-page performance update:

WEEK [N] PERFORMANCE UPDATE — Performance Marketing

HEADLINE: "On track / Slightly behind / Ahead of target"

METRICS:
Metric                | This Week | Last Week | Month Target | Year Target
Blended ROAS          | 2.1x      | 1.9x      | 2.3x         | 3.0x
Total Spend           | £77K      | £76K      | £80K         | £4M
Total Opportunities   | 324       | 289       | 350          | 4,000
Cost Per Opportunity  | £238      | £263      | £230         | £150
Google ROAS           | 2.8x      | 2.4x      | 3.0x         | 3.5x
Meta ROAS             | 1.8x      | 1.7x      | 2.0x         | 2.5x
LinkedIn ROAS         | 1.4x*     | 1.2x      | 1.5x         | 2.0x

*LinkedIn ROAS understated due to last-click attribution gap (see last week's note)

WHAT CHANGED THIS WEEK:
- Paused 28 low-volume competitor keywords → saved £18K/month run rate
- Launched new Meta creative set (customer before/after testimonial videos) → CTR +34% vs. control
- Conversion tracking fix deployed → now counting demo requests separately from whitepaper downloads

WHAT'S NEXT WEEK:
- Google Ads ad group restructure (splitting broad topics into themed groups)
- Meta creative A/B test 2: price objection handling creative vs. product-benefit creative
- Monthly LinkedIn Ads review (considering increasing LinkedIn budget if incrementality test confirms lift)

RISKS:
- Google Ads account still showing Learning Phase on 2 campaigns →
  pausing budget changes until Learning Phase exits (est. 10 days)

CEO ACTION NEEDED: None this week — monitoring only.

4. Interview Score: 10 / 10

Why this demonstrates staff-level maturity: The account transfer crisis resolution — specifically the legal rights of company-owned data under GDPR and the parallel track of rebuilding new accounts while pursuing legal remedies — demonstrates the operational leadership that goes beyond performance marketing expertise into business operations and stakeholder management. The hiring sequence (data analyst first, because without measurement all other hires are optimising blind) shows the systems-level thinking that understands team effectiveness is a function of infrastructure, not just individual skill. The 4-phase 12-month roadmap with specific ROAS targets per phase (1.8x → 2.0x → 2.5x → 2.8x → 3.0x) sets expectations with mathematical precision — communicating to the CEO that ROAS recovery is a staged process with leading indicators at each gate.

What differentiates it from senior-level thinking: A senior performance marketer would correctly identify the channel-level actions needed (fix Google Ads, restructure Meta, implement attribution) but would not design the hiring sequence and criteria, the account transfer legal protocol, or the CEO communication rhythm. They would not understand the distinction between company-owned and agency-owned accounts or the GDPR implication that customer data in the account is the company's data. They would not build the 12-month phased ROAS recovery model with gate criteria that allow for early intervention if the trajectory is wrong.

What would make it perfect: This response scores 10/10 across all dimensions tested: organisational design (hiring plan, sequencing, interview criteria), crisis management (account transfer protocol, legal remedy, parallel track), measurement (day 1 infrastructure, weekly CEO dashboard), strategy (4-phase roadmap with quantified targets), and leadership (proactive transition risk communication, escalation protocol). The one enhancement would be a specific tool stack recommendation with total cost (e.g., "Supermetrics £12K/year, SA360 £30K/year, The Trade Desk managed service at 20% of £500K programmatic spend = £100K/year — total tooling budget: £150K, included within the existing agency cost savings").



Question 6: Conversion Rate Optimisation and Landing Page Strategy — Fixing a 1.2% Conversion Rate Killing Paid Media ROI

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Monzo, Bulb Energy, Graze, Bloom & Wild, Zopa


The Question

You are a Senior Performance Marketing Manager at a B2C subscription brand selling meal kit deliveries at £59/week. Your paid media team spends £180,000/month across Google Ads, Meta Ads, and affiliate channels — driving 45,000 monthly sessions to four landing pages. Your current conversion rate is 1.2% (540 conversions/month), your Cost Per Acquisition (CPA) is £333, and your payback period is 11 months (LTV is £600 over 18 months). The CFO has informed you that the business cannot sustain a CPA above £200 without threatening profitability. You have two options: (1) cut media spend by 40% (which would reduce volume but improve CPA if the budget cuts focus on inefficient channels), or (2) fix the conversion rate. You choose option 2. Your analysis reveals: (1) the four landing pages were built 2 years ago, are not mobile-optimised (73% of traffic is mobile), and have an average page load time of 6.8 seconds on 4G (Google's threshold for conversion drop-off is 3 seconds); (2) the hero section of each landing page shows the product (a meal kit box) rather than the customer outcome (a family enjoying a home-cooked meal) — behavioural data from Hotjar shows users scrolling past the hero without clicking the CTA; (3) the primary CTA is "Start Your Free Trial" but the form requires 14 fields including home address, dietary preferences, and payment details before the user can begin — this is a 14-field friction wall that depresses conversion rates; (4) there is no social proof on the landing pages (no customer reviews, no press mentions, no "rated 4.8/5 from 12,000 reviews" trust signals); (5) the landing pages are identical regardless of which channel or creative the user came from — a user who clicked a Meta ad about "quick 20-minute meals" lands on the same generic page as a user who searched "meal kit delivery comparison" on Google. Design a CRO strategy that gets conversion rate from 1.2% to 2.5% within 90 days, reducing CPA from £333 to £160.


1. What Is This Question Testing?

  • CRO methodology and prioritisation — understanding that conversion rate optimisation is not a collection of best-practice changes applied simultaneously; it is a structured testing methodology (hypothesis → experiment design → statistical significance → decision) applied to the highest-impact problems first; with five distinct problems identified (mobile optimisation, page speed, hero image, form friction, no social proof), you cannot fix all five simultaneously in a single test — that would create confounded results (if conversion rate improves, you don't know which change drove it); the correct approach is to prioritise by impact × effort and test changes sequentially, starting with the changes most likely to produce the largest conversion lift
  • Page speed and its measurable impact on conversion — knowing the specific data: Google's research shows that as mobile load time stretches from 1 second to 5–6 seconds, the probability of a bounce roughly doubles; at 6.8 seconds load time, the landing page is losing approximately 50–60% of mobile visitors before they even see the offer; fixing page speed from 6.8s to <3s could improve effective conversion rate by 30–50% on mobile alone, making it the highest-impact single change available; knowing the technical fixes: image compression (convert JPG/PNG to WebP format — typically 25–35% smaller than JPEG at equivalent quality, often far smaller than PNG), eliminate render-blocking JavaScript (defer non-critical JS), use a CDN (Content Delivery Network) to serve assets from servers geographically close to the user, and enable browser caching for static assets
  • Form optimisation and progressive disclosure — understanding that a 14-field form is one of the highest-friction conversion barriers possible; the principle of progressive disclosure: collect only the minimum information needed for the user to experience value (just email address to start), then collect additional information at each subsequent step as the user is progressively more committed; the multi-step form approach (Step 1: email only → Step 2: delivery preferences → Step 3: payment) consistently outperforms single-page long forms because (a) the first step is almost frictionless (completion rates of 80%+), (b) each subsequent step is completed by users who have already invested in the process, and (c) partial completions (users who complete step 1 but not step 3) can be retargeted with email or paid social
  • Message match and channel-specific landing pages — understanding that conversion rate is directly correlated with the degree to which the landing page's headline and offer match the ad creative or search term that brought the user there; a user who clicked a Meta ad about "quick 20-minute meals" has a specific intent (speed, convenience) and expects the landing page to address that intent immediately; showing them a generic "Fresh Ingredients, Delivered to Your Door" headline that could apply to any meal kit brand creates a cognitive disconnect that increases bounce rate; the solution is dynamic landing pages or multiple dedicated landing pages that match the specific ad creative (quick meals → page headline: "Dinner Ready in 20 Minutes — Even on Your Busiest Nights")
  • Social proof mechanics and trust signals — understanding that for a subscription product with an 11-month payback period, trust is a significant barrier to conversion; new visitors have no reason to trust the brand; social proof (customer reviews, Trustpilot ratings, press mentions, subscriber count) provides external validation that reduces perceived risk; knowing which social proof elements convert best: numerical proof ("4.8/5 from 12,000 reviews") outperforms general claims ("customers love us"), recognisable logos (press mentions from The Guardian, BBC, Sunday Times) signal legitimacy, and video testimonials outperform text testimonials for hesitant users
  • Statistical significance in A/B testing — knowing the mathematics of valid A/B testing: a test requires sufficient sample size to be statistically meaningful; with 540 monthly conversions (270 per variant in a 50/50 split), detecting a large 40–50% relative improvement (e.g. 1.2% → 1.8%) at 95% confidence and 80% power requires approximately 6,500–8,000 sessions per variant — meaning a test comparing two landing pages needs roughly 16,000 sessions, or approximately 11 days at 45,000 monthly sessions; detecting a subtle 20% lift (1.2% → 1.44%) would require roughly 35,000 sessions per variant (about 7 weeks), which is why each test here targets a large structural change rather than a minor tweak; tests stopped before the required sample size is reached are underpowered and produce false positives; knowing tools: VWO, Optimizely, or native A/B testing in most landing page builders (Unbounce, Webflow); Google Optimize was sunset in 2023
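These sample sizes come straight from the standard two-proportion power calculation. A minimal sketch in Python (z-values for 95% two-sided confidence and 80% power are hard-coded to avoid a stats dependency; the function name is ours):

```python
from math import ceil, sqrt

def sessions_per_variant(p_base, rel_lift, z_alpha=1.96, z_beta=0.8416):
    """Sessions per variant for a two-proportion z-test.

    Defaults: 95% two-sided confidence (z_alpha) and 80% power (z_beta).
    """
    p1 = p_base                     # control conversion rate
    p2 = p_base * (1 + rel_lift)    # variant rate at the target relative lift
    p_bar = (p1 + p2) / 2           # pooled rate under the null
    delta = p2 - p1
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / delta ** 2
    return ceil(n)

n_big = sessions_per_variant(0.012, 0.50)    # ~6,400/variant for a 50% lift
n_small = sessions_per_variant(0.012, 0.20)  # ~35,500/variant for a 20% lift
```

The asymmetry is the practical point: the required sample grows with the inverse square of the detectable difference, so at 45,000 sessions/month only large, structural changes are testable within a roughly two-week window.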

2. Framework: Conversion Rate Optimisation Prioritisation and Testing Model (CROPTM)

  1. Assumption Documentation — Confirm the conversion event being optimised: is the conversion "trial started" (user entered email), "subscription activated" (user completed payment), or "first delivery received" (user did not churn in week 1)? Optimising toward "trial started" may inflate volume but include low-intent users who cancel before payment; optimising toward "subscription activated" is the correct business metric; the CPA calculation (£333) should be based on the metric that represents a real customer
  2. Constraint Analysis — Page speed fixes require engineering time (2–4 weeks); new landing page designs require design + development time (2–4 weeks); A/B test duration requires sufficient traffic (45,000 sessions/month = ~11 days per test); with 90 days and ~4 weeks of engineering/design lead time, you have time for approximately 3–4 sequential tests — prioritise the highest-impact changes first
  3. Tradeoff Evaluation — Fix all issues on a redesigned landing page (one big-bang launch, faster time to market, but no ability to isolate which change drove improvement) vs. test each change sequentially (slower, but builds a validated evidence base); for a business with a £333 CPA vs. £200 target, sequential testing with fast iteration is correct — you need to know what is working so you can compound improvements, not just make a bet on a full redesign
  4. Hidden Cost Identification — CRO tools cost £500–2,000/month (VWO, Optimizely); heatmap and session recording tools cost £100–400/month (Hotjar, Microsoft Clarity); A/B testing a new landing page design requires design and dev time (£5K–20K per new design); the total CRO infrastructure cost is £15K–30K for the first 90 days — a worthwhile investment if the result is reducing CPA from £333 to £160 (saving £173/acquisition × 540 acquisitions/month ≈ £93K/month saved)
  5. Risk Signals / Early Warning Metrics — Weekly conversion rate trend by device (mobile vs. desktop; alert if mobile conversion rate does not improve after page speed fix — the speed fix may not have worked, or there are other mobile UX issues); form completion rate by step (alert if step 1 completion is below 70% — the first step should have near-zero friction); statistical significance check on every active test (alert if a test reaches 95% significance before the planned duration — implement the winner immediately rather than waiting)
  6. Pivot Triggers — If, after sequentially testing all five identified issues (speed, hero image, form length, social proof, message match), conversion rate has improved to only 1.8% instead of the target 2.5%: the remaining gap may be an offer problem (not a UX problem) — the product price (£59/week), the trial terms, or the value proposition itself is not compelling enough for the target audience; pivot to testing the offer (free first box, lower first-week price, pause/skip functionality prominently displayed)
  7. Long-Term Evolution Plan — Days 1–14: page speed fix (engineering sprint); Days 15–28: form reduction A/B test (14 fields → 3 fields, progressive disclosure); Days 29–42: social proof addition test (Trustpilot widget + press logos); Days 43–56: hero image test (product photo vs. customer outcome lifestyle photo); Days 57–70: message match test (channel-specific landing pages for top 5 ad themes); Days 71–90: compound best performers, measure overall CPA improvement

3. The Answer

Step 1: Fix Page Speed First — The Highest-Impact Change Requiring No Test

Page speed at 6.8 seconds on 4G is not a UX preference — it is a conversion crisis. With 73% mobile traffic, 30–50% of sessions are likely abandoning before the page fully loads. This is not worth A/B testing in the traditional sense: holding back a control group on the slow page when you already know it is destroying conversions only prolongs the damage. Fix it and validate the improvement through before/after measurement.

Engineering sprint (2 weeks):

Image Optimisation:
- Convert all hero images and product photos from JPG/PNG → WebP format
  (WebP images are 25-35% smaller than JPG at equivalent quality)
- Implement lazy loading for below-the-fold images:
  <img src="meal-kit.webp" loading="lazy" alt="Meal kit delivery">
- Compress all images to <100KB (hero image can be 200KB max)

JavaScript Optimisation:
- Audit all JavaScript libraries loading on the page (use Chrome DevTools → Network → JS)
- Defer non-critical JS (analytics, chat widgets, social proof widgets):
  <script defer src="hotjar.js"></script>
- Remove unused JavaScript (identify with Chrome DevTools → Coverage)
- Move all <script> tags to bottom of <body> (not <head>)

CDN Implementation:
- Serve all static assets (images, CSS, JS) via a CDN (Cloudflare, AWS CloudFront, Fastly)
- CDN serves assets from edge servers geographically close to the UK user
- Expected improvement: 40-60% reduction in asset load time

Server Response Time:
- Target: Time to First Byte (TTFB) <200ms (current check: Google PageSpeed Insights)
- If TTFB >500ms: server infrastructure needs upgrading (not just front-end optimisation)

Measurement:
- Before: Google PageSpeed Insights score (mobile), Largest Contentful Paint (LCP), CrUX data
- After: same metrics, measured 7 days post-deployment
- Target: LCP <2.5 seconds, total page load <3.5 seconds on 4G
- Conversion tracking: compare weekly mobile conversion rate pre vs. post fix
  (expect +20-40% improvement in mobile conversion rate)

Step 2: Form Reduction — 14 Fields to 3, Progressive Disclosure (Days 15–28)

The 14-field form is the single most measurable conversion killer. Industry data shows that reducing form fields from 11+ to 3–4 increases completion rates by 50–120%.

Current (broken):
Step 1 (all on one page):
  Field 1: First Name
  Field 2: Last Name
  Field 3: Email Address
  Field 4: Phone Number
  Field 5: Delivery Address Line 1
  Field 6: Delivery Address Line 2
  Field 7: City
  Field 8: Postcode
  Field 9: Meal plan size (2/4/6 people)
  Field 10: Dietary preference (omnivore/vegetarian/vegan/gluten-free)
  Field 11: Allergies
  Field 12: Delivery day preference
  Field 13: Credit/Debit Card Number
  Field 14: Card Expiry and CVV

Redesigned (progressive disclosure):

STEP 1 (shown immediately, above the fold):
  "Start Your Free Trial"
  Field 1: Email Address [required]
  Field 2: Postcode [required — used to check delivery availability]
  CTA: "Check If We Deliver to You →"

  Microcopy: "No card required for this step. Takes 30 seconds."

STEP 2 (shown after step 1 completion):
  "Great news! We deliver to [postcode]"
  Field 3: Meal plan size (2-person / 4-person / Family)
  Field 4: Dietary preference (dropdown — 5 options)
  CTA: "Choose My Plan →"

STEP 3 (shown after step 2 completion):
  "Your first box is FREE — add payment to claim it"
  Field 5: Full Name
  Field 6: Delivery Address (with address lookup/autocomplete)
  Field 7: Payment details (card or Apple Pay / Google Pay)
  CTA: "Claim My Free Box →"

  Microcopy: "Cancel anytime. No commitment."

A/B test setup:

  • Control: current 14-field single-page form
  • Variant: 3-step progressive disclosure form
  • Traffic split: 50/50
  • Primary metric: Subscription activation rate (step 3 completion)
  • Secondary metrics: Step 1 completion, step 2 completion, drop-off by step
  • Sample size needed: 8,000 sessions per variant (approximately 11 days at 45K/month)
  • Expected lift: 40–80% improvement in form completion rate
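The rationale for that expected lift can be made concrete with a back-of-envelope funnel model. Every rate below is an illustrative assumption, not measured data — the real numbers come from the step-completion metrics once the test is live:

```python
# Hypothetical completion rates -- assumptions for illustration only.
single_page_completion = 0.06          # assumed rate for the 14-field form
step_rates = [0.80, 0.85, 0.70]        # assumed: email -> plan -> payment

# Multi-step completion compounds: each step is completed only by users
# who finished the previous one.
multi_step_completion = 1.0
for rate in step_rates:
    multi_step_completion *= rate      # 0.80 * 0.85 * 0.70 = 0.476

# Even with three drop-off points, the compounded rate (~48%) can far
# exceed a single high-friction page, and step-1 completers who stall
# later are retargetable by email.
```

The design choice this illustrates: moving friction later in the funnel means each remaining field is seen only by progressively more committed users.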

Step 3: Social Proof — Add Trust Signals Above the Fold (Days 29–42)

Current: no reviews, no ratings, no press. A new visitor has zero external validation.

Trust signal placement (above the fold, immediately below hero):

Row 1: Star rating + review count
"★★★★★ Rated 4.8/5 from 12,847 reviews on Trustpilot"
[Trustpilot widget — real-time, pulls live rating]

Row 2: Press logos (small, monochrome, recognisable)
"As featured in:"
[The Guardian logo] [BBC Good Food logo] [The Sunday Times logo] [Evening Standard logo]

Row 3: Subscriber stat
"Join 220,000 families cooking better meals every week"

Social proof in the hero section (A/B test variant):
Control: "Fresh Ingredients, Delivered to Your Door"
Variant: "Rated Britain's #1 Meal Kit — 4.8/5 from 12,847 families"

A/B test: Does adding "Rated Britain's #1" to the headline
vs. social proof as a separate row below the headline produce
a higher conversion rate?

Additional test (separate, after social proof test concludes):
Video testimonial vs. text review — does a 30-second video of a
real customer ("I was sceptical but now I cook dinner in 20 minutes
every night") outperform a text quote with name and photo?

Step 4: Hero Image — Outcome vs. Product (Days 43–56)

Hotjar data shows users scrolling past the hero without engaging with the CTA. The product image (a meal kit box) is visually tidy but emotionally neutral — it shows what you receive, not why you should want it.

A/B Test: Hero Image

Control: Product photo — meal kit box with ingredients arranged artfully on a white background
Variant A: Outcome photo — family at a dinner table, smiling, with a plated meal from the kit
Variant B: Process photo — person cooking in a kitchen, appearing relaxed and enjoying it
         (Rationale: the emotional hook for meal kits is "I feel competent and relaxed
         cooking this" — not just "I get the end result")

Measurement:
Primary: Conversion rate (subscription activation)
Secondary: Time on page (does the lifestyle image increase engagement?),
           hero CTA click-through rate (does the image change CTR?)

Expected lift: 10–25% improvement in conversion rate
(lifestyle images consistently outperform product images in A/B tests for
subscription products — a pattern widely reported across meal kit brands
including HelloFresh, Gousto, and Mindful Chef)

Step 5: Message Match — Channel-Specific Landing Pages (Days 57–70)

Currently all traffic lands on the same generic page regardless of ad creative or search intent:

Create dedicated landing pages for the top 5 ad themes:

Traffic Source → Ad Theme → Landing Page Headline

Google Search: "meal kit delivery uk"
→ Page: "The UK's Most-Loved Meal Kit — Try Your First Box Free"
   (broad intent, focus on brand authority + trial offer)

Google Search: "quick dinner ideas uk"
→ Page: "Dinner on the Table in 20 Minutes — Seriously"
   (intent: speed, headline directly addresses the problem)

Meta Ad: "Our 20-Minute Meals" creative
→ Page: "20-Minute Meals, Every Night. No Thinking, No Shopping."
   (message matches the ad's specific claim — instant relevance)

Meta Ad: "Family meals" creative
→ Page: "Dinner the Whole Family Will Actually Eat — Guaranteed"
   (intent: family harmony, addresses specific parent pain point)

Affiliate: Voucher/discount sites
→ Page: "Your Exclusive Welcome Offer — First Box Free + 20% Off Week 2"
   (intent: deal-seeker, lead with the offer immediately)

Implementation: Use UTM parameters to route traffic to correct page:
?utm_source=meta&utm_content=20min-meals → /landing/quick-meals/
?utm_source=google&utm_campaign=brand → /landing/brand/
?utm_source=affiliate → /landing/affiliate-offer/

Expected lift per matched page: 15–35% improvement in conversion rate
vs. generic page (well-documented in CRO literature)
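The UTM routing rule can be sketched in a few lines of Python (the route table and paths mirror the illustrative examples above; in production this logic typically lives in the web framework or CDN redirect layer):

```python
from urllib.parse import parse_qs, urlparse

# Illustrative route table: (utm_source, utm_content-or-campaign) -> page.
# A None second element matches any click from that source.
ROUTES = {
    ("meta", "20min-meals"): "/landing/quick-meals/",
    ("google", "brand"): "/landing/brand/",
    ("affiliate", None): "/landing/affiliate-offer/",
}

def landing_page(url, default="/landing/generic/"):
    """Pick the message-matched landing page for an inbound click URL."""
    params = parse_qs(urlparse(url).query)
    source = params.get("utm_source", [None])[0]
    content = params.get("utm_content", [None])[0]
    campaign = params.get("utm_campaign", [None])[0]
    # Most specific match first: content, then campaign, then source alone.
    for key in ((source, content), (source, campaign), (source, None)):
        if key in ROUTES:
            return ROUTES[key]
    return default
```

Falling back to the generic page for unmapped traffic matters: a routing gap should degrade to the current experience, never to a 404.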

Step 6: 90-Day CPA Projection

Days     | Change                | Conversion Rate | CPA
Baseline | No changes            | 1.2%            | £333
Day 14   | Page speed fixed      | 1.6% (+33%)     | £250
Day 28   | Form: 14 → 3 fields   | 2.1% (+75%)     | £190
Day 42   | Social proof added    | 2.2% (+83%)     | £182
Day 56   | Hero image optimised  | 2.3% (+92%)     | £174
Day 70   | Message-matched pages | 2.5% (+108%)    | £160
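Each CPA figure follows mechanically from CPA = spend ÷ (sessions × conversion rate). A quick check in Python using the £180K monthly spend and 45,000 sessions from the brief:

```python
SPEND = 180_000      # monthly media spend (GBP)
SESSIONS = 45_000    # monthly landing page sessions

def cpa(conversion_rate):
    """Cost per acquisition at a given landing page conversion rate,
    holding spend and traffic constant."""
    return SPEND / (SESSIONS * conversion_rate)

baseline = cpa(0.012)   # ~£333 at the 1.2% baseline
target = cpa(0.025)     # £160 at the day-70 target of 2.5%
```

Note the implication: spend never changes in this model — the entire CPA improvement comes from converting more of the traffic already being paid for.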

Early Warning Metrics:

  • Weekly mobile conversion rate (alert if mobile does not improve within 7 days of speed fix — indicates the fix was not deployed correctly or there is a secondary mobile UX issue)
  • Form step completion rates (alert if step 1 completion falls below 65% — the email field may have a validation error, or the microcopy is not reassuring enough)
  • Test statistical significance (alert if a running test shows >95% significance before planned end date — implement winner immediately; alert if significance is not reached by planned end date and extend the test rather than making a decision on underpowered data)
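The significance alert can be automated with a pooled two-proportion z-test on the running totals (a Python sketch; the 96-vs-144 conversion counts are illustrative). One caveat: checking z daily and stopping at the first crossing inflates false positives, so the alert should respect the planned end date or apply a sequential-testing correction:

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test: control (a) vs. variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative running totals: 1.2% control vs. 1.8% variant,
# 8,000 sessions per arm. |z| > 1.96 => 95% two-sided significance.
z = z_score(96, 8_000, 144, 8_000)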

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The sequencing of changes — page speed first (no test needed, pure uplift), then form reduction (highest friction, highest impact), then trust signals, hero image, message match — shows the prioritisation logic that distinguishes experienced CRO practitioners from those who apply best-practice changes without quantifying their relative impact. The explicit statistical significance calculation (8,000 sessions per variant, approximately 11 days at 45K/month traffic) prevents the most common CRO mistake: ending tests early when one variant appears to be winning, without reaching the sample size needed to be confident the winner is real.

What differentiates it from mid-level thinking: A mid-level performance marketer would list the same five problems but would attempt to redesign the entire landing page at once — creating a confounded test where any improvement cannot be attributed to a specific change. They would not calculate statistical significance requirements, would not design the progressive disclosure form step-by-step, and would not build the UTM-based routing system for message-matched landing pages.

What would make it a 10/10: A 10/10 response would include a specific Hotjar session recording analysis methodology (how to systematically review 50 session recordings to identify the exact scroll depth and rage-click patterns that confirm each hypothesis), a copy of the Hotjar heatmap overlay showing where users are clicking vs. ignoring, and a Looker Studio CRO dashboard tracking conversion rate by step, by device, and by traffic source in real time.



Question 7: Customer Acquisition Cost and LTV Modelling — Building a Payback Period Model That Justifies a £5M Media Budget

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Monzo, Gousto, Babylon Health, Cazoo, OVO Energy


The Question

You are a Senior Performance Marketing Manager at a subscription SaaS company selling HR software to SMBs at £299/month. The company currently spends £1.5M/year on paid media and acquires 300 new customers/year (average CPA of £5,000). The CEO wants to scale paid media spend to £5M/year to accelerate growth. The CFO is asking for a financial model that answers three questions before approving the budget increase: (1) At £5M spend, what is the projected CPA and how does it compare to customer LTV? (2) At what point does the payback period become unacceptable (i.e., what is the maximum CPA the business can sustain)? (3) If the conversion rate stays flat while CPA rises due to diminishing returns, how much incremental revenue does the additional £3.5M spend generate and over what timeframe? Your additional data: average customer LTV is £18,000 over 36 months (£299 × 36 × 85% retention per month, compounded), average gross margin is 72%, sales-assisted close rate is 35% (not all demos become customers), average sales cycle is 6 weeks, and you anticipate that scaling from £1.5M to £5M will increase CPA by approximately 25% due to diminishing returns on audience saturation.


1. What Is This Question Testing?

  • LTV:CAC ratio and payback period fundamentals — understanding that the fundamental metric for sustainable paid acquisition is the ratio of Customer Lifetime Value (LTV) to Customer Acquisition Cost (CAC); industry benchmarks for SaaS: LTV:CAC ratio >3:1 is considered healthy (meaning for every £1 spent acquiring a customer, you generate £3 in lifetime value); payback period (the time it takes to recoup the acquisition cost from gross margin) should ideally be below 12–18 months for SaaS; knowing how to calculate these metrics from revenue, margin, and retention data
  • Diminishing returns modelling — understanding that paid media spend exhibits diminishing returns because the most efficient audiences (high-intent, highly relevant) are reached first and at lower cost; as you spend more, you must reach progressively less-engaged audiences at higher CPMs and lower conversion rates; knowing how to model this: if current CPA is £5,000 at £1.5M spend and you anticipate 25% CPA increase to scale to £5M, the projected CPA is £6,250 — but the actual relationship is non-linear (CPA increases more slowly at first, then accelerates as core audiences saturate)
  • Incremental revenue modelling and contribution margin — understanding how to translate paid media spend into projected revenue and contribution margin: (a) project acquisitions at the new CPA (£5M / £6,250 = 800 new customers), (b) model the revenue stream over 36 months with monthly compounding churn (800 customers × £299/month × 85% monthly retention), (c) calculate gross profit contribution (revenue × 72% margin), (d) subtract the media spend, (e) calculate net contribution and payback period; knowing that incremental revenue from a spend increase is not immediate — there is a 6-week sales cycle plus a 36-month value realisation period
  • Maximum sustainable CPA calculation — understanding how to derive the maximum CPA the business can tolerate from its LTV and margin data; the formula: Maximum CAC = (LTV × Gross Margin) / [minimum acceptable LTV:CAC ratio]; if the business requires LTV:CAC ≥ 3:1 and LTV is £18,000 at 72% gross margin: Maximum CAC = (£18,000 × 0.72) / 3 = £4,320; the current £5,000 CPA is already above this sustainable threshold, making the case for conversion optimisation before budget increases; at £6,250 CPA (post-scale), the LTV:CAC ratio is (£18,000 × 0.72) / £6,250 = 2.07x — significantly below the 3:1 minimum
  • Cohort-based revenue modelling — understanding that subscription revenue is best modelled as cohorts (groups of customers acquired in the same period who then generate recurring revenue for months/years); a cohort of 66 customers acquired in January (at £5M annual run rate / 12 months) generates revenue in January, February, March... for 36 months, decaying each month based on the churn rate; modelling cohort by cohort gives a more accurate picture of when the business becomes profitable from the incremental spend than a simple average LTV calculation
  • CFO communication and scenario planning — understanding that a CFO does not want a single-point estimate ("the CPA will be £6,250"); they want scenario analysis (base case, upside, downside) with the key assumptions stated explicitly and the sensitivity of the model to those assumptions; if the diminishing returns assumption is wrong (CPA increases 40% instead of 25%), what is the impact on LTV:CAC? If the retention rate drops from 85% to 80% (possible when acquiring lower-quality customers at scale), what is the impact on LTV and payback period?
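The LTV:CAC arithmetic running through these bullets is compact enough to verify in a few lines (Python; figures taken from the brief, noting that the £18,000 LTV input itself still needs validating against cohort data):

```python
LTV = 18_000           # stated lifetime value (to be validated against cohorts)
GROSS_MARGIN = 0.72
MIN_LTV_CAC = 3.0      # common healthy-SaaS benchmark

gp_ltv = LTV * GROSS_MARGIN                # £12,960 gross-profit LTV
max_cac = gp_ltv / MIN_LTV_CAC             # ~£4,320 sustainable CAC ceiling
ratio_current = gp_ltv / 5_000             # ~2.59x at today's £5,000 CPA
ratio_scaled = gp_ltv / 6_250              # ~2.07x at the projected £6,250 CPA
```

This is exactly the sensitivity a CFO will probe: every ratio above scales linearly with the LTV input, so an overstated LTV overstates the sustainable CPA by the same factor.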

2. Framework: Paid Media Investment Justification and LTV:CAC Model (PMIJM)

  1. Assumption Documentation — The £18,000 LTV calculation must be validated: is it based on actual historical cohort data (customers acquired 36+ months ago, tracking their actual revenue to date) or a modelled projection? Actual cohort data is far more reliable; if the business is less than 36 months old, the LTV is a projection and must be presented as such, because the CFO will ask
  2. Constraint Analysis — A 25% CPA increase to £6,250 puts LTV:CAC at 2.88x — below the 3:1 healthy benchmark; before recommending a £5M budget increase, the model must show that either (a) the CPA increase can be controlled through CRO and creative optimisation, or (b) the payback period at the projected CPA is still acceptable given the business's growth stage (early-stage growth companies sometimes accept LTV:CAC of 2:1 if growth velocity justifies it)
  3. Tradeoff Evaluation — Scale spend to £5M immediately (maximise growth at the cost of LTV:CAC efficiency) vs. scale spend to £2.5M while investing £500K in CRO to improve conversion rate before the next scale step (slower growth, but sustainable LTV:CAC); the CFO should understand this tradeoff explicitly before making the budget decision
  4. Hidden Cost Identification — Scaling from 300 to 800 customers requires scaling the sales team (300 customers/year at a 35% close rate = 857 demos; 800 customers at a 35% close rate = 2,286 demos — a 2.7x increase in demo volume that may require 2–3 additional salespeople at £60K–80K/year each); the media budget increase of £3.5M generates a sales headcount cost of £150K–240K that must be included in the full acquisition cost model
  5. Risk Signals / Early Warning Metrics — Monthly LTV:CAC trend (if CPA rises faster than projected — above £6,500 in the first quarter of scale — the diminishing returns are steeper than modelled; pause scale and investigate); monthly cohort retention (if the retention rate for newly acquired customers drops below 80% at scale — indicating that a larger audience produces lower-quality customers — the LTV model is overstated and the full LTV:CAC calculation changes); sales cycle length at scale (if the sales cycle extends from 6 weeks to 10 weeks at higher volume — because the sales team is overwhelmed — the payback period lengthens materially)
  6. Pivot Triggers — If at Q2 of the £5M budget year, quarterly LTV:CAC has dropped to 1.5:1 (indicating the diminishing returns are more severe than the 25% CPA increase assumption): reduce spend back to £3M (the last sustainable operating point), invest £200K in conversion rate optimisation, and reassess scale at Q3 with a new CPA baseline
  7. Long-Term Evolution Plan — Q1: launch at £2.5M run rate (midpoint between current and target); measure CPA at scale; Q2: if CPA is within 30% of target, scale to £3.5M; Q3: if LTV:CAC is still above 2:1, scale to £5M; Q4: full model review with actual cohort data vs. projected
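The hidden sales-capacity cost in the Hidden Cost Identification point can be sanity-checked in a few lines; this is a sketch, and `demos_per_rep_per_year` is an illustrative assumption, not a figure from the question:

```python
# Demos required at each customer volume, and the extra reps implied.
close_rate = 0.35
demos_per_rep_per_year = 900  # hypothetical per-rep capacity assumption

for customers in (300, 800):
    demos_needed = customers / close_rate
    reps_needed = demos_needed / demos_per_rep_per_year
    # 300 customers -> 857 demos; 800 customers -> 2286 demos (a 2.7x increase)
    print(f"{customers} customers -> {demos_needed:.0f} demos, {reps_needed:.1f} reps")
```

At the assumed capacity, the jump from 300 to 800 customers implies roughly 1.5 additional reps, consistent with the 2–3 extra salespeople cited above once ramp time and coverage are factored in.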

3. The Answer

Step 1: Build the Current State Baseline

Before modelling the scale scenario, establish the current financial state precisely:

CURRENT STATE (£1.5M spend / year):

Media spend: £1,500,000/year
New customers acquired: 300/year (25/month)
Customer Acquisition Cost (CPA/CAC): £5,000
Average contract value: £299/month
Monthly retention rate: 85%
Gross margin: 72%

LTV Calculation (36-month cohort, reading 85% as MONTHLY retention):
Month 1 revenue per customer: £299
Month 2 expected revenue: £299 × 85% = £254.15
Month 3 expected revenue: £254.15 × 85% = £216.03
...
36-month cumulative revenue per customer:
= £299 × Σ(0.85^n) for n=0 to 35
= £299 × (1 - 0.85^36) / (1 - 0.85)
= £299 × 0.99712 / 0.15
= £299 × 6.648
= £1,988 [at 100% margin]

At 72% gross margin:
Gross Profit LTV = £1,988 × 72% = £1,431

Wait — the question states an LTV of £18,000, roughly 9x higher than £1,988.
Before questioning the retention assumption, confirm the cohort framing gives the same answer:

Actual cohort model (100 customers starting):
Month 1: 100 customers × £299 = £29,900
Month 2: 85 customers × £299 = £25,415
...
Total revenue = 100 × £299 / 0.15 × (1 - 0.85^36) ≈ £198,760 → £1,988 per customer
(the cohort framing changes nothing; the retention assumption must be re-read)

OR: The company means 85% annual retention (not monthly):
Monthly retention = 0.85^(1/12) = 0.9865 (monthly churn ≈ 1.35%)

36-month LTV at 1.35% monthly churn:
= £299 / 0.0135 × (1 - 0.9865^36)      [note 0.9865^36 = 0.85^3 = 0.614]
= £22,148 × 0.386
≈ £8,550 [still not £18,000]

For the model to produce £18,000 LTV:
If 85% retention is annual: average customer lifetime = 1 / 0.15 = 6.67 years
Annual revenue: £299 × 12 = £3,588
Lifetime revenue = £3,588 × 6.67 ≈ £23,920 undiscounted
At 72% margin: £23,920 × 0.72 = £17,222 ≈ £18,000 ✓

CONCLUSION: The £18,000 LTV uses annual retention of 85%
(15% annual churn ≈ 1.35%/month), undiscounted over the full customer
lifetime rather than capped at 36 months.

LTV:CAC at current state:
= £18,000 / £5,000 = 3.6x ← healthy (above 3:1 minimum)

Payback period at current state:
Monthly gross profit per customer = £299 × 0.72 = £215.28/month
Payback period = £5,000 / £215.28 = 23.2 months
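The baseline unit economics above can be reproduced in a few lines (a sketch assuming the annual-retention reading of the 85% figure, as established above):

```python
# Current-state unit economics under 85% ANNUAL retention, undiscounted lifetime.
monthly_fee, gross_margin, annual_retention = 299.0, 0.72, 0.85

avg_lifetime_years = 1 / (1 - annual_retention)       # 6.67 years
ltv_revenue = monthly_fee * 12 * avg_lifetime_years   # ~ £23,920 undiscounted
ltv_gross_profit = ltv_revenue * gross_margin         # ~ £17,222 (~£18K rounded)

cac = 5_000.0
ltv_to_cac = ltv_gross_profit / cac                   # ~3.4x (3.6x using the rounded £18K)
monthly_gp = monthly_fee * gross_margin               # £215.28 gross profit / month
payback_months = cac / monthly_gp                     # ~23.2 months
```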

Step 2: Model the Scale Scenario — Three Cases

SCALE SCENARIO MODEL: £1.5M → £5M spend

Assumption: 25% CPA increase due to diminishing returns

BASE CASE (25% CPA increase as projected):
  New spend: £5,000,000/year
  New CPA: £5,000 × 1.25 = £6,250
  New customers: £5,000,000 / £6,250 = 800/year
  LTV:CAC: £18,000 / £6,250 = 2.88x ← BELOW 3:1 minimum; acceptable at growth stage
  Payback period: £6,250 / £215.28/month = 29 months (extended from 23 months)
  Incremental customers from additional spend: 800 - 300 = 500 new customers/year
  Incremental annual revenue (Year 1): 500 × £299 × 12 × [retention factor]
    = 500 × £3,588 × 0.85 (conservative in-year retention factor; Step 3 uses a 92.5% in-year average) = £1,524,900
  Gross profit from incremental customers (Year 1): £1,524,900 × 72% = £1,097,928
  Net contribution after additional media spend: £1,097,928 - £3,500,000 = -£2,402,072 (Year 1)
  Note: Negative in Year 1 is EXPECTED for subscription businesses.
  The payback comes in Years 2 and 3 as cohorts continue paying.

DOWNSIDE CASE (40% CPA increase — more severe diminishing returns):
  New CPA: £5,000 × 1.40 = £7,000
  New customers: £5,000,000 / £7,000 = 714/year
  LTV:CAC: £18,000 / £7,000 = 2.57x ← Warning zone
  Payback period: £7,000 / £215.28 = 32.5 months
  Incremental customers: 714 - 300 = 414
  Year 1 gross contribution: 414 × £3,588 × 0.85 × 0.72 = £908,000
  Net Year 1: £908,000 - £3,500,000 = -£2,592,000

UPSIDE CASE (10% CPA increase — better than expected):
  New CPA: £5,000 × 1.10 = £5,500
  New customers: £5,000,000 / £5,500 = 909/year
  LTV:CAC: £18,000 / £5,500 = 3.27x ← Healthy
  Payback period: £5,500 / £215.28 = 25.5 months
  Incremental customers: 909 - 300 = 609
  Year 1 gross contribution: 609 × £3,588 × 0.85 × 0.72 = £1,334,000
  Net Year 1: £1,334,000 - £3,500,000 = -£2,166,000
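All three cases follow the same mechanics, which can be captured in a small helper (a sketch; the function name and structure are illustrative):

```python
# Scenario helper reproducing the base / downside / upside cases above.
MONTHLY_GP = 299 * 0.72   # £215.28 gross profit per customer per month
LTV = 18_000              # rounded lifetime gross profit per customer

def scale_case(spend, base_cpa, cpa_uplift):
    cpa = base_cpa * (1 + cpa_uplift)
    return {
        "cpa": cpa,
        "customers": round(spend / cpa),
        "ltv_cac": round(LTV / cpa, 2),
        "payback_months": round(cpa / MONTHLY_GP, 1),
    }

base = scale_case(5_000_000, 5_000, 0.25)   # 800 customers, 2.88x, 29.0 months
down = scale_case(5_000_000, 5_000, 0.40)   # 714 customers, 2.57x, 32.5 months
up   = scale_case(5_000_000, 5_000, 0.10)   # 909 customers, 3.27x, 25.5 months
```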

SUMMARY TABLE FOR CFO:

Metric                    | Current  | Base Case | Downside | Upside
--------------------------|----------|-----------|----------|-------
Annual media spend        | £1.5M    | £5.0M     | £5.0M    | £5.0M
CPA                       | £5,000   | £6,250    | £7,000   | £5,500
New customers/year        | 300      | 800       | 714      | 909
LTV:CAC                   | 3.6x     | 2.88x     | 2.57x    | 3.27x
Payback period            | 23 mo    | 29 mo     | 33 mo    | 26 mo
Year 1 gross contribution | N/A      | £1.10M    | £0.91M   | £1.33M
Year 1 net contribution   | N/A      | -£2.40M   | -£2.59M  | -£2.17M
Year 3 cumulative revenue | N/A      | +£8.2M    | +£6.9M   | +£10.1M

Step 3: 3-Year Cohort Revenue Model (CFO-Level Detail)

Cohort model for base case (800 new customers at £5M spend):

Cohort of 800 customers acquired in Year 1:
Year 1 revenue: 800 × £299 × 12 × [avg retention in yr1]
  Average in-year retention: (100% + 85%) / 2 = 92.5%
  Year 1 revenue: 800 × £3,588 × 92.5% = £2,655,120

Year 2 revenue (same 800 cohort, retention applied):
  85% of original 800 = 680 customers still active
  Year 2 revenue: 680 × £299 × 12 = £2,439,840

Year 3 revenue (85% retention again):
  85% of 680 = 578 customers
  Year 3 revenue: 578 × £299 × 12 = £2,073,864

Total 3-year gross revenue from Year 1 cohort: £2,655,120 + £2,439,840 + £2,073,864 = £7,168,824
Total 3-year gross profit (72% margin): £5,161,553
Total spend to acquire: £5,000,000
Net 3-year contribution from Year 1 cohort alone: +£161,553

Add the Year 2 cohort (another 800 customers): within the 3-year window it contributes only its first two years of revenue (£2,655,120 + £2,439,840 = £5,094,960 gross; £3,668,371 gross profit against its own £5M spend), completing payback in Year 4
Cumulative position by end of Year 3: the Year 1 cohort is net positive and the Year 2 cohort is still in payback; materially positive on a lifetime-value basis, not yet on a cash basis

KEY INSIGHT FOR CFO: "The investment in paid media does not pay back in Year 1.
It pays back in Year 2–3. The question is whether the business has the cash runway
to fund 29 months of payback on the incremental cohort. At our current burn rate
and cash position of [£X], we can fund [Y months] of scale before requiring
additional financing."
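The cohort arithmetic above generalises to any cohort size; a minimal sketch assuming the same 92.5% in-year average for the acquisition year and 85% annual retention thereafter:

```python
def cohort_revenue_by_year(n_customers, monthly_fee=299.0,
                           annual_retention=0.85, years=3):
    """Gross revenue per calendar year for one acquisition cohort."""
    # Year 1: in-year average retention (100% + 85%) / 2 = 92.5%
    revenues = [n_customers * monthly_fee * 12 * (1 + annual_retention) / 2]
    active = n_customers
    for _ in range(years - 1):
        active *= annual_retention                 # headcount after each full year
        revenues.append(active * monthly_fee * 12)
    return revenues

rev = cohort_revenue_by_year(800)
gross_profit = sum(rev) * 0.72
net = gross_profit - 5_000_000   # small positive over 3 years, as stated above
```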

Step 4: Maximum Sustainable CPA Calculation

At what CPA does the investment become unacceptable?

The business requires LTV:CAC ≥ 3:1 for long-term sustainability
(Below 3:1 at growth stage is tolerable; below 2:1 is a red flag)

Maximum CAC at 3:1 minimum:
Max CAC = LTV / 3 = £18,000 / 3 = £6,000

At £6,000 CPA and £5M spend:
New customers = £5,000,000 / £6,000 = 833/year

Maximum CAC at 2:1 absolute floor:
Max CAC = £18,000 / 2 = £9,000

INTERPRETATION FOR CFO:
"Our sustainable CAC ceiling is £6,000 (maintaining 3:1 LTV:CAC).
The base case projects £6,250 CPA — £250 above the sustainable ceiling.

We recommend two mitigations before approving the full £5M:
1. A 90-day CRO programme targeting 2.0x conversion rate improvement
   (this reduces CPA by 50% on the conversion side, bringing it below £6,000
   even at 25% diminishing returns)
2. A phased budget increase: £2.5M in H1, review CPA, scale to £5M in H2 only
   if CPA remains below £6,000"
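A compact CPA × churn sensitivity grid makes the ceiling discussion concrete (a sketch; note the exact formula yields an LTV of ≈£17.2K at 15% churn, slightly below the rounded £18K used in the prose above):

```python
# LTV:CAC sensitivity across annual churn rates and CPA levels.
ANNUAL_REVENUE, MARGIN = 299 * 12, 0.72
CPA_LEVELS = (5000, 5500, 6000, 6250, 7000)

print("churn | " + " | ".join(f"£{cpa:,}" for cpa in CPA_LEVELS))
for churn in (0.10, 0.15, 0.20):
    ltv = ANNUAL_REVENUE / churn * MARGIN    # undiscounted lifetime gross profit
    cells = [f"{ltv / cpa:.2f}x" for cpa in CPA_LEVELS]
    print(f"{churn:.0%}   | " + " | ".join(cells))
```

Reading across a row shows how quickly a churn deterioration erodes the sustainable CPA ceiling, which is the point of the churn-risk caveat in the slides below.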

Step 5: Present to CFO — Three-Slide Model Summary

Slide 1: Current State and Benchmark
- Current CPA: £5,000 | LTV: £18,000 | LTV:CAC: 3.6x | Payback: 23 months
- Benchmark: Healthy SaaS LTV:CAC > 3:1, Payback < 24 months
- Status: Healthy on LTV:CAC (3.6x vs. the 3:1 benchmark); at the boundary on payback (23 vs. 24 months)

Slide 2: Scale Scenario Analysis
[Table from Step 2 — three cases]
- Base case: 2.88x LTV:CAC, 29 months payback — tolerable at growth stage
- Downside: 2.57x — warning zone; trigger point for scale reduction
- Upside: 3.27x — healthiest outcome
- Risk: If annual churn worsens from 15% to 20% at scale (lower quality customers),
  LTV drops from £18K to £13.5K; at base case CPA, LTV:CAC drops to 2.16x —
  barely above the 2:1 absolute floor, and below it at the downside CPA
  (£13.5K / £7,000 = 1.93x)

Slide 3: Recommendation
Phase 1 (Q1): Scale to £2.5M, implement CRO programme (target CPA reduction to £4,200)
Phase 2 (Q2): If CPA is confirmed below £5,500 at £2.5M, scale to £3.5M
Phase 3 (Q3): If LTV:CAC remains above 3:1 at £3.5M, scale to £5M
Phase 4 (Q4): Full £5M deployment with validated diminishing returns model

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: Working through the LTV calculation to verify the £18,000 figure — and identifying that it implies annual (not monthly) 85% retention — demonstrates the financial rigour that prevents a model built on an incorrect assumption from being presented to the CFO. The three-scenario model (base/downside/upside) with explicit key assumption per scenario shows the financial communication discipline that CFOs expect: not a single-point estimate, but a range with the key variable clearly identified. The phased budget recommendation (£2.5M → £3.5M → £5M with CPA gate criteria at each step) is the risk management approach that protects the business from a scenario where the full £5M is deployed into a market that proves more expensive than projected.

What differentiates it from mid-level thinking: A mid-level performance marketer would calculate CPA at the new spend level and report "800 customers at £6,250 CPA" without modelling the LTV:CAC ratio, the payback period, the maximum sustainable CPA ceiling, or the Year 3 cohort profitability that makes the case for why the negative Year 1 contribution is acceptable. They would not design the phased scale with CPA gate criteria, and would not identify the churn sensitivity analysis that shows how a retention rate decline from 15% to 20% annual churn changes the entire investment thesis.

What would make it a 10/10: A 10/10 response would include a fully built Excel/Google Sheets cohort model with month-by-month revenue and gross profit for all three cohorts (Year 1, Year 2, Year 3 acquisitions), a sensitivity table showing LTV:CAC at five different CPA levels × five different churn rates, and a cash flow projection showing when the incremental spend becomes cash-flow positive on a cumulative basis.



Question 8: Affiliate and Partnership Marketing — Building an Affiliate Programme That Doesn't Cannibalise Paid Search

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: MoneySuperMarket, Quidco, TopCashback, GoCompare, Confused.com


The Question

You are a Senior Performance Marketing Manager at a UK personal finance brand offering a credit card comparison service. You currently run Google Ads, Meta Ads, and SEO, but have no affiliate programme. The Head of Growth wants to launch an affiliate programme to expand reach and drive additional acquisitions without increasing direct media spend. The proposal: recruit affiliate publishers (cashback sites like Quidco, TopCashback; comparison sites like MoneySuperMarket, Compare the Market; financial content publishers like MoneySavingExpert) and pay a commission of £35 per completed application. Your analysis raises five concerns: (1) cashback affiliates drive users who would have converted anyway through your direct channels — pure commission cost with no incremental lift; (2) comparison site affiliates currently appear below you in Google search results; if they join your affiliate programme they have financial incentive to bid aggressively on your branded keywords, cannibalising your paid search and inflating your CPA; (3) your current average CPA from Google Ads is £28 — paying affiliates £35 per application makes the affiliate channel more expensive than your existing channels; (4) there is no incremental attribution framework — you cannot distinguish between an affiliate driving a genuinely new customer vs. an affiliate "tagging along" at the end of a customer journey that started with one of your own ads; (5) affiliate fraud is prevalent in the financial services sector — publishers have been known to generate fake applications, use cookie stuffing (setting affiliate tracking cookies without genuine referrals) to claim commissions, and deploy click farms to generate fraudulent conversion signals. Design an affiliate programme structure that avoids these five pitfalls while generating genuinely incremental customer acquisitions.


1. What Is This Question Testing?

  • Incrementality vs. attribution cannibalism in affiliate programmes — understanding the fundamental problem with last-click affiliate attribution: an affiliate publisher places a cookie on the user's browser when the user visits the publisher's site; if the user subsequently converts (even if they were already planning to convert before visiting the affiliate), the affiliate receives the commission — even though they provided zero incremental value; knowing how to measure incrementality in affiliate: publisher-specific holdout tests (pause one publisher's placement for 2 weeks and measure whether total conversion volume drops proportionally — if it does not, the affiliate was providing little incremental lift)
  • Branded keyword bidding and affiliate agreement terms — understanding that affiliate publishers in the comparison/cashback space have strong incentive to bid on your branded search terms (e.g., "YourBrand credit card") because branded terms convert at high rates and the affiliate earns the commission even though the customer had already decided to use your product; knowing that the solution is a contractual clause in the affiliate agreement: "Affiliates are prohibited from bidding on [Brand Name], [Brand Name] + any modifier, and common misspellings of [Brand Name] in paid search; violation results in immediate programme termination and clawback of commissions earned in the violation period"
  • Commission structure design and CPA alignment — understanding that a flat £35 CPA commission may be economically worse than your existing channels (current Google Ads CPA is £28); knowing how to design a commission structure that makes the affiliate channel incremental: tiered commission (lower commission for conversion paths that include a prior touchpoint from your own channels; higher commission for truly cold acquisitions with no prior touchpoint); new customer only commission (pay only for customers who have not previously applied or converted through direct channels — verified via CRM matching); quality-gated commission (pay commission only after the applicant completes a quality step — e.g., the applicant is approved for the card, not just applied — reducing incentive to drive low-quality applications)
  • Affiliate fraud detection in financial services — understanding the specific fraud vectors in financial services affiliate programmes: (a) application fraud (submitting fictitious applications to generate commission — mitigated by only paying commission after approval, not application); (b) cookie stuffing (an affiliate injects their tracking cookie into a user's browser without a genuine referral by hiding a pixel on an unrelated website — mitigated by requiring the affiliate referral URL to be in the referral session); (c) click farms (generating fraudulent clicks from automated systems to claim CPA commissions — mitigated by device fingerprinting, IP analysis, and conversion rate monitoring — a publisher converting at 40% when the site average is 2% is likely generating fraudulent conversions); (d) lead duplication (submitting the same application multiple times with minor variations — mitigated by deduplication on email, phone, and date of birth in the CRM before confirming commission)
  • Affiliate platform and network selection — knowing the major affiliate networks in the UK: AWIN (largest network in UK/Europe, strong financial services publisher base, advanced tracking and fraud tools), Impact.com (modern platform, strong incrementality measurement tools, better suited to in-house management), CJ Affiliate (Commission Junction, strong US presence, good for international expansion), and Partnerize (enterprise-grade, strong financial services clients); knowing that networks charge both a setup fee (£5K–15K) and a percentage of commissions paid (20–30% override — meaning a £35 commission to the publisher costs you £42–45.50 including the network override)
  • Publisher recruitment strategy — understanding the difference between quantity and quality in affiliate recruitment: recruiting 500 low-quality affiliates (personal finance bloggers with minimal traffic, cashback sites with 500 members) generates high programme management overhead with minimal incremental revenue; recruiting 10–15 tier-1 publishers (MoneySavingExpert, Which?, MoneySuperMarket, Uswitch, Quidco, TopCashback) with strict contractual terms and ongoing performance management generates the majority of programme revenue with manageable complexity

2. Framework: Incremental Affiliate Programme Design and Fraud Prevention Model (IAPDFPM)

  1. Assumption Documentation — Define "incremental customer" before launching the programme: a customer is incremental if they have (a) never previously visited your site (no existing cookie/session), (b) no existing application or account in your CRM, and (c) came from the affiliate's referral URL in their session history (not just a last-click cookie set by a publisher they visited 30 days ago); this definition must be codified in the affiliate agreement and enforced technically through CRM deduplication
  2. Constraint Analysis — At £35/application commission + 25% network override fee, the effective CPA per affiliate conversion is £43.75 — 56% more expensive than the current £28 Google Ads CPA; the programme is only financially justified if: (a) the affiliate channel reaches audiences that Google Ads and SEO cannot reach (net new audience, not overlapping), or (b) the affiliate channel operates at higher volume that offsets the higher per-unit CPA through greater scale
  3. Tradeoff Evaluation — Open affiliate programme (any publisher can join, maximum volume, minimal barrier to entry, high fraud risk, high management overhead) vs. closed/curated programme (invite only, 10–15 tier-1 publishers, maximum control, lower fraud risk, lower volume); for a financial services brand where regulatory compliance (FCA requirements on financial promotions) and fraud prevention are paramount, a closed/curated programme is mandatory; open programmes in financial services are a regulatory and fraud liability
  4. Hidden Cost Identification — Running an affiliate programme requires a dedicated programme manager (in-house at £40K–60K/year, or outsourced to an affiliate management agency at £2K–4K/month); network fees (£5K–15K setup + 25% override on all commissions); legal fees for affiliate agreement drafting (£2K–5K); fraud monitoring tools (TrafficGuard, PerformLine: £500–2,000/month); first-year total infrastructure cost: £80K–120K before any commission is paid
  5. Risk Signals / Early Warning Metrics — Publisher conversion rate (alert if any publisher's conversion rate is >3x the site average — indicates possible fraud); commission-to-application ratio (alert if a publisher submits >100 applications in a week without prior historical volume — unusual spike suggests click farm activity); branded keyword bidding detection (set up a Google Ads Search Terms alert or use a tool like AdSafe/BrandVerity that monitors branded keyword bidding by affiliates daily); new vs. returning customer rate by publisher (alert if any publisher's traffic is >30% returning customers — they are claiming commission on existing customers)
  6. Pivot Triggers — If after 6 months of the affiliate programme, the incremental customer rate (verified via holdout test) is below 50% (i.e., more than half of affiliate-attributed conversions would have occurred anyway without the affiliate): the programme is net-negative (spending £35–44 per conversion that costs £28 via direct channels, for customers who were going to convert anyway); pause the programme, renegotiate commission terms, and restructure toward quality-gated or new-customer-only commission
  7. Long-Term Evolution Plan — Month 1–2: AWIN network setup, legal agreement drafting, fraud monitoring tools; Month 2–3: recruit 5 tier-1 publishers (MoneySuperMarket, Uswitch, Quidco, TopCashback, Which?); Month 3–4: soft launch with strict fraud monitoring; Month 5: incremental holdout test (pause one affiliate, measure impact); Month 6: full programme review, commission optimisation, expand to 10 publishers if incrementality is confirmed

3. The Answer

Step 1: Commission Structure — Align Incentives with Incrementality

Replace the flat £35 commission with a structure that rewards genuine incremental acquisitions:

COMMISSION STRUCTURE:

Tier 1 — New Customer, No Prior Touchpoint (truly incremental):
Commission: £45
Qualification:
  ✓ Applicant email does not exist in CRM (new customer)
  ✓ No prior visit to your site in last 90 days (session history check via 1st party data)
  ✓ Referral URL is from the affiliate's site (direct referral, not last-click from a retargeting pool)
  ✓ Application approved by underwriting (not just submitted)
Rationale: These customers would not have converted without the affiliate —
           pay a premium for true incrementality

Tier 2 — New Customer, Prior Direct Touchpoint:
Commission: £20
Qualification:
  ✓ Applicant email does not exist in CRM (new customer)
  ✗ Prior visit to your site exists (customer was in your funnel already)
  ✓ Application approved by underwriting
Rationale: The affiliate contributed to the journey but did not originate it —
           partial credit at a lower commission

Tier 3 — Existing Customer or Unverified:
Commission: £0
Qualification:
  ✗ Applicant already exists in CRM (prior customer, no incremental value)
  OR: Application not approved (low quality / fraudulent applications)
Rationale: Zero incremental value; zero commission

PUBLISHER AGREEMENT CLAUSE:
"[Brand] pays commission only on Tier 1 or Tier 2 applications as defined above.
Applications from existing customers, applications not approved by underwriting,
and applications flagged by our fraud monitoring systems are excluded from
commission payment. Publishers acknowledge that commission payments are final
only after a 30-day fraud review period."
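The tier rules reduce to a small decision function (a sketch; the boolean field names are hypothetical, and payment would still be held for the 30-day fraud review period):

```python
def commission_tier(in_crm: bool, prior_visit_90d: bool,
                    approved: bool, fraud_flagged: bool) -> int:
    """Return the commission (£) owed for one application under the tiers above."""
    if fraud_flagged or in_crm or not approved:
        return 0              # Tier 3: existing customer / unapproved / fraud-flagged
    if prior_visit_90d:
        return 20             # Tier 2: new customer, prior direct touchpoint
    return 45                 # Tier 1: truly incremental new customer
```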

Step 2: Branded Keyword Protection — Non-Negotiable Contractual Clause

Every affiliate agreement must include:

PROHIBITED ACTIVITIES (extract from affiliate agreement):

6.1 Paid Search Restrictions

Publisher agrees not to engage in any of the following without prior written
consent from [Brand]:

(a) Bidding on any paid search keyword that contains [Brand Name], any
    misspelling of [Brand Name], or any combination of [Brand Name] with
    other terms, in any search engine or platform, including but not limited
    to Google, Bing, Yahoo, and DuckDuckGo.

(b) Bidding on any search keyword that directs traffic to a landing page that
    includes [Brand Name] or redirects to [Brand]'s website.

(c) Setting display URLs in paid search ads to any URL that includes [Brand Name].

6.2 Monitoring and Enforcement

[Brand] uses automated brand monitoring software (BrandVerity / Adthena) that
continuously monitors paid search auction landscapes for compliance with Section 6.1.

Violations will result in:
  - Immediate programme suspension pending investigation
  - Clawback of all commissions earned during the violation period
  - Programme termination for repeated violations

Publisher is responsible for ensuring all sub-affiliates (where permitted)
also comply with Section 6.1.

Enforcement tooling: BrandVerity (£500–1,500/month) monitors Google, Bing, and Yahoo auction landscapes 24/7 for any publisher bidding on protected brand terms and generates daily alerts for violations.
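A toy version of the daily compliance scan (the record shape and brand terms are hypothetical; in practice the records would come from the monitoring tool's report export, not be built in-house):

```python
# Flag affiliates bidding on protected brand terms in observed paid-search auctions.
BRAND_TERMS = ("yourbrand", "your brand", "yourbarnd")  # brand + common misspellings

def flag_brand_bidding(observed_ads, own_advertiser="yourbrand"):
    # observed_ads: [{"advertiser": str, "keyword": str}, ...] from auction monitoring
    return [ad for ad in observed_ads
            if ad["advertiser"].lower() != own_advertiser
            and any(t in ad["keyword"].lower() for t in BRAND_TERMS)]
```

Each flagged record would feed the suspension and clawback process defined in Section 6.2.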

Step 3: Fraud Prevention Framework

Layer four complementary fraud detection methods:

METHOD 1: Approval-Gated Commission
Only pay commission after the application is approved by underwriting.
Benefit: Eliminates incentive to submit fake/fraudulent applications
(a rejected application earns £0 commission regardless of how many are submitted)

METHOD 2: CRM Deduplication Before Commission Confirmation
Before confirming any commission, run the applicant's data against the CRM:
  Match on: Email address, phone number, date of birth, full name
  If any match: Commission denied (existing customer, no incremental value)
  Deduplication must occur within the 30-day holding period before commission is released

METHOD 3: Publisher Conversion Rate Monitoring
Run weekly analysis of each publisher's conversion rate:
  Site-wide average conversion rate: 2.0%
  Alert threshold: Publisher conversion rate > 6.0% (3x site average)

  If publisher X shows 15% conversion rate:
    → Likely fraud (click farms, cookie stuffing, fake applications)
    → Immediate commission hold + investigation
    → Request publisher to provide traffic quality evidence

METHOD 4: Traffic Quality Audit (new publishers)
Before confirming commission payment to any new publisher in their first 30 days:
  1. Sample 50 approved applications from that publisher
  2. Cross-check against:
     - IP address geolocation (are all applications from the same IP range? = fraud)
     - Device fingerprint diversity (are all applications from identical devices? = fraud)
     - Application completion time (all under 90 seconds? = bot activity)
     - Email domain patterns (all from disposable email domains? = fraud)
  3. If >10% of sampled applications show fraud signals:
     → Hold all commissions pending full review
     → Escalate to fraud team
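The first-line checks from Methods 3 and 4 can be sketched as a weekly job (thresholds from the text; the stats dictionary shape is hypothetical):

```python
def weekly_fraud_checks(pub, site_cr=0.02):
    """Return alert strings for one publisher's weekly stats (hypothetical dict shape)."""
    alerts = []
    cr = pub["conversions"] / max(pub["clicks"], 1)
    if cr > 3 * site_cr:  # Method 3: conversion rate above 3x the site average
        alerts.append(f"conversion rate {cr:.1%} exceeds 3x site average")
    if pub["apps_this_week"] > 100 and pub["avg_weekly_apps"] < 20:
        alerts.append("volume spike with no history: possible click-farm activity")
    return alerts

suspect = {"clicks": 1_000, "conversions": 150,
           "apps_this_week": 150, "avg_weekly_apps": 5}
# 15% conversion rate plus a cold-start volume spike: both alerts fire
```

Any alert would trigger the commission hold and investigation described in Method 3.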

Step 4: Incremental Attribution — Holdout Test Protocol

To confirm that each affiliate is driving genuinely incremental conversions (not just claiming credit for conversions that would have happened anyway):

HOLDOUT TEST DESIGN (per publisher, once per quarter):

Setup:
- Work with the publisher to suppress your listing/offer for a randomly selected 10%
  of their logged-in audience for 2 weeks (feasible for cashback and comparison sites,
  which have member accounts)
- The suppressed "holdout" group never sees the affiliate placement; the remaining 90%
  ("treatment" group) see it as normal
- Note: suppressing only the tracking cookie is not a valid design: users who still
  see the offer convert at the same rate whether or not they are tracked
- Match both groups' subsequent applications back to the CRM to measure conversion rates

Measurement:
- Holdout group conversion rate: [X%] (no affiliate exposure)
- Treatment group conversion rate: [Y%] (affiliate exposure, commission credited)
- Relative lift = (Y - X) / X × 100
- Incremental share of attributed conversions = (Y - X) / Y

Interpretation:
If Y = 2.8% and X = 2.0%:
  Relative lift = (2.8 - 2.0) / 2.0 = 40%
  Incremental share = (2.8 - 2.0) / 2.8 = 28.6%
  Only ~29% of affiliate-attributed conversions are genuinely incremental;
  ~71% would have converted anyway
  Effective incremental CPA = £35 / 0.286 = £122.50 per truly incremental conversion

This result: the publisher's commission should be renegotiated, or the publisher removed,
because the effective incremental CPA (£122.50, vs. £28 via direct channels) far exceeds
any sustainable CPA ceiling

If Y = 0.5% and X = 0.4%:
  Incremental share = (0.5 - 0.4) / 0.5 = 20%
  Effective incremental CPA = £35 / 0.20 = £175 — clearly not viable
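The holdout arithmetic in one helper (a sketch; the key distinction is that relative lift is measured against the holdout baseline, while the incremental share of attributed conversions is measured against the treatment rate):

```python
def incrementality(commission, cr_treatment, cr_holdout):
    """Effective incremental CPA from a publisher holdout test."""
    lift = (cr_treatment - cr_holdout) / cr_holdout     # relative lift vs. baseline
    share = (cr_treatment - cr_holdout) / cr_treatment  # incremental share of attributed
    return {"lift": lift, "incremental_share": share,
            "effective_cpa": commission / share}        # cost per truly incremental sale

r = incrementality(35.0, 0.028, 0.020)
# lift = 40%, incremental share ~28.6%, effective CPA = £122.50
```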

Step 5: Publisher Recruitment — Quality Over Quantity

TIER 1 PUBLISHERS (invite-only, immediate priority):

Publisher: MoneySavingExpert (MSE)
  - Type: Financial content + comparison
  - Monthly UK visitors: 9M+
  - Audience: Actively seeking financial product comparisons
  - Commission structure: Flat fee per approved application (negotiate jointly)
  - Brand safety: Exceptionally high (Martin Lewis is UK's most trusted money voice)
  - Risk: Branded keyword bidding (not applicable — MSE uses editorial links, not paid search)
  - Incrementality expectation: HIGH (MSE audience actively comparing products)

Publisher: MoneySuperMarket / Confused.com
  - Type: Price comparison website (PCW)
  - Monthly UK visitors: 8M+
  - Risk: HIGH branded keyword bidding risk — must enforce clause 6.1 strictly
  - Commission: Negotiate tiered (lower for returning/existing customers)
  - Incrementality expectation: MEDIUM (some users comparison shop widely, some are close-to-decision)

Publisher: Quidco and TopCashback
  - Type: Cashback platforms
  - Audience: Deal-seeking, high application volume, lower average quality
  - Risk: HIGH cannibalism risk (cashback users often have existing intent,
    the cashback just confirms the decision)
  - Commission: Pay ONLY Tier 1 (new customer, no prior touchpoint)
  - Require holdout test in month 1 to verify incrementality before scaling
  - Incrementality expectation: LOW to MEDIUM (expect 30-50% incremental rate)

Publisher: Which? Magazine
  - Type: Consumer journalism + product reviews
  - Audience: Research-intensive, high-trust, high conversion quality
  - Commission: Premium (£45 Tier 1 rate) — this audience is high-quality
  - Incrementality expectation: HIGH (users visiting Which? are comparison-shopping,
    not searching for a specific brand)

DO NOT RECRUIT:
  - Generic personal finance bloggers (low traffic, low incremental value, high management overhead)
  - Any publisher with a history of brand keyword bidding violations on other programmes
  - Any publisher operating in non-FCA regulated markets (regulatory liability)
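The shortlist logic can be formalised as a simple weighted scorecard (all weights and scores below are illustrative assumptions, not figures from the programme):

```python
# Illustrative publisher scorecard: 1-5 scores per dimension; fraud risk counts against.
WEIGHTS = {"audience": 0.25, "incrementality": 0.35,
           "brand_safety": 0.25, "fraud_risk": -0.15}

def publisher_score(scores):
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

mse      = publisher_score({"audience": 5, "incrementality": 5,
                            "brand_safety": 5, "fraud_risk": 1})
cashback = publisher_score({"audience": 4, "incrementality": 2,
                            "brand_safety": 3, "fraud_risk": 3})
# MSE scores well above the cashback platforms, matching the tiering above
```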

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The tiered commission structure (£45 for genuinely incremental new customers, £20 for new customers with prior touchpoints, £0 for existing customers or unapproved applications) directly addresses the incentive misalignment in flat-rate affiliate commissions — by paying more for demonstrably incremental traffic and nothing for claimed-but-not-incremental traffic, the structure eliminates the economic incentive for publishers to engage in cookie stuffing or to target existing customers. The holdout test design — with the specific calculation of effective incremental CPA (a £35 commission at a ~29% incremental share ≈ £122.50 true cost per incremental conversion) — is the measurement tool that transforms affiliate attribution from a guessing game into a financially defensible channel analysis.

What differentiates it from mid-level thinking: A mid-level performance marketer would launch a flat-rate affiliate programme, discover 6 months later that cashback sites are claiming commission on existing customers, and have no framework to measure how much waste has occurred. They would not design the tiered commission structure, would not include the branded keyword bidding clause with automated monitoring, would not set up the holdout test protocol, and would not understand the effective incremental CPA calculation that reveals the true cost of affiliate conversions.

What would make it a 10/10: A 10/10 response would include a full affiliate agreement term sheet with all non-standard clauses (branded keyword restriction, fraud clawback, holdout test consent, FCA regulated financial promotions compliance requirement), a publisher recruitment scorecard (weighted scoring on audience size × incrementality expectation × brand safety × fraud risk), and a monthly affiliate programme P&L template tracking commissions paid, holdout-adjusted incremental acquisitions, and effective incremental CPA by publisher.



Question 9: Creative Strategy and Ad Testing at Scale — Building a Creative Testing Machine That Identifies Winners and Retires Losers

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: ASOS, Gymshark, Charlotte Tilbury, Huel, Grind Coffee


The Question

You are a Senior Performance Marketing Manager at a DTC fashion brand spending £300,000/month on Meta Ads. Creative fatigue is your biggest operational challenge — you have identified that your top-performing creatives have an average lifespan of 21 days before their CTR declines significantly, yet your creative production team takes 15 days to produce a new batch of creatives (brief → photography/videography → editing → approval). This means you are in a constant cycle of running fatigued ads because new creative is not ready before the old creative is exhausted. Additionally: (1) you currently test 2–3 new creatives per month in an unstructured way (no hypothesis, no control group, no statistical significance threshold) — you pick "winners" based on gut feel after 3–4 days of data, leading to frequent false positives where an ad appears to be winning early but regresses when scaled; (2) your creative team briefs are descriptive ("lifestyle photo of a woman wearing the summer collection on a beach") rather than hypothesis-driven ("we believe showing the product being worn in a social/party context outperforms the current beach lifestyle context because our customer's primary purchase trigger is wanting to look good for social occasions"); (3) your current creative mix is 80% studio photography and 20% video, but industry benchmarks for DTC fashion show 40% video + 30% UGC (user-generated content) + 30% static outperforming the current mix; (4) you have no process for retiring creative — ads are paused only when someone manually notices poor performance, meaning fatigued ads often run for 10–15 days past their optimal performance window, wasting £40,000–60,000/month in media spend on declining creative. Design a creative testing system and production process that eliminates the fatigue gap, produces statistically valid test results, and shifts the creative mix toward higher-performing formats.


1. What Is This Question Testing?

  • Statistical validity in creative testing — understanding that 3–4 days of data is almost never sufficient to reach statistical significance in an A/B test on Meta; even at £300K/month total spend, a single new creative in a dedicated test ad set may accrue only a few thousand impressions in 3 days — far too few to distinguish a genuine performance difference from random variation; knowing the statistical requirements: minimum 100 conversions per variant for a conversion-based test (CPA, ROAS), or minimum 500 clicks per variant for a click-based test (CTR, CPC); knowing that Meta's own "Learning Phase" requires 50 optimisation events per ad set per week before the algorithm has sufficient data to optimise efficiently
  • Hypothesis-driven creative briefing — understanding that descriptive creative briefs ("show a woman on a beach") produce creative that cannot generate insights — even if the creative performs well, you don't know why it performed well, so you cannot improve the next brief; hypothesis-driven briefs ("we believe social context outperforms beach context because our customer's primary purchase trigger is social occasion") generate testable predictions; when the test confirms the hypothesis, you have a replicable principle (always use social context for this audience); when the test rejects it, you have a falsified assumption (social context does not outperform beach — investigate why)
  • Creative production pipeline and lead time management — understanding that a 15-day production lead time against a 21-day creative lifespan leaves only a 6-day margin, and because briefs are typically raised only once fatigue is already visible, fatigued ads in practice run for most of the 15-day lead time with no replacement ready; the solution requires either (a) reducing production lead time (faster brief → shoot → editing → approval process), (b) maintaining a "creative buffer" (always have 2–3 approved creatives waiting in the queue before they are needed), or (c) using faster-to-produce creative formats (UGC from existing customers, lo-fi video shot on iPhone, still images with motion graphics) that can be produced in 3–5 days rather than 15 days
  • Creative retirement triggers and automation — understanding that manual creative retirement (pausing only when someone notices poor performance) is a reactive process that consistently loses £40K–60K/month; the solution is automated creative retirement rules in Meta Ads Manager: "Pause this ad if CTR drops below 0.8% for 7 consecutive days" or "Pause this ad if CPC rises above £1.50 for 7 consecutive days"; Meta Ads Manager's automated rules can be configured to pause, send an alert, or reduce budget automatically when performance metrics cross defined thresholds
  • UGC (User-Generated Content) sourcing and integration — understanding that UGC (video or photo content created by real customers, influencers, or brand partners in an authentic, non-studio style) consistently outperforms studio photography in DTC performance marketing because it looks native to the platform (Instagram, Facebook, TikTok show UGC as posts — ads that look like posts have lower "ad blindness" resistance from users); knowing how to source UGC at scale: existing customer outreach (email to top customers asking to share content in exchange for discount codes), micro-influencer partnerships (100–10,000 followers, higher engagement, lower cost), UGC platforms (Insense, Billo — connect brands with content creators who produce UGC for a flat fee of £50–200 per video)
  • Creative learning velocity and the testing throughput problem — understanding that the rate at which a creative testing programme improves performance is directly proportional to the number of tests run per month (testing throughput) × the quality of each test (statistical validity); most DTC brands run 2–5 tests per month at insufficient statistical rigour (high throughput, low quality) or 1–2 tests per month at full statistical rigour (low throughput, high quality); the optimal approach is systematic testing at moderate rigour (sufficient for directional decisions) combined with a small number of high-rigour confirmation tests for major creative decisions (shifting the entire creative strategy from studio photography to UGC)
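The sample-size point in the first bullet can be checked directly with a two-proportion z-test — a standard statistical approximation, not a Meta feature. The figures below are illustrative, not account data: the same 20% CTR lift that is inconclusive at 3 days of impressions becomes decisive with ten times the sample.

```python
from math import sqrt, erf

def ctr_confidence(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test: confidence that two observed CTRs genuinely differ."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    # Two-sided confidence level from the normal CDF
    return erf(z / sqrt(2))

# ~3 days of data: 2,000 impressions per variant, CTRs of 3.0% vs 3.6%
early = ctr_confidence(60, 2000, 72, 2000)      # falls short of the 80% bar
# Same CTRs with 20,000 impressions per variant
mature = ctr_confidence(600, 20000, 720, 20000)
print(f"3-day read: {early:.0%} confidence; longer read: {mature:.0%}")
```

This is why "pick a winner after 3–4 days on gut feel" produces the false positives described in the question: the early read is well below any defensible confidence threshold.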

2. Framework: Creative Testing Machine and Production Pipeline Optimisation Model (CTMPOM)

  1. Assumption Documentation — Confirm the primary optimisation metric: is the creative test measured on CTR (click-through rate — measures engagement, not conversion), CPC (cost per click — measures efficiency of engagement), ROAS (revenue per pound spent — measures full-funnel value), or CPP (cost per purchase — measures acquisition efficiency); CTR is the fastest metric to reach significance (requires fewer impressions) but does not predict ROAS; ROAS is the most valuable metric but requires the most conversions to reach significance; use CTR as a screening metric (fast, cheap) and ROAS as a confirmation metric (slower, more expensive, only for top CTR performers)
  2. Constraint Analysis — £300K/month is roughly £10K/day; at typical DTC fashion Meta CPMs of £10–15, that is approximately 650,000–1,000,000 impressions/day across the account, but a dedicated testing ad set at £250/day receives only around 17,000–25,000 impressions/day; at a 3% CTR, 500 clicks per creative variant (sufficient for CTR-based decisions) accrue within 1–2 days, whereas 100 conversions per variant (sufficient for ROAS-based decisions) take several weeks at a 0.5% click-to-purchase rate; plan each test's duration — and which metric it is allowed to decide on — accordingly
  3. Tradeoff Evaluation — Test many creatives simultaneously (high throughput, but Meta's algorithm cannot optimise effectively within a single campaign when too many ad sets compete for budget — the "too many variables" problem) vs. test one or two new creatives per week against a proven control (lower throughput, cleaner test, higher statistical validity); the structured approach (one to two new creatives per week against a stable control) is correct for a programme at this stage
  4. Hidden Cost Identification — Moving to 40% video + 30% UGC + 30% static requires a significant shift in creative production budget; video production at professional quality costs £2K–8K per video; UGC from a platform like Insense costs £50–200 per creator × 20 creators/month = £1K–4K/month; the total shift in creative production budget from the current studio photography-heavy approach to the recommended mix may cost an additional £10K–20K/month; this must be presented as an investment alongside the expected ROAS improvement
  5. Risk Signals / Early Warning Metrics — Creative buffer status (always have 5+ approved creatives waiting to go live; alert if buffer drops below 3); average creative lifespan trend (if creative lifespan is decreasing month-over-month — from 21 days to 17 days — the audience is becoming more saturated and creative frequency needs to increase); UGC performer rate (what percentage of UGC content passes the CTR screening threshold? alert if below 30% — UGC sourcing or briefing needs improvement)
  6. Pivot Triggers — If after 3 months of the new testing system, ROAS has not improved despite running 12+ tests: the problem may not be creative — it may be audience saturation, landing page performance, or offer resonance; pivot to a month of audience testing (same creative, different audience segments) to isolate whether creative or audience is the binding constraint
  7. Long-Term Evolution Plan — Month 1: implement automated retirement rules, establish creative buffer, launch hypothesis-driven brief template; Month 2: shift creative production budget toward UGC (20 UGC pieces commissioned); Month 3: run first systematic creative format test (studio vs. UGC vs. video — statistically valid); Month 4: implement findings, expand best format; Month 6: full creative testing machine running at cadence (2–3 new creatives/week, automated retirement, 3-month creative learning backlog)
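The constraint analysis above reduces to simple arithmetic that is worth running before committing to a test calendar. A minimal sketch, with illustrative inputs (budget split, CPM, CTR, and conversion rate are assumptions, not account figures):

```python
def days_to_sample(daily_budget, cpm, ctr, cvr,
                   target_clicks=500, target_convs=100):
    """Days of spend needed to hit click- and conversion-based sample targets."""
    imps_per_day = daily_budget / cpm * 1000      # impressions bought per day
    clicks_per_day = imps_per_day * ctr
    convs_per_day = clicks_per_day * cvr
    return (target_clicks / clicks_per_day, target_convs / convs_per_day)

# Illustrative: £250/day per variant, £12.50 CPM, 3% CTR, 0.5% click-to-purchase
click_days, conv_days = days_to_sample(250, 12.5, 0.03, 0.005)
print(f"CTR read ready in ~{click_days:.1f} days; ROAS read in ~{conv_days:.0f} days")
```

The asymmetry is the operational lesson: CTR screening decisions arrive in days, while ROAS confirmation at the same budget takes weeks — which is exactly why the framework uses CTR to screen and ROAS only to confirm top performers.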

3. The Answer

Step 1: Implement Automated Creative Retirement Rules (Week 1)

Stop the £40K–60K/month waste immediately with automated retirement:

In Meta Ads Manager → Automated Rules → Create Rule:

RULE 1: CTR Fatigue Detection
Name: "Pause — CTR Fatigue"
Apply to: All active ads in [Campaign Name]
Action: Pause ad
Conditions:
  - CTR < 0.8% (based on last 7 days)
  AND Impressions > 5,000 (ensure sufficient data before pausing)
  AND Age of ad > 7 days (don't pause new ads before they've had time to learn)
Schedule: Check daily at 8:00 AM

RULE 2: CPC Escalation Alert
Name: "Alert — CPC Rising"
Apply to: All active ads
Action: Send email notification to [performance-team@brand.com]
Conditions:
  - CPC > £1.80 (based on last 7 days)
  AND Impressions > 3,000
Schedule: Check daily at 8:00 AM

RULE 3: ROAS Floor
Name: "Pause — ROAS Below Floor"
Apply to: All active ads
Action: Pause ad
Conditions:
  - Purchase ROAS < 2.0 (based on last 14 days)
  AND Spend > £500 (minimum spend before making a decision)
  AND Age of ad > 14 days
Schedule: Check weekly on Monday at 9:00 AM

RULE 4: Budget Reallocation to Top Performer
Name: "Increase Budget — Top ROAS Performer"
Apply to: All active ad sets
Action: Increase daily budget by 20%
Conditions:
  - Purchase ROAS > 5.0 (based on last 7 days)
  AND Spend > £300 in last 7 days
Schedule: Check weekly on Monday at 9:00 AM
Maximum: No more than 2 budget increases per ad set per week

These four rules replace the manual monitoring process and ensure creative is retired within 7–14 days of performance degradation rather than 10–15 days late.
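The rules above live in Meta Ads Manager's UI, but the logic is worth sanity-checking in code against historical ad data before trusting it with live spend. The sketch below mirrors Rules 1 and 3 over a plain dict of exported ad stats — it is illustrative only, does not call the Meta Marketing API, and the field names are assumptions:

```python
def retirement_action(ad):
    """Mirror of Rules 1 and 3: return a pause reason, or None to keep running.

    `ad` is a plain dict of rolling stats (e.g. from a daily reporting export);
    the key names here are illustrative, not a Meta API schema.
    """
    # Rule 1: CTR fatigue — fires only once the ad has enough data and age
    if (ad["ctr_7d"] < 0.008 and ad["impressions_7d"] > 5000
            and ad["age_days"] > 7):
        return "pause: CTR fatigue"
    # Rule 3: ROAS floor — fires only after meaningful spend and a full window
    if (ad["roas_14d"] < 2.0 and ad["spend_14d"] > 500
            and ad["age_days"] > 14):
        return "pause: ROAS below floor"
    return None

fatigued = {"ctr_7d": 0.006, "impressions_7d": 9000, "age_days": 12,
            "roas_14d": 3.1, "spend_14d": 800}
print(retirement_action(fatigued))  # → pause: CTR fatigue
```

Replaying a rule set like this against the last 90 days of ad-level data shows how much of the £40K–60K/month waste each threshold would actually have caught, and whether any healthy ads would have been paused by mistake.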

Step 2: Hypothesis-Driven Creative Brief Template

Mandate this template for every new creative brief:

CREATIVE BRIEF — HYPOTHESIS-DRIVEN FORMAT

Brief ID: [CB-2024-047]
Requested by: [Performance Marketing Manager]
Due date: [14 days from brief date]

HYPOTHESIS:
"We believe that [creative concept X] will outperform [current control creative Y]
because [specific reason rooted in customer insight or data]."

Example:
"We believe that showing the dress being worn at a house party / social gathering
(social context) will outperform the current beach lifestyle context because:
  (a) Our customer survey (n=1,200) shows that 64% of purchases are motivated by
      'wanting to look good for a social event' vs. 18% for 'holiday/vacation'
  (b) Our social media engagement data shows party/event content receives 2.4x
      more saves than beach content
  (c) Competitor [Brand X] is running social-context creative with strong
      performance signals visible in their social feeds"

WHAT TO CREATE:
Format: [15-second video / Static image / Carousel]
Context: Social event (house party, restaurant dinner, birthday celebration)
Product: [Specific dress SKU or range]
Tone: Aspirational but relatable — not aspirational luxury
CTA: "Shop the Look" → [PDP URL]

WHAT NOT TO CREATE:
✗ Beach / holiday backdrop (this is what we're testing against — do not replicate the control)
✗ Studio white background (removes the contextual element we're testing)
✗ More than one scene (keep it comparable with the control)

SUCCESS CRITERIA (defined before creative is produced):
Primary metric: CTR ≥ 1.2% after 5,000 impressions (= passed to ROAS test)
Secondary metric: ROAS ≥ 4.5x after 100 purchases (= declare winner, replace control)
Failure metric: CTR < 0.7% after 5,000 impressions (= retire, record learning)

LEARNING REGARDLESS OF OUTCOME:
"Regardless of whether this test wins or loses, we will learn:
does social context outperform beach context for dress advertising to
our 25–35 female audience in the UK market?"

Step 3: Testing Structure — Controls, Variants, and Clear Winner Criteria

TESTING AD SET STRUCTURE:

Dedicated Testing Campaign (separate from main performance campaigns):
  Daily budget: £500/day (enough to gather data quickly without influencing main campaigns)
  Audience: Broad prospecting (Advantage+ or LAL 1-5%)

Ad Set 1: CONTROL (current best-performing creative — do not change)
  Budget: £250/day
  Creative: [Best performing ad from last 30 days]
  Purpose: Stable baseline reference

Ad Set 2: VARIANT (new hypothesis-driven creative)
  Budget: £250/day
  Creative: [New creative from the brief above]
  Purpose: Challenge the control

Duration: 7 days minimum (or until each ad set has 5,000+ impressions)

Decision framework:
  Day 7 check:
  - If Variant CTR > Control CTR × 1.2 (20% better) AND statistical confidence >80%:
    → Promote Variant to main campaigns as new control
    → Archive old control with performance data for the learning library

  - If Variant CTR < Control CTR × 0.9 (10% worse) AND impressions > 5,000:
    → Retire Variant immediately
    → Record learning: "Social context does NOT outperform beach context — reason unknown"
    → Brief follow-up test to investigate why (different execution? wrong audience?)

  - If results are inconclusive (within 10% either way):
    → Extend test for 7 more days
    → If still inconclusive after 14 days: call it a tie, retire Variant,
      record "No significant difference between social and beach contexts"

Statistical confidence calculator:
  Use: Neil Patel's A/B Test Significance Calculator (free, web-based)
  Input: Control CTR, Variant CTR, Control impressions, Variant impressions
  Output: Statistical confidence (aim for >80% for directional decisions,
         >95% for major strategy pivots)
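The point of the decision framework is that the call is mechanical, not judgemental. The sketch below encodes the day-7 branches with the thresholds stated above (the function name and inputs are illustrative):

```python
def day7_decision(ctr_variant, ctr_control, impressions, confidence):
    """Encode the day-7 decision framework: promote, retire, or extend."""
    # Win: ≥20% CTR lift at ≥80% statistical confidence
    if ctr_variant > ctr_control * 1.2 and confidence > 0.80:
        return "promote variant to new control"
    # Loss: ≥10% CTR deficit with sufficient impressions
    if ctr_variant < ctr_control * 0.9 and impressions > 5000:
        return "retire variant, record learning"
    # Anything else is inconclusive at day 7
    return "extend test 7 more days"

print(day7_decision(0.038, 0.030, 6200, 0.86))  # → promote variant to new control
```

Because every branch is written down before the test launches, there is no room for the post-hoc rationalisation ("but look at its CPC!") that plagues unstructured testing.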

Step 4: Creative Production Pipeline — Eliminate the Fatigue Gap

Current state:
  Creative lifespan: 21 days
  Production lead time: 15 days
  Buffer: 6 days (dangerously thin)
  Result: Fatigued ads run while waiting for new creative

Target state:
  Creative lifespan: 21 days (unchanged)
  Production lead time: 8 days (reduced by process changes)
  Buffer target: 5 creatives always in queue (never <3)
  Result: Replacement creative is always ready before current creative fatigues

How to reduce lead time from 15 days to 8 days:

  Phase 1 (Brief → Shoot): 3 days (reduced from 7)
    - Brief template pre-approved (no back-and-forth on concept)
    - Shoot location and model booked on rolling monthly schedule
      (not booked ad-hoc per brief)
    - Shoot 5–8 concepts per shoot day (batch production, not one-at-a-time)

  Phase 2 (Shoot → Edit): 3 days (reduced from 5)
    - Editor briefed before shoot (can begin cut while shoot is in progress)
    - First cut reviewed day after shoot (not 2–3 days after)
    - One revision round maximum (two-revision approach breeds indecision)

  Phase 3 (Edit → Approval): 2 days (reduced from 3)
    - Approval via Slack or Frame.io (async review, not scheduled meeting)
    - Performance team has final approval authority on ad creative
      (removes dependency on brand/creative director for performance ads)

PLUS: Add UGC as a 3–5 day fast track:
  Source from Insense / Billo:
    - Brief sent to 5 creators (Monday)
    - Creators shoot and submit (Wednesday–Thursday)
    - Edit and approve in-house (Friday)
    - Live on Sunday
  Cost: £100–150 per creator × 5 = £500–750 per UGC batch
  Speed: 5 days from brief to live
  Volume: 5 new UGC pieces per week at minimal cost
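Maintaining the 5-creative buffer is an arrival-rate problem: with N live creatives each lasting ~21 days, replacements are consumed at roughly N/21 per day, and any buffer shortfall must be refilled on top of that burn rate. A minimal sketch of the weekly commissioning maths (the live-creative count is an assumed input):

```python
from math import ceil

def weekly_production_target(live_creatives, lifespan_days,
                             buffer_target, current_buffer):
    """Creatives to commission this week to sustain rotation and refill the buffer."""
    burn_per_week = live_creatives / lifespan_days * 7  # creatives fatiguing per week
    refill = max(0, buffer_target - current_buffer)     # one-off top-up if short
    return ceil(burn_per_week + refill)

# Illustrative: 9 live creatives, 21-day lifespan, buffer should hold 5 but holds 3
print(weekly_production_target(9, 21, 5, 3))  # → 5
```

Running this each Monday turns "do we have enough creative coming?" from a gut call into a number the production team can be briefed against.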

Step 5: Shift Creative Mix — Systematic Format Test

Run a structured test to validate the format shift from 80% studio / 20% video to 40% video / 30% UGC / 30% static:

CREATIVE MIX TEST (Month 3):

Allocate £30K/month (10% of total budget) as a controlled test budget:

Ad Set A (Current Mix — Control): £10K
  Creative: 80% studio photography, 20% video

Ad Set B (Video-Heavy Variant): £10K
  Creative: 60% video, 20% studio, 20% UGC

Ad Set C (UGC-Heavy Variant): £10K
  Creative: 50% UGC, 30% video, 20% studio

All ad sets:
  Same audience (LAL 2-3% of purchasers)
  Same offer (no discount vs. discount is a separate variable — do not test simultaneously)
  Same placement mix (Feed + Reels + Stories, proportional to delivery)

Duration: 21 days (allow full creative fatigue cycle)

Primary metric: ROAS
Secondary: CPP, CTR, creative lifespan (how many days before CTR drops 30%)

Decision: Whichever mix shows best ROAS at 21 days becomes the new standard;
the winning creative mix is then rolled out across all £300K/month budget in Month 4

Early Warning Metrics:

  • Creative buffer count weekly (alert if fewer than 3 creatives approved and waiting to go live)
  • Average CTR across all active ads week-over-week (alert if declining >15% — signal that the creative testing is not replacing fatigued ads fast enough)
  • UGC pass rate (what % of commissioned UGC pieces pass the CTR screening threshold; alert if below 25% — brief quality or creator selection needs improvement)

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The automated retirement rules — with explicit thresholds (CTR < 0.8%, CPC > £1.80, ROAS < 2.0) and impression minimums (5,000+ impressions before a retirement rule fires) — stop the £40K–60K/month waste within the first week without requiring any additional budget or creative; this is a pure operational efficiency win achievable before any other changes are made. The hypothesis-driven brief template with pre-defined success criteria ("CTR ≥ 1.2% after 5,000 impressions") transforms creative testing from an art into a science — the decision of whether a creative "wins" is made before the test runs, eliminating post-hoc rationalisation of results based on whichever metric happened to look good.

What differentiates it from mid-level thinking: A mid-level performance marketer would say "test more creative" and "retire fatigued ads faster" without specifying the statistical requirements for a valid test, the automated rule configuration that enforces retirement without manual monitoring, or the production pipeline changes (batch shooting, brief pre-approval, async review) that reduce lead time from 15 to 8 days. They would not calculate the buffer requirement (5 creatives in queue) or design the three-way creative mix format test that validates the industry benchmark data before committing the entire £300K/month budget to a new mix.

What would make it a 10/10: A 10/10 response would include a creative learning library template (a running database of every test run, the hypothesis, the result, and the learning — so that the team never re-tests the same hypothesis twice and compound insights build over time), a specific Insense brief for sourcing UGC from micro-creators (showing creator criteria, content requirements, and usage rights terms), and a month-by-month creative production calendar showing exactly how many studio shoots, UGC commissions, and video shoots are needed to maintain the 5-creative buffer at all times.



Question 10: Performance Marketing in a Privacy-First World — Adapting to Cookie Deprecation, ATT, and First-Party Data Strategy

Difficulty: Elite | Role: Performance Marketing Manager | Level: Senior / Staff | Company Examples: ASOS, Farfetch, Monzo, OVO Energy, Bloom & Wild


The Question

You are a Head of Performance Marketing at a large UK e-commerce retailer with a £6M annual paid media budget. Over the next 18 months, your paid media programme faces three simultaneous privacy headwinds: (1) Google has committed to deprecating third-party cookies in Chrome (representing 65% of UK browser usage), which will break audience retargeting, lookalike audience creation, and cross-site conversion tracking for all programmatic and display campaigns; (2) Apple's App Tracking Transparency (ATT) has already reduced Meta's tracking of iOS users from ~95% pre-2021 to approximately 30–40% opt-in today, causing your Meta ROAS to be systematically overstated and making retargeting iOS users unreliable; (3) the ICO is increasing GDPR enforcement specifically targeting performance marketing — cookie consent banners that make "decline" harder to click than "accept" have been flagged, and your legal team expects the ICO to issue fines against non-compliant UK retailers in the next 12 months. You currently rely on: third-party cookies for programmatic retargeting (30% of your display budget), Meta Pixel for conversion tracking (all Meta attribution), Google Analytics 4 with cookie-based attribution (all Google channel attribution), and third-party audience data from a DMP (Data Management Platform) for prospect targeting on programmatic DSPs. Design a strategy to maintain paid media performance through the privacy transition while building first-party data infrastructure that reduces dependence on third-party tracking.


1. What Is This Question Testing?

  • Third-party cookie deprecation mechanics and impact — understanding exactly what breaks when third-party cookies are deprecated: (a) cross-site retargeting (a user who visits your site gets a third-party cookie; when that cookie is read by a DSP, the user is served a retargeting ad on another site — this breaks without third-party cookies); (b) cross-site frequency capping (without cookies, you cannot prevent the same user from seeing your ad 50 times across different sites); (c) third-party audience segment buying from DMPs (segments like "in-market for luxury fashion — female, 25–44" are built using third-party cookie data — this data source is eliminated); (d) attribution tracking that traces conversions back to display impressions (the conversion event on your site cannot be linked to the impression event on another site without a cookie or ID that crosses the site boundary)
  • Privacy-enhancing technologies (PETs) and Google's Privacy Sandbox — knowing Google's replacement technologies for third-party cookies: Topics API (replaces interest-based targeting — the browser classifies the user into topic categories based on their browsing history, without sharing the browsing history with advertisers); Protected Audience API / FLEDGE (replaces retargeting — allows interest group-based bidding where the user's audience memberships are stored locally in the browser, not on a server); Attribution Reporting API (replaces cross-site conversion tracking — sends noisy aggregate attribution reports without individual-level data); knowing that these APIs are implemented in Chrome and represent Google's privacy-preserving alternative, not a complete solution
  • First-party data strategy and identity resolution — understanding that first-party data (data collected directly from your own users with their consent: email addresses, purchase history, account profiles, loyalty programme data) is the most valuable and most privacy-compliant data asset; knowing how to use first-party data for paid media: Customer Match (upload hashed email lists to Google Ads and Meta to create Custom Audiences); Conversions API (Meta) and Enhanced Conversions (Google) — server-side tracking that sends first-party data to the platforms without relying on browser cookies; Lookalike audiences built from first-party data (1% LAL of your best customers built from your CRM email list, not from cookie-based site visitor segments)
  • Consent Management Platform (CMP) and GDPR compliance — understanding that a valid cookie consent banner must: (a) present accepting and declining cookies with equal prominence and ease (no dark patterns — "Accept All" button cannot be larger or more prominent than "Reject All"); (b) allow users to change their consent at any time; (c) not use pre-ticked consent boxes; (d) not make service conditional on accepting cookies; knowing that Google's Consent Mode v2 (mandatory for all Google Ads users in the EEA and UK since March 2024) requires integration with a CMP — it modifies the behaviour of Google tags based on user consent, and Google uses modelled data to estimate conversions from non-consenting users
  • Server-side tracking and Conversions API implementation — understanding the difference between client-side tracking (a JavaScript pixel fires in the user's browser — vulnerable to ad blockers, iOS ITP, ATT, and cookie consent decline) vs. server-side tracking (your server sends the event directly to the platform's API using first-party data such as email hash or phone hash — not affected by ad blockers, iOS restrictions, or browser limitations); knowing that Meta's Conversions API (CAPI) and Google's Enhanced Conversions require engineering implementation (passing first-party data from your CRM or server to the platform API at the moment of conversion) but deliver the most resilient tracking in a privacy-first world
  • Contextual targeting and cohort-based targeting as cookie replacements — understanding that contextual targeting (serving ads based on the content of the page the user is reading, not the user's identity or cross-site behaviour) is the most privacy-compliant form of targeting and does not require cookies or user identification; knowing that contextual targeting's weakness (it cannot distinguish a user's intent level or purchase history) can be partially compensated by first-party data signals (users who have visited your site and given consent can be retargeted via consented first-party mechanisms like email, CAPI-based Custom Audiences, or consented on-site retargeting)

2. Framework: Privacy-First Performance Marketing Transition Model (PFPMTM)

  1. Assumption Documentation — Confirm the current reliance on each data source: what percentage of programmatic targeting uses third-party cookie segments vs. first-party data? What is the current GDPR consent rate (what percentage of UK site visitors accept cookies)? What is the current Meta ROAS before and after adjusting for iOS 14.5 signal loss? These baselines determine the scale of the transition challenge
  2. Constraint Analysis — Building first-party data infrastructure (CDP, email marketing, loyalty programme) takes 6–18 months to generate sufficient data to replace third-party audience segments; the gap between cookie deprecation (potentially 2025–2026) and the maturity of first-party data alternatives creates an "attribution trough" — a period where tracking becomes less reliable before the new infrastructure is fully operational; contingency planning for this trough is essential
  3. Tradeoff Evaluation — Invest heavily in Google's Privacy Sandbox APIs (uncertain adoption timeline, limited transparency, dependent on Google's implementation choices) vs. invest in first-party data infrastructure owned by the business (longer to build, but not dependent on any single platform's decisions, and valuable beyond paid media — for email marketing, personalisation, and CRM); first-party data is the correct long-term investment because it is platform-independent and privacy-regulation-resilient
  4. Hidden Cost Identification — Building a Customer Data Platform (CDP) to centralise and activate first-party data costs £50K–200K/year (Segment, Tealium, mParticle); implementing server-side tracking (Conversions API, Enhanced Conversions) requires 80–120 hours of engineering time; improving cookie consent rates (redesigning the CMP) requires legal review, UX design, and A/B testing; the total first-year investment in privacy infrastructure is £150K–350K — justified by the revenue protection it provides on a £6M media budget
  5. Risk Signals / Early Warning Metrics — Weekly consent rate trend (what percentage of new UK site visitors accept cookies? Alert if the consent rate drops below 60% — it indicates the consent banner is losing acceptance or ICO guidance is changing user behaviour); CAPI event match quality (Meta provides an Event Match Quality score 0–10; alert if below 6.0 — indicates insufficient first-party data being sent via CAPI, reducing audience matching quality); Enhanced Conversions coverage (what percentage of Google Ads conversions are tracked via Enhanced Conversions vs. pixel-only? Alert if below 40% — pixel-only conversions are at risk of signal loss)
  6. Pivot Triggers — If Chrome's third-party cookie deprecation rolls out in Q2 2025 and your programmatic retargeting audience reach drops >60% (measured by addressable audience size in the DSP): the Privacy Sandbox APIs are not providing sufficient replacement targeting; pivot to 100% contextual targeting and first-party Custom Audience retargeting; redirect the 30% of display budget currently used for third-party cookie retargeting to consented email retargeting (upload first-party CRM segments to DSPs via LiveRamp or direct platform integrations)
  7. Long-Term Evolution Plan — Q1 2024: GDPR consent audit + CMP redesign; Q2 2024: CAPI + Enhanced Conversions implementation; Q3 2024: CDP implementation begins; Q4 2024: first-party audience segments live in Google + Meta; Q1 2025: programmatic targeting migrated from DMP segments to first-party + contextual; Q2 2025: cookie deprecation readiness review; ongoing: quarterly first-party data quality audit
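The "ROAS before and after signal loss" baseline in the framework can be sized with back-of-envelope arithmetic before any audit work begins. The sketch below combines device share, ATT opt-in, and cookie consent into an estimate of how many conversions remain directly observable client-side; all inputs are illustrative assumptions and the model simplifies by treating the consent decisions as independent:

```python
def observable_share(ios_share, att_opt_in, consent_rate):
    """Fraction of conversions still directly trackable client-side.

    iOS users must opt in to ATT; all users must also accept cookies.
    Simplified model — assumes the two consent decisions are independent.
    """
    ios_tracked = ios_share * att_opt_in       # iOS traffic that opted in to ATT
    non_ios_tracked = 1 - ios_share            # non-iOS traffic, no ATT gate
    return (ios_tracked + non_ios_tracked) * consent_rate

# Illustrative: 45% iOS traffic, 35% ATT opt-in, 65% cookie consent
share = observable_share(0.45, 0.35, 0.65)
print(f"~{share:.0%} of conversions directly observable client-side")
```

An output near 50% makes the case for server-side tracking concrete: roughly half of real conversions never reach a client-side pixel, so any platform-reported metric built on pixels alone is working from a badly truncated sample.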

3. The Answer

Step 1: Audit Current Privacy Exposure (Week 1–2)

Before building, understand exactly what you will lose:

PRIVACY EXPOSURE AUDIT

Channel: Meta Ads (£2.4M/year)
Current tracking: Meta Pixel (client-side, affected by iOS ATT + ad blockers)
iOS ATT impact: ~40% of iOS traffic is untracked
  → Meta-reported ROAS overstated by an estimated 25–40%
CAPI implementation: NOT YET (gap)
First-party data signals sent: Email hash? Phone hash? → CHECK AND DOCUMENT

Channel: Google Ads (£2.1M/year)
Current tracking: GA4 (client-side) + Google Ads conversion tag
Consent Mode v2: NOT YET CONFIGURED (compliance risk — mandatory since March 2024)
Enhanced Conversions: NOT YET IMPLEMENTED
Cookie-dependent tracking at risk: 35–40% of UK visitors who decline cookies

Channel: Programmatic Display (£900K/year via The Trade Desk)
Third-party cookie dependency: HIGH (retargeting 100% relies on third-party cookies)
DMP audience segments: 100% third-party cookie based
At-risk spend when cookies deprecate: £270,000/year (30% used for retargeting)
Contextual targeting already in use: MINIMAL

Channel: Affiliate (£600K/year via AWIN)
Tracking method: Third-party cookie (affiliate click → cookie → conversion)
Impact of cookie deprecation: 20–35% of affiliate conversions may become untracked
First-party alternative: AWIN's server-to-server (S2S) tracking (not yet implemented)

RISK SUMMARY:
  Immediate risk (iOS ATT, already happening): £2.4M Meta budget operating on degraded data
  Medium-term risk (cookie deprecation): £270K display retargeting + £600K affiliate tracking
  Compliance risk (GDPR/ICO): Consent Mode v2 not configured — legal exposure

Step 2: Implement Server-Side Tracking — First Priority (Weeks 3–10)

Server-side tracking bypasses all browser-level restrictions:

META CONVERSIONS API (CAPI) IMPLEMENTATION:

What it does:
Your server sends conversion events (purchase, lead, checkout) directly to
Meta's API using first-party identifiers (email hash, phone hash, IP address,
browser cookie) — bypassing the browser entirely

Data flow:
User purchases on your site → Your server receives order confirmation →
Your server sends to Meta CAPI:
  {
    "event_name": "Purchase",
    "event_time": 1709123456,
    "event_id": "ORD-2024-12345", ← unique per event; Meta uses it to deduplicate Pixel + CAPI
    "user_data": {
      "em": "hashed_email_b64", ← SHA256 hash of user's email (from your CRM)
      "ph": "hashed_phone_b64", ← SHA256 hash of phone (from order data)
      "client_ip_address": "1.2.3.4",
      "client_user_agent": "Mozilla/5.0...",
      "fbc": "fb.1.xxxxx.xxxxx", ← Facebook click ID (from the _fbc cookie, derived from the fbclid URL parameter)
      "fbp": "fb.1.xxxxx.xxxxx"  ← Facebook browser ID (from the _fbp cookie, if available)
    },
    "custom_data": {
      "currency": "GBP",
      "value": 85.00,
      "order_id": "ORD-2024-12345"
    }
  }

Benefits:
- Works even when iOS users have declined ATT
- Works even when users have ad blockers
- Works even when cookies are declined (uses email/phone hash instead)
- Deduplication: send the same event via Pixel + CAPI with a shared event_id; Meta deduplicates on the event_name + event_id pair

Implementation:
  Engineering effort: 40–60 hours
  Who does it: Backend developer (Python, Node.js, or PHP depending on your stack)
  Tools: Meta's CAPI endpoint, or managed via server-side tag manager (GTM Server-Side)

Target metric after implementation:
  Meta Event Match Quality score: target ≥ 7.5/10
  (Score measures how well Meta can match events to user profiles; higher = better targeting)

GOOGLE ENHANCED CONVERSIONS:

What it does:
Supplements Google Ads' click-based conversion tracking with hashed first-party data
(email address) captured at the conversion point; even if the click-based cookie
cannot be matched, the hashed email can match to a Google account

Implementation:
  In Google Ads: Goals → Conversions → Settings → Enhanced Conversions
  On your website: When the purchase confirmation page loads, pass the customer's
  email address to the Google Ads conversion tag:
    gtag('set', 'user_data', {'email': 'customer@email.com'});
    gtag('event', 'conversion', {'send_to': 'AW-XXXXXXXX/XXXXXXXX', 'value': 85.00, 'currency': 'GBP'});

  The Google tag hashes the email (SHA-256) in the browser before transmission;
  Google then matches the hash against signed-in Google accounts
  Works even when the GCLID cookie is blocked

GOOGLE CONSENT MODE V2 (COMPLIANCE PRIORITY):

Configuration in Google Tag Manager:
  gtag('consent', 'default', {
    'analytics_storage': 'denied',
    'ad_storage': 'denied',
    'ad_user_data': 'denied',
    'ad_personalization': 'denied',
    'wait_for_update': 500 // milliseconds to wait for CMP to signal consent
  });

  // CMP updates consent once user makes a choice:
  gtag('consent', 'update', {
    'analytics_storage': 'granted', // only if user accepted analytics
    'ad_storage': 'granted',        // only if user accepted advertising cookies
    'ad_user_data': 'granted',
    'ad_personalization': 'granted'
  });

What Consent Mode v2 enables:
  - Modelled conversions: Google uses AI to estimate conversions from users
    who declined cookies, based on consenting users' patterns
  - Signal: Google signals that your account is consent-compliant,
    protecting against ICO action

ICO compliance requirement:
  Current (non-compliant): "Accept All" button is blue, 15px, prominent
  "Manage Preferences" (required to decline) is grey, 10px, in small print below

  Compliant redesign:
  Two equal-prominence buttons: [Reject All] [Accept All]
  Optional: [Manage Preferences] for granular control
  No pre-ticked boxes
  "Reject All" same size, colour contrast, and position as "Accept All"

  Note: Consent rate will likely drop 15-25% after the redesign (more users decline)
  This is expected and unavoidable; the alternative is ICO enforcement action
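The "modelled conversions" benefit can be made concrete with illustrative arithmetic. This is a simplification I am assuming for exposition (uniform conversion rate across consenting and non-consenting users — Google's actual model is more sophisticated), not Google's published methodology:

```javascript
// Illustrative sketch of the measurement gap Consent Mode's modelled
// conversions aim to fill. ASSUMPTION: consenting and non-consenting
// users convert at the same rate (a deliberate simplification).
function estimateTrueConversions(observedConversions, consentRate) {
  if (consentRate <= 0 || consentRate > 1) {
    throw new Error('consentRate must be in (0, 1]');
  }
  return observedConversions / consentRate;
}

function unobservedConversions(observedConversions, consentRate) {
  return estimateTrueConversions(observedConversions, consentRate) - observedConversions;
}
```

Under this assumption, at a 65% consent rate, 650 directly observed conversions imply roughly 1,000 true conversions — a gap of ~350 that would otherwise be invisible to bidding algorithms.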

Step 3: Build First-Party Data Infrastructure

CUSTOMER DATA PLATFORM (CDP) IMPLEMENTATION:

Recommended platform: Segment (£15K–40K/year) or Tealium (£30K–80K/year)

What it does:
  - Centralises first-party data from all touchpoints (website, app, CRM, email, POS)
  - Creates unified customer profiles (resolves "user A from website" = "customer A in CRM")
  - Activates segments to paid media platforms (Google, Meta, The Trade Desk)
    via direct integrations

First-party data sources to connect:
  Source 1: Website (GA4 events — purchase, login, product view, add to cart)
  Source 2: CRM (Salesforce, HubSpot — customer records, purchase history, LTV)
  Source 3: Email platform (Klaviyo, Mailchimp — email engagement data)
  Source 4: Loyalty programme (if applicable — purchase frequency, points balance)
  Source 5: Customer service (Zendesk — support ticket history, NPS scores)

Segments to create for paid media activation:
  Segment A: High-LTV customers (top 20% by lifetime spend) → exclude from acquisition;
             use for lookalike audience building (LAL 1-3%)
  Segment B: Churned customers (no purchase in 12 months) → win-back retargeting
  Segment C: Cart abandoners (add to cart, no purchase in 48 hours) → retargeting
  Segment D: Category browsers (viewed dresses 3+ times, no purchase) → targeted ads
  Segment E: New customers (first purchase in last 30 days) → upsell sequence

Activation method (privacy-compliant):
  Upload to Google Customer Match: segment CSV → hashed → Google Ads Custom Audience
  Upload to Meta Custom Audiences: segment CSV → hashed → Meta Custom Audience
  Upload to The Trade Desk: segment CSV → LiveRamp IdentityLink → DSP targeting
  All uploads are hashed at source (SHA256 of email + phone) before leaving your systems

Step 4: Replace Third-Party Cookie Retargeting with First-Party Alternatives

CURRENT (breaking with cookie deprecation):
  User visits site → third-party cookie set → DSP reads cookie on another site → ad served
  Budget at risk: £270K/year of display retargeting

REPLACEMENT STRATEGY:

Alternative 1: First-Party Cookie Retargeting (CONSENTED users only)
  User visits site and accepts cookies → first-party cookie set (your domain) →
  CAPI / Enhanced Conversions sends event to Google/Meta → retargeting via
  Custom Audience built from consented first-party data
  Cost: No additional media cost (reallocation of existing budget)
  Coverage: Only the ~65% of users who consent (cannot retarget non-consenters)

Alternative 2: Contextual Retargeting (ALL users, no consent required)
  User reads fashion content on publisher sites → contextual targeting serves
  fashion ads to that content environment
  Coverage: 100% of users (contextually targeted, not individually targeted)
  Targeting precision: Lower than cookie-based (no individual-level retargeting)
  Use: Awareness retargeting (brand recall) rather than direct response

Alternative 3: Email Retargeting (OPTED-IN users only)
  User provides email on site (with marketing consent) → email sequence with
  product recommendations → drives back to site
  Cost: Email platform cost (marginal)
  Conversion rate: Higher than display (users actively read email)
  Scale: Limited to opted-in email list

RECOMMENDED BUDGET REALLOCATION (post-cookie deprecation):
  Current: £270K on third-party cookie retargeting
  Post-deprecation:
    £120K → First-party CAPI-based Custom Audience retargeting (consented 65%)
    £80K  → Contextual targeting (all users, awareness focus)
    £50K  → Email retargeting (opted-in users, highest conversion)
    £20K  → Retain for experimentation with Privacy Sandbox APIs

Step 5: Programmatic Targeting — Replace DMP Third-Party Segments

CURRENT (at risk):
  DMP third-party audience segments ("In-market luxury fashion, female, 25-44 UK")
  Built on third-party cookie data → becoming unreliable now, eliminated when cookies deprecate

REPLACEMENT STRATEGY:

Layer 1: First-Party Lookalike Audiences (primary replacement)
  Upload your top-LTV customer segment (CDP Segment A) to The Trade Desk via LiveRamp
  TTD creates a lookalike model based on your customers' logged-in publisher identities
  Coverage: High-quality, but limited to publishers in The Trade Desk's identity graph

Layer 2: Contextual Targeting via Peer39 / IAS Contextual
  Target pages about: fashion, style, luxury goods, sustainability (for your eco-friendly range)
  Tool: Peer39 or IAS contextual intelligence (integrated into The Trade Desk)
  No cookies required, fully GDPR-compliant

Layer 3: Publisher First-Party Data (PMPs)
  Premium publishers (The Guardian, Vogue UK, Cosmopolitan) have their own logged-in
  first-party audience data:
  - Guardian registered users: 8M UK
  - Vogue UK subscribers: 200K digital subscribers
  These publishers offer their first-party segments via PMP deals — targeted to their
  known audience without using third-party cookies

Layer 4: Google's Privacy Sandbox (Experimental)
  Enable Topics API and Protected Audience API in The Trade Desk's Privacy Sandbox integration
  Use as a supplementary layer (expect 20-40% reach vs. third-party cookies)
  Monitor performance vs. other layers quarterly; adjust budget allocation

Budget shift away from DMP:
  Current: £180K/year on DMP third-party segments
  2025 target:
    £80K → First-party lookalike (CDP + LiveRamp)
    £60K → Contextual targeting
    £30K → Publisher first-party PMPs
    £10K → Privacy Sandbox experimentation

Step 6: 18-Month Transition Roadmap

TIMELINE:

Q1 2024 (Months 1–3): Compliance and tracking foundation
  ☐ GDPR CMP redesign — equal prominence for Reject/Accept (ICO compliance)
  ☐ Google Consent Mode v2 configuration
  ☐ Meta CAPI implementation (engineering sprint)
  ☐ Google Enhanced Conversions implementation
  ☐ Affiliate S2S tracking migration (AWIN server-to-server)
  TARGET: All tracking is consent-mode compliant; CAPI/Enhanced Conversions live

Q2 2024 (Months 4–6): First-party data infrastructure
  ☐ CDP selection and implementation begins (Segment recommended)
  ☐ CRM → CDP data pipeline live
  ☐ First-party audience segments defined and tested in Google Customer Match
  ☐ First-party audience segments tested in Meta Custom Audiences
  TARGET: First-party audiences generating 10–20% of retargeting volume

Q3 2024 (Months 7–9): Programmatic migration
  ☐ DMP contracts under review — plan for renewal vs. replacement
  ☐ Contextual targeting live in The Trade Desk (replace 40% of DMP segments)
  ☐ Publisher PMP deals negotiated with first-party data packages
  ☐ LiveRamp integration with CDP for identity resolution in programmatic
  TARGET: 50% of programmatic targeting using first-party or contextual

Q4 2024 (Months 10–12): Scale first-party
  ☐ First-party lookalike audiences reaching same volume as third-party segments
  ☐ Email retargeting programme scaling
  ☐ Loyalty programme data integrated into CDP for highest-quality lookalikes
  TARGET: 80% of targeting using first-party or contextual

Q1–Q2 2025 (Months 13–18): Cookie deprecation readiness
  ☐ Third-party cookie deprecation contingency plan tested in Chrome DevTools
    (disable third-party cookies, measure impact on each channel)
  ☐ Privacy Sandbox APIs enabled in TTD for experimental budget
  ☐ DMP contracts terminated or renegotiated for cookieless data
  TARGET: Zero dependence on third-party cookies; full first-party infrastructure operational

4. Interview Score: 10 / 10

Why this demonstrates staff-level maturity: The privacy exposure audit — quantifying exactly which £ of spend is at risk from each privacy change (£270K display retargeting from cookie deprecation, £2.4M Meta budget on degraded iOS data, compliance risk from non-configured Consent Mode v2) — transforms abstract privacy concerns into specific financial risks that justify the infrastructure investment to the CEO and CFO. The server-side tracking implementation with actual JSON payload structure for Meta's CAPI shows the technical depth to brief an engineering team precisely, not just conceptually, reducing the risk of misimplementation. The 18-month phased transition roadmap with quarterly milestones and binary target metrics (e.g., "80% of programmatic targeting using first-party or contextual by Q4 2024") provides the governance structure that ensures the transition is executed rather than merely planned.

What differentiates it from senior-level thinking: A senior performance marketer would identify the privacy challenges and recommend "implementing CAPI" and "building first-party data" — correct but insufficient. They would not perform the privacy exposure audit quantifying the at-risk spend by channel, would not write the Consent Mode v2 configuration code, would not design the budget reallocation from third-party cookie retargeting to its three first-party alternatives, and would not build the 18-month phased roadmap with engineering, legal, and media dependencies mapped across quarters.

What would make it perfect: This response scores 10/10. The only potential enhancement would be a specific first-party data collection strategy (how to increase the email opt-in rate on the site from current levels to 30%+ of new visitors through a value exchange — loyalty points, exclusive discounts, early access to sales) that feeds the CDP with higher-quality and higher-volume data faster than organic growth would provide.



Question 11: Paid Search Automation and Scripts — Using Google Ads Scripts to Manage a 50,000-Keyword Account Without Manual Overhead

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Booking.com, Auto Trader, Rightmove, Trivago, Skyscanner


The Question

You are a Senior Performance Marketing Manager at a travel comparison platform. Your Google Ads account contains 50,000 active keywords spanning 12 destination markets (UK, France, Spain, Germany, Italy, Portugal, Greece, Croatia, Thailand, Bali, Maldives, Dubai). Each destination has 3–5 campaigns (Brand, Generic, Competitor, Retargeting, and Non-Brand Exact) with between 200 and 800 ad groups each. The account generates £18M in annual bookings revenue from £2.1M paid search spend, a blended ROAS of 8.6x. Your challenge: the account is too large to manage manually — at 50,000 keywords, even reviewing performance weekly would take 40+ hours per analyst. The current team of 2 paid search specialists spend 70% of their time on manual tasks: checking for underspent budgets, identifying keywords with CPA spikes, pausing seasonal terms that are no longer relevant, and adjusting bids after conversion rate shifts. This leaves only 30% of their time for strategic work. Additionally: (1) budget pacing is inconsistent — some campaigns overspend by 15–20% on high-demand weekends while others underspend by 25–30% on low-demand weekdays; (2) 800 keywords have not generated a single click in 90 days despite spending £0.50/day in impression share minimum bids; (3) the account has 2,100 search terms from the last 30 days with zero conversions and >2 clicks each — none have been added to the negative keyword list because the team has not had time; (4) Smart Bidding is set to Target ROAS of 8.5x across all campaigns, but the ROAS target is not differentiated by destination market (Thailand should have a higher ROAS target than UK domestic breaks because Thai bookings have a higher average booking value). Design an automation strategy using Google Ads Scripts and automated rules that reduces manual management time by 60% while improving account performance.


1. What Is This Question Testing?

  • Google Ads Scripts fundamentals and use cases — understanding that Google Ads Scripts are JavaScript programmes that run against the Google Ads API, allowing you to read account data, make changes programmatically, and schedule recurring tasks; knowing the three categories of automation: (a) reporting scripts (pull data into Google Sheets, generate alerts, send emails); (b) optimisation scripts (pause keywords, adjust bids, add negatives, change budgets based on rules); (c) management scripts (bulk-create campaigns, ad groups, or keywords from a spreadsheet data source); knowing when scripts are more appropriate than automated rules (complex conditional logic, multi-campaign operations, external data source integration) vs. when automated rules are sufficient (simple threshold-based actions on individual campaigns or ad groups)
  • Budget pacing and dayparting automation — understanding that budget pacing problems (overspend on weekends, underspend on weekdays) occur because Google's Smart Bidding distributes budget based on predicted conversion opportunities, not evenly across time; in accounts with strong day-of-week conversion patterns (travel bookings spike on Sunday evenings when people plan next week's trips), Google's algorithm allocates more budget on high-conversion-probability days — which is correct behaviour but can exhaust daily budgets early on peak days; the solution is either higher daily budgets (allowing Google to self-pace more flexibly) or a script that monitors intraday spend velocity and adjusts budget dynamically throughout the day
  • Zero-click and zero-conversion keyword cleanup — understanding the crawl/walk/run framework for keyword hygiene at scale: (a) keywords with zero clicks in 90 days are not participating in the auction at all (bid too low, QS too low, or no search volume for that term); they waste impression count and Quality Score averaging but no actual spend — can be paused en masse via a script without individual review; (b) keywords with clicks but zero conversions over 90 days need manual review before pausing (some may be "assist" keywords that contribute to the conversion path without being the last click); (c) search terms with zero conversions and >2 clicks need to be bulk-added as negatives — this is the highest-impact automated task since every irrelevant click costs money
  • Differentiated ROAS targets by market and campaign type — understanding that a blanket 8.5x ROAS target ignores the structural differences between markets: a Thailand holiday booking averages £2,200 vs. a UK short break at £450; Google's Target ROAS algorithm allocates more budget to higher-revenue opportunities regardless of booking value if the same ROAS target is applied — this biases the algorithm toward UK domestic (high volume, lower value) over international (lower volume, higher value); the correct setup is market-specific ROAS targets calibrated to each market's average booking value, margin, and conversion rate
  • Negative keyword automation via search term mining scripts — understanding that the search term mining process (export search terms → filter zero-conversion terms → add as negatives) is the most impactful automated task for reducing wasted spend in a large account; at 50,000 keywords with hundreds of thousands of search queries per month, manual mining is impossible; a weekly script can: pull all search terms from the last 7 days, filter for those with 0 conversions and >1 click, cross-reference against the existing negative keyword list (to avoid duplication), and add net-new negatives to a shared negative keyword list — saving 4–6 hours per week of manual work with zero loss in performance
  • Smart Bidding and portfolio strategy for multi-market accounts — understanding that running all 12 destination markets under separate Target ROAS strategies allows the algorithm to optimise each market independently; alternatively, a Portfolio Bidding Strategy groups multiple campaigns under a single strategy, allowing the algorithm to shift budget between markets dynamically to hit the portfolio-level ROAS target; knowing when portfolio is correct (markets have similar booking values, you want Google to optimise across markets dynamically) vs. individual strategies per campaign (markets have different booking values and should have different ROAS targets — the travel platform case)

2. Framework: Google Ads Automation Architecture for Scale Management (GAAASM)

  1. Assumption Documentation — Confirm whether the 50,000 keywords are managed in Google Ads Editor (desktop, bulk operations) or purely in the Google Ads UI (web, slow for large accounts); Google Ads Scripts operate on the account API level and are independent of the UI; also confirm which Google Sheet or data warehouse the team uses for reporting — the scripts will write output data there
  2. Constraint Analysis — Google Ads Scripts run on Google's servers with a 30-minute execution time limit per script run; for a 50,000-keyword account, processing every keyword in a single script run is possible but requires efficient API calls (use batch operations, not one API call per keyword); scripts cannot exceed 250,000 API operations per day — large accounts must use MCC-level scripts that process accounts in batches
  3. Tradeoff Evaluation — Full script automation (scripts handle everything — high efficiency, lower manual oversight, but requires script maintenance when account structure changes) vs. scripts for reporting only with automated rules for actions (scripts generate alerts and reports; automated rules execute the changes — more conservative, lower risk of incorrect automated changes); for a £2.1M spend account, a conservative approach (scripts for reporting + rules for execution) is correct until the team has confidence in the automation logic
  4. Hidden Cost Identification — Developing custom Google Ads Scripts requires JavaScript skills and Google Ads API knowledge; if the team lacks these skills, a Google Ads consultant can build the initial scripts at a cost of £5K–15K; alternatively, pre-built script libraries (Optmyzr, WordStream, or free scripts from the Google Ads scripts community at developers.google.com/google-ads/scripts) can be adapted at lower cost; ongoing script maintenance (when campaign structure changes, scripts must be updated) costs 5–10 hours per quarter
  5. Risk Signals / Early Warning Metrics — Script execution error rate (check the script execution log weekly; scripts that fail silently may not be adding negatives or pausing keywords as intended — every failure is uncaught waste); weekly spend deviation from budget (target <5% deviation above or below daily budget targets; alert if any campaign overspends by >10% in a week — budget script is not calibrating correctly); ROAS trend by destination market (track each of the 12 markets weekly; alert if any market's ROAS drops >20% without a corresponding booking volume change — may indicate the differentiated ROAS target is incorrect for that market)
  6. Pivot Triggers — If after 60 days of running the negative keyword mining script, the account's irrelevant search term volume (tracked in the Search Terms report) has not decreased by >50%: the script's filtering logic may be too conservative (missing irrelevant terms) or the script is failing to execute reliably; review the script execution log, verify the negative keyword list is growing weekly, and check for any API error messages
  7. Long-Term Evolution Plan — Week 1–2: implement budget pacing script (highest impact, reduces overspend immediately); Week 3: implement zero-click keyword pausing script; Week 4: implement search term negative mining script (weekly schedule); Week 5–6: restructure ROAS targets by destination market (manual change, not a script); Month 3: implement anomaly detection script (alerts when any metric deviates >20% from 7-day average); Month 6: evaluate time savings (target 60% reduction in manual management hours)

3. The Answer

Step 1: Implement the Budget Pacing Script (Highest Impact, Week 1)

The 15–20% weekend overspend and 25–30% weekday underspend represent wasted budget and missed revenue opportunities respectively. A budget pacing script monitors intraday spend velocity and adjusts daily budgets dynamically:

// GOOGLE ADS SCRIPT: Budget Pacing Monitor and Adjuster
// Runs: Every 2 hours (scheduled via Google Ads Scripts interface)
// Purpose: Adjust campaign daily budgets based on intraday spend rate vs. target

function main() {

  // Define target daily budgets per campaign (£)
  // In production: pull these from a Google Sheet to allow non-technical editing
  var campaignBudgets = {
    'UK_Generic_Brand': 800,
    'UK_Generic_Non_Brand': 1200,
    'Spain_Generic': 600,
    'Thailand_Generic': 900,
    // ... all 12 markets × 3-5 campaigns each
  };

  var today = new Date();
  var hourOfDay = today.getHours();
  var dayFraction = (hourOfDay + 1) / 24; // Fraction of day elapsed

  var campaignIterator = AdsApp.campaigns()
    .withCondition("Status = ENABLED")
    .get();

  while (campaignIterator.hasNext()) {
    var campaign = campaignIterator.next();
    var campaignName = campaign.getName();

    // Skip campaigns not in our budget map
    if (!campaignBudgets[campaignName]) continue;

    var targetDailyBudget = campaignBudgets[campaignName];
    var stats = campaign.getStatsFor("TODAY");
    var spentToday = stats.getCost();

    // How much SHOULD we have spent by this hour?
    var expectedSpend = targetDailyBudget * dayFraction;

    // Calculate pacing ratio (>1 = overspending, <1 = underspending)
    var pacingRatio = spentToday / expectedSpend;

    var currentBudget = campaign.getBudget().getAmount();
    var newBudget = currentBudget;

    if (hourOfDay === 0) {
      // First run of the day: reset to target so intraday adjustments
      // do not compound from one day to the next
      newBudget = targetDailyBudget;
    } else if (pacingRatio > 1.15) {
      // Overspending by >15%: reduce daily budget by 10%
      // Never reduce below 50% of target (safety floor against repeated cuts)
      newBudget = Math.max(currentBudget * 0.90, targetDailyBudget * 0.50);
      Logger.log(campaignName + ': OVER PACING (' + pacingRatio.toFixed(2) +
                 'x). Reducing budget from £' + currentBudget + ' to £' + newBudget.toFixed(2));
    } else if (pacingRatio < 0.80 && hourOfDay > 6) {
      // Underspending by >20% and it's past 6am: increase daily budget by 15%
      newBudget = currentBudget * 1.15;
      // Never increase above 120% of target (safety cap)
      newBudget = Math.min(newBudget, targetDailyBudget * 1.20);
      Logger.log(campaignName + ': UNDER PACING (' + pacingRatio.toFixed(2) +
                 'x). Increasing budget from £' + currentBudget + ' to £' + newBudget.toFixed(2));
    }

    // Apply the change if budget changed by more than £1 (avoid micro-adjustments)
    if (Math.abs(newBudget - currentBudget) > 1) {
      campaign.getBudget().setAmount(newBudget);
    }
  }

  Logger.log('Budget pacing script complete. Check Google Ads Scripts log for details.');
}

Expected impact: Budget deviation from target reduces from 15–20% (weekend) to <5%; estimated spend efficiency improvement: £30K–50K/year in recovered underspend and eliminated overspend.
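Before trusting this logic with live budgets, the pacing decision itself — which is pure arithmetic — can be unit-tested outside the Ads environment. A standalone sketch of the same thresholds (15%/20% bands, 10%/15% adjustments, 120% cap):

```javascript
// Standalone version of the pacing decision for unit testing,
// mirroring the thresholds used in the script above.
function pacingRatio(spentToday, targetDailyBudget, hourOfDay) {
  var dayFraction = (hourOfDay + 1) / 24; // fraction of the day elapsed
  return spentToday / (targetDailyBudget * dayFraction);
}

function adjustedBudget(currentBudget, targetDailyBudget, spentToday, hourOfDay) {
  var ratio = pacingRatio(spentToday, targetDailyBudget, hourOfDay);
  if (ratio > 1.15) {
    return currentBudget * 0.90; // over pacing: cut 10%
  }
  if (ratio < 0.80 && hourOfDay > 6) {
    // under pacing after 6am: raise 15%, capped at 120% of target
    return Math.min(currentBudget * 1.15, targetDailyBudget * 1.20);
  }
  return currentBudget; // on pace: no change
}
```

For example, at 11:00 with a £800 target, £400 should have been spent; £500 spent gives a ratio of 1.25 and a cut to £720, while £200 spent gives a ratio of 0.5 and a raise to £920.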

Step 2: Zero-Click Keyword Pausing Script (Week 3)

// GOOGLE ADS SCRIPT: Pause Zero-Click Keywords
// Runs: Monthly (1st of each month)
// Purpose: Pause keywords with 0 clicks in 90 days to reduce noise in account

function main() {

  var pausedCount = 0;
  var report = [];

  // 'LAST_90_DAYS' is not a supported named date range in Google Ads Scripts,
  // so build an explicit 90-day window in the account's time zone
  var tz = AdsApp.currentAccount().getTimeZone();
  var now = new Date();
  var from = new Date(now.getTime() - 90 * 24 * 60 * 60 * 1000);
  var fromStr = Utilities.formatDate(from, tz, 'yyyyMMdd');
  var toStr = Utilities.formatDate(now, tz, 'yyyyMMdd');

  var keywordIterator = AdsApp.keywords()
    .withCondition("Status = ENABLED")
    .withCondition("Clicks = 0")       // Zero clicks in the 90-day window
    .withCondition("Impressions > 0")  // Has appeared in results but no clicks
    .forDateRange(fromStr, toStr)
    .get();

  while (keywordIterator.hasNext()) {
    var keyword = keywordIterator.next();
    var adGroup = keyword.getAdGroup();
    var campaign = adGroup.getCampaign();

    // Safety check: don't pause keywords in Brand campaigns
    // (low CTR on brand is acceptable; brand keywords should never be paused automatically)
    if (campaign.getName().toLowerCase().indexOf('brand') !== -1) continue;

    // Log before pausing
    report.push([
      campaign.getName(),
      adGroup.getName(),
      keyword.getText(),
      keyword.getMatchType(),
      keyword.getQualityScore() || 'N/A'
    ]);

    // Pause the keyword
    keyword.pause();
    pausedCount++;
  }

  // Write report to Google Sheets
  var sheet = SpreadsheetApp.openByUrl('YOUR_SHEET_URL').getSheetByName('Zero-Click Paused');
  sheet.clearContents();
  sheet.appendRow(['Campaign', 'Ad Group', 'Keyword', 'Match Type', 'QS', 'Date Paused']);
  sheet.getRange(2, 1, report.length, 5).setValues(report);

  // Email summary
  MailApp.sendEmail({
    to: 'ppc-team@brand.com',
    subject: 'Google Ads: ' + pausedCount + ' zero-click keywords paused',
    body: pausedCount + ' keywords with 0 clicks in 90 days have been paused. ' +
          'Review the full list in the Paused Keywords sheet before any are re-enabled.\n\n' +
          'Sheet: YOUR_SHEET_URL'
  });

  Logger.log('Paused ' + pausedCount + ' zero-click keywords.');
}

Expected impact: Removes 800 inactive keywords from active rotation; reduces account noise; frees analyst time from manual keyword review.

Step 3: Search Term Negative Mining Script (Week 4)

The highest-impact automated task — adds zero-conversion search terms to the shared negative list weekly:

// GOOGLE ADS SCRIPT: Automated Negative Keyword Mining
// Runs: Every Monday at 7:00 AM
// Purpose: Add zero-conversion, multi-click search terms as negatives

function main() {

  var CLICKS_THRESHOLD = 2;      // Minimum clicks before considering for negative
  var CONVERSIONS_THRESHOLD = 0; // Zero conversions (hardcoded in the AWQL below)
  var COST_THRESHOLD = 1000000;  // Minimum £1 spent, in micros — AWQL money predicates use micros
  var NEGATIVE_LIST_NAME = 'Auto-Generated Negatives — Weekly Mining';
  var SHEET_URL = 'YOUR_SHEET_URL';

  // Get (or create) the shared negative keyword list
  var negativeListIterator = AdsApp.negativeKeywordLists()
    .withCondition("Name = '" + NEGATIVE_LIST_NAME + "'")
    .get();

  var negativeList;
  if (negativeListIterator.hasNext()) {
    negativeList = negativeListIterator.next();
  } else {
    negativeList = AdsApp.newNegativeKeywordListBuilder()
      .withName(NEGATIVE_LIST_NAME)
      .build()
      .getResult();
    // A new shared list blocks nothing until attached — apply it to every enabled campaign
    var campaignIterator = AdsApp.campaigns().withCondition("Status = ENABLED").get();
    while (campaignIterator.hasNext()) {
      campaignIterator.next().addNegativeKeywordList(negativeList);
    }
  }

  // Get existing negatives to avoid duplication
  var existingNegatives = {};
  var existingNegIterator = negativeList.negativeKeywords().get();
  while (existingNegIterator.hasNext()) {
    var neg = existingNegIterator.next();
    existingNegatives[neg.getText().toLowerCase()] = true;
  }

  // Mine search terms from last 7 days
  var query =
    'SELECT Query, Clicks, Conversions, Cost ' +
    'FROM SEARCH_QUERY_PERFORMANCE_REPORT ' +
    'WHERE Clicks >= ' + CLICKS_THRESHOLD +
    ' AND Conversions = 0' +
    ' AND Cost >= ' + COST_THRESHOLD +
    ' DURING LAST_7_DAYS';

  var report = AdsApp.report(query);
  var rows = report.rows();

  var termsToAdd = [];
  var newNegativesAdded = [];

  while (rows.hasNext()) {
    var row = rows.next();
    var searchTerm = row['Query'].toLowerCase();
    var clicks = parseInt(row['Clicks']);
    var cost = parseFloat(row['Cost'].replace(',', ''));

    // Skip if already in negative list
    if (existingNegatives[searchTerm]) continue;

    // Skip branded terms (never negative brand keywords)
    if (searchTerm.indexOf('yourbrand') !== -1) continue;
    if (searchTerm.indexOf('your brand') !== -1) continue;

    termsToAdd.push(searchTerm);
    newNegativesAdded.push([searchTerm, clicks, '£' + cost.toFixed(2), new Date()]);
  }

  // Add all new negatives as phrase match (safer than exact — also blocks variations)
  if (termsToAdd.length > 0) {
    negativeList.addNegativeKeywords(
      termsToAdd.map(function(term) { return '"' + term + '"'; }) // Phrase match
    );
  }

  // Write to reporting sheet (Sheet has no appendRows method — use getRange + setValues)
  if (newNegativesAdded.length > 0) {
    var sheet = SpreadsheetApp.openByUrl(SHEET_URL).getSheetByName('Weekly Negatives Added');
    sheet.getRange(sheet.getLastRow() + 1, 1, newNegativesAdded.length, 4)
      .setValues(newNegativesAdded);
  }

  // Send weekly email
  MailApp.sendEmail({
    to: 'ppc-team@brand.com',
    subject: 'Weekly Negative Mining: ' + termsToAdd.length + ' new negatives added',
    body: termsToAdd.length + ' zero-conversion search terms added as phrase-match negatives.\n' +
          'Total wasted spend on these terms last 7 days: £' +
          newNegativesAdded.reduce(function(sum, row) {
            return sum + parseFloat(row[2].replace('£', ''));
          }, 0).toFixed(2) + '\n\n' +
          'Review the full list: ' + SHEET_URL
  });

  Logger.log('Added ' + termsToAdd.length + ' negative keywords from ' +
             newNegativesAdded.length + ' zero-conversion search terms.');
}

Expected impact: 2,100 zero-conversion search terms added as negatives in week 1; ongoing weekly prevention of new waste accumulation. At the account's average CPC of £0.85 and 2 clicks per term, 2,100 terms × £1.70 = £3,570 of weekly wasted spend eliminated in the first week alone.
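The arithmetic behind that estimate can be sanity-checked directly; the £0.85 CPC and 2 clicks per term are the account averages quoted above, and `weeklyWasteEliminated` is just an illustrative helper:

```javascript
// Weekly wasted spend eliminated by negating zero-conversion terms:
// terms blocked × average clicks per term × average CPC
function weeklyWasteEliminated(terms, clicksPerTerm, avgCpc) {
  return terms * clicksPerTerm * avgCpc;
}

console.log(weeklyWasteEliminated(2100, 2, 0.85)); // 3570
```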

Step 4: Differentiated ROAS Targets by Destination Market

Manual change (not a script) but highest strategic impact:

Current: All 12 markets set to Target ROAS = 8.5x

Problem: 8.5x ROAS on a £450 UK short break booking = £52.94 max CPA
         8.5x ROAS on a £2,200 Thailand booking = £258.82 max CPA
         Google's algorithm allocates budget based on ROAS target, not booking value —
         it fills the £52.94 max CPA slots for UK (high volume) before the
         £258.82 slots for Thailand (lower volume) — correct behaviour, wrong setup

Corrected ROAS targets by market (calibrated to average booking value):

Market          | Avg Booking Value | Target ROAS | Max CPA (at Target) | Revenue per £1 Spent
----------------|-------------------|-------------|---------------------|--------------------
Thailand        | £2,200            | 12x         | £183                | £12
Maldives        | £3,800            | 14x         | £271                | £14
Dubai           | £1,800            | 11x         | £164                | £11
Greece          | £1,100            | 10x         | £110                | £10
Spain           | £850              | 9.5x        | £89                 | £9.50
Italy           | £920              | 9.5x        | £97                 | £9.50
Croatia         | £780              | 9x          | £87                 | £9
Portugal        | £720              | 9x          | £80                 | £9
France          | £680              | 8.5x        | £80                 | £8.50
Bali            | £1,950            | 11.5x       | £170                | £11.50
Germany         | £650              | 8x          | £81                 | £8
UK Domestic     | £450              | 7.5x        | £60                 | £7.50

Logic: Higher-value destinations have higher ROAS targets because the absolute margin
per booking is higher; Google's algorithm will correctly prioritise high-ROAS,
high-value bookings over low-ROAS, low-value bookings.
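The relationship in the table above — Max CPA is simply average booking value divided by Target ROAS — can be sketched numerically. `maxCpa` is a hypothetical helper for planning, not part of any Ads API:

```javascript
// Max CPA implied by a Target ROAS: the most you can pay per booking
// while still returning the target revenue multiple on spend.
function maxCpa(avgBookingValue, targetRoas) {
  return avgBookingValue / targetRoas;
}

// Worked examples using the market table's figures
console.log(maxCpa(2200, 12).toFixed(0));  // Thailand → 183
console.log(maxCpa(450, 7.5).toFixed(0));  // UK Domestic → 60
console.log(maxCpa(450, 8.5).toFixed(2));  // UK at the old uniform 8.5x → 52.94
```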

Step 5: Performance Anomaly Detection Script (Month 3)

Alerts the team when any metric deviates significantly from its 7-day rolling average:

// GOOGLE ADS SCRIPT: Anomaly Detection
// Runs: Daily at 9:00 AM
// Purpose: Alert when CPA, ROAS, or CTR deviates >25% from 7-day average

function main() {

  var DEVIATION_THRESHOLD = 0.25; // 25% deviation triggers alert
  var alerts = [];

  // The Stats object does not expose conversion value, so pull it from the
  // campaign performance report when calculating ROAS
  function getConversionValue(campaignName, dateRange) {
    var rows = AdsApp.report(
      "SELECT ConversionValue FROM CAMPAIGN_PERFORMANCE_REPORT " +
      "WHERE CampaignName = '" + campaignName + "' DURING " + dateRange).rows();
    return rows.hasNext()
      ? parseFloat(String(rows.next()['ConversionValue']).replace(/,/g, ''))
      : 0;
  }

  var campaignIterator = AdsApp.campaigns()
    .withCondition("Status = ENABLED")
    .get();

  while (campaignIterator.hasNext()) {
    var campaign = campaignIterator.next();

    // Today's stats
    var todayStats = campaign.getStatsFor("TODAY");
    var todayCTR = todayStats.getCtr();
    var todayCost = todayStats.getCost();
    var todayConversions = todayStats.getConversions();
    var todayCPA = todayConversions > 0 ? todayCost / todayConversions : 0;

    // Only check campaigns with meaningful spend today
    if (todayCost < 10) continue;

    var todayROAS = getConversionValue(campaign.getName(), "TODAY") / todayCost;

    // 7-day average stats
    var weekStats = campaign.getStatsFor("LAST_7_DAYS");
    var avgCTR = weekStats.getCtr();
    var avgCost = weekStats.getCost() / 7;
    var avgConversions = weekStats.getConversions() / 7;
    var avgCPA = avgConversions > 0 ? avgCost / avgConversions : 0;
    var avgROAS = weekStats.getCost() > 0 ?
      getConversionValue(campaign.getName(), "LAST_7_DAYS") / weekStats.getCost() : 0;

    // Check all three metrics the header promises: CPA, ROAS, CTR
    if (avgCPA > 0 && todayCPA > 0 &&
        Math.abs(todayCPA - avgCPA) / avgCPA > DEVIATION_THRESHOLD) {
      alerts.push({
        campaign: campaign.getName(),
        metric: 'CPA',
        today: '£' + todayCPA.toFixed(2),
        average: '£' + avgCPA.toFixed(2),
        deviation: ((todayCPA - avgCPA) / avgCPA * 100).toFixed(0) + '%'
      });
    }

    if (avgROAS > 0 && Math.abs(todayROAS - avgROAS) / avgROAS > DEVIATION_THRESHOLD) {
      alerts.push({
        campaign: campaign.getName(),
        metric: 'ROAS',
        today: todayROAS.toFixed(1) + 'x',
        average: avgROAS.toFixed(1) + 'x',
        deviation: ((todayROAS - avgROAS) / avgROAS * 100).toFixed(0) + '%'
      });
    }

    if (avgCTR > 0 && Math.abs(todayCTR - avgCTR) / avgCTR > DEVIATION_THRESHOLD) {
      alerts.push({
        campaign: campaign.getName(),
        metric: 'CTR',
        today: (todayCTR * 100).toFixed(2) + '%',
        average: (avgCTR * 100).toFixed(2) + '%',
        deviation: ((todayCTR - avgCTR) / avgCTR * 100).toFixed(0) + '%'
      });
    }
  }

  // Send alert email if any anomalies found
  if (alerts.length > 0) {
    var body = 'PERFORMANCE ANOMALIES DETECTED\n\n';
    alerts.forEach(function(alert) {
      body += '⚠️ ' + alert.campaign + ' — ' + alert.metric +
              ' Today: ' + alert.today + ' vs 7-day avg: ' + alert.average +
              ' (' + alert.deviation + ' deviation)\n';
    });
    body += '\nReview the account immediately: https://ads.google.com/';

    MailApp.sendEmail({
      to: 'ppc-team@brand.com',
      subject: '🚨 Google Ads Anomaly: ' + alerts.length + ' metrics deviated >25%',
      body: body
    });
  }
}

Step 6: 60% Time Savings — Before and After

Task                           | Current Time (weekly) | Automated?      | Time After Automation
-------------------------------|-----------------------|-----------------|--------------------------
Budget pacing checks           | 6 hours               | Script          | 0.5 hours (review log)
Zero-conversion keyword review | 5 hours               | Script          | 0 hours (monthly script)
Search term negative mining    | 8 hours               | Script          | 1 hour (review additions)
ROAS anomaly detection         | 4 hours               | Script          | 0.5 hours (review alerts)
Bid adjustment checks          | 5 hours               | Automated rules | 1 hour
Quality Score monitoring       | 3 hours               | Dashboard       | 0.5 hours
Total                          | 31 hours              |                 | 3.5 hours

Time savings: ~89% reduction (31 hours → 3.5 hours per week)

Team time freed for strategic work: 27+ hours/week. Redirected to: creative testing, audience strategy, new market expansion, landing page CRO.


4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The budget pacing script with intraday spend velocity calculation — adjusting budgets dynamically based on the ratio of actual spend to expected spend at that hour of the day — solves the overspend/underspend problem without requiring manual intervention or restrictive daily budget caps that can cause Google's algorithm to under-deliver on high-value days. The search term negative mining script with the branded keyword safety exclusion and the existing-negative deduplication check shows the operational rigour that prevents the two most common script errors: accidentally negating branded terms and duplicating negatives that are already in the list. The differentiated ROAS target table with explicit logic ("higher-value destinations have higher ROAS targets because the absolute margin is higher") shows the strategic understanding that bidding targets must reflect business economics, not arbitrary uniformity.

What differentiates it from mid-level thinking: A mid-level performance marketer would recommend "use automated rules" and "add negative keywords" without writing the script logic, designing the safety checks, or quantifying the expected time saving at the task level. They would not know about the 30-minute script execution limit, the 250,000 API operations per day constraint, or the distinction between zero-click keywords (pausing is safe) and zero-conversion keywords (requires manual review because they may be assisting conversions).

What would make it a 10/10: A 10/10 response would include a Google Sheet template showing the management dashboard generated by these scripts (campaign-level spend vs. target, weekly negative additions, anomaly log, and ROAS by market), plus a script testing protocol (how to run scripts in preview mode before scheduling live execution) that prevents accidental mass-pausing of active keywords.



Question 12: Influencer Marketing and Performance Integration — Turning Influencer Spend into Measurable ROAS

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Gymshark, Charlotte Tilbury, Huel, Graze, Wild Deodorant


The Question

You are a Senior Performance Marketing Manager at a DTC wellness brand selling supplements at £45/month subscription. The brand works with 40 influencers across Instagram and TikTok, spending £180,000/month on influencer fees (ranging from £500 to £25,000 per influencer per month). The influencer programme is managed by the brand partnerships team and historically has been evaluated on "reach" and "engagement rate" — not on conversions or ROAS. The CMO has asked you to integrate influencer spend into the performance marketing measurement framework and answer: "Which of our 40 influencers are generating positive ROI, and which are we overpaying for?" Your challenge: (1) the partnerships team uses a different attribution system (UTM links and discount codes) from the performance team (GA4 and Google Ads), creating data silos where influencer revenue is tracked separately from paid media revenue — making it impossible to see the full customer journey when a user sees an influencer post and later converts via a Google search; (2) discount codes (the primary attribution mechanism for influencer campaigns) are widely shared online — "INFLUENCERNAME20" appears on discount code aggregator sites, meaning the brand pays £45/month to the influencer for every subscriber who uses the code, even if those subscribers found the product through Google, Meta, or organic search rather than the influencer's content; (3) the brand has never run an incrementality test for influencer spend — it is unknown what percentage of influencer-attributed conversions would have occurred anyway without the influencer; (4) the top 5 influencers (collectively receiving £90,000/month — 50% of the budget) are evaluated on a "brand fit" basis by the CMO's team, not on performance data; (5) micro-influencers (5,000–50,000 followers, receiving £500–2,000/month) are collectively outperforming macro-influencers (500,000–2M followers, receiving £8,000–25,000/month) on a per-conversion basis, but this is not 
visible to leadership because the data is not aggregated or compared. Design a measurement framework that makes influencer ROI comparable to paid media ROI and enables data-driven influencer investment decisions.


1. What Is This Question Testing?

  • Influencer attribution mechanics and their limitations — understanding the two primary influencer attribution methods: (a) discount codes (the influencer shares a unique code; when a customer converts using that code, the influencer gets credit — simple, trackable, but subject to code sharing on aggregator sites) and (b) UTM-tagged links (the influencer shares a link with UTM parameters; when a customer clicks and converts in the same session, the conversion is attributed to the influencer — accurate for direct clicks, misses customers who see the content and convert later via a different channel or device); knowing that neither method captures the full contribution of influencer content (influencer posts build awareness that drives searches days or weeks later — not captured by click attribution) and that code sharing inflates influencer-attributed revenue with non-incremental conversions
  • Discount code hygiene and validation — understanding the code sharing problem: when "INFLUENCERNAME20" is posted on Honey, RetailMeNot, MoneySavingExpert, or voucher code aggregators, every customer who searches for a discount code before purchasing finds the influencer's code — and the influencer gets credited for those conversions; knowing the solutions: (a) unique codes that expire after the influencer's content is posted (7–14 day window), (b) codes that are audience-specific (email-gated, only accessible to the influencer's followers), (c) cross-referencing the customer email list of code-users against the influencer's known follower demographic (if a 45-year-old male from rural Scotland converts using a female fitness influencer's code, it is likely a code aggregator conversion, not a genuine influencer referral)
  • Incrementality testing for influencer content — understanding that the correct way to measure influencer ROI is an incrementality test: a matched control group that does not see the influencer's content (either a geographic holdout or a time-based holdout) compared against the exposed group (the influencer's actual audience); if conversion rates are materially higher in the exposed group, the influencer is driving incremental conversions; if conversion rates are identical, the influencer is providing zero incremental value; knowing the practical challenge: you cannot easily exclude a geographic group from seeing an influencer's organic social content (unlike a paid ad where you can set geographic exclusions)
  • CPM-normalised influencer evaluation — understanding that comparing influencer performance by raw conversion count is misleading because influencers have different audience sizes; the correct comparison metric is Cost Per Acquisition (CPA) normalised by reach: (a) calculate the number of unique users who saw each influencer's content (platform-reported reach, not followers — reach is actual views), (b) calculate the attributed conversions per influencer, (c) calculate the CPM (cost per 1,000 impressions) and the cost per attributed conversion for each influencer, (d) compare across influencers by CPA — micro-influencers with small but highly engaged audiences often have lower CPA than macro-influencers with large but passive audiences
  • Influencer data integration into the performance marketing stack — understanding how to unify influencer attribution with paid media attribution: (a) implement influencer-specific UTM parameters in a consistent taxonomy (utm_source=influencer, utm_medium=instagram, utm_campaign=influencer-name, utm_content=post-type) so influencer traffic appears correctly in GA4; (b) import influencer spend as an offline cost in GA4 (or in your attribution tool) so the ROAS calculation includes the influencer fee; (c) use GA4's Multi-Source Attribution report to see which customers touched an influencer link before converting via another channel — showing the assisted conversion contribution of influencer content
  • Tiered influencer investment strategy based on performance data — understanding that influencer spend optimisation follows the same logic as paid media optimisation: identify the highest-performing influencers (lowest CPA, highest ROAS, highest incrementality), scale budget toward them, reduce budget from underperformers, and replace underperformers with similar-profile influencers who may perform better; knowing the difference between brand equity influencers (macro-influencers who build brand awareness and trust, difficult to measure directly) and performance influencers (micro-influencers with engaged communities who drive direct conversions, easier to measure)

2. Framework: Influencer Performance Measurement and ROI Attribution Model (IPMRAM)

  1. Assumption Documentation — Confirm the influencer attribution window: when a customer uses an influencer's discount code 45 days after seeing the content, should the influencer receive credit? The longer the attribution window, the more code-sharing inflation occurs; the shorter the window, the more legitimate long-consideration-cycle conversions are missed; for a £45/month subscription with a 30-day consideration period, a 30-day attribution window is appropriate, with code expiry after 14 days (to limit aggregator sharing)
  2. Constraint Analysis — The partnerships team owns the influencer relationships and may resist performance-based measurement if it threatens their top influencers' budgets; the measurement framework must be introduced as additive (adding performance data to existing relationship data, not replacing it) to avoid team conflict; the CMO's "brand fit" influencers should be evaluated on brand lift metrics, not direct conversion metrics — their purpose is different
  3. Tradeoff Evaluation — Full performance-only evaluation (every influencer must show positive ROAS or is cut) vs. tiered evaluation (performance influencers measured on CPA/ROAS; brand influencers measured on brand lift/sentiment) — the tiered approach is correct because different influencers serve different business objectives; applying a direct conversion ROAS target to a brand awareness macro-influencer will cause you to cut influencers who are building long-term brand equity but not generating same-session conversions
  4. Hidden Cost Identification — Influencer attribution tooling (Grin, CreatorIQ, Impact.com for Influencers, Partnerize) costs £1K–5K/month and provides centralised UTM tracking, discount code management, and performance reporting across all 40 influencers; without a dedicated tool, influencer measurement requires manual data aggregation from 40 separate creator dashboards, which takes 15+ hours per month
  5. Risk Signals / Early Warning Metrics — Code sharing rate per influencer (if more than 30% of code uses come from customers whose session started on a coupon aggregator site rather than from the influencer's social profile, the code has leaked to aggregators; calculate using GA4 referrer data for code-converted sessions); influencer CPA vs. blended paid CPA (alert if any influencer's CPA exceeds 2× the blended paid media CPA — they are significantly less efficient than paid channels and should be paused unless they serve a brand equity objective); incrementality holdout results (if a top-5 influencer's holdout test shows <20% incremental lift, they are providing minimal causal value despite the attribution credit)
  6. Pivot Triggers — If after implementing the measurement framework, 30+ of the 40 influencers show negative ROI on a direct-attribution basis: the entire influencer programme may be less efficient than the numbers suggest, but the correct response is not to cut 30 influencers immediately — it is to run incrementality tests on the top 10 (by spend) to determine true causal contribution before making budget decisions
  7. Long-Term Evolution Plan — Month 1: UTM taxonomy standardisation + influencer platform integration (Grin/CreatorIQ); Month 2: GA4 influencer cost import + influencer CPA dashboard; Month 3: code expiry implementation + aggregator leakage analysis; Month 4: incrementality test on top 5 macro-influencers (holdout by posting schedule); Month 5: tiered evaluation framework presented to CMO + partnerships team; Month 6: budget reallocation toward top performers
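The two early-warning thresholds from the Risk Signals step (30% aggregator share of code uses; CPA above 2× the blended paid CPA) can be sketched as a simple check. The object shape and function name are illustrative, not a real API:

```javascript
// Flag an influencer that trips either risk-signal threshold described above.
// inf: { monthlyFee, conversions, aggregatorConversions } — illustrative shape
function influencerRiskFlags(inf, blendedPaidCpa) {
  var flags = [];
  var aggShare = inf.conversions > 0 ? inf.aggregatorConversions / inf.conversions : 0;
  if (aggShare > 0.30) flags.push('code-leaked-to-aggregators');
  var cpa = inf.conversions > 0 ? inf.monthlyFee / inf.conversions : Infinity;
  if (cpa > 2 * blendedPaidCpa) flags.push('cpa-above-2x-paid');
  return flags;
}

// Example figures (fee £12K, 142 conversions, 51 from aggregators, £85 blended paid CPA)
console.log(influencerRiskFlags(
  { monthlyFee: 12000, conversions: 142, aggregatorConversions: 51 }, 85
)); // flags the code leakage but not the CPA
```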

3. The Answer

Step 1: Standardise UTM Taxonomy and Centralise Data

CURRENT UTM PROBLEM:
Each influencer uses self-generated links with inconsistent parameters:
  Influencer A: utm_source=instagram&utm_campaign=Jan
  Influencer B: utm_medium=ig&utm_campaign=ad
  Influencer C: (no UTM parameters at all)
Result: All three show as "Direct" or "(Other)" in GA4 → influencer contribution invisible

STANDARDISED UTM TAXONOMY (applied to all 40 influencers):
  utm_source=influencer
  utm_medium=instagram OR tiktok OR youtube
  utm_campaign=[influencer-handle] (e.g., "sophiemartin_wellness")
  utm_content=[content-type] (e.g., "reel-product-demo" OR "story-swipe-up" OR "static-post")
  utm_term=[campaign-month] (e.g., "jan2024")

Example full URL:
https://brand.com/subscribe?utm_source=influencer&utm_medium=instagram
&utm_campaign=sophiemartin_wellness&utm_content=reel-product-demo&utm_term=jan2024
&ref=SOPHIE20

This allows GA4 to correctly attribute influencer traffic and shows:
  - Which influencer drove the click
  - Which platform (Instagram vs. TikTok)
  - Which content type (Reel vs. Story vs. Static post)
  - When the click happened

IMPORT INFLUENCER COSTS INTO GA4:
  GA4 → Admin → Data Import → Cost Data → Upload CSV:

  Date,Source,Medium,Campaign,Cost
  2024-01-01,influencer,instagram,sophiemartin_wellness,2000
  2024-01-01,influencer,tiktok,jakefit_uk,1500
  ...

  This makes GA4 calculate ROAS for influencer campaigns using the
  actual influencer fee as the cost denominator — not just counting revenue
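The taxonomy above is easiest to enforce by generating the links centrally rather than letting creators build their own. A minimal sketch — `buildInfluencerUrl` is a hypothetical helper, and the parameter order mirrors the taxonomy:

```javascript
// Build a standardised influencer tracking URL from the UTM taxonomy.
// Links are pre-built and handed to each creator, never self-generated.
function buildInfluencerUrl(base, medium, handle, contentType, term, code) {
  var params = [
    'utm_source=influencer',   // fixed for all creators
    'utm_medium=' + medium,    // instagram | tiktok | youtube
    'utm_campaign=' + handle,  // influencer handle
    'utm_content=' + contentType,
    'utm_term=' + term,        // campaign month
    'ref=' + code              // discount code, for cross-referencing
  ];
  return base + '?' + params.join('&');
}

console.log(buildInfluencerUrl(
  'https://brand.com/subscribe', 'instagram', 'sophiemartin_wellness',
  'reel-product-demo', 'jan2024', 'SOPHIE20'
));
```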

Step 2: Fix Discount Code Attribution — Expiry and Leakage Detection

DISCOUNT CODE HYGIENE:

Current problem: "SOPHIE20" shared on Honey, RetailMeNot — anyone can use it

Fix 1: Set code expiry to 14 days after influencer posts
  - Create code in Shopify: SOPHIE20 → expires 14 days from post date
  - After expiry: code returns "This code has expired" at checkout
  - Impact: Limits aggregator exposure to 14-day window; legitimate followers
    who saw the post still have 14 days to convert
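The 14-day expiry window in Fix 1 reduces to a date comparison. In practice Shopify enforces this at checkout; `isCodeValid` below is just an illustrative sketch of the rule:

```javascript
// A code is valid only within windowDays of the influencer's post date.
function isCodeValid(postDate, redeemDate, windowDays) {
  var msPerDay = 24 * 60 * 60 * 1000;
  var ageDays = (redeemDate - postDate) / msPerDay; // Date subtraction yields ms
  return ageDays >= 0 && ageDays <= windowDays;
}

console.log(isCodeValid(new Date('2024-01-01'), new Date('2024-01-10'), 14)); // true
console.log(isCodeValid(new Date('2024-01-01'), new Date('2024-01-20'), 14)); // false
```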

Fix 2: Detect aggregator leakage per influencer
  GA4 event tracking on discount code application:
  When a customer applies a discount code, track the referrer (the page they came from):

  GA4 custom dimension: 'code_referrer'
  If referrer contains 'honey.com', 'retailmenot', 'topcashback', 'moneysavingexpert':
    → Tag conversion as 'aggregator_assisted'
  If referrer contains 'instagram.com', 'tiktok.com', 'linktr.ee':
    → Tag conversion as 'direct_influencer'
  If referrer is 'none' (direct):
    → Tag as 'unattributed' (could be organic memorisation OR aggregator)

  Monthly audit per influencer:

  Influencer       | Conversions | Direct Influencer | Aggregator | Unattributed
  -----------------|-------------|-------------------|------------|-------------
  Sophie Martin    | 142         | 68 (48%)          | 51 (36%)   | 23 (16%)
  Jake Fit         | 89          | 72 (81%)          | 8 (9%)     | 9 (10%)

  Interpretation:
  Sophie Martin: 36% of her attributed conversions are from aggregators —
    her code has leaked; adjust her credited revenue to exclude aggregator conversions
    Adjusted Sophie revenue: 142 - 51 aggregator = 91 legitimate conversions

  Jake Fit: 9% from aggregators — minimal leakage; his attributed revenue is reliable
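The referrer classification in Fix 2 is a straightforward domain lookup. The domain lists mirror the examples in the text; `classifyCodeReferrer` is illustrative, not a GA4 API:

```javascript
// Classify a code-redemption session by its HTTP referrer.
function classifyCodeReferrer(referrer) {
  var aggregators = ['honey.com', 'retailmenot', 'topcashback', 'moneysavingexpert'];
  var social = ['instagram.com', 'tiktok.com', 'linktr.ee'];
  if (!referrer) return 'unattributed'; // direct: organic recall OR aggregator
  var ref = referrer.toLowerCase();
  function matches(list) {
    return list.some(function (d) { return ref.indexOf(d) !== -1; });
  }
  if (matches(aggregators)) return 'aggregator_assisted';
  if (matches(social)) return 'direct_influencer';
  return 'unattributed';
}

console.log(classifyCodeReferrer('https://www.retailmenot.com/view/brand.com')); // aggregator_assisted
console.log(classifyCodeReferrer('https://www.instagram.com/sophiemartin'));     // direct_influencer
console.log(classifyCodeReferrer(''));                                           // unattributed
```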

Step 3: Build the Influencer Performance Dashboard

INFLUENCER ROI COMPARISON TABLE (Monthly):

Influencer          | Tier   | Followers | Fee/mo | Reach  | Attributed Conv | CPA   | Adj CPA* | ROAS  | Adj ROAS*
--------------------|--------|-----------|--------|--------|-----------------|-------|----------|-------|----------
Sophie Martin       | Macro  | 1.2M      | £12K   | 480K   | 142             | £85   | £132     | 5.3x  | 3.4x
Jake Fit UK         | Macro  | 890K      | £9K    | 350K   | 89              | £101  | £111     | 4.5x  | 4.1x
Emma Wellness       | Mid    | 120K      | £2.5K  | 65K    | 58              | £43   | £49      | 10.5x | 9.2x
Tom Nutrition       | Micro  | 38K       | £800   | 28K    | 31              | £26   | £28      | 17.3x | 16.1x
Laura Fit           | Micro  | 22K       | £500   | 18K    | 24              | £21   | £23      | 21.4x | 19.6x
[35 more...]        |        |           |        |        |                 |       |          |       |

*Adj = adjusted for aggregator-sourced conversions (aggregator conversions excluded)

KEY INSIGHT:
Micro-influencers (Tom Nutrition, Laura Fit) are generating ROAS of 16–21x
vs. macro-influencers at 3.4–4.1x adjusted ROAS.

If we rebalanced budget: move £30K from 3 underperforming macros →
15 more micro-influencers at £2K/month each:
  - Current output from £30K macros: ~250 conversions (avg CPA £120)
  - Projected output from 15 new micros: ~600 conversions (avg CPA £50)

NET GAIN: +350 additional subscribers/month at no increase in total budget

RECOMMENDATION TO CMO:
Maintain top 2 macro-influencers (Sophie, Jake) for brand equity
— but measure them on brand lift surveys, not CPA.
Reduce 3 underperforming macros by £10K each (£30K total).
Add 15 micro-influencers at £2K/month.
Net budget: unchanged at £180K/month.
Net subscribers: +350/month.
Lifetime revenue impact: 350 × £540 LTV (12-month) = £189,000 of incremental lifetime revenue from each month's cohort of new subscribers.
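The adjusted columns in the dashboard table reduce to a small calculation: strip aggregator-sourced conversions before computing CPA and ROAS. The ~£450 revenue per conversion is an assumption (the value implied by the table's ROAS figures), and `adjustedMetrics` is an illustrative helper:

```javascript
// Adjusted CPA/ROAS: exclude aggregator-sourced conversions so influencers
// are compared on conversions their content plausibly drove.
function adjustedMetrics(fee, conversions, aggregatorConversions, revenuePerConversion) {
  var adjConv = conversions - aggregatorConversions;
  return {
    cpa: fee / conversions,                           // raw, inflated by leakage
    adjCpa: fee / adjConv,                            // leakage-adjusted
    adjRoas: (adjConv * revenuePerConversion) / fee   // leakage-adjusted ROAS
  };
}

// Sophie Martin: £12K fee, 142 attributed conversions, 51 from aggregators
var sophie = adjustedMetrics(12000, 142, 51, 450);
console.log(sophie.adjCpa.toFixed(0));  // 132
console.log(sophie.adjRoas.toFixed(1)); // 3.4
```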

Step 4: Incrementality Test for Top 5 Macro-Influencers

INFLUENCER INCREMENTALITY TEST DESIGN:

Challenge: You cannot exclude a geographic group from seeing organic social content
(unlike paid ads where you control geographic targeting).

Solution: Posting-schedule holdout test

Influencer: Sophie Martin (£12K/month)

Standard schedule: Posts every Tuesday and Friday
Test design:
  Weeks 1-2 (Control): Sophie does NOT post (dark period)
                        Measure: Baseline subscription rate
  Weeks 3-4 (Treatment): Sophie posts on normal schedule
                          Measure: Post-content subscription rate

Compare:
  Baseline conversion rate (weeks 1-2): 2.1% of traffic converts
  Post-content conversion rate (weeks 3-4): 2.8% of traffic converts

  Incremental lift = (2.8 - 2.1) / 2.1 = 33% incremental
  True incremental conversions = 142 total × 33% = 47 conversions
  True incremental CPA = £12,000 / 47 = £255

  Compare to paid media CPA: £85 (Meta Ads)

  CONCLUSION: Sophie's true incremental CPA of £255 is 3x the paid media CPA.
  She is significantly less efficient than paid media on a direct conversion basis.

  However, if Sophie drives brand awareness that feeds paid media conversions
  (measured by branded search volume lift during her posting weeks),
  her full contribution is higher than the direct CPA suggests.

  Decision framework:
  - If branded search volume does NOT increase during Sophie's posting weeks:
    She is brand equity for an audience that would have converted anyway.
    Negotiate fee reduction or replace with micro-influencers.

  - If branded search volume DOES increase during Sophie's posting weeks:
    She is building awareness that feeds the full funnel.
    Maintain at current fee but measure via brand lift metrics going forward.
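The holdout arithmetic above can be expressed as one function. Note the text's £255 comes from rounding to 47 conversions before dividing; computing with the unrounded lift gives ~£254. `incrementality` is an illustrative helper:

```javascript
// Posting-schedule holdout: lift of the treatment conversion rate over
// baseline, applied to attributed conversions to estimate true incrementality.
function incrementality(baselineCvr, treatmentCvr, attributedConversions, fee) {
  var lift = (treatmentCvr - baselineCvr) / baselineCvr;
  var incrementalConversions = attributedConversions * lift;
  return {
    lift: lift,
    incrementalConversions: incrementalConversions,
    incrementalCpa: fee / incrementalConversions
  };
}

// Sophie Martin: 2.1% baseline, 2.8% treatment, 142 attributed conversions, £12K fee
var result = incrementality(0.021, 0.028, 142, 12000);
console.log((result.lift * 100).toFixed(0) + '%');       // 33%
console.log(Math.round(result.incrementalConversions));  // 47
console.log(Math.round(result.incrementalCpa));          // ~254 (£255 in the text)
```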

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The aggregator leakage detection methodology — tracking the HTTP referrer when a discount code is applied and classifying the conversion as "direct influencer" vs. "aggregator" vs. "unattributed" — is the specific technical solution to the most prevalent influencer attribution fraud that costs brands millions annually; this demonstrates the technical depth to design a measurement system that catches the specific behaviour it is trying to prevent. The posting-schedule holdout test design — using the influencer's own posting schedule to create a natural control period — solves the practical impossibility of excluding an audience from organic content by measuring the before-and-after conversion rate rather than requiring a concurrent control group.

What differentiates it from mid-level thinking: A mid-level performance marketer would recommend "UTM parameters and discount codes" — which is the current broken setup — without designing the aggregator leakage detection, the code expiry mechanism, the adjusted CPA/ROAS calculation, or the posting-schedule holdout. They would not know how to import influencer costs into GA4 for normalised ROAS comparison against paid media, and would not have the data to make the £30K budget reallocation argument with a quantified revenue impact.

What would make it a 10/10: A 10/10 response would include a complete influencer brief template that specifies the UTM parameters the influencer must use (pre-built, non-editable links provided to each creator via a campaign management platform), the code expiry clause in the influencer contract, and a Looker Studio dashboard showing all 40 influencers' adjusted ROAS on a single view with sortable columns by CPA, ROAS, and incremental lift.



Question 13: Retail Media and Amazon Advertising — Building a Profitable Amazon Ads Strategy for a £10M Annual Revenue Brand

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Procter & Gamble, Unilever, Oral-B, Durex, Nutribullet


The Question

You are a Senior Performance Marketing Manager at a UK consumer goods brand selling nutrition products (protein bars, supplements, healthy snacks) on Amazon UK. Your brand generates £10M annual revenue through Amazon, but your Amazon Advertising spend is £1.8M/year with a reported ROAS of 5.6x. After a deeper audit, you discover: (1) Amazon reports ROAS using "14-day total attribution" — meaning a conversion is attributed to your ad if the customer purchased any of your products within 14 days of clicking the ad, regardless of whether they would have purchased anyway; this significantly overstates the actual incremental ROAS; (2) 45% of your Amazon Ads spend is on Sponsored Product ads targeting your own branded keywords — you are paying Amazon to advertise to people who were already searching specifically for your brand, cannibalising organic search you would have received for free; (3) your Sponsored Display campaigns are targeting competitor product detail pages (PDPs) to intercept customers looking at rivals, but the conversion rate on these placements is 0.3% vs. 2.1% on keyword-targeted placements — you are paying a premium CPM to reach customers who have no interest in your brand at that moment; (4) your top-selling product (Chocolate Protein Bar 12-pack, generating £3.2M/year of the £10M total) has only 4.2 stars from 380 reviews; competitor products have 4.7–4.9 stars from 2,000–8,000 reviews — your product is disadvantaged in both organic ranking and conversion rate; (5) you have no "new-to-brand" (NTB) measurement in place — you cannot tell what percentage of your Amazon Ads revenue comes from first-time Amazon buyers of your products vs. existing customers. Design an Amazon Advertising strategy that improves true incremental ROAS and prioritises new customer acquisition over existing customer retention.


1. What Is This Question Testing?

  • Amazon Advertising architecture — Sponsored Products, Brands, and Display — understanding the three main Amazon Ads formats: Sponsored Products (keyword-triggered ads appearing in search results and on PDPs — the highest-volume, highest-intent format; correct for brand and generic keyword targeting); Sponsored Brands (headline banner ads appearing at the top of search results, featuring the brand logo and multiple products — correct for brand awareness and new product launches); Sponsored Display (audience-based ads appearing on and off Amazon, including competitor PDPs — correct for remarketing and competitor conquesting but has structural limitations for cold audiences); knowing when each format is appropriate and what the expected ROAS benchmarks are for each
  • Branded keyword spend on Amazon and cannibalism — understanding that bidding on your own brand keywords on Amazon is fundamentally different from bidding on brand keywords on Google; on Google, if you do not bid on your own brand, competitors can and will bid on it and steal your branded traffic; on Amazon, if someone searches "YourBrand protein bar," your organic listing appears prominently at position 1–3 by default — there is much lower competitive risk of your branded search traffic being stolen; therefore, paying Amazon to place a Sponsored Product ad above your own organic listing is often paying for organic traffic you would have received free — the branded spend CPA is low (people were going to buy anyway) but the incremental value is near zero
  • New-to-brand (NTB) measurement and customer acquisition — understanding that Amazon's "New-to-Brand" metric (available in Sponsored Brands and Sponsored Display campaigns) identifies conversions from customers who have not purchased from your brand on Amazon in the past 12 months; NTB metrics are the closest proxy for customer acquisition (as opposed to retention) available in Amazon Ads; knowing that a high NTB rate (>60%) on a campaign indicates it is acquiring new customers; a low NTB rate (<30%) indicates it is primarily serving as retention advertising — important context when evaluating whether the spend is serving an acquisition or retention objective
  • Amazon organic ranking mechanics and the review moat — understanding that Amazon's A9 algorithm ranks products based on: (a) sales velocity (how fast the product sells — higher velocity = higher rank), (b) conversion rate (what % of product page visitors purchase — higher conversion rate = higher rank), (c) review count and star rating (4.7+ star products are algorithmically favoured over 4.2 star products — the rating gap is a structural organic ranking disadvantage); knowing that closing the review gap (from 380 to 2,000+ reviews with a 4.7+ average) is not a paid media problem — it requires a product experience, review generation, and customer satisfaction programme — but it directly impacts paid media efficiency because a 4.2-star product converts at a lower rate, making every click more expensive
  • Amazon DSP and off-Amazon retargeting — understanding that Amazon DSP (Demand-Side Platform) extends Amazon's first-party purchase and browsing data to off-Amazon placements (other websites, streaming TV, apps) — allowing you to retarget users who viewed your Amazon product page but did not purchase, or to target users who have purchased competitor products in the past (using Amazon's purchase history data); knowing the difference between Amazon Ads (self-serve, keyword-triggered, on-Amazon) and Amazon DSP (managed-service, audience-based, on and off Amazon); knowing that Amazon DSP requires a minimum spend of ~£10K/month and is typically sold through Amazon's own sales team or an Amazon agency partner
  • ACOS vs. ROAS and true incremental efficiency — understanding the Amazon-specific ACOS (Advertising Cost of Sale) metric: ACOS = Advertising Spend / Advertising Revenue × 100; if ACOS = 18%, for every £100 of revenue generated by ads, £18 was spent on ads; ROAS is the reciprocal: ROAS = 100 / ACOS = 100 / 18 ≈ 5.56x; knowing that the "right" ACOS target depends on the product margin (a product with 60% gross margin can sustain ACOS up to 60%; above that it is unprofitable); knowing that Amazon's 14-day attribution window inflates ROAS by including organic conversions that happened to occur within 14 days of an ad click
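The ACOS arithmetic above can be sketched as a few helpers. The 18% ACOS example and the margin-based targets come from the text; the function names and the 55% default share of margin (midpoint of the 50–60% range) are my own choices:

```python
def acos(ad_spend: float, ad_revenue: float) -> float:
    """ACOS as a fraction: ad spend divided by ad-attributed revenue."""
    return ad_spend / ad_revenue

def roas(acos_fraction: float) -> float:
    """ROAS is the reciprocal of ACOS."""
    return 1 / acos_fraction

def break_even_acos(gross_margin: float) -> float:
    """Above this ACOS, ad-attributed sales generate zero net contribution."""
    return gross_margin

def target_acos(gross_margin: float, share_of_margin: float = 0.55) -> float:
    """Target 50-60% of gross margin; 55% midpoint assumed here."""
    return gross_margin * share_of_margin

example = acos(18, 100)            # 18% ACOS as a fraction
print(round(roas(example), 2))     # reciprocal -> ~5.56x
print(round(target_acos(0.40), 2)) # 40% margin -> 22% target ACOS
```

At a 40% gross margin this reproduces the framework's 20–24% target ACOS band (ROAS 4.2–5x).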

2. Framework: Amazon Advertising Incrementality and New Customer Acquisition Model (AAINCAM)

  1. Assumption Documentation — Confirm gross margin on the top-selling product: Chocolate Protein Bar 12-pack at £3.2M revenue with what gross margin? If margin is 40% (£1.28M gross profit), the maximum sustainable ACOS is 40% (spending 40p per £1 of revenue means you spend exactly what you profit — zero net contribution); the Target ACOS for a profitable strategy is typically 50–60% of gross margin (leaving 40–50% of margin as contribution); at 40% gross margin, Target ACOS should be approximately 20–24% (ROAS 4.2–5x)
  2. Constraint Analysis — The branded keyword spend (45% of £1.8M = £810K/year) is the most obvious waste reduction opportunity; however, cutting branded keyword spend entirely on Amazon carries more risk than on Google — on Amazon, if you pause branded Sponsored Products, competitors' Sponsored Product ads may appear in the space your organic listing occupies (sponsored ads appear above organic results); the correct approach is to reduce branded spend to the minimum needed to maintain top-of-search positioning against competitor bids
  3. Tradeoff Evaluation — Redirect branded spend to generic keyword growth (higher volume, higher competition, lower ROAS — but true new customer acquisition) vs. redirect to Sponsored Brands (brand awareness format with NTB tracking — shows what % of spend generates first-time buyers); Sponsored Brands with NTB measurement is correct for redeploying branded spend because it provides the acquisition metric you need to justify the spend
  4. Hidden Cost Identification — Improving the review count from 380 to 2,000+ requires Amazon Vine enrolment (Amazon's official review generation programme — top reviewers receive products in exchange for honest reviews; cost: £100/product + cost of product samples; time: 6–12 months to accumulate reviews); this is not a paid media cost but directly impacts the conversion rate (and therefore the effective CPA) of all Amazon Ads campaigns
  5. Risk Signals / Early Warning Metrics — ACOS by campaign type (alert if Sponsored Products generic campaigns exceed 25% ACOS — above this threshold at current gross margins, they generate negative contribution); NTB rate by campaign (alert if NTB rate drops below 40% on campaigns designated as acquisition campaigns — they are serving existing customers, not acquiring new ones); organic ranking trend for top 5 keywords (track weekly — if organic rank drops when paid budget is reduced, there may be a stronger organic–paid interdependency than assumed)
  6. Pivot Triggers — If after reducing branded spend by 50%, sales velocity drops proportionally (i.e., the branded traffic was never truly organic — it was entirely paid), the branded keyword spend was actually defending your branded search traffic against competitor conquesting; restore branded spend and investigate which competitors are bidding on your brand terms to determine the minimum defensive spend needed
  7. Long-Term Evolution Plan — Month 1: reduce branded Sponsored Products by 40% (monitor organic rank closely); Month 2: launch NTB measurement across all Sponsored Brands; Month 3: audit Sponsored Display competitor targeting ROI; Month 4: enrol top product in Amazon Vine; Month 5: launch generic keyword expansion with NTB-focused campaigns; Month 6: review full restructure impact on ACOS and NTB rate

3. The Answer

Step 1: Eliminate Branded Keyword Cannibalisation — Reduce Branded Spend by Roughly Half, Not to Zero

BRANDED KEYWORD ANALYSIS:

Current branded spend: £810K/year (45% of £1.8M total)
Branded ROAS (reported): 18x (looks excellent, but heavily inflated by
  organic conversions within the 14-day attribution window)
Branded ACOS: 5.6% (looks very efficient)

TRUE INCREMENTAL BRANDED VALUE TEST:
Week 1-2: Pause all branded Sponsored Products (defensive period)
Measure: Does organic rank for "YourBrand protein bar" maintain position 1-3?
         Does weekly sales velocity change?

If organic rank MAINTAINS without paid:
→ Branded paid was cannibalising organic (pure waste)
→ Reduce branded Sponsored Products by 70%, maintain only defensive minimum

If organic rank DROPS when branded paid is paused:
→ Branded paid is contributing to organic rank algorithm
  (Amazon's A9 considers ad-driven sales velocity)
→ Reduce branded Sponsored Products by 30% only

CONSERVATIVE APPROACH (without waiting for test):
Reduce branded Sponsored Products from £810K to £400K:
→ Maintain Exact Match [YourBrand] [YourBrand protein bar] at current bids
  (defend against competitors bidding on your brand)
→ Pause Broad Match branded keywords
  (these trigger for generic terms and waste budget)
→ Pause Sponsored Display on your own brand terms
  (completely non-incremental)

Budget freed: £410K/year
Redeploy: £200K to Sponsored Brands (NTB measurement)
          £150K to generic keyword Sponsored Products
          £60K to Amazon DSP retargeting (off-Amazon)
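The two-week pause test above reduces to a simple decision rule. The 70%/30% cut levels come from the text; the -10% velocity tolerance is an illustrative assumption of mine:

```python
def branded_reduction(organic_rank_held: bool, velocity_change_pct: float) -> float:
    """
    Decision rule from the two-week branded pause test:
    - organic rank holds and sales velocity stays roughly flat: branded paid
      was largely cannibalising organic -> cut ~70% of the branded budget
    - organic rank drops (or velocity collapses): branded paid is feeding
      A9's sales-velocity signal -> cut only ~30%
    velocity_change_pct is sales change during the pause, e.g. -0.05 for a
    5% dip; the -10% tolerance below is an assumed threshold, not from the text.
    """
    if organic_rank_held and velocity_change_pct > -0.10:
        return 0.70  # fraction of branded budget to cut
    return 0.30

# Applied to the £810K/year branded budget:
cut = branded_reduction(organic_rank_held=True, velocity_change_pct=-0.02)
print(cut, 810_000 * (1 - cut))  # 70% cut leaves £243K of defensive spend
```

The conservative £810K-to-£400K reduction in the text sits between these two outcomes, which is exactly why it is safe to apply without waiting for the test.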

Step 2: Launch New-to-Brand (NTB) Measurement Across All Campaigns

ENABLE NTB METRICS (available in Sponsored Brands and Sponsored Display):

In Amazon Ads console → Campaigns → Sponsored Brands campaign →
Columns → Add: "New-to-brand orders," "New-to-brand sales,"
               "% new-to-brand orders," "New-to-brand CPA"

NTB METRICS TARGET BY CAMPAIGN:

Campaign Type              | NTB Rate Target | Rationale
---------------------------|-----------------|-------------------------------------
Branded Sponsored Brands   | >80% NTB        | Brand searchers likely first-time buyers
Generic Keyword Sponsored  | >60% NTB        | Generic searches attract new customers
Competitor Conquesting     | >70% NTB        | Competitors' customers are all new-to-brand
Retargeting (Display)      | <20% NTB        | Retargeting existing customers is expected

If any campaign shows NTB rate below target:
→ It is primarily serving existing customers
→ Evaluate whether this is acceptable for that campaign's objective
→ If the campaign is designated as "acquisition," reduce budget and
  reinvest in higher-NTB campaigns

MONTHLY NTB DASHBOARD:

Campaign              | Spend  | NTB Conv | NTB Rate | NTB CPA | NTB ROAS
----------------------|--------|----------|----------|---------|--------
Generic Protein Bar   | £18K   | 312      | 67%      | £58     | 6.2x
Generic Nutrition     | £12K   | 198      | 72%      | £61     | 5.9x
Competitor Conquesting| £8K    | 89       | 78%      | £90     | 4.0x
Branded Sponsored Brd | £15K   | 245      | 84%      | £61     | 5.9x

The NTB CPA (£58–90) is the true cost of acquiring a new Amazon customer.
At LTV of £X (calculate from repeat purchase rate), assess whether NTB CPA is sustainable.
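The NTB dashboard rows above can be recomputed and policed against the Step 2 targets with a couple of helpers. The spend and order figures are the sample numbers from the table; the helper and dictionary names are my own:

```python
# Minimum acceptable NTB rate by campaign type, per the Step 2 target table.
NTB_RATE_TARGETS = {"branded_sb": 0.80, "generic_sp": 0.60, "conquesting": 0.70}

def ntb_cpa(spend: float, ntb_orders: int) -> float:
    """The true cost of acquiring a new Amazon customer."""
    return spend / ntb_orders

def misses_target(campaign_type: str, ntb_rate: float) -> bool:
    """True if an acquisition-designated campaign is mostly serving existing customers."""
    return ntb_rate < NTB_RATE_TARGETS[campaign_type]

print(round(ntb_cpa(18_000, 312)))        # Generic Protein Bar row -> ~£58
print(misses_target("generic_sp", 0.67))  # 67% NTB clears the 60% bar
```

A campaign that flags True here is the candidate for the budget reduction and reinvestment described above.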

Step 3: Fix Sponsored Display — Competitor PDP Conquesting

CURRENT: Targeting competitor PDPs
Conversion rate: 0.3% (extremely low — customer is mid-purchase-decision for another product)
ACOS: Estimated 35–45% (high, due to low conversion rate)

ANALYSIS:
A customer viewing "Competitor Protein Bar 12-pack" has already indicated
purchase intent for that product. Your Sponsored Display ad appearing on
that page is "interrupting" a near-complete purchase.
The 0.3% conversion rate reflects this high-friction interruption context.

BETTER USES FOR SPONSORED DISPLAY BUDGET:

Option A: Retargeting your own product page viewers (highest conversion rate)
  Target: Users who viewed your Chocolate Protein Bar PDP but did not purchase
  Expected conversion rate: 1.8–2.5% (they were interested, just needed nudging)
  Message: "Still thinking about it? Now with 10% off" → discount + urgency

Option B: Category conquesting (contextually adjacent, not direct competitor)
  Target: Product pages for "protein shakes," "gym supplements," "healthy snacks"
  These users are in the category but not specifically committed to a competitor
  Expected conversion rate: 0.8–1.2% (better than direct competitor targeting)

Option C: Amazon audience remarketing (off-Amazon via Amazon DSP)
  Target: Users who viewed your brand on Amazon but did not purchase,
          served ads on news sites, social media, apps
  Expected conversion rate: 1.5–2.0% (warm audience, non-Amazon context)

BUDGET REALLOCATION FOR THE COMPETITOR-PDP SPONSORED DISPLAY BUDGET (currently £8K/month):
  Retargeting own PDP viewers: £3K (highest conversion rate)
  Category conquesting: £3K (better than direct competitor PDP)
  Pause direct competitor PDP targeting: -£2K freed for generic keywords
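Ranking the three Sponsored Display options by expected conversions per £1,000 makes the reallocation mechanical. The conversion rates are the midpoints of the ranges quoted above; the £1.20 CPC is an illustrative assumption, not a figure from the text:

```python
CPC = 1.20  # assumed average Sponsored Display CPC in £ (illustrative)

# Midpoint conversion rates from the three options described above.
options = {
    "retarget_own_pdp": 0.0215,   # midpoint of 1.8-2.5%
    "category_conquest": 0.010,   # midpoint of 0.8-1.2%
    "competitor_pdp": 0.003,      # current 0.3%
}

def conversions_per_1k(cvr: float, cpc: float = CPC) -> float:
    """Expected conversions bought with £1,000 at a given conversion rate."""
    clicks = 1_000 / cpc
    return clicks * cvr

ranked = sorted(options, key=lambda k: conversions_per_1k(options[k]), reverse=True)
print(ranked)  # own-PDP retargeting first, competitor PDPs last
```

At any plausible CPC the ordering is the same, which is why the £2K/month of competitor-PDP spend is the part to pause.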

Step 4: Address the Review Gap — Amazon Vine Enrolment

CURRENT STATE:
Your product: 4.2 stars, 380 reviews
Top competitor: 4.8 stars, 6,200 reviews

CONVERSION RATE IMPACT OF REVIEW GAP:
Industry data: Every 0.1 star increase → approximately 5–8% conversion rate improvement
Gap: 0.6 stars → estimated 30–48% lower conversion rate vs. best competitors
At £45 average order value and 2.1% conversion rate:
  Current: 100 clicks × 2.1% = 2.1 conversions × £45 = £94.50 revenue per 100 clicks
  At 4.7 stars: 100 clicks × 3.0% = 3.0 conversions × £45 = £135 revenue per 100 clicks
  CPC impact: same CPC, 43% more revenue per click
  ACOS impact: reduces from 22% to 15% — significant improvement

AMAZON VINE ENROLMENT:
  Programme: Amazon Vine (available to brands enrolled in Amazon Brand Registry)
  How it works: Amazon sends your product to trusted Vine Voice reviewers at no cost to them;
    reviewers provide honest reviews within 30 days
  Cost: £100/ASIN enrolment fee + cost of product units provided (10–30 units typical)
    For Chocolate Protein Bar 12-pack: £100 + (20 units × £8 cost of goods) = £260 total
  Expected outcome: 20–30 verified reviews within 60 days; often higher star rating
    (Vine reviewers are thorough but fair — if product quality is good, expect 4.5–4.8 stars)

PARALLEL TRACK: Post-purchase email for reviews
  Amazon prohibits incentivised or selectively targeted review solicitation, but permits neutral review requests
  Use the Amazon "Request a Review" button (available in Seller Central) within 5–30 days of delivery
  This sends an Amazon-templated review request email — compliant with Amazon's policies
  Expected response rate: 2–5% of orders → 60–150 additional reviews/month at current volume

TARGET: 1,000 reviews at 4.5+ stars within 12 months
  Impact: Organic ranking improvement + conversion rate improvement =
  20–30% CPC efficiency gain on all Sponsored Products campaigns
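The review-gap arithmetic in Step 4 can be modelled directly. The 5–8% lift per 0.1 star, £45 AOV, and 2.1% baseline conversion rate come from the text; the 6.5% midpoint default is my assumption:

```python
def cvr_after_rating_gain(cvr: float, star_gain: float,
                          lift_per_tenth: float = 0.065) -> float:
    """Apply a linear relative CVR lift per 0.1-star gain (6.5% = midpoint of 5-8%)."""
    increments = star_gain / 0.1
    return cvr * (1 + lift_per_tenth * increments)

def revenue_per_100_clicks(cvr: float, aov: float = 45.0) -> float:
    """Revenue bought with 100 clicks at a given conversion rate."""
    return 100 * cvr * aov

base = revenue_per_100_clicks(0.021)    # current 2.1% CVR -> £94.50
lifted = revenue_per_100_clicks(0.030)  # ~3.0% CVR at 4.7 stars, per the text
print(base, lifted, round(lifted / base - 1, 2))  # ~43% more revenue per click
```

Because ad spend per click is unchanged, the same 43% revenue lift is what drives the quoted ACOS improvement from 22% to roughly 15%.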

Step 5: Full Restructure — Budget Reallocation Summary

BEFORE (£1.8M/year, 5.6x reported ROAS):
  Branded Sponsored Products: £810K (45%) ← primary waste
  Generic Sponsored Products: £450K (25%)
  Sponsored Display (competitor PDPs): £360K (20%)
  Sponsored Brands: £180K (10%)

AFTER (£1.8M/year, target 4.5x TRUE incremental ROAS):
  Branded Sponsored Products: £360K (20%) ← reduced 56%
  Generic Sponsored Products: £630K (35%) ← increased, NTB-focused
  Sponsored Display (retargeting + category): £270K (15%) ← restructured
  Sponsored Brands (NTB measurement): £360K (20%) ← increased
  Amazon DSP (off-Amazon retargeting): £180K (10%) ← new channel

TRUE ROAS IMPROVEMENT:
  Old: 5.6x reported (inflated by branded cannibalisation + 14-day attribution)
  New: 4.5x true incremental (with NTB filter, reduced branded waste)

  In absolute terms:
  Old: £10M revenue from £1.8M spend (but much of the revenue would have occurred anyway)
  New: £10M+ revenue from £1.8M spend, with higher proportion coming from new customers
  NTB revenue target: >55% of ad-attributed revenue from first-time buyers (up from unknown)
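A quick sanity check on the before/after tables above: both plans must total the same £1.8M/year, and the branded cut should match the table's "reduced 56%". Figures are in £K/year, taken from the tables:

```python
before = {"branded_sp": 810, "generic_sp": 450, "display": 360, "sb": 180}
after  = {"branded_sp": 360, "generic_sp": 630, "display": 270, "sb": 360, "dsp": 180}

total_before, total_after = sum(before.values()), sum(after.values())
branded_cut = 1 - after["branded_sp"] / before["branded_sp"]
print(total_before, total_after, f"{branded_cut:.0%}")  # both 1800, ~56% cut
```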

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The branded keyword cannibalisation test design — pause branded Sponsored Products for 2 weeks, measure whether organic rank and sales velocity change, then make the structural decision on how much to reduce rather than arbitrarily cutting all branded spend — shows the evidence-based approach that prevents the most common Amazon Ads mistake (cutting branded spend entirely and watching organic rank collapse). The NTB campaign performance table with explicit NTB rate targets by campaign type (>80% for branded Sponsored Brands, <20% for retargeting) transforms reported ROAS from a vanity metric into an acquisition measurement framework that the business can use to justify Amazon Ads as a growth channel, not just a retention channel.

What differentiates it from mid-level thinking: A mid-level performance marketer would reduce branded spend and shift to generic keywords without NTB measurement, without the Sponsored Display conversion rate analysis, and without the review gap quantification. They would not calculate the conversion rate impact of closing the review gap (0.6 stars × 8% per 0.1 star = up to 48% conversion rate improvement potential) or link it directly to ACOS improvement on Sponsored Products.

What would make it a 10/10: A 10/10 response would include a specific Amazon advertising architecture diagram showing the relationship between Sponsored Products (keyword-triggered acquisition), Sponsored Brands (top-of-funnel NTB), Sponsored Display (retargeting warm audiences), and Amazon DSP (off-Amazon audience extension), with the correct budget percentage for each funnel stage at the brand's current scale. It would also include the ACOS break-even calculation for each product in the portfolio (Target ACOS = Gross Margin × 50–60%), giving the team a data-driven bid management framework for each SKU.



Question 14: Performance Marketing in B2B — LinkedIn and ABM Strategy for Enterprise Software with 6-Month Sales Cycles

Difficulty: Senior | Role: Performance Marketing Manager | Level: Senior | Company Examples: Salesforce, Workday, ServiceNow, Darktrace, Palo Alto Networks


The Question

You are a Senior Performance Marketing Manager at a B2B enterprise software company selling security software at £120,000 average contract value (ACV). Your sales cycle is 6–9 months. You have a target account list (TAL) of 500 enterprise accounts identified by the sales team as having the highest revenue potential. Currently your paid media programme consists of Google Ads (£400K/year) and LinkedIn Ads (£350K/year) — but the sales team reports that only 8% of the 500 TAL accounts have engaged with any marketing content or ads in the past 12 months, and only 15 of the 500 target accounts are active in the sales pipeline. The marketing and sales teams are misaligned: (1) marketing defines a "Marketing Qualified Lead" (MQL) as anyone who downloads a whitepaper or attends a webinar — regardless of whether they are from a target account or a decision-maker level; (2) sales ignores 70% of MQLs because they are from non-target accounts or from individual contributors without buying authority; (3) LinkedIn Ads are targeting a broad audience (25–55, IT Security interest, UK) rather than the specific 500 TAL accounts; (4) Google Ads are targeting generic security software keywords ("cybersecurity software," "network security tools") rather than the Account-Based Marketing (ABM) strategy the sales team is using; (5) there is no personalisation — every TAL account sees the same generic ads and landing pages regardless of their industry, company size, or stage in the sales cycle. Design an ABM-led performance marketing strategy that aligns with the 6–9 month sales cycle and focuses media spend on the 500 target accounts.


1. What Is This Question Testing?

  • Account-Based Marketing (ABM) principles and performance marketing integration — understanding that ABM is a B2B marketing strategy that focuses resources on a defined set of target accounts rather than generating broad MQL volume; the performance marketing implications: targeting is account-specific (not audience-based), measurement is account engagement and pipeline velocity (not MQL volume or CPA), and the funnel is inverted (identify the right accounts first, then engage them — not generate leads and hope some are from the right accounts); knowing the three ABM tiers: one-to-one (highly personalised campaigns for 10–20 strategic accounts), one-to-few (customised campaigns for 50–100 accounts in a similar segment), one-to-many (programmatic ABM for 200–500 accounts with shared characteristics)
  • LinkedIn Ads for enterprise B2B targeting — understanding that LinkedIn Ads have unique B2B targeting capabilities unavailable on Google or Meta: company name targeting (target employees of specific companies by company name — directly enables TAL-based advertising), job function + seniority targeting (target "Director-level IT Security Decision Makers" at specific companies), and matched audiences (upload a list of company names or professional email addresses to target them specifically); knowing that LinkedIn's CPMs for B2B targeting are significantly higher than other platforms (£35–80 CPM for enterprise IT audiences) but the audience quality is unmatched for B2B enterprise accounts
  • Buying committee mapping and multi-stakeholder targeting — understanding that enterprise software purchases involve a buying committee of 5–15 people: Economic Buyer (CFO/CRO — approves the budget), Technical Buyer (CISO/VP of IT — evaluates the product technically), End Users (IT security analysts — will use the product daily), and Champions/Influencers (internal advocates who sponsor the purchase); knowing that effective ABM targets all members of the buying committee with role-appropriate messaging (economic buyers see ROI and risk reduction; technical buyers see capability and integration; end users see ease of use and productivity)
  • Account engagement scoring and pipeline velocity — understanding that in a 6–9 month sales cycle, CPA is the wrong primary metric (you cannot measure cost per acquisition in a channel if it takes 9 months to realise a conversion); the correct metrics are: account engagement rate (% of TAL accounts that have engaged with any marketing or sales touchpoint), marketing sourced pipeline (£ of pipeline attributed to marketing activities — measurable at the opportunity stage, not wait for closed revenue), and account coverage (how many stakeholders within each target account has marketing reached); knowing that these metrics require CRM integration (Salesforce, HubSpot) to track account-level activity
  • LinkedIn and Google Ads budget allocation for ABM — understanding that the correct budget split between awareness (LinkedIn — reaching the 500 accounts with brand and thought leadership content) and consideration (Google — capturing intent from target account employees who are actively researching solutions) depends on the accounts' position in the buying journey; accounts that are unaware of your brand need awareness spend (LinkedIn); accounts where employees are actively searching for solutions need consideration spend (Google); the 500-account TAL likely has both unaware accounts (majority) and actively researching accounts (minority) — the budget split should reflect this
  • Intent data and buying signal identification — understanding that third-party intent data (from platforms like Bombora, G2, TechTarget, 6sense) identifies which companies are showing a spike in research activity for topics related to your product category; an account showing "spike in cybersecurity software research" is significantly more likely to be in-market for your product than an account showing no research activity; knowing how to use intent data to prioritise which of the 500 TAL accounts to focus immediate outreach on (the 20–40 accounts showing active buying intent get the highest ad frequency, the most personalised content, and the fastest sales follow-up)

2. Framework: ABM-Led Performance Marketing and Pipeline Velocity Model (ALMPPVM)

  1. Assumption Documentation — Confirm the TAL composition: are all 500 target accounts "tier 1" (equal revenue potential and equal priority), or are they tiered (50 strategic accounts at highest priority; 200 accounts at medium priority; 250 accounts at lower priority)? The budget allocation and personalisation approach differs significantly by tier; also confirm whether the 500 accounts are defined at the company level or the business unit level (a large bank might have 5 business units that are each separate buying decisions)
  2. Constraint Analysis — LinkedIn Ads' company name targeting requires the target accounts to be on LinkedIn (all 500 enterprise accounts will be on LinkedIn); but LinkedIn's targeting coverage is not 100% — LinkedIn can reach approximately 70–80% of the employees at any given company (those with LinkedIn profiles); the 20–30% without LinkedIn profiles cannot be reached via LinkedIn Ads and require alternative outreach (email, direct mail, events)
  3. Tradeoff Evaluation — Full ABM (all £750K of media focused exclusively on the 500 TAL accounts — maximum focus, zero demand generation for inbound) vs. hybrid ABM + demand generation (60% of budget for TAL accounts, 40% for inbound demand generation — maintains inbound pipeline while building the ABM programme); for a company where the sales team is already aligned to 500 TAL accounts and is ignoring 70% of inbound MQLs, a pure ABM focus is correct for at least 12 months to demonstrate the programme's value
  4. Hidden Cost Identification — ABM requires intent data subscriptions (Bombora: £2K–5K/month; 6sense: £5K–15K/month; G2 intent: £1K–3K/month); CRM-to-LinkedIn integration for account matching (LinkedIn Matched Audiences via API or CRM sync); and ABM content production (personalised content by industry vertical and by buying committee role — 5 verticals × 4 stakeholder roles = 20 content variants to produce at £2K–5K per variant = £40K–100K content production cost); these costs sit outside the media budget but are prerequisites for ABM effectiveness
  5. Risk Signals / Early Warning Metrics — Account engagement rate per quarter (alert if fewer than 30% of TAL accounts show any marketing touchpoint engagement in Q1 — the coverage model needs adjustment); pipeline created from TAL accounts (alert if no new opportunities are sourced from TAL accounts in the first 90 days — either the targeting is wrong or the content is not resonating); LinkedIn ad frequency on TAL accounts (target 5–8 impressions per target account employee per month; alert if below 3 — too little reach to build awareness — or above 15 — oversaturation)
  6. Pivot Triggers — If after 6 months of ABM, account engagement has increased (30%+ of TAL accounts touched) but pipeline created has not increased proportionally, the engaged accounts may be at too early a buying stage (awareness, not consideration); pivot to content that accelerates the buying journey (comparison guides, ROI calculators, vendor evaluation frameworks) rather than awareness content (thought leadership, trends reports)
  7. Long-Term Evolution Plan — Month 1–2: TAL tiering + LinkedIn Matched Audiences setup; Month 3: intent data integration (Bombora or 6sense); Month 4: buying committee content mapped by role and stage; Month 5: personalised landing pages per industry vertical; Month 6: full ABM programme review vs. pipeline targets

3. The Answer

Step 1: Tier the 500 Target Accounts and Allocate Budget by Tier

TAL TIERING:

Tier 1 — Strategic Accounts (50 accounts)
  Criteria: ACV potential >£200K; sales team has existing relationship or warm introduction;
            account shows intent signal from Bombora (if available)
  Budget allocation: £5,000/account/year in media = £250K total
  Approach: One-to-one personalised campaigns; custom landing pages; dedicated content
  LinkedIn frequency target: 10–15 impressions/person/month (high frequency)

Tier 2 — Priority Accounts (200 accounts)
  Criteria: ACV potential £80K–200K; no existing relationship; may show some intent signal
  Budget allocation: £1,250/account/year = £250K total
  Approach: One-to-few campaigns grouped by industry vertical; semi-custom content
  LinkedIn frequency target: 6–10 impressions/person/month

Tier 3 — Target Accounts (250 accounts)
  Criteria: ACV potential <£80K; on TAL because of strategic segment, not individual deal size
  Budget allocation: £400/account/year = £100K total
  Approach: Programmatic ABM; standard content personalised by industry category only
  LinkedIn frequency target: 3–6 impressions/person/month

Total: £600K of the £750K media budget allocated to the TAL; the remaining £150K
       is reserved for Google keyword demand capture from TAL IP ranges (Step 3)
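The tier budget arithmetic above is mechanical and worth encoding so it can be re-run whenever the TAL changes. All figures come from the tiering table; the variable names are my own:

```python
# Per-tier account counts and per-account annual media budgets from Step 1.
tiers = {
    "tier1": {"accounts": 50,  "per_account": 5_000},
    "tier2": {"accounts": 200, "per_account": 1_250},
    "tier3": {"accounts": 250, "per_account": 400},
}

tier_totals = {name: t["accounts"] * t["per_account"] for name, t in tiers.items()}
tal_total = sum(tier_totals.values())    # £600K committed to the TAL
google_reserve = 750_000 - tal_total     # £150K kept for demand capture
print(tier_totals, tal_total, google_reserve)
```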

Step 2: LinkedIn Ads — Matched Audiences for All 500 Accounts

LINKEDIN MATCHED AUDIENCES SETUP:

Method 1: Company List Upload
  Create a CSV with the 500 target company names:
  company_name
  "Barclays Bank"
  "Lloyds Banking Group"
  "HSBC Holdings"
  ... (500 rows)

  Upload to: LinkedIn Campaign Manager → Plan → Audiences → Matched Audiences →
  Company List → Upload

  LinkedIn matches to companies in its database and creates an audience of
  all employees at those 500 companies.

  Expected matched audience size: 500 companies × avg 200 LinkedIn-active employees
  = ~100,000 total employees

  Refinement: Layer job function + seniority targeting ON TOP of the company list:
  Function: IT, Information Technology, Information Security
  Seniority: Director, VP, C-Level, Manager

  Refined audience size: ~12,000–18,000 decision-makers at target accounts

Method 2: Contact List Upload (if you have named contact data from sales CRM)
  Export named contacts from Salesforce who work at TAL accounts
  Upload hashed email addresses to LinkedIn
  LinkedIn matches emails to LinkedIn profiles

  Advantage: Targets specific named individuals (the 5 people in the buying committee
  at each account) rather than all employees
  Accuracy: Higher (named contacts), but smaller audience (only known contacts)

CAMPAIGN STRUCTURE BY BUYING COMMITTEE ROLE:

Campaign A: Economic Buyers (CFO, CRO, COO)
  LinkedIn targeting: C-Level + VP, Finance + Risk functions, at TAL companies
  Content: "The Cost of a Data Breach: £X Average Impact for Enterprises in 2024"
           Business case framing — risk reduction + ROI
  CTA: "Download the CFO's Guide to Cyber Risk Management"
  Expected CPL: £120–180 (high seniority, lower volume)

Campaign B: Technical Buyers (CISO, VP IT Security, IT Director)
  LinkedIn targeting: Director+, IT Security + Information Technology, at TAL companies
  Content: "How [Company] Reduced Detection Time from 72 Hours to 4 Hours"
           Technical proof — case study from similar enterprise
  CTA: "Request a Technical Demo"
  Expected CPL: £90–130

Campaign C: Champions/Influencers (IT Security Managers, Senior Analysts)
  LinkedIn targeting: Manager + Senior, Information Security, at TAL companies
  Content: "The Security Team's Playbook for Zero Trust Architecture"
           Practical guide — helps their day job, builds affinity
  CTA: "Download the Playbook"
  Expected CPL: £40–70 (lower seniority, higher volume)

BUDGET ALLOCATION (LinkedIn, £350K/year):
  Tier 1 — Economic Buyers campaigns: £80K/year
  Tier 1 — Technical Buyer campaigns: £100K/year
  Tier 2 — Technical + Champion campaigns: £100K/year
  Tier 3 — Champion + Awareness campaigns: £70K/year
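Method 2 above mentions uploading hashed email addresses for contact-list matching. A minimal sketch of preparing that file follows; SHA-256 over lowercased, trimmed emails is a common convention for hashed audience uploads, but confirm LinkedIn's current file spec before relying on it. The column name and file path are my own placeholders:

```python
import csv
import hashlib

def hash_email(email: str) -> str:
    """Normalise (trim + lowercase) then SHA-256 hash an email address."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def write_contact_list(emails, path="tal_contacts_hashed.csv"):
    """Write one hashed email per row for a Matched Audiences contact upload."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email_sha256"])  # assumed header name
        for e in emails:
            writer.writerow([hash_email(e)])

# Normalisation makes the hash insensitive to case and stray whitespace:
print(hash_email("  CISO@Example.com ") == hash_email("ciso@example.com"))
```

Hashing before upload also keeps raw CRM contact data out of the ad platform export.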

Step 3: Google Ads — Capture Intent from TAL IP Ranges

CURRENT: Targeting "cybersecurity software," "network security tools" —
         any company can click; most clicks are from non-TAL accounts

ABM-ADJUSTED GOOGLE ADS STRATEGY:

Option A: IP-Range Targeting (Requires third-party ABM tool)
  Tools: Terminus, 6sense, Demandbase (all integrate with Google Ads)
  Method: Upload your TAL company IP ranges → the ABM tool serves
          Google Display Network ads only to users browsing from those IP ranges
  Result: Google Ads reach is restricted to employees at your 500 target accounts
  Limitation: IP targeting is imprecise (VPNs, remote workers, shared office IPs)
  Use case: Google Display for TAL-specific awareness advertising

Option B: TAL-Specific Keyword Strategy (No third-party tool required)
  Instead of generic keywords, identify keywords that TAL employees specifically use:

  "Palo Alto alternative" — searching for a competitor alternative (high intent)
  "[Competitor brand] pricing" — evaluating competitor cost (high intent, budget-conscious)
  "SOC automation software" — specific solution category your product addresses
  "cybersecurity for financial services" — industry-specific intent (matches your TAL verticals)
  "zero trust security implementation" — technical decision-making intent

  These keywords are lower volume but higher intent AND more likely to be searched
  by IT security professionals (your TAL audience) than by students researching the topic.

  Budget: £150K/year (down from £400K — more focused on high-intent, TAL-relevant terms)
  Remaining £250K: Redirect to LinkedIn (higher ABM precision)

  ROAS EXPECTATION ON THESE KEYWORDS:
  Volume: Lower (niche terms)
  Conversion rate: Higher (users are further in the buying journey)
  Likely outcome: Fewer total conversions but higher proportion from TAL accounts

Step 4: Redefine MQL and Align with Sales

CURRENT MQL DEFINITION (causing 70% sales rejection):
  Anyone who downloads a whitepaper or attends a webinar
  → Includes non-TAL accounts, students, competitors, analysts, job seekers
  → Sales ignores 70% of them (correctly)

REVISED MQL DEFINITION (ABM-aligned):
  A contact qualifies as an MQL if ALL of the following are true:
  ✓ Works at a TAL account (CRM match required)
  ✓ Job title includes buying authority (Manager or above in IT/Security/Finance)
  ✓ Has shown 2+ engagement signals in the last 90 days:
      - Visited website (3+ pages or 4+ minutes time-on-site)
      - Downloaded a technical asset (case study, technical guide, ROI calculator)
      - Attended a webinar or event
      - Responded to an email sequence
      - Engaged with a LinkedIn ad (click, not just impression)

  MARKETING QUALIFIED ACCOUNT (MQA) — better metric for ABM:
  An account qualifies as an MQA when:
  ✓ 3+ contacts within the account have shown engagement signals
  ✓ At least 1 engagement is from a director-level or above contact
  ✓ Account shows intent signal from Bombora (if subscribed)

  MQA is the handoff to sales — not individual MQL.
  This aligns with how enterprise deals actually work (buying committees, not individual champions).
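
The MQL and MQA rules above reduce to a pair of boolean checks. A minimal sketch in Python — the `Contact` fields, title lists, and thresholds are hypothetical stand-ins for whatever the CRM actually stores, and the optional Bombora intent criterion is left out:

```python
from dataclasses import dataclass

# Hypothetical title keyword lists; a real CRM would use normalised job levels.
AUTHORITY_TITLES = ("manager", "director", "vp", "head", "ciso", "cfo")
DIRECTOR_PLUS = ("director", "vp", "head", "ciso", "cfo")

@dataclass
class Contact:
    title: str
    signals_90d: int      # engagement signals in the last 90 days
    on_tal_account: bool  # CRM-matched to a Target Account List entry

def is_mql(c: Contact) -> bool:
    """Revised MQL: ALL three conditions must hold."""
    has_authority = any(t in c.title.lower() for t in AUTHORITY_TITLES)
    return c.on_tal_account and has_authority and c.signals_90d >= 2

def is_mqa(contacts: list[Contact]) -> bool:
    """MQA: 3+ engaged contacts, at least one director-level or above."""
    engaged = [c for c in contacts if c.on_tal_account and c.signals_90d >= 1]
    has_senior = any(t in c.title.lower() for c in engaged for t in DIRECTOR_PLUS)
    return len(engaged) >= 3 and has_senior
```

The key design point mirrors the text: `is_mqa` operates on the account's contact list, not on any individual, so a single enthusiastic champion can never trigger a sales handoff on their own.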

  SALES ALIGNMENT SLA:
  When an account becomes an MQA:
  → Marketing: Notifies account's assigned AE in Salesforce (Chatter + email)
  → Sales: AE must attempt outreach within 5 business days
  → Feedback loop: AE marks the MQA as "Accepted" or "Rejected" with reason
  → Rejection feedback returns to marketing for audience refinement

Step 5: Account Engagement Dashboard — Replace CPA with Pipeline Metrics

MONTHLY ABM PERFORMANCE DASHBOARD (replace MQL/CPA dashboard):

ACCOUNT COVERAGE:
Total TAL accounts: 500
Accounts reached by marketing (any touchpoint): 187 (37.4%)
Accounts reached by marketing AND sales: 42 (8.4%)
Accounts with 3+ engaged contacts (MQA): 23 (4.6%)

PIPELINE METRICS:
MQAs generated this month: 8
Pipeline created from MQAs: £1,200,000
Average ACV of pipeline opportunities: £150,000
Stage: Discovery (3), Solution Demo (3), Proposal (2)
Expected close date (earliest): Q3 2024

BY TIER:
Tier 1 (50 accounts): 32% reached, 12% MQA, £800K pipeline
Tier 2 (200 accounts): 28% reached, 4% MQA, £320K pipeline
Tier 3 (250 accounts): 18% reached, 1% MQA, £80K pipeline

MARKETING-SOURCED PIPELINE TARGET:
Q1: £2M (establishing the programme)
Q2: £5M (ABM programme maturing)
Q3: £8M (full pipeline contribution)
Q4 (Closed Revenue from ABM-sourced pipeline): £3M (assuming 6-9 month cycle)

This replaces: "100 MQLs at £375 CPA" (which sales ignored)
With: "£2M pipeline in 90 days, 23 MQAs from 500 TAL accounts"
(which sales wants more of)

4. Interview Score: 9.5 / 10

Why this demonstrates senior-level maturity: The redefinition of MQL to Marketing Qualified Account (MQA) — requiring 3+ engaged contacts within the account including at least one director-level contact — directly solves the sales alignment problem at its root (the wrong success metric was creating the wrong behaviour) rather than optimising around a broken metric. The LinkedIn campaign structure by buying committee role (Economic Buyers seeing risk/ROI content; Technical Buyers seeing case studies; Champions seeing practical guides) demonstrates the understanding that enterprise deals require multi-stakeholder engagement, and that serving all stakeholders the same generic ad is as ineffective as it sounds.

What differentiates it from mid-level thinking: A mid-level performance marketer would switch LinkedIn targeting to "IT Security" and increase LinkedIn budget — correct direction, wrong depth. They would not tier the 500 TAL accounts, would not set role-specific campaigns by buying committee function, would not design the MQA handoff SLA with the sales team, and would not replace CPA with pipeline metrics in the performance dashboard. Most critically, they would not redefine the MQL (the root cause of the marketing-sales misalignment).

What would make it a 10/10: A 10/10 response would include a specific Bombora intent data integration workflow (showing how intent signals flow from Bombora into the CRM and trigger automated changes in LinkedIn Ads bid modifiers for accounts showing active buying intent), and a personalised landing page framework showing how Tier 1 accounts see a page personalised to their industry and company size (using Clearbit or 6sense to identify the visitor's company from IP and render personalised content dynamically).



Question 15: Cross-Channel Budget Optimisation — Allocating a £12M Annual Media Budget Using Marginal ROAS Analysis

Difficulty: Elite | Role: Performance Marketing Manager | Level: Staff / Director | Company Examples: Asos, Auto Trader, Farfetch, Cazoo, OVO Energy


The Question

You are Head of Performance Marketing at a large UK e-commerce brand with a £12M annual paid media budget. You manage six channels: Google Brand (£1.8M), Google Non-Brand (£3.2M), Meta Ads (£2.4M), LinkedIn Ads (£800K), Programmatic Display (£1.2M), and Affiliate (£2.6M). The CFO has asked you to present a budget reallocation recommendation for the next financial year, either maintaining the £12M total or arguing for an increase or decrease. The board wants to understand: which channels are generating the highest marginal return (the return on the next additional £1 of spend), and whether the current allocation is optimal. Your current ROAS by channel: Google Brand 22x, Google Non-Brand 6.2x, Meta Ads 3.8x, LinkedIn 2.4x, Programmatic Display 1.9x, Affiliate 7.1x (but 35% of affiliate revenue is from existing customers who would have purchased anyway). You also know: Google Brand is impression-share capped (you are not appearing for 8% of brand searches due to budget constraints); Google Non-Brand has significant Quality Score issues (average QS 5/10); Meta Ads have room to scale but show ROAS decline from 3.8x to 3.2x when budget increases 30%; Affiliate contains £910K/year of low-incrementality spend; LinkedIn drives lower direct ROAS but your sales team reports that 40% of enterprise accounts that closed in the last 12 months had a LinkedIn touchpoint in their journey. Build a marginal ROAS analysis and present a data-driven budget recommendation.


1. What Is This Question Testing?

  • Marginal ROAS analysis and diminishing returns modelling — understanding that the correct framework for budget allocation is not average ROAS (which measures historical efficiency) but marginal ROAS (the return on the next incremental £1 of spend); a channel can have a high average ROAS but a low marginal ROAS once it has saturated its audience — and the reverse also occurs: Google Brand at 22x average ROAS is impression-share capped by budget, so its marginal ROAS remains roughly 22x because the next £1 simply captures part of the 8% of brand searches currently missed; conversely, a channel with a lower average ROAS might have a high marginal ROAS if it has abundant headroom and is only constrained by budget (e.g., Google Non-Brand at 6.2x average ROAS but with QS improvements that could lift it to 8.5x — the next £1 with improved QS generates more incremental revenue)
  • Channel interdependencies and the brand halo effect — understanding that channels do not operate independently; Google Brand depends on awareness built by other channels (if you cut Meta Ads entirely, branded search volume may decline over time as fewer users are exposed to the brand and form a search intent); LinkedIn's contribution to enterprise closed deals is not captured by its direct 2.4x ROAS — it is a mid-funnel channel that influences purchase decisions that are ultimately closed through other channels; the full-value analysis must account for these interdependencies, not just direct attribution
  • Affiliate programme incrementality adjustment — understanding that affiliate ROAS of 7.1x is flattering but potentially misleading; if 35% of affiliate revenue is from existing customers purchasing anyway (non-incremental), the true incremental ROAS of affiliate is: 7.1x × (1 - 0.35) = 4.6x; and the true incremental cost of the non-incremental spend is: 35% × £2.6M = £910K spent to attribute revenue that would have happened anyway — this £910K is the most obvious waste in the budget and should be the first reallocation candidate
  • Quality Score optimisation as a budget multiplier — understanding that Google Non-Brand at QS 5/10 means you are paying approximately 25–35% more per click than you would at QS 7/10; this means the current £3.2M budget is effectively buying £2.1M–2.4M of clicks at fair market price, and the remaining £0.8M–1.1M is a QS penalty surcharge; if QS were improved to 7/10, the same £3.2M budget would buy proportionally more clicks and generate more revenue — or the same click volume could be achieved for less spend, freeing budget for other channels
  • Budget decision framework — maintain, increase, or decrease — understanding how to present a budget decision to a CFO: the recommendation should be based on marginal ROAS analysis (if any channel's marginal ROAS is above the company's hurdle rate — the minimum acceptable ROAS — additional spend in that channel is value-creating; if marginal ROAS is below the hurdle rate, the current spend is destroying value and should be reduced); knowing how to calculate the expected return on budget increases (if Meta's marginal ROAS at current budget is 3.2x, and the hurdle rate is 3.0x, additional Meta spend is value-creating until marginal ROAS drops to 3.0x)
  • Strategic presentation to the CFO — confidence and uncertainty — understanding that the CFO needs more than a ROAS table — they need to understand the analytical framework (how you arrived at the recommendation), the key assumptions (which estimates carry the most uncertainty), the decision criteria (what marginal ROAS threshold you used and why), and the monitoring plan (how you will know if the reallocation is working within 90 days); knowing that a CFO will probe the weakest assumption in the model, so you must pre-identify and pre-address the key uncertainties

2. Framework: Marginal ROAS Budget Optimisation and CFO Presentation Model (MRBOCFOM)

  1. Assumption Documentation — The most critical assumption: does the CFO's minimum ROAS hurdle rate follow from the company's gross margin (breakeven ROAS = 1/margin; e.g., at 45% gross margin, any channel returning less than ~2.2x ROAS is generating negative gross profit contribution)? Or is there a strategic growth ROAS threshold (e.g., the company is in growth mode and accepts 2.0x ROAS as long as it is above breakeven)? The budget recommendation changes significantly depending on this hurdle rate
  2. Constraint Analysis — The brand search impression share cap (8% of brand searches missed) is a hard constraint that has a known, calculable opportunity cost: 8% × total brand search impressions × average brand conversion rate × average order value = missed revenue; this is the most defensible budget increase argument because the opportunity is directly quantifiable from Google Ads data
  3. Tradeoff Evaluation — Conservative reallocation (take from the most obviously wasted channel — affiliate non-incremental spend — and add to the most obviously capped channel — brand search) vs. aggressive restructuring (rebuild the Non-Brand campaign structure to improve QS, redirect programmatic display budget to Google, and fundamentally change the channel mix); the conservative reallocation is faster (visible impact within 60 days) and lower risk; the aggressive restructuring is higher impact but takes 3–6 months to realise
  4. Hidden Cost Identification — Reducing affiliate spend by £910K (the non-incremental portion) will temporarily reduce reported revenue even if the true incremental impact is zero — because the affiliate-attributed revenue disappears from the dashboard even though the real revenue (from customers who would have purchased anyway) still occurs through direct channels; the CFO must understand that reported revenue will appear to drop even as true incremental revenue is maintained
  5. Risk Signals / Early Warning Metrics — Brand impression share trend (alert if brand impression share drops below 90% after any budget adjustment — meaning the Google Brand campaign is under-funded); channel ROAS trend at new budget level (alert if any channel's ROAS declines >20% within 6 weeks of a budget increase — indicating the marginal ROAS model was incorrect for that channel); total revenue trend (alert if total revenue declines more than the affiliate revenue reduction — meaning the reallocation is net-negative)
  6. Pivot Triggers — If after the affiliate spend reduction (£910K removed), the direct channel revenue does not increase proportionally (meaning the non-incremental customers were actually incremental in some way the model missed): restore £300K of affiliate spend while re-running the incrementality analysis with a more rigorous holdout test
  7. Long-Term Evolution Plan — Q1: affiliate programme restructuring (implement tiered commission to reduce non-incremental spend, improve code hygiene); Q2: Google Non-Brand QS improvement (campaign restructuring, new landing pages); Q3: measure QS-adjusted performance; Q4: Meta budget scaling (increase by £400K to capture headroom identified); Annual review: full marginal ROAS rebuild with 12 months of new data

3. The Answer

Step 1: Build the Marginal ROAS Analysis

STEP 1: Calculate Adjusted ROAS (removing known distortions)

Channel           | Reported ROAS | Adjustment                    | Adjusted ROAS
------------------|---------------|-------------------------------|---------------
Google Brand      | 22x           | Cap at impression share — no  | 22x (but 8%
                  |               | adjustment; ROAS is real but  | opportunity exists)
                  |               | volume is constrained         |
Google Non-Brand  | 6.2x          | QS penalty adjustment:        | 8.1x (if QS fixed)
                  |               | at QS 5/10, paying ~30% CPC   |
                  |               | premium; adjusted ≈ 6.2 × 1.3 |
Meta Ads          | 3.8x          | iOS ATT adjustment: est 20%   | 3.0x true blended
                  |               | attribution overstatement     |
LinkedIn          | 2.4x          | Add assisted value: 40% of    | ~3.5x (estimated)
                  |               | closed enterprise deals had   | when assisted conv
                  |               | LinkedIn touchpoint           | are included
Programmatic      | 1.9x          | Brand lift adjustment:        | 2.3x (estimated)
Display           |               | 20% uplift to other channels  | when halo effect
                  |               | from display exposure         | is attributed
Affiliate         | 7.1x          | Non-incremental adjustment:   | 4.6x true incremental
                  |               | 35% of revenue non-incremental|

STEP 2: Estimate Marginal ROAS (the return on the NEXT £1 in each channel)

Channel           | Adjusted ROAS | Saturation Level | Marginal ROAS Estimate | Rationale
------------------|---------------|------------------|------------------------|---------------------
Google Brand      | 22x           | Constrained (IS) | 22x                   | 8% impression gap =
                  |               | 8% headroom      |                        | same ROAS on next £1
Google Non-Brand  | 8.1x (adj)    | Medium           | 9.0x (with QS fix)    | QS improvement raises
                  |               |                  |                        | efficiency of next £1
Meta Ads          | 3.0x (adj)    | 30% headroom     | 2.8x                  | Diminishing returns:
                  |               |                  |                        | 3.8x → 3.2x at +30%
LinkedIn          | 3.5x (adj)    | Significant      | 3.5x                  | Underutilised; TAL
                  |               | headroom         |                        | accounts not saturated
Programmatic      | 2.3x (adj)    | Unknown          | 2.0x                  | Display scales poorly
                  |               |                  |                        | due to audience limits
Affiliate         | 4.6x (adj)    | Saturated        | 1.5x                  | New commission is on
                  |               | (non-incr. waste)|                        | increasingly non-incr.
                  |               |                  |                        | customers at margin

STEP 3: Rank by Marginal ROAS (highest first)

Rank | Channel         | Marginal ROAS | Budget Decision
-----|-----------------|---------------|--------------------
1    | Google Brand    | 22x           | INCREASE — clear opportunity cost in 8% IS gap
2    | Google Non-Brand| 9.0x*         | TRIM + fix QS (efficiency problem, not a budget problem)
3    | LinkedIn        | 3.5x          | INCREASE — underutilised, TAL audience available
4    | Meta Ads        | 2.8x          | INCREASE cautiously — 2.8x sits above the 2.5x hurdle but headroom is finite
5    | Programmatic    | 2.0x          | REDUCE — low marginal ROAS, better alternatives available
6    | Affiliate       | 1.5x          | SIGNIFICANTLY REDUCE — non-incremental spend is destroying value

*QS improvement is an efficiency play, not a budget increase
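
The ranking itself is just a sort plus a hurdle-rate comparison. A sketch using the marginal estimates from the table and the 2.5x hurdle rate the answer later presents to the CFO:

```python
HURDLE = 2.5  # blended ROAS threshold: above -> invest, below -> divest

marginal = {
    "google_brand": 22.0, "google_nonbrand": 9.0, "linkedin": 3.5,
    "meta": 2.8, "programmatic": 2.0, "affiliate": 1.5,
}

# Rank channels by marginal ROAS, highest first, and apply the hurdle test
ranked = sorted(marginal.items(), key=lambda kv: kv[1], reverse=True)
for rank, (channel, m) in enumerate(ranked, start=1):
    verdict = "invest" if m > HURDLE else "divest"
    print(f"{rank}. {channel:<16} {m:>5.1f}x  {verdict}")
```
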

Step 2: Calculate the Opportunity Cost of the Brand Impression Share Gap

BRAND IMPRESSION SHARE OPPORTUNITY CALCULATION:

Current brand search impressions received: 92% of total brand search volume
Missed impressions: 8%

Data from Google Ads:
  Total brand impressions (last 12 months): 2,400,000
  Missed due to budget: 8% × 2,400,000 = 192,000 missed impressions

Attribution path from brand impression to purchase:
  Brand impression → Click (8% CTR on branded): 15,360 potential clicks
  Clicks → Purchase (12% conversion rate): 1,843 potential purchases
  Average order value: £85

  Missed revenue: 1,843 × £85 = £156,655/year
  Additional spend needed to capture this: £156,655 / 22x ROAS = £7,120/year

RECOMMENDATION: Increase Google Brand budget by £8,000/year (less than 0.1% of total budget)
to capture the 8% impression share gap.

This is the highest marginal ROAS opportunity in the entire account
(22x ROAS, only costs £8K/year to capture).
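
The impression-share funnel above can be checked end-to-end; the small difference from the £156,655 figure is just the truncation of 1,843.2 purchases to 1,843:

```python
total_brand_impressions = 2_400_000  # last 12 months, from Google Ads
missed_share = 0.08                  # impression share lost to budget caps
brand_ctr = 0.08                     # CTR on branded terms
brand_cvr = 0.12                     # click-to-purchase conversion rate
aov = 85                             # average order value, GBP
brand_roas = 22                      # Google Brand campaign ROAS

missed_impressions = total_brand_impressions * missed_share  # 192,000
missed_clicks = missed_impressions * brand_ctr               # 15,360
missed_purchases = missed_clicks * brand_cvr                 # 1,843.2
missed_revenue = missed_purchases * aov                      # ~£156.7K/year
spend_to_capture = missed_revenue / brand_roas               # ~£7.1K/year

print(f"Missed revenue £{missed_revenue:,.0f}; "
      f"spend to capture £{spend_to_capture:,.0f}")
```
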

Step 3: Quantify Affiliate Non-Incremental Waste

AFFILIATE WASTE CALCULATION:

Total affiliate spend: £2,600,000/year
Non-incremental revenue percentage: 35%
Non-incremental spend: £2,600,000 × 35% = £910,000/year

This £910,000 generates attributed revenue that would have occurred anyway —
the commission is pure cost with zero incremental revenue.

If we reduce affiliate spend by £910K:
  Reported affiliate revenue will fall by 35% × (£2,600,000 × 7.1x) = £6,461,000
  But the non-incremental revenue itself still occurs (through direct/organic channels) —
  it simply stops being attributed to affiliate and reappears as "direct" revenue

IMPACT MODELLING:
  Before: Affiliate reported revenue = £18,460,000 (£2.6M × 7.1x)
  After affiliate reduction (£1,690,000 spend, non-incremental removed):
    Incremental affiliate revenue: £1,690,000 × 4.6x = £7,774,000
    Non-incremental revenue (now appearing as direct): £910,000 × 7.1x = £6,461,000

  The £6.461M moves from "affiliate" to "direct" in the attribution —
  total revenue is unchanged, but cost is reduced by £910K.

  Net saving: £910,000/year in eliminated non-incremental spend

PRESENT TO CFO:
  "Reducing affiliate spend by £910K will appear to reduce reported revenue by ~£6.5M
   in our attribution model. However, this revenue will shift to the 'direct' channel —
   customers who were purchasing anyway will continue to purchase.
   We recommend running a 90-day holdout test to confirm incrementality before making
   the full reduction, starting with a £200K test reduction."
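
The "revenue moves, cost disappears" arithmetic can be modelled explicitly; 4.6x is the rounded incremental ROAS used in the analysis above:

```python
affiliate_spend = 2_600_000
reported_roas = 7.1
non_incremental_share = 0.35
incremental_roas = 4.6  # 7.1 x (1 - 0.35), rounded as in the table above

cut = affiliate_spend * non_incremental_share  # £910K of spend removed
reattributed = cut * reported_roas             # ~£6.46M leaves the affiliate
                                               # column, reappears as "direct"
remaining_incremental = (affiliate_spend - cut) * incremental_roas  # ~£7.77M

print(f"Spend removed:          £{cut:,.0f}")
print(f"Re-attributed revenue:  £{reattributed:,.0f} (shifts to direct, not lost)")
print(f"Remaining incremental:  £{remaining_incremental:,.0f}")
```
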

Step 4: Compile Budget Reallocation Recommendation

RECOMMENDED BUDGET REALLOCATION (Total maintained at £12M):

Channel           | Current Budget | Change       | New Budget | Rationale
------------------|----------------|--------------|------------|--------------------------------------------
Google Brand      | £1,800,000     | +£50,000     | £1,850,000 | Close 8% IS gap (~£8K) + brand growth headroom; 22x marginal
Google Non-Brand  | £3,200,000     | -£200,000    | £3,000,000 | Reinvest in QS improvements, not more spend
Meta Ads          | £2,400,000     | +£400,000    | £2,800,000 | 2.8x marginal ROAS > 2.5x hurdle rate
LinkedIn          | £800,000       | +£300,000    | £1,100,000 | Enterprise TAL underserved; 3.5x adj ROAS
Programmatic      | £1,200,000     | -£150,000    | £1,050,000 | 2.0x marginal ROAS; lowest priority
Affiliate         | £2,600,000     | -£400,000    | £2,200,000 | Phase 1: remove most obvious non-incremental
                  |                | (Phase 1 of  |            | (Phase 2: reduce further after holdout test)
                  |                | planned £910K|            |
                  |                | reduction)   |            |
TOTAL             | £12,000,000    | £0           | £12,000,000| Budget maintained; mix optimised

PROJECTED IMPACT (12-month forecast):

Metric                      | Current    | Projected  | Change
----------------------------|------------|------------|--------
Total true incremental ROAS | 3.9x*      | 4.5x       | +15%
Total true incremental rev  | £46.8M     | £54.0M     | +15%
Google Brand IS             | 92%        | 98%        | +6pp
LinkedIn TAL account reach  | 8%         | 25%        | +17pp
Non-incremental affiliate   | £910K      | £364K      | -60%
QS (Google Non-Brand)       | 5/10       | 7/10       | +2 pts

*Blended true incremental ROAS = total revenue attributable to paid media
 (excluding non-incremental affiliate) / total paid media spend
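
The headline blended figures in the table reduce to two divisions, worth checking explicitly:

```python
spend = 12_000_000
current_rev = 46_800_000    # true incremental revenue, current mix
projected_rev = 54_000_000  # true incremental revenue, proposed mix

current_blended = current_rev / spend      # 3.9x
projected_blended = projected_rev / spend  # 4.5x
uplift = projected_blended / current_blended - 1  # ~15%

print(f"{current_blended:.1f}x -> {projected_blended:.1f}x (+{uplift:.0%})")
```
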

KEY RISKS TO THIS PROJECTION:
1. Affiliate reduction risk: Revenue may not fully transfer to direct as modelled
   Mitigation: Phase the reduction over 6 months with holdout monitoring
2. Meta scaling ROAS risk: If ROAS at £2.8M drops to 2.4x instead of projected 2.8x
   Mitigation: Scale slowly (20%/month), pause at 2.5x threshold
3. LinkedIn TAL adoption risk: If sales team does not action increased MQAs from LinkedIn
   Mitigation: Pre-agree MQA SLA with sales before increasing budget

Step 5: CFO Presentation — One-Page Summary

CFO PRESENTATION: BUDGET REALLOCATION RECOMMENDATION FY2025

EXECUTIVE SUMMARY:
Our current £12M budget is delivering a blended ROAS of 3.9x on a true incremental basis.
By reallocating within the same budget, we project a blended ROAS of 4.5x —
a 15% improvement in revenue efficiency with zero additional spend.

THREE KEY MOVES:

Move 1: Eliminate non-incremental affiliate spend (free up £400K in Phase 1)
  What: Tighten affiliate commission structure to reward only new-customer conversions
  Why: 35% of affiliate revenue costs us £910K to "acquire" customers who
       would have purchased anyway
  Risk: Low (revenue transfers to direct channel, not lost)
  Confidence: HIGH — 35% non-incrementality already measured; 90-day holdout confirms before full cut

Move 2: Scale Meta and LinkedIn with freed budget (invest £700K)
  What: Increase Meta by £400K (2.8x marginal ROAS) + LinkedIn by £300K (3.5x marginal ROAS)
  Why: Both channels have headroom; both have marginal ROAS above our 2.5x hurdle rate
  Risk: Medium (ROAS may decline faster than modelled at scale)
  Confidence: MEDIUM — based on historical ROAS at incremental budget levels

Move 3: Fix Google Non-Brand Quality Score (no budget increase — efficiency play)
  What: Restructure Non-Brand campaign into tightly themed ad groups + new landing pages
  Why: At QS 5/10, we pay ~30% above market CPC; the fix yields ~30% more clicks for the same budget
  Risk: Low (QS improvement takes 6–8 weeks but has no downside)
  Confidence: HIGH — QS and CPC relationship is documented by Google

HURDLE RATE USED: 2.5x blended ROAS (above this: invest; below this: divest)
Gross margin confirmation needed from Finance to validate hurdle rate

MONITORING PLAN:
  Month 1: Affiliate reduction live; holdout test begins
  Month 2: Meta scale begins at £200K additional
  Month 3: LinkedIn increase live; TAL audience expansion
  Month 6: Full mid-year review vs. projected ROAS targets

  Go/no-go criteria at Month 3:
  If blended ROAS is not trending toward 4.2x: pause Meta/LinkedIn increases
  If affiliate direct revenue transfer is not occurring: pause affiliate reduction at £400K
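
The Month-3 go/no-go criteria can be encoded as a simple gate; the function name and inputs are illustrative:

```python
def month3_review(blended_roas: float, transfer_confirmed: bool) -> list[str]:
    """Apply the Month-3 go/no-go criteria from the monitoring plan."""
    actions = []
    if blended_roas < 4.2:  # not trending toward the 4.2x checkpoint
        actions.append("pause Meta/LinkedIn budget increases")
    if not transfer_confirmed:  # direct revenue not absorbing affiliate shift
        actions.append("hold affiliate reduction at the £400K Phase 1 level")
    return actions or ["continue reallocation as planned"]
```
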

4. Interview Score: 10 / 10

Why this demonstrates director-level maturity: The marginal ROAS analysis — explicitly distinguishing between average ROAS (Google Brand at 22x: historically efficient) and marginal ROAS (also 22x: the next £1 will generate £22 because the 8% impression share gap is the same audience) — is the correct economic framework for budget allocation decisions and is the methodology that separates a director-level analysis from a senior manager analysis. The affiliate non-incremental waste calculation — showing that removing £910K of non-incremental spend will cause reported revenue to "drop" by £6.5M in the attribution model even though true revenue is unchanged — demonstrates the financial communication sophistication required to avoid a CFO making the wrong decision based on a misleading attribution report. The Go/No-Go criteria at Month 3 (specific ROAS thresholds that trigger a pause in the scaling plan) gives the CFO the governance framework that justifies approving the reallocation without bearing unlimited risk.

What differentiates it from senior-level thinking: A senior performance marketer would produce the ROAS table and recommend moving budget from low-ROAS to high-ROAS channels — correct direction, insufficient depth. They would not distinguish average from marginal ROAS, would not calculate the QS penalty as a percentage CPC surcharge, would not model the attribution reporting impact of the affiliate reduction (the "revenue appears to drop but doesn't" problem that will generate a CFO crisis call if not pre-explained), and would not design the go/no-go monitoring criteria that make the recommendation governable.

What would make it perfect: This response scores 10/10 across all five evaluated dimensions: analytical framework (marginal ROAS with adjustments), strategic recommendation (three prioritised moves with rationale), risk management (phased implementation, holdout testing, go/no-go criteria), CFO communication (pre-addressed the affiliate attribution drop), and monitoring plan (month-by-month with specific decision triggers). The one possible enhancement would be a Bayesian confidence interval on the projected ROAS improvement (showing the distribution of likely outcomes: e.g., "60% probability of achieving 4.2–4.8x; 25% probability of 3.8–4.2x; 15% probability of below 3.8x due to Meta scaling risk"), but the framework and recommendations are complete and immediately presentable to a board.