Wednesday, 6 May 2026

Media Planners & Buyers May Need to Start Allocating Test Budgets for ChatGPT Ads
For performance marketers, media planners, media buyers, and growth strategists, self-serve buying, CPC bidding, and better measurement are changing how ChatGPT ads can actually be evaluated

For the last few months, ChatGPT ads have mostly felt like a controlled experiment sitting outside the normal paid media ecosystem.

Interesting to watch? Definitely.

Easy to scale? Not really.

Access was limited, reporting was still evolving, and most advertisers couldn’t realistically treat it like a serious performance channel yet. It felt closer to an early-stage monetisation test than something you would confidently compare against Google, Meta, or even newer retail media platforms.

This latest rollout changes that quite a bit.

OpenAI is now expanding ChatGPT ads with:

  • Self-serve buying
  • A beta Ads Manager
  • CPC bidding
  • Pixel-based tracking
  • Conversions API integration
  • Expanded agency and tech partnerships

And honestly, this feels like the point where ChatGPT ads start moving from “interesting AI ad experiment” toward something media teams may actually have to account for in future planning cycles.

The self-serve Ads Manager is probably the biggest shift

Earlier access felt heavily partner-driven and limited to select advertisers and agencies. That naturally restricted participation to larger brands or companies with early access relationships.

A self-serve Ads Manager changes accessibility completely.

Now advertisers can:

  • Register directly
  • Add payment methods
  • Set budgets
  • Control bidding and pacing
  • Upload creatives
  • Launch campaigns
  • Track performance inside the platform

That lowers the barrier significantly for:

  • SMBs
  • Startups
  • Independent brands
  • Smaller performance teams

And this is usually how most advertising platforms evolve.

Phase 1:
High-touch managed pilot.

Phase 2:
Self-serve scale.

ChatGPT is clearly entering that second phase now.

And from a paid media perspective, this matters a lot more than just “another dashboard launch.”

Because once advertisers can access inventory directly, adoption usually accelerates much faster.

CPC bidding changes how performance teams will evaluate ChatGPT

This is the second major shift.

Until now, ChatGPT ads mainly operated on CPM pricing. That makes sense in an early pilot phase where a platform is still trying to understand inventory demand, delivery behaviour, and advertiser interest.

But CPM-only buying creates a problem for performance teams.

You can justify awareness budgets with impressions.

You can’t justify long-term performance budgets that way.

Introducing CPC changes the conversation because advertisers can now align spend with actual user action instead of visibility alone.

And the context inside ChatGPT makes this especially interesting.

Most conversations inside ChatGPT are not passive.

People are:

  • Comparing options
  • Exploring products
  • Researching services
  • Looking for recommendations
  • Trying to make decisions

That’s very different from traditional display environments.

In many cases, the user is already in an active consideration phase. So clicks inside these conversations could become very strong intent signals if the platform can maintain quality and relevance at scale.

That’s the part performance marketers and media buyers are going to watch closely.

Measurement was one of the biggest weaknesses before this

One of the hardest things about early ChatGPT ads was proving value properly.

Without real measurement infrastructure, optimisation becomes difficult very quickly. Most performance marketers are not going to scale spend based on impressions and broad engagement assumptions alone.

That’s where the rollout of:

  • Pixel-based measurement
  • Conversions API
  • Aggregated conversion tracking

becomes extremely important.

Now advertisers can start measuring downstream actions like:

  • Purchases
  • Leads
  • Sign-ups
  • Other meaningful conversions

That doesn’t magically solve attribution, but it moves ChatGPT much closer to the performance frameworks advertisers already use across existing platforms.
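If you've wired up a Conversions API on other platforms, the work should feel familiar. Here is a minimal sketch of what a server-side conversion event could look like — to be clear, OpenAI's actual endpoint and field names are not public here, so everything below (URL, schema, field names) is a placeholder modelled on how existing Conversions APIs are typically shaped, not the real API:

```python
import hashlib
import time

# Hypothetical sketch only: OpenAI's real endpoint and schema are not
# published in this post. The shape below is borrowed from how existing
# Conversions APIs are commonly structured.

def build_conversion_event(event_name: str, email: str,
                           value: float, currency: str) -> dict:
    """Assemble a server-side conversion event.

    The user identifier is normalised and SHA-256 hashed before it leaves
    your server, matching the privacy posture described above.
    """
    return {
        "event_name": event_name,          # e.g. "purchase", "lead", "sign_up"
        "event_time": int(time.time()),    # Unix timestamp
        "user_data": {
            "hashed_email": hashlib.sha256(
                email.strip().lower().encode()
            ).hexdigest(),
        },
        "custom_data": {"value": value, "currency": currency},
    }

event = build_conversion_event("purchase", "Jane@Example.com ", 49.90, "EUR")
# Delivery would be a plain HTTPS POST to whatever endpoint the Ads Manager
# exposes, e.g. requests.post(ENDPOINT_URL, json=event)  # ENDPOINT_URL: placeholder
```

The part worth internalising now is the privacy posture: identifiers hashed server-side, performance reported back in aggregate.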

And OpenAI is clearly positioning privacy as part of the pitch here.

No sharing of conversations.
No advertiser access to personal chats.
Only aggregated performance insights.

That balance between measurement and privacy is going to become a major discussion point as the platform scales further.

The ecosystem expansion is another strong signal

One thing I found particularly interesting is the expansion of agency and technology partnerships.

The platform is already integrating with large agency groups and ad tech ecosystems because adoption becomes much faster when advertisers can work through tools and workflows they already use daily.

That matters more than people think.

Platforms scale much faster when buyers do not have to rebuild operational processes from scratch.

The easier it becomes to integrate ChatGPT into existing paid media workflows, the faster budgets will follow.

This is starting to look like platform building, not experimentation

When you step back and look at everything together:

  • Self-serve infrastructure
  • CPC bidding
  • Conversion tracking
  • Partner ecosystem expansion
  • Campaign management tools
  • Measurement capabilities
  • Privacy positioning

…it becomes pretty clear this is not a simple monetisation test anymore.

This is platform infrastructure being built for long-term scale.

Still early? Absolutely.

There are still major unanswered questions around:

  • CPC efficiency at scale
  • Inventory quality
  • Long-term user behaviour
  • Conversion performance
  • Pricing inflation as competition increases
  • Measurement expectations from enterprise advertisers

But the direction is obvious now.

ChatGPT ads are moving much closer to mainstream performance media workflows.

Bottom line

This does not mean advertisers should suddenly move large budgets into ChatGPT tomorrow.

The platform still needs to prove itself consistently in real-world performance environments.

But it also no longer makes sense to treat it like a small experimental AI placement sitting outside the media plan.

The infrastructure being built now strongly suggests OpenAI is preparing for long-term advertising scale.

And for performance marketers, media planners, and media buyers, that’s probably the most important signal of all.

→ Beta self-serve Ads Manager rolling out
→ CPC bidding now introduced alongside CPM
→ Pixel tracking + Conversions API added
→ SMBs and startups can now access the platform more easily
→ ChatGPT ads moving closer to mainstream performance workflows

Still early-stage, but clearly moving beyond a limited pilot environment.

Tuesday, 5 May 2026

Media Planners & Buyers Should Start Evaluating Snapchat’s AI Sponsored Snaps as a Paid Channel

With 950B+ chats, 500M+ AI interactions, and early conversion lifts (+22%) with ~20% lower CPA, chat is becoming monetisable inventory

If you’re planning or buying paid media, this is one of those updates that doesn’t look big at first, but actually changes where inventory is coming from.

Snapchat is bringing ads directly into chat through AI Sponsored Snaps. Not as a placement around conversations, but inside them. Brands can show up as AI agents that users can interact with in real time.

That’s a very different setup from what we’re used to.

And the scale here is not small.

Snapchatters sent over 950 billion chats in Q1 2026, and more than 500 million users have already interacted with AI on the platform. This isn’t new behaviour being introduced. It’s behaviour that already exists, now being monetised.

Why this matters from a paid media perspective

Most paid formats still interrupt something.

You scroll, you watch, you search, and then you see an ad.

Here, the ad is part of the interaction itself.

Instead of pushing users out to a landing page immediately, the experience happens inside chat:

  • Ask questions
  • Explore products
  • Get recommendations
  • Move closer to a decision

All without leaving the conversation.
Source: Snap Newsroom

That changes how you think about both creative and intent.

There are already performance signals

This isn’t starting from zero.

Sponsored Snaps are already delivering:

  • +22% conversions
  • ~20% lower cost per action
  • 2x more conversions per full-screen ad view vs other inventory

Now layer AI interaction on top of that.

The logic is straightforward. If people are already using chat to ask questions and make decisions, placing an interactive brand experience inside that flow should increase engagement.

The question is how that translates into consistent, scalable performance.

Where this fits in a media plan

This doesn’t replace search or paid social.

But it does start to compress the funnel.

Instead of:

  • Awareness → click → landing page → conversion

You get:

  • Awareness → interaction → consideration → action

All happening in one environment.

For planners and buyers, that’s both an opportunity and a complication.

Because now you’re not just measuring clicks and sessions. You’re trying to understand:

  • Depth of interaction
  • Quality of engagement
  • How conversation translates into conversion

That’s a different optimisation model.

What changes for media buyers

A few things stand out immediately:

  • Creative needs to work in a conversational format, not just as static or video assets
  • Targeting will rely more on context and user interaction rather than just audience segments
  • Measurement will need to go beyond CTR and focus on engagement quality
  • AI agents effectively become part of the media setup, not just the product layer

And importantly, this opens up full-funnel interaction inside a single placement, from discovery to action.

Still early, but direction is clear

This is launching in alpha with partners like Experian, so it’s far from a scaled channel.

There are still open questions around:

  • Consistent performance benchmarks
  • Measurement standardisation
  • Scalability across markets and verticals

But the direction is consistent with what we’re seeing more broadly.

Ads are moving:

  • From placements → into environments
  • From impressions → into interactions
  • From clicks → into conversations

Bottom line

This is not something you shift budgets into immediately.

But it’s also not something to ignore.

It’s still experimental, and it needs to prove itself in real performance scenarios. But the combination of scale, existing engagement, and early conversion signals makes it worth evaluating.

Especially before it becomes another competitive, expensive line item in your media plan.

If you’re responsible for paid media, this is one more signal that conversations are becoming part of the inventory you’ll eventually have to plan for.

→ 950B+ chats in a single quarter
→ 500M+ users already interacting with AI
→ +22% conversions and ~20% lower CPA in Sponsored Snaps
→ 2x more conversions per full-screen view
→ AI agents now entering the ad experience

Still early-stage, but clearly moving toward a more interactive, conversation-driven paid media environment.

Wednesday, 29 April 2026

Media Planners & Buyers: ChatGPT Just Became Measurable

€2.8–€4.6 CPCs and Ads Manager rollout bring real ROI conversations into play

For media planners and buyers, ChatGPT is no longer just an interesting AI tool sitting outside the media plan.

It is starting to look like a real performance channel.

OpenAI’s move from a mainly impression-led model toward CPC ads changes the conversation. Until now, ChatGPT advertising looked closer to an early branding environment. Useful for testing visibility, but harder to compare directly with performance channels.

With CPC pricing now being tested, advertisers can finally think in a familiar framework:

  • What did I pay?
  • What did I get?
  • How does it compare with Google Search, YouTube, Display, or PMax?

That shift alone moves ChatGPT from “experimental” to “comparable.”

Why the CPC shift matters

The reported CPC range of around €2.8 to €4.6 matters because it gives advertisers something tangible to benchmark against existing channels.

It also explains the timing.

ChatGPT CPMs have reportedly dropped from around €55 to closer to €23. That’s significant compression. Moving to CPC allows OpenAI to tie revenue to measurable actions instead of declining impression value.

For advertisers, this means ChatGPT is no longer just selling reach. It’s selling outcomes.
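A quick back-of-envelope check makes the pricing shift concrete. At the reported ~€23 CPM, the break-even CTR — the click-through rate at which buying clicks costs the same as buying impressions — sits well under 1%:

```python
def break_even_ctr(cpm_eur: float, cpc_eur: float) -> float:
    """CTR at which a click-priced buy costs the same as an impression-priced buy."""
    return (cpm_eur / 1000) / cpc_eur

# Reported figures from above: ~€23 CPM, €2.8-€4.6 CPC
for cpc in (2.8, 4.6):
    print(f"CPC €{cpc}: break-even CTR = {break_even_ctr(23, cpc):.2%}")
```

That works out to roughly 0.82% at the €2.8 low end and 0.50% at €4.6: if conversational placements clear those CTRs, the CPC side of the trade is cheaper than the old CPM pricing.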

And the moment a platform does that, it starts competing directly with Google for performance budgets.

Where this fits in a real media plan

The current structure for most performance setups is still predictable:

  • Search captures demand
  • Paid social creates demand
  • Video supports consideration
  • Retargeting converts

ChatGPT doesn’t sit cleanly in any of these.

It sits between demand generation and demand capture.

Users are not typing keywords. They are explaining problems, comparing tools, asking for recommendations. That creates a different kind of entry point into the funnel.

Not pure intent like search. Not passive discovery like social.

But high-consideration interaction.

If ad placement aligns well with that moment, the quality of traffic could be strong. That’s the hypothesis that needs to be validated.

The Ads Manager signal is just as important

Until recently, advertisers were working with very limited infrastructure. Reporting was basic, often delayed, sometimes just weekly CSV-style outputs.

Now, a new Ads Manager interface is being tested.

That includes:

  • Real-time campaign control
  • Ability to run and optimise campaigns directly
  • Better visibility into performance

This is not a small update.

This is the difference between a beta ad product and a scalable platform.

When you combine that with increasing ad inventory already being spotted from brands like Best Buy and Expedia, it’s clear monetisation is ramping up quickly.

The real challenge: proving intent

Search works because intent is explicit.

ChatGPT has to prove that conversational context can produce equally valuable clicks.

Advertisers will not treat this channel any differently. They will benchmark:

  • CPC vs Search
  • Conversion rate vs Search
  • Cost per lead / cost per acquisition vs existing channels

If ChatGPT cannot compete on these, budgets won’t move beyond testing.

If it can, even partially, it becomes a serious line item in performance planning.
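That benchmarking reduces to one formula: CPA = CPC ÷ conversion rate. A sketch with purely illustrative numbers — the CPC for Search and every CVR here are assumptions for the sake of the comparison, not reported figures:

```python
def cpa(cpc_eur: float, cvr: float) -> float:
    """Cost per acquisition implied by a CPC and a click-to-conversion rate."""
    return cpc_eur / cvr

# Illustrative comparison at an assumed 4% CVR across the board - the real
# question is whether ChatGPT clicks convert at, above, or below that rate.
benchmark = {
    "Search (assumed €1.9 CPC)": round(cpa(1.9, 0.04), 2),   # €47.50
    "ChatGPT low end (€2.8)":    round(cpa(2.8, 0.04), 2),   # €70.00
    "ChatGPT high end (€4.6)":   round(cpa(4.6, 0.04), 2),   # €115.00
}
```

At equal conversion rates the higher CPC loses; the channel only wins if conversational intent produces a meaningfully higher CVR. That is exactly the test to run.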

What this means for media planners and buyers

This is not about shifting budgets overnight.

But it does change what needs to be on the roadmap:

  • Testing conversational intent vs keyword intent
  • Adapting messaging for recommendation-style environments
  • Evaluating CPC efficiency vs Search and PMax
  • Understanding how early-stage vs mid-stage intent behaves

And most importantly, getting in early before costs rise.

Because they will.

Bottom line

This is not just a new ad format.

This is the early stage of a new performance environment where intent is expressed through conversation instead of keywords.

For media planners and buyers, that means one thing:

You can’t ignore it anymore.

Thursday, 23 April 2026

Microsoft Advertising Update: AI Max, Copilot Ads & UCP — What Media Planners & Buyers Need to Change in Execution

Keyword builds, Copilot inclusion, UCP feeds, Audience Generation, and AI performance tracking

 

Microsoft rolled out AI Max for Search, Copilot ads, Offer Highlights, Audience Generation, AI Performance in Bing Webmaster Tools, UCP-based feeds, and Copilot Checkout.

Quick Summary (What Actually Changes)

  • Search (AI Max)
    → Before: large keyword builds, manual control
    → After: AI expands queries based on intent
    → What you actually do: use fewer keywords, add negatives, use search term reports

  • Placements (Copilot Ads)
    → Before: ads on the search results page
    → After: ads inside Copilot answers
    → What you actually do: make messaging use-case specific and clear

  • Offers (Offer Highlights)
    → Before: offers inside ad copy
    → After: offers shown directly in AI responses
    → What you actually do: structure delivery, discounts, availability clearly

  • Targeting (Audience Generation)
    → Before: manual filters (job title, industry)
    → After: natural language audience building
    → What you actually do: define real problems, test audience inputs

  • Measurement (AI Performance)
    → Before: CTR, CPL, conversions
    → After: visibility in AI answers
    → What you actually do: fix content gaps vs competitors

  • Feeds (UCP)
    → Before: basic product titles
    → After: structured, AI-readable data
    → What you actually do: add specs, use cases, pricing (UCP-compliant)

  • Checkout (Copilot Checkout)
    → Before: multi-step funnel
    → After: in-chat purchase
    → What you actually do: ensure pricing, stock, offers are clear upfront

 

1. AI Max for Search → You stop overbuilding keywords

You’re setting up a running shoes campaign.

Before
You:

  • create 150–200 keywords
  • split into tight ad groups
  • still miss queries like
    → “best running shoes for flat feet under €150 for marathon”

After
You:

  • launch with a smaller keyword base
  • AI Max matches long queries automatically

What you actually do differently

  • Add negative keywords early:
    → kids, cheap under €20, irrelevant categories
  • Use brand inclusions/exclusions
  • Set messaging constraints
  • Improve product inputs:
    → “flat feet support”, “marathon use”, price bands
  • Use Search Term Reporting from day one to refine

2. Ads Inside Copilot → You’re not optimizing for position anymore

You launch a laptop campaign.

Before
You:

  • optimize for top position
  • write generic ads

After
User asks:
→ “which laptop is good for video editing under €1200?”

Only a few options show.

What you actually do differently

  • Rewrite product titles:
    → “Video Editing Laptop, 16GB RAM, RTX GPU, Under €1200”
  • Fix landing pages
  • Make specs clear and visible

3. Offer Highlights → Your offer has to be obvious

You review your ads.

Before
You:

  • mention “fast delivery” in copy

After
User asks:
→ “best phones under €300 with fast delivery”

Offer appears directly:
→ “Free next-day delivery”

What you actually do differently

  • Structure delivery, discount, availability clearly
  • Align feed + ads + landing page

4. Audience Generation → You describe the problem

You’re building a B2B campaign.

Before
You:

  • select industries, job titles

After
You input:
→ “mid-sized manufacturing companies facing supply chain delays”

What you actually do differently

  • Write multiple audience inputs
  • Test variations based on problems

5. AI Performance in Bing Webmaster Tools → You see where you’re missing

You check performance.

Before
You:

  • rely on CTR, CPL

After
You see:

  • where your brand appears in AI answers
  • where competitors show up

What you actually do differently

  • Rewrite content to match real queries
  • Improve clarity of problem-solution messaging

6. Universal Commerce Protocol (UCP) → Feed decides visibility

You check product performance.

Before
Feed:

  • “X200 Headphones”

After (UCP-compliant)
Feed:

  • “Wireless Headphones, 40hr Battery, Travel-Friendly, €180”

What you actually do differently

  • Add specs, use cases, pricing
  • Build UCP-compliant feeds
  • Remain merchant of record even with AI-driven transactions
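The before/after above is really a data-structuring exercise. A generic sketch of the enrichment — the field names are illustrative only, not the actual UCP schema, which isn't spelled out here:

```python
# Generic before/after feed enrichment. Field names are illustrative -
# NOT the actual UCP schema.

before = {"title": "X200 Headphones"}

after = {
    "title": "Wireless Headphones, 40hr Battery, Travel-Friendly",
    "price": {"value": 180, "currency": "EUR"},
    "specs": {"battery_hours": 40, "connection": "Bluetooth"},
    "use_cases": ["travel", "commuting", "long listening sessions"],
    "availability": "in_stock",
}

def ai_readable(entry: dict) -> bool:
    """Minimal completeness check: could an AI system answer a comparison
    query ("best travel headphones under 200 euros") from this entry alone?"""
    return all(key in entry for key in ("price", "specs", "use_cases"))
```

The sparse entry fails that check; the enriched one passes. That gap is the difference between being selectable in an AI answer and being invisible.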

7. Copilot Checkout → Shorter path to purchase

You review funnel drop-offs.

Before

  • Ad → site → cart → checkout

After (Copilot Checkout)

  • Discovery to purchase inside Copilot

What you actually do differently

  • Ensure pricing, stock, offers are clear upfront
  • Reduce reliance on long funnels

Final Thought

Nothing here means rebuilding everything.

But your focus shifts:

  • less keyword micromanagement
  • more importance on inputs, constraints, and structure
  • more visibility driven by how well the system understands your campaigns

You’re still doing media planning and buying. You’re just making it easier for Microsoft’s system to interpret and select your campaigns.

 

Tuesday, 21 April 2026

Media Planners and Buyers: Make Better Budget Decisions with GA4 Predictive Metrics
How to allocate spend based on probability, not assumptions

Most media plans still answer one thing really well
→ where the budget went

But the real question is always
→ where should the next euro go

That’s where Google Analytics 4 predictive metrics actually become useful for planners and buyers.

Not as a fancy feature
Not as something you screenshot for a report

But as a practical way to decide who deserves budget and who doesn’t

What this actually changes in your day-to-day work

Normally, planning and buying decisions are based on:

→ past performance
→ channel benchmarks
→ audience assumptions

Predictive metrics add one more layer:

who is more likely to convert next

That’s it. Keep it simple.

You’re not replacing your strategy
You’re improving how you prioritize

Think of it like this

You already have:

→ high-intent users
→ mid-intent users
→ low-intent users

GA4 just helps you identify them faster and more reliably

So instead of treating remarketing as one big bucket
You start treating it like tiers of probability

Before you even start (this is where most people get stuck)

Predictive metrics don’t just “appear” in GA4.

You need:

→ at least 1,000 users who triggered the event (purchase)
→ at least 1,000 users who did NOT trigger it
→ within a rolling 28-day window

If you don’t meet this:

→ predictive audiences won’t show up
→ or they’ll disappear later

Also:

→ GA4 needs time to train the model
→ if you just fixed tracking yesterday, nothing will work immediately

So if someone says “this doesn’t work”
Most of the time, it’s just not eligible yet
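If you want to sanity-check eligibility before blaming the platform, the rule is mechanical enough to script against your own event export. A simplified sketch — the tuple format is a stand-in for however you actually store event data:

```python
from datetime import date, timedelta

def predictive_eligible(events: list[tuple[date, str, bool]],
                        today: date) -> bool:
    """Check the GA4 threshold described above: at least 1,000 users who
    triggered the key event AND 1,000 who didn't, inside the rolling
    28-day window.

    `events` is (event_date, user_id, converted) - a simplified stand-in
    for your own event export.
    """
    window_start = today - timedelta(days=28)
    converters: set[str] = set()
    non_converters: set[str] = set()
    for day, user_id, converted in events:
        if window_start <= day <= today:
            (converters if converted else non_converters).add(user_id)
    return len(converters) >= 1000 and len(non_converters) >= 1000
```

One user short on either side and the audience won't populate — which is exactly the "it disappeared" behaviour people report.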

Critical setup most people miss

Before anything else, check this:

→ Reporting Identity in GA4 must be set to Blended

If it’s set to Observed:

→ non-consented users are ignored
→ audience sizes shrink significantly
→ Google Ads delivery gets limited

This alone can break your entire setup without you realizing it.

Data freshness reality (don’t ignore this)

Predictive metrics are not real-time.

→ GA4 processing lag ≈ 24 hours
→ Google Ads audience sync ≈ another 24 hours

So you are working with signals that are:

→ roughly 48 hours old

What this means in practice

  • Works well for:
    → always-on campaigns
    → ongoing remarketing
    → steady-state optimisation
  • Does NOT work well for:
    → flash sales
    → short 1–3 day campaigns
    → real-time decisioning

Where this fits in your workflow

Break it into three parts:

→ Planning
→ Allocation
→ Buying

Planning: stop treating all users the same

Most plans still look like:

→ Prospecting
→ Remarketing

That’s too broad.

With predictive metrics, your remarketing becomes:

→ High probability users (likely to purchase soon)
→ Medium probability users
→ Low probability or churn risk users

Now your plan starts to reflect actual conversion likelihood, not just audience size.

What this changes

  • Your audience definitions become sharper
  • Your projections become more realistic
  • Your budget split becomes intentional

Example

Instead of saying:

→ “We’ll put 30% into remarketing”

You start saying:

→ “We’ll prioritise high-probability users first, then expand outward”

That’s a very different planning mindset.

Allocation: where the real impact happens

This is where most teams either win or waste money.

Scenario

You have limited budget
You have multiple audiences
You need to decide where to push spend

What most teams do

→ Spread budget evenly
→ Optimise later

What you should do instead

Use predictive signals to decide:

→ where to be aggressive
→ where to stay efficient
→ where to pull back

Practical way to think about it

  • High probability users
    → push harder
    → allow higher CPC/CPA
    → prioritise impression share
  • Medium probability users
    → test messaging
    → control spend
    → optimise for movement down funnel
  • Low probability / churn risk
    → reduce exposure
    → exclude where needed
    → move to cheaper channels if at all

Now your budget isn’t just “allocated”
It’s weighted based on likelihood to convert
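In script form, weighted allocation is trivial. The weights below are illustrative planning judgements informed by the predictive tiers — they are not numbers GA4 gives you:

```python
def allocate(total_budget: float, weights: dict[str, float]) -> dict[str, float]:
    """Split a budget proportionally to per-tier weights."""
    weight_sum = sum(weights.values())
    return {tier: round(total_budget * w / weight_sum, 2)
            for tier, w in weights.items()}

# Illustrative 5:3:1 weighting across the three probability tiers
plan = allocate(10_000, {
    "high_probability": 5,
    "medium_probability": 3,
    "low_probability": 1,
})
# high gets 5/9 of spend, medium 3/9, low 1/9
```

The point isn't the arithmetic; it's that the split is now an explicit, defensible decision instead of an even spread.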

Buying: this is where most people stay shallow

This is the part where “use it as a signal” gets thrown around without clarity.

Let’s make it real.

Search campaigns

Don’t just add audiences and forget them.

Do this:

  • Monitor performance split:
    → predictive vs non-predictive users
  • Adjust based on intent:
    → high intent keywords + high probability users = go aggressive
    → generic keywords + low probability users = stay conservative

You’re basically stacking:

→ keyword intent
→ user probability

That combination is where efficiency comes from.

YouTube and Display

This is where predictive signals work really well if you actually use them properly.

For high probability users

→ use urgency
→ use direct response messaging
→ push conversion

Example:

→ “Still thinking about it?”
→ “Limited availability”
→ “Complete your purchase”

For mid probability users

→ focus on trust
→ explain benefits
→ reduce friction

Example:

→ testimonials
→ product USPs
→ comparisons

For low probability users

→ don’t burn budget

Either:

→ exclude them
or
→ move them into low-cost awareness campaigns

Most accounts waste money here without realizing it.

The churn play (this is where easy efficiency sits)

Most teams ignore this completely.

If a user is predicted to churn:

→ they are unlikely to come back
→ they are unlikely to convert

So spending on them is usually inefficient.

What to do

  • Create a “Likely to churn” audience
  • Use it as:
    → exclusion in remarketing
    → exclusion in Performance Max
    → even exclusion in broad prospecting where overlap exists

This alone can clean up a lot of wasted spend.

Value vs volume (this is where most people mess up)

High probability does not mean high value.

Example:

→ user likely to buy €10 item
→ user likely to buy €200 item

Both are “likely to purchase”
But they are not equal.

What to do

Combine:

→ Likely to purchase
→ Predicted revenue (top segment)

Now you get:

→ users likely to convert AND worth more

That’s where you can justify:

→ higher CPC
→ higher CPA targets
→ more aggressive bidding
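The combination is just expected value: purchase probability times predicted revenue. A sketch with made-up users to show why the ranking flips:

```python
def expected_value(p_purchase: float, predicted_revenue: float) -> float:
    """Rank users on probability x value, not probability alone."""
    return p_purchase * predicted_revenue

# Made-up users for illustration:
users = [
    ("user_a", 0.60, 10.0),    # very likely buyer, 10 euro basket
    ("user_b", 0.40, 200.0),   # less likely buyer, 200 euro basket
]
ranked = sorted(users, key=lambda u: expected_value(u[1], u[2]), reverse=True)
# user_b (EV 80) outranks user_a (EV 6) despite the lower purchase probability
```

That reordering is what justifies the higher CPC and CPA targets on the value-weighted segment.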

Bid strategy reality check

This part gets misunderstood a lot.

Inside Google Ads:

→ Smart bidding (tCPA / tROAS) is already using signals

So what do predictive audiences actually do?

→ they act as signals, not rules

Practical execution

  • Smart Bidding (tCPA / tROAS)
    → use predictive audiences in observation mode
    → let the system interpret them
  • Manual / ECPC
    → you can use targeting to push spend harder into high-probability users

The risk

If you force targeting inside Smart Bidding:

→ you choke reach
→ you lose scale

Performance Max reality (don’t get this wrong)

Performance Max does not strictly target your audience.

If you add a high-probability audience:

→ it uses it as a signal
→ then expands beyond it

What this means

  • It’s a hint, not control
  • You can’t force PMax to only target that audience

If you want stricter control

→ use these audiences in Search or Standard Shopping
→ use targeting where needed

Audience overlap hygiene (this is critical)

If you’re running multiple audience tiers:

→ you must exclude higher tiers from lower tiers

Example:

→ Medium probability campaign
must exclude
→ High probability audience

Why this matters

If you don’t:

→ campaigns compete against each other
→ CPCs inflate
→ reporting gets messy
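The exclusion logic is plain set arithmetic: each lower tier subtracts every tier above it. A sketch:

```python
def build_tiers(high: set[str], medium: set[str],
                low: set[str]) -> dict[str, set[str]]:
    """Make the tiers mutually exclusive: every lower tier excludes all higher tiers."""
    return {
        "high": high,
        "medium": medium - high,
        "low": low - medium - high,
    }

tiers = build_tiers(
    high={"u1", "u2"},
    medium={"u2", "u3", "u4"},   # u2 also qualifies as high probability
    low={"u1", "u4", "u5"},
)
# u2 is served only from the high tier, u4 only from medium, low keeps just u5
```

Without that subtraction, the same user sits in two live campaigns and you end up bidding against yourself.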

Creative strategy most people get wrong

Not all users should get the same incentive.

High probability users

→ they are already convinced
→ don’t waste discounts

Use:

→ new arrivals
→ low-stock alerts
→ urgency

Medium probability users

→ they need a push

Use:

→ offers
→ incentives
→ limited-time discounts

You’re basically:

→ protecting margin on high intent
→ using incentives only where needed

Attribution and data reality

Predictive metrics rely on first-party data.

So:

→ users with strong tracking = better predictions
→ users with limited tracking = more modeled

This means:

→ predictions are directional, not perfect

Treat them as:

→ decision support
→ not absolute truth

Reporting: how do you know this is even working?

Don’t just trust the model blindly.

Inside GA4, check:

Model Quality score

If it’s:

→ High → you can trust the signal more
→ Medium → test carefully
→ Low → don’t base major budget decisions on it

Also:

→ compare predictive vs non-predictive segments
→ look at CPA, CVR, and revenue differences

If there’s no clear lift:

→ don’t scale blindly

Traffic floor (silent failure point)

Predictive modeling needs consistent traffic.

If your property drops below:

→ ~700 ad clicks in 7 days

You may see:

→ predictive audiences stop populating
→ sudden drop in performance

This is often misdiagnosed as “campaign issue”
But it’s actually a data issue

Seasonality warning (this breaks more than people expect)

Predictive models are based on historical patterns.

So during:

→ Black Friday
→ flash sales
→ heavy discount periods

User behavior changes fast.

Which means:

→ yesterday’s “low probability” user
→ might be today’s buyer

What to do

  • Pause or relax predictive exclusions
  • Do this at least 48 hours before major promotions
  • Let campaigns run broader

Otherwise:

→ you risk missing high-intent spikes

Audience size reality in Google Ads

When you push audiences to Google Ads:

→ the size will usually shrink

Why:

→ consent signals
→ Google Signals dependency
→ user match limitations

What to check immediately

  • “Eligible for Search”
  • “Eligible for Display”

If audience size is too small:

→ campaigns won’t serve properly
→ even if the logic is perfect

What to realistically expect

This is not magic.

You’re not suddenly doubling performance overnight.

What you will see if done properly:

→ cleaner spend distribution
→ fewer wasted impressions
→ more stable CPA
→ faster optimisation cycles

Where it breaks (and people get frustrated)

You need volume

If your account is small:

→ this won’t even activate properly

It’s not real-time

Data refreshes roughly every 24 hours

So don’t expect instant reaction

It’s a signal, not a switch

If you try to:

→ restrict campaigns only to predictive users

You’ll kill scale very quickly

It depends on clean tracking

No proper purchase events
No consistent data

→ no useful predictions

Simple as that.

How to start without overthinking it

Don’t build 10 audiences on day one.

Start with one:

→ “Likely to purchase in 7 days”

Then:

→ push it to Google Ads
→ use it in one or two campaigns
→ compare performance vs baseline

If you see improvement:

→ expand into exclusions
→ layer in churn strategy
→ combine with value tiers
→ integrate into planning and allocation

Final thought

For media planners and buyers, this isn’t about learning a new feature.

It’s about answering one question better:

→ who deserves budget right now

If you can answer that more accurately than before
You don’t need more budget

You just need better decisions on where to put it.

Practical example: Fashion eCommerce (how this actually plays out)

Let’s say you’re managing media for a fashion eCommerce brand.

Typical challenges:

→ high browsing, low immediate conversion
→ strong consideration phase (users don’t buy instantly)
→ heavy reliance on remarketing
→ constant pressure on CPA and ROAS

Unlike grocery, users don’t “need” to buy right now.
They decide to buy, which makes timing and intent far more important.

Step 1 → Build actual usable audience tiers

Inside Google Analytics 4, you don’t just create one remarketing audience.

You break it down:

  • High probability buyers
    → Likely to purchase in 7 days
    → Viewed product multiple times or added to cart
  • Medium probability users
    → Browsed category or product pages
    → Some engagement, but no strong buying signal
  • Low probability / churn risk
    → Visited in the past but inactive
    → Low engagement or long gap since last session

Now your remarketing is no longer one pool
It’s three different budget decisions

Step 2 → Plan budget based on behavior, not assumptions

Instead of:

→ 30% remarketing / 70% prospecting

You start thinking:

  • High probability users
    → protect and prioritise
    → ensure high coverage during peak decision window
  • Medium probability users
    → nurture with messaging and offers
    → push them closer to decision
  • Low probability users
    → minimise spend
    → avoid over-investing in low-return traffic

This alone improves efficiency before you even touch campaigns.

Step 3 → Buying execution across channels

Search

  • Brand / product-specific queries + high probability users
    → maximise impression share
    → accept higher CPC
  • Generic fashion queries + low probability users
    → control bids
    → avoid overpaying

YouTube / Display

  • High probability
    → urgency + decision triggers
    → “Still thinking about that jacket?”
    → “Only a few left in your size”
    → “Complete your purchase”
  • Medium probability
    → inspiration + reassurance
    → “See how others styled it”
    → “Top picks this season”
    → “Customer favourites”
  • Low probability
    → either exclude
    → or run cheap awareness only

Performance Max

  • Feed high probability audiences as strong signals
  • Let the system prioritise users closer to purchase
  • Do not expect strict targeting control

Step 4 → What actually improves

When done properly, you typically see:

→ higher conversion rate from remarketing
→ reduced wasted impressions on low-intent users
→ more stable CPA across campaigns
→ better creative performance due to intent alignment

Nothing complicated. Just better prioritisation.

Step 5 → What most teams still get wrong

Even in fashion, teams still:

→ treat all remarketing users the same
→ ignore churn exclusions
→ overspend on low-intent audiences

That’s where most inefficiency comes from.

At the end of the day, for a business like this, success is not about reaching more people.

It’s about reaching the right users when they are closest to making a decision, and applying the right level of pressure.

That’s exactly where predictive metrics start making a difference.