For a long time, Facebook Ads felt like a system you could
reason with.
If you worked in performance marketing around 2014–2018, you
were trained to believe that good planning and disciplined buying were
the primary drivers of results.
You defined audiences.
You separated funnels.
You controlled bids, placements, and budgets.
When performance moved, it usually moved in ways you could
explain.
That belief held because Facebook Ads executed
instructions deterministically.
Human decisions sat at the centre of delivery.
Under the hood, delivery logic was simple and predictable.
Audience rules constrained who could see an ad, bids resolved competition, and
budgets capped exposure. Planning reduced uncertainty up front, while buying
corrected deviations after launch.
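That deterministic logic is simple enough to express directly. Here is a toy sketch of it, for contrast with what comes later; the campaign names, bids, and budgets are invented:

```python
"""Toy sketch of the old rules-engine delivery logic described above:
audience rules filter, the top bid wins, budgets cap exposure.
Illustrative only; names and numbers are invented."""

ads = [
    {"name": "prospecting_de", "audience": {"DE"}, "bid": 1.20, "budget": 500.0},
    {"name": "retargeting_de", "audience": {"DE"}, "bid": 2.10, "budget": 200.0},
    {"name": "prospecting_at", "audience": {"AT"}, "bid": 0.90, "budget": 150.0},
]

def serve(user_country, spend):
    # 1. Audience rules constrain who can see an ad (hard filter),
    # 2. budgets cap exposure (hard cap).
    eligible = [a for a in ads if user_country in a["audience"]
                and spend[a["name"]] < a["budget"]]
    if not eligible:
        return None
    # 3. Bids resolve competition deterministically: highest bid wins.
    return max(eligible, key=lambda a: a["bid"])

spend = {a["name"]: 0.0 for a in ads}
winner = serve("DE", spend)
print(winner["name"])  # retargeting_de: same inputs, same outcome, every time
```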
For context, imagine a Germany-based ecommerce fashion brand
that started running Facebook Ads around 2015. The team was small,
performance-led, and selling across Germany first, then expanding into Austria,
Switzerland, and the wider EU. Over time, they built a familiar Meta growth
engine: acquisition for Germany, incremental expansion for DACH, then EU scale
once fulfilment, returns, and customer service matured.
The Facebook Ads Era: How Planning and Buying Actually Worked
To ground this shift properly, follow that same brand through the Facebook Ads era.
Early traction came from predictable, repeatable patterns: seasonal collections, sale cycles, and product drops that naturally aligned with Meta’s demand curve. The account evolved in phases: Germany-first prospecting, Germany retargeting for cart abandoners, then separate EU market ad sets once shipping and returns were stable.
Media planning for this brand meant:
- defining target customers by age, gender, and interests
- building country-specific interest stacks
- creating lookalike ladders from purchasers
- separating prospecting and retargeting
- structuring campaigns by funnel stage
The plan assumed that audiences defined demand. If
the audience was correct, delivery would follow.
In practice, Germany planning was often more conservative
and structured: tighter exclusions to protect frequency in DE, separate spend
buckets for prospecting versus remarketing, and strict market splits because
performance reporting and merchandising were run market by market.
Media buying for this brand meant:
- manual bid changes to control CPA
- placement selection to protect brand quality
- daily budget reallocation toward winners
- frequent intervention to stabilise performance
Buying assumed that execution controlled outcomes.
When results drifted, buyers stepped in and corrected them.
This worked because the platform behaved like a rules
engine.
The planner defined who, the buyer decided how much, and
Facebook delivered inside that box.
The important point is that planning decisions
constrained delivery early, and buying decisions resolved performance
late. Humans controlled both ends of the system.
For Germany, this also meant predictable operating rhythms:
Monday catch-up optimisations after weekend sales, heavier budget pushes around
salary cycles and seasonal moments, and strict CAC guardrails because return
rates and margins in fashion are operationally sensitive.
When the Same Brand Started Seeing Something Break
From around 2022 onward, this same fashion advertiser
began experiencing patterns that no longer matched the old logic.
- Broad campaigns outperformed segmented ones
- Bid changes stopped moving performance reliably
- Creative variety mattered more than audience precision
- Scaling felt unstable despite “best practice” setups
The product had not changed.
The markets had not changed.
The team had not forgotten how to plan or buy media.
Operationally, instability showed up in subtle but
consistent ways. Learning phases reset without obvious changes. Identical
audiences behaved differently week to week. Ads that had not degraded
creatively lost delivery without warning. Cost volatility appeared even when
competition and demand were stable.
What changed was how Meta decides what to show.
A few “in-the-account” signals made the shift obvious to
anyone watching closely:
- broad targeting began beating previously reliable interest stacks
- simplified structures started outperforming complex segmentation
- creative fatigue accelerated, with concepts burning out faster than before
Those weren’t random trends. They were symptoms of a new
delivery architecture.
In Germany, the shift was amplified by the reality of signal
quality and privacy constraints. As tracking clarity weakened and consent
dynamics varied by browser and environment, deterministic audience assumptions
became less reliable. What looked like “random volatility” in the dashboard
often reflected the system simply having less deterministic input to work with.
The Real Shift: Meta No Longer Executes Your Plan
This is the most important mental reset.
Meta advertising no longer starts with your targeting.
It starts with its own interpretation of relevance and
probability.
The role of the advertiser shifted from decision-maker
to signal provider. Targeting, bids, budgets, and objectives stopped
acting as instructions and started acting as inputs the system interprets.
This is why media planning and media buying feel
fundamentally different today. Decisions that used to be made by humans are now
resolved earlier, automatically, and at a scale humans cannot intervene in.
For the Germany-based fashion brand, this was the turning
point: the team could still “do everything right” according to the old
playbook, yet performance would not respond in the same predictable way. The
platform was no longer executing their plan. It was interpreting their signals.
What Meta’s AI-Powered Ads System Actually Is
Meta advertising today is not a traditional auction engine
with automation layered on top.
It is an AI-powered ads delivery system designed to
make decisions probabilistically, at the impression level, using prediction and
feedback rather than fixed rules.
Instead of executing a predefined media plan, the system
continuously answers four questions:
- which ads are relevant enough to consider
- which of those ads is most likely to deliver value
- what happened after the ad was shown
- how future decisions should change based on that outcome
This loop runs continuously, across all surfaces, markets,
and users.
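You can think of that loop as a simple control cycle: filter, predict, observe, update. A minimal sketch under that assumption; every name here (relevance, expected_value, update) is illustrative, not anything Meta exposes:

```python
"""Conceptual sketch of the four-question delivery loop described above.
Illustration of the logic only, not Meta's implementation."""

import random
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    score: float  # the system's current estimate for this ad

class ToyModel:
    threshold = 0.3   # minimum relevance to be considered at all
    lr = 0.1          # how strongly each outcome shifts future estimates

    def relevance(self, ad):
        return ad.score

    def expected_value(self, ad):
        return ad.score

    def update(self, ad, outcome):
        # Question 4: outcomes move the next round of predictions.
        ad.score += self.lr * (outcome - ad.score)

def delivery_loop(ads, model, impressions=1_000):
    for _ in range(impressions):
        # Question 1: which ads are relevant enough to consider?
        eligible = [a for a in ads if model.relevance(a) > model.threshold]
        if not eligible:
            break
        # Question 2: which of those is most likely to deliver value?
        chosen = max(eligible, key=model.expected_value)
        # Question 3: what happened after the ad was shown? (simulated)
        outcome = 1.0 if random.random() < 0.05 else 0.0
        model.update(chosen, outcome)

ads = [Ad("trend", 0.5), Ad("discount", 0.5), Ad("styling", 0.5)]
delivery_loop(ads, ToyModel())
print({a.name: round(a.score, 2) for a in ads})
```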
To make this work at scale, Meta separated delivery into two
distinct decision layers.
This separation is also why Meta Ads is no longer an “open
manual optimisation environment” in the way it used to feel. Performance
increasingly depends on understanding how the system evaluates inputs and
learns patterns over time, not on how fast a human can tweak levers.
For a German ecommerce business, that learning loop also has
an operational consequence: when you scale spend, you are not just scaling
reach. You are scaling training data. That makes the interaction between
conversion quality, returns, margins, and signal depth far more important than
it used to be.
The Two Core Engines Inside Meta’s AI-Powered Ads System
The system is anchored around two real models, each
responsible for a different part of the decision process.
They operate sequentially, not in parallel.
This is not a detail. It is the core reason media planning
and buying feel different.
In the Facebook Ads era, the “auction” was the main story.
In the AI era, the auction is still there, but it is no longer the main
decision-maker.
Most meaningful decisions now happen before the
auction and inside the models.
A simple way to understand the two-engine structure is:
- Engine 1 decides eligibility: which ads from your account are even allowed to compete for a user in a given moment
- Engine 2 decides priority and sequence: which eligible ad should be shown now, and how the next impression should be shaped based on what the system learns
If you only see Meta as “auction plus bidding,” modern
performance will feel random.
If you understand eligibility, priority, and sequence, modern performance
becomes explainable.
What matters is that these engines are not isolated. They
form a loop:
→ Andromeda retrieves what can be considered
→ GEM decides what should happen next
→ outcomes train GEM’s predictions
→ GEM’s predictions feed back into what Andromeda retrieves in future moments
So “retrieval” and “ranking” are not two separate
optimisations. They are one connected decision cycle.
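In software terms, this is the classic two-stage recommender pattern: a cheap retrieval pass shortlists candidates, and a heavier ranking pass picks the winner. A minimal sketch under that assumption; the ads, priors, and scores are invented:

```python
"""Sketch of a two-stage decision pipeline: retrieval shortlists what
can be considered, ranking picks what should be shown. Illustrative
only; not Meta's internals."""

import heapq

def retrieve(candidates, cheap_score, k=3):
    # Engine 1 (eligibility): shortlist the k ads most worth considering.
    return heapq.nlargest(k, candidates, key=cheap_score)

def rank(shortlist, expensive_score):
    # Engine 2 (priority): pick the single ad with the best predicted value.
    return max(shortlist, key=expensive_score)

# Toy account: ads are (name, retrieval_prior, predicted_value) tuples.
ads = [("trend", 0.8, 0.02), ("discount", 0.7, 0.05),
       ("styling", 0.6, 0.03), ("sustainability", 0.2, 0.04)]

shortlist = retrieve(ads, cheap_score=lambda ad: ad[1])
winner = rank(shortlist, expensive_score=lambda ad: ad[2])
print(winner)  # "discount" wins ranking, but only because it was retrieved.
# Note: "sustainability" had the second-best predicted value, yet never
# competed, because its retrieval prior kept it out of the shortlist.
```

The failure mode in the toy example is the one that matters: an ad with strong predicted value still loses if retrieval never shortlists it.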
For the Germany-based fashion advertiser, this is why the
same creative can behave differently across time and markets: the engine is not
just scoring an ad, it is learning sequences, context, and downstream outcomes
based on the signals it receives from each market environment.
Andromeda: The Eligibility and Retrieval Engine
Andromeda is responsible for retrieval, not
optimisation.
Its job is to decide which ads from a single advertiser are
relevant enough to be placed into a shortlist for a user, right now.
That sounds simple. It is not.
Because what Andromeda is doing is replacing the old
question, “Which audience did the advertiser select?” with a new question:
Which of this advertiser’s active ads best match what this
user appears to care about in this moment?
Andromeda is effectively building a live shortlist from your
account, impression by impression.
What “eligibility” actually means in practice
Eligibility is not a binary rule like “in audience” or “not
in audience.”
It is a probability-weighted inclusion decision.
An ad can be:
- retrieved very frequently for a certain context
- retrieved rarely for another context
- retrieved almost never if the signals are weak or repetitive
So even inside one advertiser account, not all creatives
have equal access to delivery.
This is why advertisers often say:
“Meta is not spending on my ad even though the targeting is fine.”
The targeting can be fine. The ad is not being retrieved.
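One way to picture probability-weighted inclusion is as a per-impression coin flip whose odds differ by context. A deliberately crude sketch; the contexts and probabilities are invented:

```python
"""Sketch of probability-weighted inclusion: eligibility is not a binary
audience rule but a per-impression sampling probability that differs by
context. Numbers are invented."""

import random

# Hypothetical retrieval probabilities for ONE ad across contexts.
retrieval_prob = {
    "evening_browse_fashion": 0.60,  # retrieved very frequently
    "weekday_news_scroll":    0.05,  # retrieved rarely
    "saturated_context":      0.01,  # retrieved almost never
}

def is_retrieved(context):
    # Same ad, same account: access to delivery varies with context.
    return random.random() < retrieval_prob[context]

trials = 10_000
for ctx in retrieval_prob:
    hits = sum(is_retrieved(ctx) for _ in range(trials))
    print(f"{ctx}: retrieved in {hits / trials:.1%} of opportunities")
```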
Inside the fashion brand’s account
At any moment, the brand may be running:
- a seasonal collection campaign
- a sustainability-led message
- a price-driven promotion
- a styling or lifestyle narrative
In Germany, this might map to very practical business
moments: winter outerwear drops, back-to-office cycles, sale periods, or
premium capsule launches where brand tone matters as much as conversion rate.
Andromeda evaluates those creatives and asks if each one
should enter consideration for the specific user.
To do that, it analyses:
- the visual meaning of the creative
- the semantic meaning of the copy
- product category and usage cues
- recent on-platform behaviour
This is not just “who likes fashion.”
It’s closer to:
- what style signals the user is responding to
- what shopping mode the user appears to be in
- what content pattern the user is engaging with
- what intent-like behaviour is emerging right now
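A common way systems answer questions like these is to embed creatives and contexts in a shared space and match by similarity. This is a hedged analogy, not Meta’s actual model; the three-dimensional “meaning” vectors are invented:

```python
"""Analogy for creative-to-context matching: score each creative against
a live context vector with cosine similarity. Features and vectors are
invented stand-ins for the visual/semantic/behavioural signals above."""

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented dimensions: (novelty, price_sensitivity, style_focus)
creatives = {
    "trend_drop":       (0.9, 0.1, 0.5),
    "sale_banner":      (0.2, 0.9, 0.1),
    "styling_lookbook": (0.3, 0.1, 0.9),
}

# A user in an apparent "discount hunting" mode, inferred from behaviour.
context = (0.2, 0.8, 0.2)

for name, vec in sorted(creatives.items(), key=lambda kv: -cosine(kv[1], context)):
    print(f"{name}: match {cosine(vec, context):.2f}")  # sale_banner ranks first
```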
What Andromeda changed in delivery
This was Meta’s first major AI overhaul because it flipped
the logic from audience-first to creative-first matching.
With Andromeda, Meta increasingly evaluates:
- visuals, themes, hooks, language
- format signals (what type of asset this is)
- prior engagement patterns tied to similar signals
That is why the platform began rewarding setups with a
larger opportunity pool. Broader campaigns with more creative inputs give the
retrieval layer more options to match users and still satisfy campaign goals.
Creative Diversification as a Retrieval Mechanism
For the ecommerce fashion brand, this explains why creative
variety became a structural requirement.
A trend-led visual, a sustainability narrative, a
discount-driven message, and a styling-focused creative are not small
variations.
They are distinct retrieval signals.
Each signal opens a different pathway into eligibility.
- trend-led signals connect to discovery and novelty behaviour
- styling signals connect to consideration and “how it looks” behaviour
- sustainability signals connect to values-based evaluation behaviour
- discount signals connect to price sensitivity and urgency behaviour
In Germany, diversification also protects the business from
over-dependence on a single conversion narrative. If one message saturates, the
account still has other eligibility corridors that can continue retrieving
demand without forcing a full reset.
If the brand repeats one dominant message:
- retrieval narrows
- exploration stalls
- delivery plateaus
If the brand supplies multiple, clearly differentiated
concepts:
- retrieval expands
- more demand pockets open
- delivery stabilises
Creative diversification is not experimentation.
It is how the system maps demand.
This also explains why “creative fatigue” started to feel
faster for many advertisers. If the retrieval layer learns a concept is
saturated for the audience contexts it maps to, eligibility can decline sooner
than teams expect, even before classic performance indicators visibly degrade.
GEM: The Ranking and Decision Engine
Once Andromeda produces a shortlist of eligible ads, GEM
(Generative Ads Model) takes over.
GEM’s role is to decide which ad should actually be shown, impression
by impression.
GEM is not looking at one ad in isolation.
It is comparing eligible options and estimating which one
creates the best expected value right now.
What GEM is actually ranking
GEM is essentially running a probability and value calculation
for each eligible ad.
It blends:
- Estimated Action Rate
- expected value of the action
- creative and format performance patterns
- user experience signals
- pacing and competitive context
The key shift is that the system is not obeying fixed rules.
It is making probabilistic tradeoffs based on what it
predicts will happen.
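Meta’s public auction documentation has long described total value as roughly bid × estimated action rate + estimated ad quality. The exact blend of the signals above is not public, so treat this as a worked caricature with invented numbers:

```python
"""Worked sketch of a probabilistic ranking tradeoff, using the rough
public formula: total value = bid * estimated action rate + ad quality.
All figures are invented."""

def total_value(bid_eur, est_action_rate, quality):
    # A strong bid cannot rescue a weak predicted action rate.
    return bid_eur * est_action_rate + quality

# Two eligible ads competing for one impression:
aggressive = total_value(bid_eur=40.0, est_action_rate=0.004, quality=0.02)
modest     = total_value(bid_eur=25.0, est_action_rate=0.012, quality=0.05)

print(f"aggressive bid: {aggressive:.3f}")  # 0.180
print(f"modest bid:     {modest:.3f}")      # 0.350 -> wins despite the lower bid
```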
What “ranking” really means
In the old model, buyers assumed:
“If I set the audience correctly and bid correctly, delivery follows.”
In the ranking model, delivery is conditional on predicted
outcomes.
Even if your bid is strong, the system can deprioritise you
if it predicts:
- low action probability
- weak value signal
- inconsistent feedback history
That is why “perfect” buying control can fail.
Why GEM was a bigger shift than Andromeda
Andromeda answers: what can be shown.
GEM answers: what should be shown next.
That “next” is the part that changes media buying
psychology. GEM is not only picking a winner in isolation. It is learning
patterns across:
- ad sequences
- formats
- messaging arcs
- what users do organically versus after ad exposure
- what tends to work in combination across time
So ads are increasingly evaluated inside broader contextual
journeys, not as single isolated impressions.
GEM also feeds predictions back into retrieval. In practical
terms, the ranking system learns what combinations and sequences work, and that
learning influences what Andromeda is more likely to retrieve for similar
contexts in the future.
For the Germany-based fashion advertiser, this is why
constant small edits started producing weaker results. If the system is
learning longer-term journey patterns, frequent resets interrupt the very
pattern recognition it is trying to build.
How the Two Engines Changed Planning and Buying Together
This separation of retrieval and ranking is the structural
change most teams miss.
Planning now influences eligibility through creative and
signal design.
Buying now influences ranking stability through pacing and restraint.
In the Facebook Ads era:
- planning defined who could see an ad
- buying decided which ad would win
In the current system:
- planning shapes which ads can be considered
- buying protects the system’s ability to rank accurately
If planners don’t supply diverse eligibility signals,
ranking has no good options.
If buyers over-control delivery, ranking loses the room to learn.
This is also why fast testing cycles and constant edits feel
less reliable now. When long-term patterns matter more, frequent resets can
interrupt pattern recognition and cause the system to relearn basic
relationships instead of building momentum.
Declared Intent vs Latent Intent
Traditional planning assumed intent was declared:
- interests
- demographics
- funnel stages
Meta’s AI-powered system optimises for latent intent:
- inferred readiness
- contextual timing
- behavioural signals
This explains why interest stacking weakened, why funnel splits
fragment learning, and why impression-level optimisation outperforms
stage-based planning.
Planning moved from asking who the user is to understanding
where the user is in intent space.
The Feedback Loop That Determines Scale
GEM learns only from outcomes it can observe.
For European ecommerce advertisers, this increasingly
depends on:
- Conversions API
- server-side purchase confirmation
- first-party transaction data
- value signals
If feedback is delayed, incomplete, or flattened,
probability estimates degrade. That degradation shows up as volatile
performance, unstable scaling, and sudden delivery drop-offs.
This is not a tracking detail.
It is the quality of the learning signal.
This is also why conversion volume and consistency matter
more than ever. Without enough stable outcome data, the system struggles to
detect trends, sequence effects, and pattern reliability.
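For concreteness, here is what a server-side purchase signal looks like through Meta’s Conversions API. The endpoint shape and field names follow Meta’s public documentation; the pixel ID, access token, API version, and order details are placeholders:

```python
"""Hedged sketch of a server-side Purchase event via Meta's Conversions
API. Replace the placeholder credentials and values with your own."""

import hashlib
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
URL = f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events"

def sha256(value: str) -> str:
    # Meta requires user identifiers to be normalised and SHA-256 hashed.
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "event_id": "order-10293",  # enables deduplication against the pixel
    "user_data": {"em": [sha256("customer@example.com")]},
    "custom_data": {"currency": "EUR", "value": 79.90},
}

resp = requests.post(URL, params={"access_token": ACCESS_TOKEN},
                     json={"data": [event]})
print(resp.status_code, resp.json())
```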
False Efficiency: The New Failure Mode
By default, the system optimises toward Estimated Action
Rates.
For an ecommerce fashion brand, this means that without
strong value signals, the system may prioritise:
- discount buyers over full-price buyers
- low-margin orders over high-margin ones
- one-time purchasers over repeat customers
Platform metrics can improve while business quality declines.
This is false efficiency.
The system is doing exactly what it is trained to do.
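The fix is to make value part of the training signal. Here is a toy comparison of the two objectives, with invented numbers; “margin as the value signal” is one common pattern, not a Meta default:

```python
"""Sketch of why value signals matter: the same two buyer profiles
ranked by predicted action rate alone versus by expected margin.
All numbers are invented."""

orders = [
    # (buyer type, predicted purchase prob, revenue_eur, margin_eur)
    ("discount buyer",   0.030, 35.0,  4.0),
    ("full-price buyer", 0.012, 90.0, 32.0),
]

# Optimising on action rate alone favours the cheap conversion:
by_rate = max(orders, key=lambda o: o[1])
# Optimising on expected margin favours the better customer:
by_value = max(orders, key=lambda o: o[1] * o[3])

print("action-rate winner:", by_rate[0])   # discount buyer
print("value winner:      ", by_value[0])  # full-price buyer (0.012*32 > 0.030*4)
```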
Budget Liquidity and Learning Stability
Probabilistic systems require room to explore.
When the fashion brand fragments budgets by country, funnel
stage, or product category, learning density collapses. Each split reduces the
system’s ability to compare outcomes and adjust probabilities reliably.
When budgets are consolidated, eligibility improves, ranking
stabilises, and scale becomes sustainable.
Liquidity is not about spending more.
It is about allowing the system enough freedom to learn.
Budget also behaves like a signal. If daily budgets are too
low relative to the conversion event, the system cannot generate enough
consistent outcome data per learning cycle to detect patterns. High-intent
events like purchases generally require more spend per learning cycle than
upper-funnel actions like clicks or engagements, simply because the outcome is
rarer and noisier.
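A back-of-envelope check makes the point. Meta’s long-standing rule of thumb is roughly 50 conversion events per ad set per week to exit the learning phase; the budget and CPA figures below are invented:

```python
"""Learning-density check against the ~50 conversions/ad set/week
rule of thumb. Budget and CPA figures are invented."""

def weekly_conversions(daily_budget_eur: float, cpa_eur: float) -> float:
    # Expected conversion events an ad set can generate in 7 days.
    return daily_budget_eur * 7 / cpa_eur

CPA = 45.0  # assumed blended cost per purchase for the fashion brand

# 600 EUR/day fragmented across 6 market/funnel ad sets (100 EUR each):
print(f"fragmented:   {weekly_conversions(100.0, CPA):.0f} conversions/week per ad set")
# The same 600 EUR/day consolidated into one ad set:
print(f"consolidated: {weekly_conversions(600.0, CPA):.0f} conversions/week")
# ~16 vs ~93: only the consolidated structure clears the ~50/week bar.
```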
For the Germany-based advertiser, this is often where
finance and performance collide: low budgets feel “safe” in the short term, but
they restrict the system’s ability to learn, which can make performance less
stable and scaling harder when the business actually needs growth.
What Media Planning and Media Buying Are Now
Media planning now means:
- defining outcomes the system should learn toward
- supplying diverse, intent-rich creative signals
- ensuring clean, value-aware feedback
- designing structures that preserve liquidity
Media buying now means:
- pacing rather than steering
- restraint rather than control
- protecting learning rather than forcing outcomes
Execution skill matters less than system understanding.
In practice, this shift also changed how experienced teams
read performance:
- less focus on daily spikes
- more focus on rolling windows (several days)
- less reactive editing
- more emphasis on stable learning periods where the system can recognise patterns rather than constantly resetting them
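In practice that reading discipline can be as simple as judging a rolling mean instead of daily points. A small pandas sketch with an invented ROAS series:

```python
"""Sketch of reading performance on a rolling window instead of daily
spikes. The ROAS series is invented."""

import pandas as pd

daily_roas = pd.Series(
    [3.1, 2.2, 4.0, 2.6, 3.4, 2.1, 3.8, 2.9, 3.3, 2.5],
    index=pd.date_range("2024-01-01", periods=10, freq="D"),
)

# A 7-day rolling mean smooths out day-level noise that the delivery
# system itself does not react to one day at a time.
print(daily_roas.rolling(window=7).mean().round(2))
```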
The Reality
Meta advertising is no longer a platform where humans guide
delivery directly.
It is a probabilistic system where:
- Andromeda decides eligibility
- GEM decides prioritisation
- outcomes train future decisions
Media planning and media buying still matter.
They now operate above the system, not inside it.
That is the structural shift most teams are still struggling
to internalise.