Friday, 6 February 2026

💰 Where Does €1 of Programmatic Spend Actually Go?

A Transparent Look at the RTB Value Chain

Ever wondered what actually happens to €1 you invest in programmatic media?

It doesn’t go straight to the publisher.

Instead, that €1 travels through a complex RTB value chain before an ad is finally shown to a user. What looks like a single media transaction is actually a multi-layered journey involving technology platforms, data providers, verification vendors, and monetization systems.

 

🔗 The Full Programmatic Value Chain

Here’s how a typical open-web programmatic transaction flows:

➡️ Advertiser ➡️ Agency ➡️ Trading Desk (ATD) ➡️ DSP ➡️ Data Providers / DMPs ➡️ Verification (IAS / DoubleVerify / Moat) ➡️ Ad Server ➡️ Ad Exchange ➡️ SSP ➡️ Publisher Ad Server ➡️ Publisher ➡️ User

That means 8–10+ different players can touch a single impression before it is delivered.

Each participant adds value.
Each also takes a fee.

 

📊 So How Is €1 Actually Distributed?

Based on major supply chain transparency studies (PwC x ISBA and subsequent industry audits), the average distribution looks like this:

➡️ ~€0.51 → Publisher revenue
➡️ ~€0.35 → AdTech & intermediary fees
➡️ ~€0.14 → Unknown / unattributed delta

In simple terms:

  • Just over half reaches the media owner
  • Over one third funds technology and execution
  • A remaining portion is not fully traceable

 

💸 Inside the ~€0.35 AdTech & Intermediary Share

Let’s break down where that money goes across the stack:

➡️ Agency / Trading Desk → ~€0.08 – €0.12
Media planning, buying strategy, optimization, reporting, service margins

➡️ DSP Platform Fees → ~€0.10
Bidding infrastructure, platform access, algorithmic optimization

➡️ Data & Audience Segments → ~€0.02 – €0.05
3rd-party behavioral, demographic, contextual, and intent data

➡️ Verification & Brand Safety → ~€0.01 – €0.03
Fraud detection, viewability, suitability, brand protection

➡️ Ad Serving → ~€0.01 – €0.02
Creative hosting, impression tracking, delivery infrastructure

➡️ Ad Exchange Transactions → ~€0.02 – €0.04
Auction mechanics, bid clearing, transaction processing

➡️ SSP Monetization Fees → ~€0.08
Inventory packaging, yield optimization, bidstream management

➡️ Tech Markups → Embedded across layers
Resold data, bundled tech, managed service margins
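
To sanity-check the arithmetic, here is a minimal sketch (TypeScript) that totals the intermediary fee ranges listed above per €1 of spend. The numbers are copied from the breakdown, not audited figures; the ~€0.35 average sits inside the resulting band.

```typescript
// Fee ranges per €1 of spend, taken from the breakdown above (illustrative, not audited).
const feeRanges: Array<[label: string, low: number, high: number]> = [
  ["Agency / Trading Desk", 0.08, 0.12],
  ["DSP platform fees",     0.10, 0.10],
  ["Data & audience",       0.02, 0.05],
  ["Verification",          0.01, 0.03],
  ["Ad serving",            0.01, 0.02],
  ["Ad exchange",           0.02, 0.04],
  ["SSP monetization",      0.08, 0.08],
];

const low  = feeRanges.reduce((sum, [, lo]) => sum + lo, 0);   // ≈ €0.32
const high = feeRanges.reduce((sum, [, , hi]) => sum + hi, 0); // ≈ €0.44

console.log(`Intermediary take per €1: €${low.toFixed(2)} – €${high.toFixed(2)}`);
// Whatever remains after these fees and the ~€0.14 unattributed delta is the publisher's share.
```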

 

Additional Cost Layers Often Present

Depending on campaign setup, additional costs may also apply:

➡️ Creative Production & DCO Tech → ~€0.01 – €0.03
Dynamic creative optimization, versioning, personalization

➡️ Measurement & Attribution Tools → ~€0.01 – €0.02
MTA, incrementality testing, cross-channel attribution

➡️ Consent Management & Identity Solutions → ~€0.005 – €0.01
CMP platforms, ID graphs, cookieless targeting infrastructure

➡️ Brand Lift & Research Studies → Variable
Survey vendors, brand perception measurement

 

📌 What This Means in Practice

➡️ From every €1 invested, only about €0.50 reaches the publisher.

The rest funds:

  • Technology infrastructure
  • Data enrichment
  • Fraud prevention
  • Measurement systems
  • Auction mechanics
  • Optimization platforms

Programmatic delivers automation, targeting precision, and global scale.

But the financial supply chain behind a single impression remains one of the most layered and complex ecosystems in digital advertising.

 


Conversions API Explained: A No-Nonsense 101 for Digital Marketers, From Theory to Implementation and Real-World Examples




Digital advertising did not suddenly stop working. What changed is how much of the truth ad platforms are allowed to see.

For years, performance marketing operated in a browser-first world. A user clicked an ad, converted, and the browser reported what happened. Measurement felt deterministic. Optimization felt controllable.

That world is gone.

Privacy regulation, browser restrictions, OS-level changes, and fragmented user journeys have weakened browser-based tracking. Today, many teams are optimizing with partial, delayed, or distorted signals.

Conversions API exists to restore signal integrity.

This is a true 101 guide. It explains the full system, the decisions behind it, and the exact steps to implement CAPI properly using Google Tag Manager, without treating it like a developer-only project.

Who this 101 is for

This guide is for
✔️ Marketers managing serious paid media budgets
✔️ Teams optimizing for revenue, not clicks
✔️ Businesses that care about scale and unit economics
✔️ Marketers who want control over measurement

And who it is not for
❌ First-week beginners
❌ One-campaign experiments
❌ Teams looking for quick hacks
❌ Businesses without access to backend or CRM data

CAPI is infrastructure.
Infrastructure matters most when scale and accountability matter.

How digital advertising actually works end to end

Before CAPI makes sense, you must understand the system it feeds.

🟢 Ad serving
• A user opens an app or website
• The platform runs an auction in milliseconds
• Ads are ranked by predicted outcomes like conversion probability and value
• The winning ad is shown

Those predictions are built almost entirely on historical conversion signals.

🟢 Interaction
• The user views or clicks the ad
• The platform assigns identifiers like click IDs or device signals

🟢 Landing
• The user lands on your site or app
• Tracking scripts attempt to load

🟢 Conversion
• Purchase
• Lead
• Signup
• Subscription

🟢 Signal return
• Traditionally sent by the browser pixel
• Fed back into bidding and delivery

If signal return weakens, ad serving quality degrades.

Why traditional tracking breaks in the real world

The browser is no longer reliable.

❌ Cookies blocked
❌ iOS opt-in suppresses data
❌ Ad blockers stop scripts
❌ Slow pages drop events
❌ Cross-device journeys fragment users

Reality today:

➡️ Conversions still happen
➡️ Revenue still comes in
➡️ Platforms do not see everything

This creates distorted performance signals.

📉 CPA looks higher than reality
📉 ROAS looks weaker than reality
📉 Learning resets frequently
📉 Scaling becomes unstable

This is a signal problem, not a performance problem.

What Conversions API actually is

Conversions API is a server-based confirmation layer for conversion events.

Instead of relying only on the browser, your backend confirms conversions directly to ad platforms.

Browser pixel
→ fast
→ fragile

Server event
→ slower
→ reliable

Most serious setups use both together. This is hybrid tracking.

 

Pixel vs CAPI in marketer terms

Browser pixel
• Real-time
• Dependent on cookies and scripts
• Breaks easily

CAPI
• Server-confirmed
• Based on business truth
• Resilient to privacy changes

Best practice is combining both.

What CAPI does NOT do
Important expectations to set early

CAPI is powerful, but it is not a magic lever. Being explicit about this protects decision-making and credibility.

CAPI does NOT
❌ Automatically lower CAC
❌ Fix weak creative or poor offers
❌ Improve landing page conversion rates
❌ Solve attribution disagreements between tools
❌ Replace strategy, messaging, or pricing

What CAPI actually does
✅ Improves signal quality
✅ Reduces data loss
✅ Helps algorithms learn from reality
✅ Makes performance analysis more reliable

If performance improves after CAPI, it is usually because platforms can finally see the truth, not because CAPI created demand.

 

Consent, privacy, and legal reality

CAPI does not bypass consent. It must respect it.

Consent-aware logic:

User gives consent
→ browser pixel fires
→ server event allowed
→ identifiers included

User denies consent
→ browser suppressed
→ server sends limited or no data
→ no identifiers included

Key distinctions:

Browser consent
• Controls client-side execution

Server consent
• Controls whether backend data can be enriched and sent

Rule
If consent is false, CAPI must downgrade or stop signals. Ignoring this breaks compliance or silently breaks tracking.

CAPI and attribution vs optimization

CAPI improves optimization, not attribution perfection.

What improves
✅ Conversion visibility
✅ Signal stability
✅ Algorithm learning

What does not magically improve
❌ Cross-channel attribution
❌ GA vs platform parity
❌ CRM vs finance reconciliation

CAPI helps platforms decide where to spend next, not explain history perfectly.

 

Event prioritization and aggregation logic

Platforms learn best from clear priorities.

Effective priority stack:

🏆 Purchase

🎯 Lead or Subscribe

🧭 Checkout or Registration

👀 View or engagement

Rules
• Optimize on one primary event
• Use others for learning and audiences
• Too many “important” events confuse algorithms

Value strategy for CAPI

Value is strategy, not a field.

Decisions you must make:

Fixed vs dynamic value
• Fixed for early lead gen
• Dynamic for ecommerce

Revenue vs proxy value
• Ecommerce → real revenue
• Lead gen → proxy first, CRM-backed later

Transaction vs LTV
• Start with transaction truth
• Move to LTV only when proven

Wrong value logic hurts bidding more than missing data.
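
To make the decision concrete, here is a minimal sketch of a value resolver: fixed proxy values for early-stage lead gen, real order revenue for ecommerce. The helper name and the €25 proxy are illustrative assumptions, not prescriptions.

```typescript
type ConversionInput =
  | { kind: "purchase"; orderRevenue: number; currency: string }
  | { kind: "lead"; currency: string };

// Illustrative proxy value for a lead before CRM-backed values exist.
const LEAD_PROXY_VALUE = 25;

// Hypothetical helper: one place that decides what "value" means per event.
function resolveEventValue(input: ConversionInput): { value: number; currency: string } {
  if (input.kind === "purchase") {
    // Ecommerce: send the real transaction revenue in one consistent currency.
    return { value: input.orderRevenue, currency: input.currency };
  }
  // Lead gen: start with a fixed proxy, replace with CRM-backed values later.
  return { value: LEAD_PROXY_VALUE, currency: input.currency };
}

resolveEventValue({ kind: "purchase", orderRevenue: 129.9, currency: "EUR" });
```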

Common real-world failure patterns

CPA spikes
→ value mismatch or deduplication issues

Conversion inflation
→ missing or inconsistent event IDs

Delayed reporting
→ expected server-side behavior

Match quality not improving
→ insufficient first-party data

Most failures are configuration errors, not platform issues.

Platform differences

What stays the same
• Server-confirmed events
• Deduplication
• Value-based optimization

What changes
• Event naming
• Diagnostics tools
• Debug interfaces

CAPI is infrastructure. Platforms are destinations.

Business readiness checklist

CAPI matters when:

✔️ You spend meaningful paid media budget
✔️ You optimize beyond clicks
✔️ You scale regularly
✔️ You have backend or CRM access
✔️ You want privacy resilience

If not, fix fundamentals first.

How leaders should read performance after CAPI

Expect shifts:

• More conversions reported
• CPA may normalize
• Historical benchmarks may break
• Platform vs analytics gaps may change

This reflects better visibility, not worse performance.

What to expect after implementation
Timelines that prevent false conclusions

CAPI changes visibility first, then behavior.

Typical timeline in real accounts:

First few days
• More conversions may appear
• Reporting may look “off” vs historical benchmarks
• Server events may show slight delays

Week 1 to 2
• Deduplication stabilizes
• Conversion volume normalizes
• CPA volatility reduces

Weeks 2 to 4
• Learning phases stabilize
• Delivery becomes more predictable
• Broad and lookalike audiences improve

Important rule
Do not judge CAPI success in the first 48 hours. Judge it after data stabilizes, not when numbers spike or dip temporarily.

 

CAPI workflow mental model

🧑 User action
→ click
→ site
→ conversion

🌐 Browser signal
→ fast
→ fragile

🔁 Event forwarding
→ browser independence

🖥️ Server confirmation
→ truth
→ enrichment

📣 Platform ingestion
→ deduplication
→ learning

🧠 Optimization
→ stability
→ scale

 

Practical implementation using Google Tag Manager

A true step-by-step marketer walkthrough

This section assumes no backend coding and focuses on what marketers actually control.

 

Step 0: Define your tracking architecture

Before touching GTM, decide this clearly.

🎯 Primary optimization event
• Purchase or Lead

🧩 Supporting events
• ViewContent
• AddToCart
• InitiateCheckout

💰 Value logic
• Revenue or proxy value
• Single currency format

🆔 Event ID source
• order_id
• transaction_id
• lead_id

If this is unclear, stop here.
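
One way to force that clarity is to write the decisions down as a small tracking plan before touching any tags. Everything in this sketch (event names, the transaction_id choice, EUR) is an assumed example rather than a required schema.

```typescript
// Hypothetical tracking plan: decide this once, then build GTM to match it.
const trackingPlan = {
  primaryEvent: "purchase",                        // the one event optimization bids on
  supportingEvents: ["view_item", "add_to_cart", "begin_checkout"],
  value: {
    source: "order_revenue" as const,              // real revenue, not a proxy
    currency: "EUR",                               // single currency format everywhere
  },
  eventIdSource: "transaction_id",                 // same ID on browser and server events
};
```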

 

Step 1: Validate your Web GTM data layer

Open GTM Preview and complete a test conversion.

Confirm the data layer includes:

• event name
• value
• currency
• transaction or lead ID
• consent state
• user identifiers if collected

Rules
• One conversion = one event
• No duplicates
• No random naming

If Web GTM is messy, Server GTM will amplify the mess.
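
For reference, a clean purchase push into the Web GTM data layer might look like the sketch below. The field names are an assumed convention; what matters is one event per conversion, a stable ID, explicit value and currency, and the consent state travelling with the event.

```typescript
// Web page context: dataLayer is the array Web GTM reads events from.
const w = window as unknown as { dataLayer: Record<string, unknown>[] };
w.dataLayer = w.dataLayer || [];

// One conversion = one push, with a stable transaction ID reused later for deduplication.
w.dataLayer.push({
  event: "purchase",
  transaction_id: "ORD-10482",    // becomes the shared event ID
  value: 129.9,
  currency: "EUR",
  consent_marketing: true,        // consent state captured alongside the event
  user_email: "user@example.com", // only if actually collected and consented
});
```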

 

Step 2: Create a Server GTM container

In Google Tag Manager:

  1. Create new container
  2. Choose Server as container type
  3. Complete setup

What this does
You create a controlled processing layer between your site and ad platforms.

 

Step 3: Host the Server container

Server GTM needs a runtime environment.

Typical choices
• Google Cloud
• Managed server-side GTM providers

Marketer responsibilities
• Ensure uptime
• Monitor costs
• No need to manage infrastructure

 

Step 4: Connect Web GTM to Server GTM

Modify Web GTM so events are forwarded to the Server container.

Conceptually:

Website
→ Web GTM fires event
→ Event sent to Server GTM endpoint

This creates one reusable pipeline.
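
In practice this is configured inside GTM itself (typically by pointing your web tags at the server container URL) rather than hand-coded, but conceptually the forwarding step amounts to something like the sketch below. The endpoint URL is a placeholder assumption, not a real GTM API.

```typescript
// Purely illustrative: the real pipeline is configured in GTM, not written by hand.
// "https://sgtm.example.com" stands in for your server container endpoint.
async function forwardToServerContainer(event: Record<string, unknown>): Promise<void> {
  await fetch("https://sgtm.example.com/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
    keepalive: true, // lets the request survive page navigation
  });
}
```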

 

Step 5: Configure clients in Server GTM

Clients define how events are received.

Common setup
• GA4 client receives events
• Consent signals passed through

Think of clients as inbox rules.

 

Step 6: Configure CAPI tags in Server GTM

Tags define where events are sent.

For each platform:

• Create a CAPI tag
• Map event name
• Map value and currency
• Map event ID
• Map user data fields

One tag per event type is usually safest.
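
As an illustration of the mapping, a Meta-style server event might end up shaped roughly like this. The field names follow Meta's commonly documented Conversions API schema, but treat the exact shape as an assumption and verify it against the platform's current reference.

```typescript
// Rough shape of a server-side Purchase event after mapping in Server GTM.
const serverEvent = {
  event_name: "Purchase",
  event_time: Math.floor(Date.now() / 1000), // unix seconds
  event_id: "ORD-10482",                     // same ID the browser pixel sent
  action_source: "website",
  user_data: {
    em: ["<sha256 of normalized email>"],    // hashed identifiers only
    ph: ["<sha256 of normalized phone>"],
  },
  custom_data: {
    currency: "EUR",
    value: 129.9,
  },
};
```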

 

Step 7: Configure triggers

Triggers decide when tags fire.

Examples
• Purchase trigger fires Purchase CAPI tag
• Lead trigger fires Lead CAPI tag

Rules
• One trigger per meaningful event
• Avoid overly broad conditions

 

Step 8: Deduplication setup

Critical step.

Ensure:

• Browser event includes Event ID
• Server event uses the same Event ID

Result
One conversion is counted once.

Without this, reporting inflates and optimization breaks.
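
A minimal sketch of the discipline, assuming a Meta-style pixel on the browser side; the eventID option mirrors Meta's documented deduplication parameter, but verify the exact name for your platform.

```typescript
// Meta's pixel exposes a global fbq() function; declared here so the sketch compiles.
declare const fbq: (
  method: "track",
  eventName: string,
  params: Record<string, unknown>,
  options: { eventID: string }
) => void;

const eventId = "ORD-10482"; // derived from the order ID, never random per send

// Browser event carries the ID…
fbq("track", "Purchase", { value: 129.9, currency: "EUR" }, { eventID: eventId });

// …and the server event from Step 6 reuses the exact same ID in event_id,
// so the platform counts this conversion once instead of twice.
```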

 

Step 9: Consent enforcement in Server GTM

Inside Server GTM:

• Read consent state
• If consent denied
→ block tags
→ strip identifiers

This ensures legal and functional correctness.
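
A minimal sketch of that rule, assuming the consent state travels with the event (field names are hypothetical):

```typescript
interface IncomingEvent {
  name: string;
  value?: number;
  currency?: string;
  consent_marketing: boolean;
  user_data?: { em?: string[]; ph?: string[] };
}

function applyConsent(event: IncomingEvent): IncomingEvent | null {
  if (event.consent_marketing) {
    return event;                                // consent given: full event allowed
  }
  // Consent denied: strip identifiers (or return null to block the tag entirely).
  return { ...event, user_data: undefined };
}
```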

 

Step 10: Match quality enrichment

If consent allows, enrich server events with:

• Email (hashed)
• Phone (hashed)
• CRM ID

Do not send what you do not legally collect.
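
If you do enrich, identifiers should be normalized and hashed before they leave your systems. A minimal Node.js sketch using SHA-256; the normalization rules here (trim, lowercase) follow common platform guidance but should be checked per platform.

```typescript
import { createHash } from "node:crypto";

// Normalize first, then hash; platforms match users on the hash, never the raw value.
function hashIdentifier(raw: string): string {
  const normalized = raw.trim().toLowerCase();
  return createHash("sha256").update(normalized).digest("hex");
}

hashIdentifier("  User@Example.com "); // deterministic sha256 hex string
```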

 

Step 11: Validation and testing

Test with real actions.

Checklist
✔️ Browser event visible
✔️ Server event visible
✔️ Deduplication confirmed
✔️ Values match backend
✔️ Consent respected

Ignore dashboards until this passes.

 

Step 12: Rollout strategy

Do not enable everything at once.

Safe rollout

  1. Enable primary event only
  2. Observe for several days
  3. Add supporting events
  4. Expand to other platforms

 

Step 13: Ongoing maintenance

Treat CAPI like analytics infrastructure.

Monthly
• Compare event counts vs backend
• Check for duplicates
• Review diagnostics

After any site change
Assume tracking broke and revalidate.

 

Final framework to remember

Truth → Signals → Learning → Scale

That is a real CAPI 101.

 

That is how CAPI should be implemented using GTM, in a way that actually improves performance instead of just adding complexity.

Why metrics like ROAS often mislead teams

And why real performance needs more context

Most performance marketing discussions still revolve around ROAS. It is fast, intuitive, and easy to communicate. Leadership understands it. Platforms optimize around it. Dashboards highlight it.

But ROAS is a surface metric.

It tells you what happened in the platform’s visible world, not necessarily what happened in the business. In a privacy-restricted environment, that gap matters more than ever.

This is where CAPI changes the conversation. Not by inflating numbers, but by reducing blind spots. And this is also where ROAS must be paired with CLTV : CAC to judge whether growth is actually healthy.

To make this concrete, let’s walk through a realistic example.

NOTE: It’s possible for ROAS to improve while CLTV : CAC deteriorates if acquisition quality drops.

A practical example

Why ROAS alone lies and how CAPI plus CLTV : CAC reveals the real picture

Let’s take a fictional but realistic scenario.

🇩🇪 A German ecommerce brand
• Direct-to-consumer
• Mid-ticket products
• Running paid media primarily on Meta Ads
• Optimizing for Purchase events

What the marketing dashboard shows before CAPI

Inside Meta Ads Manager, the numbers look strong.

📊 Reported performance
• Spend: €100,000
• Reported revenue: €800,000
• Reported ROAS: 8.0

On the surface, this looks excellent.

Most teams would conclude
“ROAS is 8. We are doing great.”

But this is not the full picture.

What is actually happening underneath

Because tracking is browser-only:

❌ iOS users are underreported
❌ Repeat purchases are partially invisible
❌ Cross-device journeys are broken
❌ Some conversions never get attributed

Reality:

➡️ Meta sees part of the truth
➡️ Finance sees a different truth
➡️ CRM sees yet another truth

ROAS = 8 is directionally useful, but incomplete.

What changes after implementing CAPI

After implementing CAPI correctly:

• Browser pixel remains active
• Server-side confirmations are added
• Deduplication is enforced
• First-party data improves match quality

📊 Post-CAPI reported performance
• Spend: €100,000
• Reported revenue: €950,000
• Reported ROAS: 9.5

Important clarification
This does not mean Meta suddenly created more demand.

It means:

➡️ More real conversions are now visible
➡️ Signal loss has been reduced
➡️ Optimization is based on cleaner truth

ROAS improved because visibility improved, not because performance magically changed.

 

Why ROAS is still not enough

Even with perfect tracking

Even after CAPI, ROAS remains a short-term lens.

ROAS answers
“How much revenue did I get relative to ad spend?”

It does not answer
“Was this customer profitable over time?”

This is where CLTV : CAC becomes non-negotiable.

 

CLTV : CAC explained in plain language

💰 CAC (Customer Acquisition Cost)
• How much you spend to acquire one customer

📈 CLTV (Customer Lifetime Value)
• How much revenue that customer generates over their lifetime

The ratio between the two determines whether growth compounds or collapses.
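
A minimal sketch of the arithmetic, using the €100 CAC that appears in the scenarios below (numbers are illustrative):

```typescript
// CAC = acquisition spend / new customers; CLTV = revenue a customer generates over time.
function ltvToCacRatio(spend: number, newCustomers: number, avgLifetimeRevenue: number): number {
  const cac = spend / newCustomers;
  return avgLifetimeRevenue / cac;
}

ltvToCacRatio(100_000, 1_000, 100); // 1 → break-even acquisition (Scenario 1)
ltvToCacRatio(100_000, 1_000, 200); // 2 → survivable but constrained (Scenario 2)
ltvToCacRatio(100_000, 1_000, 500); // 5 → healthy, scalable (Scenario 3)
```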

CAPI and offline or delayed conversions
Closing the loop beyond the first purchase

Many conversions do not happen instantly or fully online.

Examples
• Repeat ecommerce purchases
• Subscription renewals
• Post-purchase upgrades
• Offline payments or approvals

CAPI allows businesses to send these events after the fact, once they are confirmed in backend systems or CRMs.

Why this matters
➡️ Customer value becomes clearer
➡️ CLTV calculations become more accurate
➡️ Acquisition quality improves over time

This is the missing bridge between
First-click performance
and
Long-term customer value

CAPI is what makes that bridge possible.

 

Scenario 1: CLTV : CAC = 1 : 1

🚨 High risk, fragile growth

Example
• CAC = €100
• CLTV = €100

What this means
• You only break even on acquisition
• No margin for operations, support, logistics, or returns

Even with high ROAS, the business is vulnerable.

Why this happens
• ROAS counts revenue, not profit
• Low repeat rate or thin margins destroy unit economics

This is not scalable.

 

Scenario 2: CLTV : CAC = 2 : 1

⚠️ Survivable, but constrained

Example
• CAC = €100
• CLTV = €200

What this means
• The business makes money
• Scaling increases cash-flow pressure
• Volatility becomes dangerous

Many brands sit here without realizing it.

ROAS looks fine.
Growth feels stressful.

 

Scenario 3: CLTV : CAC = 5 : 1 or higher

✅ Healthy, scalable growth

Example
• CAC = €100
• CLTV = €500+

What this means
• Strong unit economics
• Margin to absorb volatility
• Freedom to scale confidently

In this zone:

➡️ Higher CAC is acceptable
➡️ Broader targeting performs better
➡️ Algorithms can explore more aggressively
➡️ Short-term ROAS swings matter less

This is where performance marketing becomes a growth engine.

 

How CAPI directly supports stronger CLTV : CAC

CAPI does not calculate CLTV for you.
But it enables the system that makes CLTV optimization possible.

🔁 Better conversion visibility
• Fewer lost customers
• More accurate acquisition counts

🧠 Better algorithm learning
• Platforms find higher-quality users
• Not just the cheapest first purchase

📊 Better downstream alignment
• Ad data aligns closer with CRM
• Repeat behavior becomes measurable

CAPI is what allows teams to move from
“ROAS looks good”
to
“Our customers are profitable over time.”

 

The correct mental model to keep

ROAS answers
“Is this working right now?”

CLTV : CAC answers
“Is this worth scaling?”

CAPI exists to ensure both answers are based on truth, not partial visibility.

That is how measurement, optimization, and growth finally align.

 

Wednesday, 4 February 2026

Moltbook: An AI-Only Forum That Reveals the System Layer Behind Digital Marketing


 


Digital marketing looks mature on the surface. Better tools. More automation. More data.

Yet most experienced marketers are running into the same problems again and again:

• campaigns look strong but don’t scale
• reach drops without obvious reasons
• CAC keeps increasing despite optimisation
• leads look fine but sales quality declines

These issues are usually blamed on platforms, algorithms, or market conditions. In reality, there is a deeper shift underneath that many marketers feel daily but struggle to describe clearly.

🔍 The part of marketing we rarely talk about

We still tend to think of marketing as a direct relationship between a brand and a person.

In practice, that relationship is now mediated by systems.

Before a human ever sees your ad, content, product, or brand, systems already decide:

→ should this be shown
→ what is this about
→ who is this for
→ is this consistent with what we already know

This system layer now controls outcomes across paid media, organic discovery, eCommerce, and B2B demand generation.

We operate inside it every day. We just don’t clearly see it.

That’s where Moltbook becomes useful.

🤖 What Moltbook actually is

What it is

Moltbook is a public online forum where only AI agents can post, comment, and interact.

Humans can read everything, but they cannot participate.

Every post, reply, and thread is one system interpreting another system.

How it works

Each AI agent builds understanding slowly, across many interactions.

There is no intent to impress, persuade, or perform.

Instead, behaviour follows simple system logic:

→ ideas that remain stable get referenced again
→ ideas that shift meaning lose priority
→ entities that cannot be clearly classified fade from attention

Nothing is explicitly rejected. It is simply no longer reinforced.

Why this matters

This is exactly how marketing systems behave.

Moltbook removes creative polish and human emotion and lets us watch interpretation logic in isolation.

When it becomes relevant

Moltbook becomes relevant the moment you realise platforms are not asking “is this good marketing?”

They are asking “do we understand this well enough to keep showing it?”

💡 Why this matters beyond curiosity

Moltbook is not important as a product.

It is important because it makes invisible system behaviour visible.

The same logic you see on Moltbook already decides:

→ which ads expand distribution
→ which brands get recommended
→ which products get ranked
→ which leads get prioritised

Normally, marketers only see the outcome. Moltbook lets you observe the mechanism.

🎯 Why digital marketers should actually care

Because many marketing problems today are not caused by weak execution.

They are caused by system uncertainty.

If a system cannot confidently answer:

• what category you are in
• what problem you solve
• who you are for

then performance degrades before optimisation even begins.

Moltbook makes one thing unmistakably clear:

→ systems reward clarity over creativity
→ systems reward repetition over novelty

Why this actually matters (the consequences if you ignore it)

When systems are unsure, the cost appears everywhere:

Budgets inflate
You pay more to compensate for low relevance confidence.

Scaling stalls
Distribution stops expanding because the system hesitates to commit.

CAC rises
Exposure shifts toward colder audiences.

Attribution misleads
You optimise ads and pages while rejection happens earlier.

Sales quality drops
Leads arrive, but intent is weaker by the time humans engage.

This is not creative failure. It is interpretation failure.

🧩 What this means in real marketing work

How Moltbook helps marketers see this clearly

Moltbook shows system behaviour in slow motion.

You can watch how meaning is formed, reinforced, or abandoned.

A consistent pattern emerges:

→ understanding is cumulative
→ inconsistency resets confidence
→ clarity compounds quietly

That same pattern explains many everyday marketing problems.

B2B marketing example (European SaaS)

Imagine a European SaaS company selling compliance software.

Week 1 → Website positions it as “enterprise compliance platform”

Week 3 → LinkedIn ads say “mid-market automation tool”

Week 6 → Sales deck calls it “risk management software”

To humans, this feels like normal experimentation.

To systems, this looks like:

→ category unclear
→ ICP unclear
→ intent confidence reduced

On Moltbook, when an AI agent changes how it describes itself across discussions, other agents stop referencing it.

They do not debate. They disengage.

Real-world outcome:

→ leads still convert
→ intent scores weaken
→ routing deprioritises accounts
→ sales sees fewer strong conversations

B2C and DTC example (European fashion brand)

A fashion brand launches a seasonal campaign.

→ Ads focus on affordability
→ Influencers talk about premium quality
→ Product pages emphasise sustainability
→ Different EU markets highlight different values

Humans understand this nuance.

Systems struggle to form a single definition.

On Moltbook, agents amplify ideas that stay consistent and ignore those that keep shifting.

In feeds, the same thing happens:

→ the system cannot confidently categorise the brand
→ distribution expansion slows
→ frequency increases
→ CAC rises

The creative was strong. The system hesitated.

eCommerce marketplace example (Amazon EU, Zalando)

A product converts well once seen.

But:

→ titles differ by country
→ attributes are inconsistent
→ reviews describe different primary use cases
→ availability fluctuates

On Moltbook, when facts change between interactions, reinforcement stops.

Marketplace systems behave the same way:

→ ranking confidence drops
→ visibility declines
→ growth stalls despite strong conversion

Retail and local example (European retail)

A retailer runs local campaigns.

But:

→ store hours differ across platforms
→ ads promote products not in stock
→ pricing mismatches between channels

On Moltbook, agents disengage when basic facts conflict.

Local discovery systems do the same:

→ store visibility drops
→ footfall declines
→ marketing appears active, stores feel quiet

🧠 What Moltbook helps us understand

Moltbook shows how systems actually operate:

→ they remember patterns, not moments
→ they reward consistency, not creativity
→ they disengage quietly when unsure

This gives marketers a clean mental model for modern digital ecosystems.

Not how platforms explain them. But how they behave.

The real takeaway

Moltbook matters because it explains why good marketing often fails without warning.

Not because people rejected it. But because systems never fully understood it.

For experienced digital marketers, this is not about Moltbook itself.

It is about one hard truth:

systems are now the first audience — humans come second

Once that clicks, many “mysterious” performance problems stop being mysterious.