How to Run Your Own Incrementality Experiments (Without Relying on Meta)
Why Attribution Alone Isn’t Enough
Platforms like Meta often claim credit for conversions, but not all of those conversions are caused by ads. Some users would have bought anyway.
Let’s quickly recap how incrementality works in the real world.
Nikita, the manager of a superstore, decided to increase her sales by hiring an external salesperson, Greg. To track Greg's sales, she gave him a referral code to share with every customer he brought in.
After one week, she saw that Greg was contributing an astounding 100 customers per day. But her total customer count had stayed flat at 220 per day.
The results made no sense.
She paid Greg's commission, but decided to visit the store to see what was happening. What she saw shocked her: Greg was standing near the checkout counter, handing out his code to people who were already waiting in line to complete their purchase.
Greg was getting plenty of sales credited to him, but he was driving no incremental results. That's where incrementality comes in. It doesn't ask who clicked. It asks:
"What is the incremental value being driven by these ads? Which users are here only because of the ads?"
It is a difficult question to answer.
At any point in time there are a ton of things running in parallel: ads, influencers, social media, TV, billboards. Isolating the effect of one campaign can seem impossible.
So we experiment. We ask a different question:
“What would’ve happened if I didn’t run this campaign?”
To answer that, we need to run an experiment with two groups: one that sees the campaign and one that doesn't, while everything else stays the same. This is how you isolate true lift, the actual impact of your efforts.
Let’s walk through how to run these experiments yourself, without needing to trust any platform’s black box.
The Core Idea
Split your audience into test and control groups.
Run your campaign only on the test group.
Compare the results to see what improvement happened because of the campaign.

That’s it.
Easy.
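The split itself is the only step that needs a little care: assignment should be random, but it also helps if it's deterministic, so the same user always lands in the same group. Here is a minimal sketch of one common approach, hashing a user ID into a bucket. The user IDs and the 50/50 split ratio are hypothetical, just for illustration.

```python
import hashlib

def assign_group(user_id: str, test_share: float = 0.5) -> str:
    """Deterministically assign a user to 'test' or 'control' by hashing their ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # pseudo-uniform value in [0, 1)
    return "test" if bucket < test_share else "control"

# Hypothetical user IDs
users = [f"user_{i}" for i in range(1000)]
groups = {u: assign_group(u) for u in users}
print(sum(g == "test" for g in groups.values()))  # roughly 500
```

Because the assignment is a pure function of the ID, you can recompute it later in your analytics pipeline without storing the split anywhere.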
Now let’s look at two real-life examples.
Example 1: Incrementality Test on Meta Ads (Using Geo or Time Splits)
How to run the Experiment:
Split by Geography or Time:
Geo Split: Run the campaign in Region A, but not in Region B.
Time Split: Run the campaign for Month 1, pause it for Month 2.
Run the Ad Campaign in One Group Only:
Show ads only in Region A (or during Month 1).
Use Your Own Tracking:
Measure conversions using your own setup. This can be a CRM, Mixpanel, GA4, BooleanMaths - anything independent of Meta's attribution.
Compare Results:
If Region A sees a 4.8% conversion rate and Region B sees 3.2%, the incremental lift is 1.6 percentage points, or ~50% relative lift.
This reflects the real impact of your ads, not just what Meta reports.
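The arithmetic behind that comparison is worth making explicit, because absolute lift (in percentage points) and relative lift (as a percentage of the control rate) answer different questions. A small sketch using the example rates above:

```python
def incremental_lift(test_rate: float, control_rate: float):
    """Return (absolute lift in percentage points, relative lift in %)."""
    abs_lift_pp = (test_rate - control_rate) * 100
    rel_lift_pct = (test_rate - control_rate) / control_rate * 100
    return abs_lift_pp, rel_lift_pct

# Region A (test) converts at 4.8%, Region B (control) at 3.2%
abs_pp, rel = incremental_lift(0.048, 0.032)
print(f"{abs_pp:.1f} pp absolute, {rel:.0f}% relative")  # 1.6 pp absolute, 50% relative
```

Relative lift is what tells you how much the campaign multiplied your baseline; absolute lift is what feeds into revenue math.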

Why This Matters:
Meta may claim 300 conversions, but your test shows only 100 were actually driven by ads. The rest would have happened regardless. Knowing this helps you allocate your budget smarter and trust your ROI numbers.
Example 2: Incrementality Test on Email Campaigns
Goal: Test whether your abandoned-cart email flow is actually driving incremental purchases.
Step-by-step:
Split abandoned cart users into two groups:
Test group gets the email flow.
Control group doesn’t.
Trigger the campaign.
Track purchases from both groups.
Compare the lift.
If 18% of the test group purchases vs. 6% of the control group, that's a 12-percentage-point difference, or a 200% relative lift.
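In practice you'll be working from raw counts rather than rates. A quick sketch of the same calculation from counts; the group sizes here (500 users per group) are hypothetical, chosen to match the example rates:

```python
def lift_from_counts(test_buyers: int, test_size: int,
                     control_buyers: int, control_size: int) -> float:
    """Relative lift (%) of the test group's purchase rate over the control's."""
    test_rate = test_buyers / test_size
    control_rate = control_buyers / control_size
    return (test_rate - control_rate) / control_rate * 100

# Hypothetical counts: 90 of 500 test users bought (18%), 30 of 500 control users (6%)
lift = lift_from_counts(90, 500, 30, 500)
print(f"{lift:.0f}%")  # 200%
```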

Why this matters:
If you're running retargeting ads alongside emails without testing, you might be overestimating both your ads' effectiveness and their ROI, while also causing real ad fatigue.
Tips for Better Experiments
Use random splits to avoid bias.
Measure success outside the platform: use your own backend or analytics tools.
Don’t rely on single-day results. Run the test long enough (at least a week) for meaningful data.
Make sure your experiments collect enough data to reach statistical significance.
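One standard way to check that last tip is a two-proportion z-test: it tells you how likely a gap as large as yours would be if the campaign had no effect at all. A self-contained sketch, using the hypothetical counts from the email example (90/500 test purchases vs. 30/500 control):

```python
from math import sqrt, erf

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for a difference in proportions, using pooled variance."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_ztest(90, 500, 30, 500)
print(f"z = {z:.2f}, p = {p:.2g}")
```

A p-value below 0.05 is the usual bar; with these counts the gap is far beyond it, but with small groups or small lifts the same test will often tell you to keep the experiment running longer.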
What About Meta’s Incremental Attribution Setting?
Meta recently introduced a modeled "Incremental Attribution" view that estimates causal lift using machine learning. It’s helpful for quick insights, but:
You don’t control the model.
You don’t know the assumptions.
You can’t customise it to your business.
Think of Meta’s setting as a shortcut—but not a substitute for running real experiments.
Bottom Line
If you want to:
Avoid wasting budget
Know what actually works
Build confidence in your strategy
…then start running your own incrementality tests.
Attribution tells you what happened.
Incrementality tells you what mattered.
If you want to read more about Meta's Incremental Attribution, we've got you covered.