Now that ads.txt is getting traction, with about 44% (and growing) of the top 10,000 domains using it, you’ve got some homework in your future.
Here’s the assignment: Pull together your campaign data at the source level (e.g., somedomain.com) and at the “source/source” level, i.e., the reseller where you actually bought the inventory. For each source and source/source pair, gather the following:
- Impression volume
- Attributed sales (both MTA and last-click)
- Attribution source (view-through, click-through)
- Fraud data from your fraud vendors
- Time-of-day data for impressions and clicks
I hope you can see where this is going: You’re now going to check which of a publisher’s authorized sources are playing fast and loose with what they’re selling you. You might see it in CTR index rates that are ridiculously high, an odd concentration of impressions in the middle of the night, and so on.
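The check above can be sketched in a few lines. This is a minimal illustration, not a definitive fraud model: the field names (`reseller`, `overnight_share`) and the thresholds (CTR at 5x the campaign median, 40% of impressions overnight) are assumptions you’d tune against your own data.

```python
from statistics import median

# Hypothetical campaign rows: one per (source, source/source) pair,
# built from the metrics gathered above. All values are illustrative.
rows = [
    {"domain": "somedomain.com", "reseller": "exchangeA",
     "impressions": 100_000, "clicks": 120, "overnight_share": 0.08},
    {"domain": "somedomain.com", "reseller": "exchangeB",
     "impressions": 40_000, "clicks": 900, "overnight_share": 0.55},
    {"domain": "othersite.com", "reseller": "exchangeA",
     "impressions": 80_000, "clicks": 95, "overnight_share": 0.10},
]

def flag_suspects(rows, ctr_multiple=5.0, overnight_cap=0.40):
    """Flag source/source pairs whose CTR is far above the campaign
    median, or whose impressions skew heavily to overnight hours."""
    ctrs = [r["clicks"] / r["impressions"] for r in rows]
    med = median(ctrs)
    suspects = []
    for r, ctr in zip(rows, ctrs):
        reasons = []
        if ctr > ctr_multiple * med:
            reasons.append(f"CTR {ctr:.2%} vs. campaign median {med:.2%}")
        if r["overnight_share"] > overnight_cap:
            reasons.append(f"{r['overnight_share']:.0%} of impressions overnight")
        if reasons:
            suspects.append((r["domain"], r["reseller"], reasons))
    return suspects

for domain, reseller, reasons in flag_suspects(rows):
    print(domain, reseller, reasons)
```

With the sample data, only `somedomain.com` via `exchangeB` gets flagged, on both counts.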
Two pre-coffee hypotheses occur to me:
- The more authorized sources in a publisher’s ads.txt file, the greater the likelihood that you’ll see source/source pairs with bizarre-looking metrics and higher fraudulent traffic.
- The more heavily you optimize for CPM at the publisher level, the more likely your dollars tilt toward the sketchier (but still approved) resellers.
You’ll have your own hypotheses that will vary based on your digital buying strategy. And I think the above hypotheses will be easier to see when you split out your analysis at display vs. video.
As a fellow CMO is fond of saying, “The average is a lie.”
Takeaway: Always de-average. Make sure you collect your data in a way that allows you to de-average, even if it takes a long time. This will cost money, which is OK. Then, ABD (Always Be De-averaging). And win.
P.S. Coffee’s kicked in. Check my post on source level triage for some tips on how to use your findings.