Your marketing and UX attribution data is most likely dead wrong. Here’s why:
For years, the data ‘clearly’ showed that the world was flat: if you sailed off the edge, you were doomed. Merchants firmly believed this and navigated according to this ‘universal truth.’ They were dead wrong.
Today, marketing teams and UX practitioners ‘assume’ their attribution models provide accurate data to guide them in assigning value to user behavior and advertising. They are also dead wrong.
For marketing teams, attribution data is typically assigned a monetary value based on which campaigns, or which assets within campaigns, appear to be driving conversions. Marketing teams then use that data to decide where to increase or decrease advertising spend to improve conversion.
For UX teams, attribution data determines whether tasks or interactions are having the desired impact in terms of ease of use or other heuristics. Decisions are then made to modify the user experience: build on what performs well, or improve what performs poorly.
The bad news for both teams: according to a recent article by Jakob Nielsen (“Internet Activity Bias Causes Lumpy User Behavior”), their attribution data could very well be dead wrong.
His summary states:
“Dramatic differences in how much people use the web on different days can distort simplistic interpretations of site analytics.
So, once again, this means that you can’t conclude that exposure to a specific stimulus causes a certain behavior, even if you observe increased occurrences of this behavior after the exposure. As I’ve always said, correlation doesn’t prove causation because hidden covariants might exist. We now know that user activity bias is one such covariant — and that it’s very strong.”
Proof of Incorrect Attribution
The article highlights a study (“Here, there, and everywhere: Correlated online behaviors can lead to overestimates of the effects of advertising”) that sought to determine whether exposure to a promotional video for Yahoo! causes users to visit any of the Yahoo! websites.
The article provides a chart (see above) that demonstrates the probability of users visiting a Yahoo! site before, during and after exposure to the promotional video (the blue line).
In this chart, Day 0 is the day on which the users were exposed to the promotional video. A casual reading of this data strongly suggests that the visits to Yahoo! were caused by exposure to the video. Most likely, a marketing or UX team would conclude the video worked very well at producing the desired behavior, and might decide to spend much more money on such videos.
However, a more careful evaluation suggests that attributing the site visits to the video exposure is not warranted. Why? Note that the blue line shows a steady upward trend beginning at least six days BEFORE exposure to the video, with a marked increase in visits three days prior to the exposure. Since this rise predates the video, it cannot be attributed to the video, and the attribution is suspect.
Control Group Debunks Attribution
A good way to evaluate data is with both a stimulus group and a control group. In this study, the control group (represented by the yellow line) was shown an unrelated video that had nothing to do with Yahoo!. As the data shows, the control group visited Yahoo! websites at almost EXACTLY the same rate as the group that was shown the Yahoo! promotional video! How is this possible?
According to Nielsen, this is a form of “activity bias:” some days, people do a lot online; other days, they do very little.
He goes on to state that on days when people are more active, they are likely to do both activity A and activity B, no matter what those activities are. As this study displays, the exposure group (shown the Yahoo! video) and the control group (shown the non-Yahoo! video) exhibited the same visit behavior.
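Activity bias is easy to reproduce in a toy simulation. The sketch below (hypothetical numbers, not from the study) gives each user-day a hidden “activity level” that independently drives both seeing a video and visiting a site. The video has zero causal effect, yet exposed users appear to visit far more often:

```python
import random

random.seed(0)

exposed_visits = unexposed_visits = 0
exposed_n = unexposed_n = 0

for _ in range(100_000):
    # Each user-day has a hidden "activity level": busy online days vs. quiet days.
    activity = random.random()
    saw_video = random.random() < activity  # active users see more content...
    visited = random.random() < activity    # ...and also visit more sites,
                                            # entirely independently of the video
    if saw_video:
        exposed_n += 1
        exposed_visits += visited
    else:
        unexposed_n += 1
        unexposed_visits += visited

print(f"visit rate given video exposure: {exposed_visits / exposed_n:.2f}")
print(f"visit rate with no exposure:     {unexposed_visits / unexposed_n:.2f}")
```

A naive analyst comparing the two rates would credit the video with roughly doubling visits, even though by construction it caused none of them. That is the hidden covariant at work.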
Two More Attribution Studies Using Search Ads
To add to this finding, Nielsen provides additional studies that demonstrate the same behavior.
A study by eBay’s Thomas Blake and team tested search engine advertising on Google and Bing.
Prior to the study, eBay had conducted a paid search campaign with multiple types of ads that showed good click-through rates (CTR) and sales from users who had clicked on the ads.
But once activity bias is taken into consideration, same-day clicks-to-sales may not necessarily have been caused by the ads. They could instead reflect the fact that on those days the users were more active, and thus would probably have bought anyway (without the ad).
To test this, the eBay team turned off all advertising for branded keywords on Google and Bing.
As you would expect, visits from the ads ceased, but users kept visiting eBay anyway. According to eBay’s analysis, only 0.5% of the clicks from branded search ads on Bing were lost; Google showed a 3% decrease.
So the result of the study is significant: 99.5% of users still visited eBay from Bing without the branded search ads that eBay had paid for. Likewise, 97% still came to eBay from Google without the branded search terms. Since these users had already added the brand term “eBay” to their search query, they were already familiar with the eBay brand. Would removing branded terms hurt eBay? According to this data, apparently not.
Significantly, many paid search experts go to great lengths to add branded terms to a campaign to “protect” the brand from click-through loss to competitors. Yet according to the results of this study, that is potentially wasted money. Further, falsely attributing paid search sales to branded terms may lead firms to overspend advertising dollars on brand terms, with the potential harm of underspending on other critical terms.
Non-Branded Search Terms and Attribution
To test the same attribution question, this time with non-branded keywords, Blake and his team turned off paid search ads in 65 randomly selected U.S. cities. As a control group, they selected 68 cities that closely matched the 65 test cities and left search ads on.
The results were stunning. Comparing the test group (non-branded ads off) against the control group (non-branded ads on), the paid non-branded terms added only about 0.4% of sales, a result that was not statistically significant.
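This is the kind of comparison a two-proportion z-test formalizes. The sketch below uses hypothetical sales counts (chosen only to mirror a roughly 0.4% relative lift, not taken from the paper) to show how a lift that sounds real can fail a significance test:

```python
import math


def two_prop_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1 - p2, p_value


# Hypothetical counts:
#   control cities (ads on):  10,040 sales out of 200,000 shoppers
#   test cities (ads off):    10,000 sales out of 200,000 shoppers
lift, p_value = two_prop_z_test(10_040, 200_000, 10_000, 200_000)
relative_lift = lift / (10_000 / 200_000)

print(f"relative lift from ads: {relative_lift:.1%}")
print(f"p-value: {p_value:.2f}")
```

With these numbers the apparent lift is about 0.4%, but the p-value is far above 0.05: the difference is indistinguishable from noise, which is exactly the shape of the eBay finding.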
Importantly, eBay noted that sales to ad clickers were much higher than sales to those who did not click ads, exactly the kind of correlation that makes the ads look effective on the surface.
Yet the results confirm the earlier studies. The data suggesting that sales should be attributed to paid search terms was incorrect, and the actual influence of the paid campaign was statistically insignificant. Users searching for items on eBay clicked the paid ads not because they were better or more influential than the organic results, but for other reasons, potentially because they were shorter to read or easier to see. Attribution was therefore potentially given, falsely, to ads that were not producing the assumed results.
In my own experience I have seen the same thing. I have run many A/B tests in which the smaller-sample control (A) and test (B) groups produced a strong winner (sometimes A, sometimes B). However, when we ran the exact same test against the entire population, we got the opposite result, or no result at all (statistically insignificant).
I believe this odd behavior I experienced in A/B testing was caused by activity bias.
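Small samples make such spurious “winners” routine. The minimal sketch below (hypothetical parameters) runs many simulated A/B tests where A and B have identical true conversion rates; even so, a fraction of tests declare a significant winner purely by chance:

```python
import math
import random


def two_prop_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion proportions."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))


random.seed(42)
TRUE_RATE = 0.05  # A and B convert identically: there is no real difference
N = 500           # small per-variant sample size
TRIALS = 2000

false_winners = 0
for _ in range(TRIALS):
    conv_a = sum(random.random() < TRUE_RATE for _ in range(N))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(N))
    if two_prop_p_value(conv_a, N, conv_b, N) < 0.05:
        false_winners += 1

print(f"tests declaring a spurious winner: {false_winners / TRIALS:.1%}")
```

Roughly 1 in 20 of these identical-variant tests will crown a “winner” at the conventional 0.05 threshold, and any correlated covariant like activity bias can push that rate higher still.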
Length of Time and the Impact on Attribution
Finally, Blake and team evaluated purchase frequency, the length of time between purchases by eBay users, and its impact on sales behavior, looking at users who had made a purchase in the prior year.
Users who had only made one or two purchases during the year exhibited a small increase in sales from search ads. However, those users who made three or more purchases in the past year had statistically insignificant increases in sales.
From an attribution standpoint the meaning seems clear: search ads have a greater impact on users who are less familiar with a brand. Users who are well aware of the brand are significantly less likely to take action because of search advertising.
Why Marketing and UX Attribution Data is Dead Wrong
As Nielsen points out, this is a significant finding with far-reaching implications for marketing and UX teams. Marketing and UX attribution data, without careful evaluation against control groups, is prone to false results from activity bias and as such cannot be trusted.
Especially prone to error are common marketing attribution practices such as first-click, last-click, first-and-last-click, or any other attribution model that does not incorporate control groups.
The best way to proceed is with continuous testing across time to ensure activity bias is accounted for.
Likewise, UX practitioners who are evaluating behavioral data for task flows or interactions must ensure the changes they make are always tested against control groups, so that the UX of a website or application is not compromised by bad data. Again, UX teams should strongly consider ongoing testing to ensure activity bias is accounted for.
So much like the navigators of old, do not assume that your view of the world (or view of your data) is always correct. When it comes to attribution of marketing or UX behavior, make sure you are steering your course with truly valid information.
Jakob Nielsen: “Internet Activity Bias Causes Lumpy User Behavior”
Thomas Blake, Chris Nosko, and Steven Tadelis: “Consumer heterogeneity and paid search effectiveness: A large scale field experiment,” (March 8, 2013) PDF.
Randall A. Lewis, Justin M. Rao, and David H. Reiley: “Here, there, and everywhere: Correlated online behaviors can lead to overestimates of the effects of advertising,” (March 28–April 1, 2011) PDF.