Using A/B testing in advertising

A/B testing is not a new concept. It’s been around for decades, and it was first used online, for website optimisation, by Amazon back in the ‘90s. The rest, as they say, is history. Or not, if you’re talking advertising. Because, believe it or not, there are very few A/B testing initiatives in the advertising world, excluding paid search.

Back in November last year, together with the folks at SeeSaw, Brainient launched one of the first A/B testing initiatives in the UK video advertising market. We let viewers decide which Hotmail video advert they wanted to watch, with a selection of three ads to choose from, each represented by a thumbnail. If they didn’t select one, we would choose one for them. We generated an average CTR of nearly 9%. That’s, um, nine times higher than the market average. And we did it by A/B testing, tracking and using data: we saw that Thumb 1 (corresponding to Video 1) got 60% more clicks than the other two thumbnails, while Video 3 drove 30% more clicks through to the Hotmail site than the other two videos. So by simply swapping the video behind Thumb 1 for Video 3, we generated a CTR nine times the market average. That’s a big return for such a small change.
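For the curious, here’s a minimal sketch in Python of what that kind of thumbnail-level tracking might look like. All the names, and the even-split fallback for viewers who don’t pick a thumbnail, are assumptions for illustration, not Brainient’s actual implementation:

```python
import random
from collections import defaultdict

# Hypothetical variant IDs -- stand-ins for the three Hotmail videos.
VARIANTS = ["video_1", "video_2", "video_3"]

impressions = defaultdict(int)  # how many times each variant was shown
clicks = defaultdict(int)       # how many clicks each variant received

def serve_variant(user_choice=None):
    """Serve the viewer's chosen variant, or pick one at random
    (a simple even split, an assumption here) if they made no selection."""
    variant = user_choice if user_choice in VARIANTS else random.choice(VARIANTS)
    impressions[variant] += 1
    return variant

def record_click(variant):
    """Log a click-through against the variant that was playing."""
    clicks[variant] += 1

def ctr(variant):
    """Click-through rate = clicks / impressions for a variant."""
    shown = impressions[variant]
    return clicks[variant] / shown if shown else 0.0

# Once the test has run, compare ctr() across variants and promote the
# winner -- e.g. put the best-converting video behind the most-clicked
# thumbnail for the rest of the campaign, as described above.
```

The point isn’t the code itself but how little machinery is involved: a counter per variant and a division is enough to spot that one thumbnail and one video are outperforming the rest.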

While speaking to networks, publishers & agencies over the past few months, I’ve spotted a trend towards using data and A/B testing in display & video advertising campaigns, but it’s not moving the needle just yet. That may be because, more often than not, networks & agencies don’t have the time & resources to do it, and advertisers rarely get involved in the actual planning of their campaigns, not knowing how big an impact an A/B test could have on their budget spend. But a nine times higher CTR means, almost every time, a roughly ninefold increase in campaign ROI, assuming the media spend stays the same and the extra clicks convert at a similar rate. So maybe advertisers & agencies should revisit a concept that’s been used in the online world for more than a decade.