Google recently posted a little blurb about The ABCs of A/B Testing. I hate to be a wet blanket, but for most of us lower volume publishers, that’s a pointless exercise with little to gain.
Unless you have very high volume, you simply don’t have enough data to draw any useful conclusions from this kind of test.
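To put a number on "enough data", here's a rough back-of-the-envelope sketch (my own illustration with made-up figures, not anything from Google's post) using the standard normal-approximation sample-size formula for comparing two click-through rates:

```python
from math import ceil

def sample_size_per_variant(p, delta, z_alpha=1.96, z_beta=0.84):
    """Rough impressions needed per variant to detect a change in CTR.

    p: baseline click-through rate; delta: smallest difference worth seeing.
    The default z values correspond to 95% confidence and 80% power.
    """
    return ceil(2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / delta ** 2)

# Example: baseline CTR of 1%, trying to detect a lift to 1.2%
n = sample_size_per_variant(0.01, 0.002)
print(n)
```

At those assumed numbers you'd need roughly 39,000 impressions per variant. A page getting a few hundred views a day would take months to accumulate that, and by then everything else has changed anyway.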
Don’t get completely discouraged: sometimes a small change can produce visible results. For example, when I added ads that mix in with the text (like the one just above or below this paragraph), my revenues increased enough to notice. However, the gain wasn’t from those ads: it was the column ads that picked up activity.
So, yes, I do track my AdSense. Mostly that’s just idle curiosity: I like to know where the money is coming from. For example, you see Google ads in three different places on this page, and I know how much money each position makes per day. That’s how I know that adding the middle ad increased the performance of the side column ads.
But what’s interesting (and unfortunate for A/B testing) is that while overall monthly income is relatively steady, the distribution of that money varies widely. Today the top position ads perform best, tomorrow it might be the ones mixed in with the text, and next Wednesday the column ads take the prize. Why does this happen? Who knows? Daily stats swing too: a new advertiser might pop into the mix and change everything, through no doing of yours. For whatever reason, an old ad that attracted nothing in the past suddenly becomes interesting to the visiting audience... that kind of thing. That means tracking changes can be confusing and noisy. Unless you run a product like Asrep, you can’t know whether you really are comparing similar ads. There are too many variables, and ordinary day-to-day fluctuations can mask your data. It’s all too easy to be fooled by external influences.
Particular pages might do better with certain ads: a page about “x” attracts advertisers who sell certain products. One of those advertisers has an ad that really works well, but it only fits in a large ad block. On pages about “y”, however, there are advertisers whose ads work better in the columns. Simplistic A/B channel testing can’t show you any of that.
If you are going to do this, you need to pick a page or pages that have a lot of traffic or already have a good CTR and experiment over a period of weeks to even out the noise. However, the longer you run the experiment, the more it is influenced by outside factors: new advertisers, old advertisers leaving, sudden popular interest waxing and waning. You usually cannot easily tell why revenues increased or decreased unless you have a lot of volume.
Even if you do see a change, you need to test that on other pages to see if it works the same way there. As noted above, your change might cause increased performance on some pages and decreased ad activity on others.
Certainly if what you are doing is “all wrong”, A/B testing will find that. But beyond that, your labors may not put much on the table, because tweaking only produces small changes. An exception might be a site making thousands per day: there, even small percentage gains can be worth the time spent. But if you are floating around or under a few hundred clicks per day, a 1% gain might be less than $5.00 per month in earnings, which isn’t worth much effort for most of us. If your revenues now are down in the dollar-per-day-or-less range, a 1% change is probably completely invisible.
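Here's the arithmetic behind that $5.00 figure, with assumed numbers (the per-click rate is my guess, not from the article):

```python
clicks_per_day = 300    # "a few hundred clicks per day"
avg_per_click = 0.05    # assumed average earnings per click, dollars

monthly_revenue = clicks_per_day * avg_per_click * 30
gain = 0.01 * monthly_revenue   # what a 1% improvement is worth
print(monthly_revenue, round(gain, 2))  # 450.0 4.5
```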
You may just like futzing and tweaking: that’s fine. Just don’t expect tenfold gains from simple changes, and realize that any improvement or loss you do see may have come from unrelated factors.
*Originally published at APLawrence.com
A.P. Lawrence provides SCO Unix and Linux consulting services http://www.pcunix.com