
Mits Mistry | News | 14 June 2021 | Read time: 6 min

Why Testing Shouldn’t Be Confined to the Lab…

Believe it or not, experiments are just as important in marketing as they are in science.


Mits Mistry

Mits.Mistry@Dice-comms.co.uk


Experimentation is the beating heart of science. It is also responsible for the life-saving vaccinations that are slowly releasing the world from the grips of a global pandemic. But it isn’t just science where experimentation plays such a vital role. More often than not, successful pharma marketing is the result of regular and rigorous testing. So, what does testing mean in a marketing sense? And more importantly, how can you use it to your brand’s advantage? 

A tale of two tests

The two most common tests used in marketing are A/B testing, sometimes referred to as ‘split testing’, and multivariate testing. Both can be used to test different versions of landing pages, emails or display ads.  

Google Optimize, Optimizely and VWO are just three of the website testing platforms available to marketers, with A/B testing also built into Google Ads and various email marketing platforms to test acquisition content.

“It isn’t just science where experimentation plays such a vital role. More often than not, successful pharma marketing is the result of regular and rigorous testing.”

A/B testing

A/B testing is an optimisation method that compares the performance of two different versions – version A and version B – of a display ad, landing page or email. Depending on the channel, ‘performance’ could mean open rate, click-through rate, form completion or another key metric. 

Say, for example, you are A/B testing an email. First, you would create a test subset of your database. Version A would be sent to 50% of the subset and version B to the other 50%. You can then compare the performance of each against your KPI and send the winning version to the remainder of your database. Keeping a log of results is useful for informing future campaigns.
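The split itself is simple to script. Below is a minimal sketch in Python, assuming a plain list of subscriber email addresses (the addresses and the 20% test fraction are hypothetical); the actual sends would be handled by your email platform.

```python
import random

def split_ab(subscribers, test_fraction=0.2, seed=42):
    """Randomly split a test subset of subscribers into groups A and B.

    test_fraction: share of the full database used for the test;
    the remainder is held back for the winning version.
    """
    pool = subscribers[:]               # copy so the original list keeps its order
    random.Random(seed).shuffle(pool)   # seeded shuffle for reproducibility
    test_size = int(len(pool) * test_fraction)
    test, holdout = pool[:test_size], pool[test_size:]
    midpoint = len(test) // 2
    return test[:midpoint], test[midpoint:], holdout

# Hypothetical usage: version A goes to group_a, version B to group_b;
# the better performer is later sent to the holdout.
group_a, group_b, holdout = split_ab([f"user{i}@example.com" for i in range(1000)])
print(len(group_a), len(group_b), len(holdout))  # 100 100 800
```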

A/B testing is typically used to assess the impact of changing a single element. That could be a subject line, page header, call to action (CTA), image, or even the date or time of send.

For example, you may wish to test the impact of removing the navigation from a landing page header. If conversion rates are higher for the page without the navigation, you will know that removing it was the cause. This wouldn’t be the case if you had also changed the wording of a CTA, or any other page component.

Multivariate testing

As the name suggests, multivariate testing is used to compare the performance of multiple versions of a campaign, where a higher number of variables may change.

So, rather than making one change to a page and sending the ‘before’ and ‘after’ versions to two segments of your audience, a multivariate test will typically entail changes to numerous components.

By way of example, you may be looking to identify the cause of low click-through rates on an email campaign. With an A/B test, you may already have an idea of the cause, but in this case, there are various elements to be considered. 

Therefore, you may decide to craft an alternative subject line, change the header image and embed a contact form. All combinations of these elements would then be tested: one version incorporating all three changes, others two, others just one, and one keeping the original email unchanged as a control.

This naturally creates multiple versions of an email, but once the results are analysed, you should in theory be left with a common denominator – i.e. the component, or combination of components, that has the biggest impact on click-through rates.

In the above scenario, with three independent on/off changes, you would be looking at 2 × 2 × 2 = eight different versions of the same email.
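To make the combinations concrete, here is a short Python sketch (the element names are hypothetical) that enumerates all eight variants using the standard library’s itertools:

```python
from itertools import product

# Three binary decisions: keep the original element (False) or apply the change (True)
elements = ["alt_subject_line", "new_header_image", "embedded_contact_form"]

variants = list(product([False, True], repeat=len(elements)))
print(len(variants))  # 8

for i, combo in enumerate(variants, start=1):
    applied = [name for name, on in zip(elements, combo) if on] or ["original"]
    print(f"Version {i}: " + ", ".join(applied))
```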

Top tips for successful testing

While there are tangible benefits to testing, you’ll only get the most out of it by doing it in the right way and following some basic rules.  

Know why you’re testing

Rule number one when it comes to testing is to know why you’re doing it in the first place. All science experiments start with a hypothesis, and the same applies here. Especially with A/B testing, you should have a good idea of what you expect to achieve (or prove) by changing one element of a campaign.

This hypothesis should be defined upfront. For example, “By using an audience-matched header image, the test variant will evoke a sense of familiarity and belonging, in turn boosting conversion rates.” 

Design tests to make a substantial difference

There’s no point conducting a test if it isn’t going to have a significant impact on performance. That’s why it’s important to change key components, rather than making minor adjustments. For example, a change to your value proposition, contact form or subject line is more likely to make a difference than a small grammatical change within the intro copy. 

“There’s no point conducting a test if it isn’t going to have a significant impact on performance.”

Test everything

Don’t be afraid to think offline as well as online. Before the internet, marketers would A/B test coupons in magazines, counting the number of people who came into a store with one coupon compared to another. This method is still used today, so consider what you could test beyond your digital campaigns. For example, why not A/B test the opening sentence that your reps use on sales calls to HCPs?

Nail your UTM tracking

If you can’t accurately assess the results of a test, it will count for nothing. This means your UTM tracking has to be spot-on. UTM (Urchin Tracking Module) is a simple piece of code that can be added to URLs to generate Google Analytics data for digital campaigns. UTM codes can be created through auto-tagged Google Ads campaigns, or using other online tools (simply search ‘UTM builder’), and added to your email, PPC or social campaigns.

UTM codes allow you to attribute traffic to a specific campaign, source, medium, keyword or content variant. These can all be defined within the URL by implementing code such as ‘utm_source=Facebook’ or ‘utm_campaign=Intro_Offer’, and tweaked for each iteration of a page or email.

For example, if we wanted to test the impact of changes to a PPC campaign for paracetamol, a URL could be set up as follows:

https://www.dice-comms.co.uk/paracetamol?utm_campaign=paracetamol&utm_medium=cpc&utm_source=facebook&utm_content=paracetamol_V2

A different ‘content’ tag would be required for each variant of an advert to ensure accurate tracking, e.g. ‘utm_content=paracetamol_V3’, ‘utm_content=paracetamol_V4’, and so on.
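Building these URLs programmatically avoids typos creeping in across variants. A minimal sketch using Python’s standard urllib.parse, with the parameter values mirroring the example URL above:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def add_utm(base_url, campaign, medium, source, content):
    """Append UTM parameters to a URL for campaign tracking."""
    parts = urlparse(base_url)
    params = urlencode({
        "utm_campaign": campaign,
        "utm_medium": medium,
        "utm_source": source,
        "utm_content": content,
    })
    return urlunparse(parts._replace(query=params))

# One URL per advert variant; only utm_content changes between versions.
for version in ("V2", "V3", "V4"):
    print(add_utm("https://www.dice-comms.co.uk/paracetamol",
                  campaign="paracetamol", medium="cpc",
                  source="facebook", content=f"paracetamol_{version}"))
```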

Time it to perfection

Whether it’s an email, ad or landing page, it’s crucial that you send or launch the various versions at exactly the same time, otherwise the data will be skewed. Once launched, let data accumulate for a minimum of two weeks, allowing for seasonality.

If there is no notable top-level difference at the end of the initial testing phase, you should review performance across different dimensions. Does a variant win on mobile devices? How does the acquisition channel impact performance? If there are still no significant differences, look to end the test and evaluate your null result, ideally with a view to either re-running with a more impactful change or moving to a different element.
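If you have a per-visit export from your analytics platform, that kind of breakdown is quick to run. A sketch assuming pandas is available and using hypothetical column names:

```python
import pandas as pd

# Hypothetical export: one row per visit, with variant, device and outcome
df = pd.DataFrame({
    "variant":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "device":    ["mobile", "desktop", "mobile", "desktop"] * 2,
    "converted": [1, 0, 1, 1, 0, 1, 1, 0],
})

# Conversion rate per variant, per device: a variant that ties overall
# may still win clearly on one segment.
rates = df.groupby(["variant", "device"])["converted"].mean()
print(rates)
```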

“Don’t be afraid to think offline as well as online. For example, why not A/B test the opening sentence that your reps use on sales calls to HCPs?”

Ensure your sample audience is big enough

While platforms such as Google Ads and Facebook Ads Manager make it easy to build a substantial (albeit anonymous) audience for targeted ad campaigns, your email database should only include people who have opted in to receive your emails. This means the audience size may be smaller when testing email campaigns.

You’ll want at least 100 email opens to gather accurate and actionable data, so before conducting a test, consider the size of your current subscriber list and average open rate.
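To judge whether an observed difference in open rates is real rather than noise, a two-proportion z-test is a common sanity check. A self-contained sketch using only Python’s standard library (the open and send counts are hypothetical):

```python
from math import sqrt, erf

def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
    """Two-sided z-test for the difference between two open rates."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical: 120 opens of 500 sends vs 150 opens of 500 sends
z, p = two_proportion_z_test(120, 500, 150, 500)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real difference
```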

Test, test and test again

As well as delivering incremental gains to your bottom line, regular testing compounds the impact. Because each stage of the customer funnel feeds the next, a 5% gain at one stage is felt at every stage below it. On the other hand, if you don’t make any changes at all, the impact will be nil.
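A toy funnel makes the arithmetic plain: overall conversion is the product of the stage rates, so a 1.05x lift at any single stage lifts the end result by the same factor. The stage names and rates below are hypothetical:

```python
# Overall conversion is the product of the stage rates,
# so a 1.05x lift at any one stage is a 1.05x lift overall.
stages = {"ad_click": 0.02, "landing_page": 0.30, "form_complete": 0.25}

def overall(rates):
    result = 1.0
    for r in rates.values():
        result *= r
    return result

before = overall(stages)
stages["landing_page"] *= 1.05  # 5% gain at one stage
after = overall(stages)
print(f"{after / before:.2f}x overall")  # 1.05x
```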

Various studies show that brands that test regularly enjoy higher conversion rates, so now is your time to put testing into practice. Please get in touch for any further advice, or to find out how we can optimise your campaigns for better sales outcomes.

https://www.dice-comms.co.uk/