7 rookie mistakes in CRO I learned the hard way

Starting from scratch in online marketing is easier than in other fields. You’ve got plenty of resources and experts to take you by the hand from beginner to intermediate level. You’ve got plenty of help from niche communities and live events.

With CRO you’ve got pros and cons. It’s still a fairly new discipline, so there’s a lot of buzz but not as much content as there is about SEO, for example. On the other hand, there’s a lot less bullshit too.

Nevertheless, as with any other branch of online marketing, after two months you’ll think you’ve got a pretty decent idea of what CRO is and that you’ve got everything under control.

It’s only some time later that you start to realise how much you’ve still got to learn, and you start blushing thinking about how ignorant you were.

Here are some of the mistakes I made, so perhaps you can learn from them.

1. Listening to good practices

Good practices are awesome. People with a lot of experience condense the patterns they’ve seen repeated over and over and hand them to you, distilled, so you can learn from them without going through all the pain.

But, as in life, in CRO there are some lessons we all have to learn the hard way.

I love the Nielsen Norman Group, the blog posts from ConversionXL and the examples from Which Test Won. They’re useful to produce hypotheses, find points that may be worth optimising and know what to avoid in your new design; but at the end of the day, if you haven’t tested it, you don’t know shit.

Don’t perform UX or conversion audits on second-hand wisdom alone. Always generate a hypothesis, test it, and move on. Never skip the testing.

2. Choosing the page to optimise with your guts

Often the decision to start optimising one page instead of another is based on a mix of personal experience, best practices and whatever your guts tell you.

On the contrary, that decision should be backed by data.

First, identify the different templates on your site (home page, categories, product pages, blog posts and so on) and create filters within your analytics software to see which templates get the most visits. All other things being equal, those are the ones you want to optimise.
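If your analytics tool doesn’t make this easy, you can do the same thing over a raw pageview export. Here’s a minimal sketch; the URL patterns and the log are invented, so adapt them to your own site structure:

```python
import re
from collections import Counter

# Hypothetical URL patterns for each template -- adapt to your own site
TEMPLATES = [
    ("home",     re.compile(r"^/$")),
    ("category", re.compile(r"^/category/")),
    ("product",  re.compile(r"^/product/")),
    ("blog",     re.compile(r"^/blog/")),
]

def template_of(path):
    """Return the name of the first template whose pattern matches the path."""
    for name, pattern in TEMPLATES:
        if pattern.match(path):
            return name
    return "other"

# Made-up pageview log; in practice this comes from your analytics export
pageviews = [
    "/", "/product/red-shoes", "/product/blue-hat",
    "/blog/cro-tips", "/category/shoes",
]
counts = Counter(template_of(p) for p in pageviews)
print(counts.most_common())
```

With real traffic, the templates at the top of `most_common()` are your first optimisation candidates.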

Second, check the average user value per page. The potential payoff from improving a page with a high value per user is much bigger than from updating a blog post that generates almost no revenue.

Third, check bounce rates and exit rates, as they may be a signal that something is off. Generally, the higher the bounce and exit rates on a page, the more room you have to improve them.

Don’t take this for granted, though: mix the data with user feedback and your own expertise to see whether the problem is on those pages or comes from somewhere else.

3. Focusing on desktop

Image by Iterate

At ThunderMetric we’re dealing with some accounts where up to 80% of the traffic comes from tablets and mobile phones. And usually, the conversion rate for these devices is about a quarter of the desktop one.

And still, if you don’t think in terms of data, you may end up going through hell to raise the conversion rate for desktop users by 10%.

Meanwhile, if you had spent the same amount of time understanding your mobile users (why they’re not buying, what micro-conversions they can perform and how to give them a frictionless experience), you could have doubled the conversion rate for 8 out of every 10 users.
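To make the arithmetic concrete, here’s a quick back-of-the-envelope comparison. All the numbers are invented for illustration: 80% of traffic on mobile converting at 0.5%, 20% on desktop converting at 2%.

```python
# Made-up numbers: 100k visits, 80% mobile at 0.5% CR, 20% desktop at 2% CR
visits = 100_000
mobile_cr, desktop_cr = 0.005, 0.02
mobile_visits, desktop_visits = visits * 0.8, visits * 0.2

baseline = mobile_visits * mobile_cr + desktop_visits * desktop_cr

# Option A: a hard-won 10% relative lift on desktop only
option_a = mobile_visits * mobile_cr + desktop_visits * (desktop_cr * 1.10)

# Option B: doubling the mobile conversion rate
option_b = mobile_visits * (mobile_cr * 2) + desktop_visits * desktop_cr

print(baseline, option_a, option_b)
```

Under these assumptions, the desktop lift adds 40 conversions while the mobile fix adds 400: an order of magnitude more for the same effort.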

That’s awesome!

So, if you’re working on UX or CRO for a page, don’t trust the version you usually see from your own computer. Always listen to your data in terms of device, browser, OS and resolution.

Never assume everybody sees things from the same perspective or the same screen as you do. Empathy and an open mind are as important for CRO as an analytical mindset.

4. Not listening to users (or listening to them too much)

Once you start relying on data, your next step becomes much clearer.

You get to know which page is most important to edit, which set of users matters most and which behaviours are worth incentivising. You know the question, but many times you don’t know the answer.

Listen to your data, but after you’ve listened, listen to your users. They will give you hints, not only about what’s important, but also about why it’s important.

Are people who use the search function converting at twice the rate? Maybe you shouldn’t be making it bigger, but fixing that messy information architecture you’ve got.

Do people who visit your FAQ page have three times the value of the average visitor? That doesn’t mean you have to put the FAQ first; maybe you should be displaying your shopping information somewhere on your homepage.

So, the data tends to answer the what. Users usually answer the why, and many times the how.

To get feedback from users, you can use UserTesting, Loop11, TryMyUI or your loved ones. It doesn’t matter: bring people in and draw conclusions. Then form a hypothesis and test it.

Just keep in mind that listening to your users too much may be a problem too. The role of user feedback is to produce hypotheses to test, not to hand you a ready-made list of what to change.

If you skip the testing part you’ll be missing the point.

5. Working on big changes (or on changes that are too small)

I used to love redesigns. It’s like unwrapping a Christmas present, but better. All is shiny, and new, and to be honest you hate your fucking website after two years of seeing it. Every. Single. Day.

But a complete website overhaul is an idiotic solution 90% of the time.

First of all, it’s not tested: basing a redesign on your opinion and good practices may or may not work. Second, even if it works, you won’t know why, which is a serious disadvantage when you try to improve its conversion rate further.

Also, keep this rule in mind: the more traffic you get, the smaller the details you can test.

If you’re testing a page with a high traffic volume, you can test small things and still get valid data in one or two business cycles. On the other hand, if you’re testing a specific template that only about 1% of your visitors see (or if you’re getting 100 visits a day on your page), you may want to try something completely radical that either wins big or fails miserably.

As you’ll have to keep every A/B test running for a minimum of one or two weeks (to keep the primacy effect from distorting your data, among other things), you can’t be too granular when choosing changes. Nor will you want to bite off too much and spend two months redesigning and coding a page.

Find a sweet spot where you can still deliver something fast and learn general lessons about the page, changing just enough things to produce an effect soon. You don’t want a list of 200 items to optimise that would take your next three years to test properly.

6. Not setting up A/B tests properly

There are so many things that can go wrong when setting up an A/B test that they deserve a blog post of their own.

Instead, I’m going to focus on the newbie errors I kept making when I started:

  1. Don’t compare with old variations: Your website’s conversion rate isn’t static; it probably fluctuates from one week to the next.


If that’s the case, you cannot compare the test you ran two weeks ago with the one you’re currently running, even if the results look much better. The only way to know something for sure is to compare two variations running at the same time.

This includes using software like Unbounce to compare a variation that has been running for a long time with a new one.

  2. Don’t draw conclusions too quickly: Wait until you get statistical significance. And then wait until two business cycles have passed since you started the test. Remember that even with 95% certainty you’ll be wrong in one of every 20 tests you run. Be sure you wait long enough.
  3. Make sure everything’s running smoothly: Always do an A/A test before you start with the A/B tests. Always configure a second web analytics tool so you have a second set of data to compare against the one from your A/B testing tool. Make sure the cookies work as they should and that conversion doesn’t drop dramatically (because, for example, you broke something and now it’s almost impossible to convert).
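As a rough illustration of the significance check mentioned above, here’s a minimal sketch using a two-proportion z-test. The conversion counts are made up, and most A/B testing tools do this for you; treat it as a sanity check, not a replacement for waiting out your business cycles:

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z score, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: control converts 200/10000, variation 250/10000
z, p_value = ab_significance(200, 10_000, 250, 10_000)
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A p-value below 0.05 corresponds to the 95% certainty mentioned above, which still means one false positive in every 20 such tests.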

Extra tip: Try to compare results without knowing which version is the contender. You’ll secretly want it to win, and tend to have a positive bias towards it.

7. Not prioritising

When you’re performing A/B testing, you must separate what’s important from what’s not. This doesn’t only mean choosing a landing page to optimise, but also choosing what to change in it.

For this you should use the PIE framework.

  1. P for Potential: Basically, how bad the page is at the moment. Does it have a humongous bounce or exit rate? Or does it simply go against every single rule created by man, gods and CRO specialists? The more you can improve it, the bigger the lift may be.
  2. I for Importance: How many users pass through the page and what their value is. If you’ve got a page seen by many users that also has a very high value per visit, mark it as important.
  3. E for Ease: The easier a change is to design, code and test, the sooner you’ll have results, often with fewer resources. Sometimes it’s better to work on things you can fix quickly and easily, even if you’re not getting the conversion lift you’d get from other pages or changes.

Ideally, you’d assign each of these items a number from 1 to 10, take the average and order the candidates from highest to lowest.
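The averaging step can be sketched in a few lines. The pages and their PIE scores below are entirely invented, just to show the mechanics:

```python
# Hypothetical PIE scores (1-10) for a few pages -- the numbers are made up
pages = {
    "checkout":     {"potential": 8, "importance": 9, "ease": 5},
    "product page": {"potential": 6, "importance": 8, "ease": 7},
    "blog post":    {"potential": 9, "importance": 2, "ease": 5},
}

# Average the three scores and sort from highest to lowest priority
ranked = sorted(
    ((name, sum(scores.values()) / 3) for name, scores in pages.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: {score:.1f}")
```

Note how the high-potential blog post still ends up last because its importance score drags the average down.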

In a real environment you’ll have to decide what your objectives are and how to prioritise tests. For us, ease is generally much more important at the beginning of a project, when we’re starting to roll things out and want to show the client something as soon as possible.

Once you’ve gained their trust, you can start paying more attention to Importance to focus on revenue. Sometimes you may even prioritise certain pages that wouldn’t be at the top, for a variety of other reasons.

For example, your blog may be awful, but neither easy to fix nor important: it’s not getting SEO traffic and its value per visitor is low. Still, fixing some of its problems would improve the user experience, increasing SEO traffic and value per visitor and setting a snowball in motion: more links, more conversions, better content, more traffic overall. Working on the blog may not be the smartest option from a pure Conversion Optimisation perspective, but it does make sense with the goals of the organisation. Be sure your goals are aligned too.

Prioritising takes on a whole new meaning if, as at ThunderMetric, your company is in charge of other areas of the digital marketing effort, such as SEO or paid advertising. Knowing when it’s time to test new meta descriptions and when it’s time to test a new layout is important. But it’s more important to keep in mind that every area of work influences the rest: always align your objectives and KPIs with your client’s goals. Always try to find synergies.

So the extra lesson, and the closing statement for this post, is: don’t forget that CRO is part of the marketing mix and that you don’t live in a vacuum.