A 5-Step Framework for Conversion Rate Optimization
There is a problem with conversion rate optimization: It looks easy. Most of us with some experience working online can take a look at a website and quickly find problems that may prevent someone from converting into a customer. There are a few such problems that are quite common:
- A lack of customer reviews
- A lack of trust / security signals
- Bad communication of product selling points
The thing is, how do we know for sure that these are problems?
The fact is, we don't. The only way to find out is to test these things and see. Even with this in mind, though, how do you know which things to test when the ideas are based mainly on your own gut feeling?
For me, this is where a thorough research and discovery phase is worth the time and effort. It is far too easy to make assumptions about what to test and then dive straight in and start testing them. Wouldn't it be better to run conversion rate tests based on actual data from your target audience?
I'm going to go into detail on the process we use at Distilled for conversion rate optimization. With the context above, it shouldn't be any surprise that I spend a lot of time talking about the discovery phase of the process as opposed to testing and reviewing results.
For those of you who want the answer straight away and an easy takeaway, here is a graphic of the process:
Before I move on, I wanted to give you a few links that have certainly helped me over the last few years when learning about conversion rate optimization.
- The Definitive How-to for Conversion Rate Optimization by Stephen Pavlovich
- Holy Grail of eCommerce Conversion Rate Optimization by Pancham Prashar
- SEOgadget Guide to Conversion Rate Optimization
Right, let's get into the process.
This first stage is all about one thing: gathering the data you need to inform your testing. This can take time, and if you're working with clients, you need to set expectations around this. It is a very important stage and, done correctly, can save you a lot of heartache further down the process.
Step 1: Data gathering
There are three broad areas from which you can gather data. Let's look at each of them in turn.
The company
This is the company or website that you're working for. There is plenty of information you can gather from them which will help inform your tests.
Why does the company exist?
I always believe in starting with why, and I've talked about this before in the context of link building. It is at this point that you can dive right into the heart of the company and find out what makes it different from others. This isn't just about finding USPs; it goes far deeper than that, into the culture and DNA of the company. The reason is that customers buy the company and the message it portrays just as much as the product itself. We all have affinities with certain companies that do produce a great product or service, but it's a love for the company itself that keeps us interested and buying from them.
What are the goals of the company?
This is a pretty crucial one, and the reasons should be obvious: you need to focus your data gathering and testing around hitting these goals. Some goals may be less obvious than others. These are sometimes called micro-conversions and include things that contribute to the bigger goal. For example, you may find that customers who sign up to your email newsletter are more likely to become repeat customers than those who don't. A micro-conversion, therefore, would be getting people signed up to your email list.
What are the unique selling propositions (USPs) of the company?
What makes the company different in comparison to competitors who sell the same or similar products? Bonus points here if the USP is something that a competitor can't emulate. For example, offering free delivery is something that may help improve conversions, but chances are that your competitors can also offer this.
What are the common objections?
This is where you should be speaking to people within the organisation who are outside the marketing team. One example is to talk to sales staff and ask them how they sell the products, what they feel the USPs are, and what the typical objections to the product are. Another is to talk to customer support staff and see what problems they tend to deal with. They will also have insight into what customers like the most and what positive feedback or product improvements get suggested.
Another team to speak to is whoever manages live chat for a website if it exists. At Distilled, we've sometimes been able to get access to live chat transcripts and have been able to run analysis to find trends and common problems.
The website
Here, we are focusing specifically on the website itself and seeing what data we can gather to inform our experiments.
What does the sales process look like?
At this point, I'd recommend sitting down with the client and a big whiteboard to map out the sales process from start to finish, including each touch-point between the customer and the website or marketing materials such as email. From here, you can go pretty granular into each part of the process to find where problems can occur.
It is also at this point that you should review funnels in analytics or set them up if they don't currently exist. Try to find where the most common drop-off points are and take a deeper dive into why. Sometimes a technical problem may be to blame for the drop-off in conversions, so make sure you are at the very least segmenting data by browser to try and find problems.
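To make the browser segmentation concrete, here is a minimal sketch of spotting a browser with a suspiciously low conversion rate. The column names and sample data are hypothetical; in practice you'd load a session-level export from your analytics tool.

```python
import pandas as pd

# Hypothetical session-level export: one row per session, with the
# visitor's browser and whether they completed the checkout funnel.
df = pd.DataFrame({
    "browser": ["Chrome", "Chrome", "Firefox", "Firefox", "Safari", "Safari"],
    "completed_checkout": [True, False, True, False, False, False],
})

# Conversion rate per browser; a browser converting far below the
# others can point to a rendering or JavaScript bug on that browser.
rates = df.groupby("browser")["completed_checkout"].mean()
print(rates)
```

In this toy data, Safari converts at 0% while the others sit at 50%, which is exactly the kind of gap that should prompt a manual check of the checkout in that browser.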
What is the current traffic breakdown?
This involves you taking a deep dive into the existing analytics data that you have from the website. At this point you're just trying to get a better understanding of a few core things:
- How much traffic the website receives: This can impact your testing in that you may discover low traffic numbers which can influence how long it takes a test to complete.
- What demographics the website typically attracts: This may require you to enable extra tracking if you're using Google Analytics.
- What technology users typically use: As mentioned above, looking at browser usage is important. But on top of this, what devices do users tend to use? If you're seeing high numbers of users using mobile devices, you should check how the website renders on a mobile device. If you're seeing very low numbers of visits from mobile devices, that is probably worth investigating too given the growth of traffic from mobile in recent years.
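To illustrate how traffic volume drives test duration, here is a back-of-the-envelope sample size sketch using a common rule-of-thumb formula for 95% confidence and 80% power. The function name and the example numbers are illustrative, not from the article.

```python
import math

def sample_size_per_variant(baseline_rate, min_detectable_lift):
    """Rough visitors needed per variant for a two-sided test at
    95% confidence and 80% power (rule-of-thumb formula)."""
    delta = baseline_rate * min_detectable_lift  # absolute difference to detect
    p = baseline_rate + delta / 2                # rate midway between variants
    return math.ceil(16 * p * (1 - p) / delta ** 2)

# e.g. a 3% baseline conversion rate, hoping to detect a 20% relative lift:
n = sample_size_per_variant(0.03, 0.20)
print(n)  # tens of thousands of visitors per variant
```

The takeaway: on a low-traffic site, detecting a small lift can take months, so smaller sites should test bolder changes (bigger expected lifts) to finish tests in a reasonable time.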
Where do conversions currently come from?
Hopefully, the website will already have some goals or eCommerce tracking enabled, which makes this bit a lot easier! If not, you will need to set them up as soon as possible so that you can start gathering the data you need. This work needs to be done no matter what, because you won't be able to measure the results of your CRO tests if you can't measure conversions!
If you don't have goals set up already, you can use Paditrack, which syncs with your Google Analytics account and allows you to apply goals to old data. It also allows you to segment your funnels, which, annoyingly, Google Analytics doesn't allow you to do as of writing.
If you do have this data, then you need to try and find patterns in the type of people who convert, as well as where they come from. With the latter, it can be a bit tricky sometimes because quite often, customers will find you via different channels. So you need to make sure that you're looking at multi-channel reports and seeing which ones are most common.
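As a small sketch of why multi-channel reports matter, here is a comparison of last-click credit against "assist" credit for a few hypothetical conversion paths; the channels and paths are made up for illustration:

```python
from collections import Counter

# Hypothetical multi-channel conversion paths, oldest touch first.
paths = [
    ["organic", "email", "direct"],
    ["paid", "direct"],
    ["organic", "direct"],
]

# Last-click credit goes entirely to the final channel in each path.
last_click = Counter(path[-1] for path in paths)

# "Assists": every channel that appeared before the converting visit.
assists = Counter(ch for path in paths for ch in path[:-1])
print(last_click, assists)
```

Here last-click attribution credits "direct" with all three conversions, while the assist counts reveal that organic search actually touched two of the three customers first, which is exactly the kind of pattern a last-click-only view hides.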
Is there any back-end data you can access?
Although things are changing, many analytics platforms do not integrate offline or back-end data by default, so you may need to go digging for it. One thing that many companies have is data on cancellation or refund rates. Typically this is not included in standard analytics views because it takes place offline, however it can provide you with a wealth of information about products and customers. You can find out what causes customers to cancel a service or what made them ask for a refund.
The customers
This can potentially be the most interesting area to gather data from, and the one with the most impact. Here we gather information directly from your customers via a number of methods.
What are the biggest objections that customers have?
For me, this is one of the most insightful things to ask because it drills straight into the one core thing that we care about in this process - what is stopping the customer from buying?
There are a number of ways to do this, which I'll give some detail on here.
Google Consumer Surveys
We have used these surveys a few times at Distilled now and they have usually given us pretty good insights. The results can be quite broad and frankly, some responses can be pretty useless! But if you cut out the noise and look for the trends, you can get some good information on what concerns and considerations people have when buying products like yours.
Qualaroo
Qualaroo is a cool little survey tool which you've probably seen on numerous websites across the web. It looks something like this:
What I like about Qualaroo is that it doesn't intrude on the user experience, and you can use its customization settings to make it appear exactly when you want. For example, you can set it to appear only on certain pages or based on user behavior such as time on page. You can also set it to appear when it looks like someone is about to close the window.
One neat little tip here is to place the survey on your order confirmation page and ask the question "What nearly stopped you from buying from us today?" - this can give you some low-risk feedback because the user has already purchased from you.
It's also worth mentioning that Qualaroo can now be used on mobile devices, too, so you can tailor your questions to mobile users really well:
Other survey services
If you have a good email list which is reasonably active and engaged, you can run email surveys using something like Survey Monkey. This can be a little more tricky because the people on your email list are likely to be existing customers, whose mindset is different from someone who has never bought from you before. We've also used AYTM in the past for running surveys; it offers a few more options in its free version than Survey Monkey.
Usertesting.com
Again, this is a tool that we often use at Distilled, and we have gotten some good results from it. There have been a few misses too in terms of how useful a given user has been, but that happens from time to time. Usertesting.com allows you to recruit users based on certain characteristics (age, gender, interests, etc.) and then ask them to complete tasks for you. These tasks are usually focused on your website or a competitor's and may involve researching and buying a product. As users work through the tasks, they record a screencast and talk aloud as they go.
If you want to dive more into this, I really liked this webinar from Conversion Rate Experts which focuses on how they use the service.
Step 2: List hypotheses
Now we need to make the step from information gathering to outlining what we may want to test. Without realising it, many people will jump straight to this step of the process and just start testing what feels right. By doing all the work we outlined in step 1, the rest of the process should be much more informed. Asking yourself the following questions should help you end up with a list of things to test that are backed up by real data and insight.
What are we testing?
Based on all of the information you gathered from the website, customers and the company in step 1, what would you like to test? Go back to the information and look for the common trends. I prefer to start with the most common customer objections and see what is common amongst them. For example, if a common theme of customer feedback was that they place a lot of value in knowing their personal payment details are safe, you could hypothesise that adding more trust signals to the checkout process will increase the number of people who complete the process.
Another example may be if you found that the sales team always get feedback that customers love the money-back guarantee that you offer. So you may hypothesise that making this selling point more obvious on your product pages may increase the number of people who start the checkout process.
Once you have a hypothesis, it is important to know what success looks like and therefore, how to tell if the test result is a positive one. This sounds like common sense, but it's very important to get this clear right from the start so that you reach the end of the test and stand a high chance of having an answer.
Who are we testing?
It is important to understand the differences in the types of people who visit your website, not just in terms of demographics, but also in terms of where they are in the buying cycle. An important example to keep in mind is new vs. returning customers. Putting both of these types of customers into the same test could lead to unreliable results because their mindsets are very different.
Returning customers (assuming you did a good job!) will already be bought into your company and brand; they will have already experienced the checkout process; they may even have their credit card details registered with you. All of these things make them more likely to convert than a brand new customer. One thing to mention here is that you're never going to be able to segment everyone perfectly, because analytics data quality is never 100% perfect. There isn't much we can do about this beyond ensuring we're tracking correctly and following best practice when segmenting users.
When you run your test, most pieces of software will allow you to direct traffic to your test pages based on various attributes. Here is an example from Optimizely:
Another useful segment as you can see above is the segmentation by browser. This can be particularly useful if you have any bugs with certain browsers and your testing page. For example, if something you want to test doesn't load correctly in Firefox, you can choose to exclude Firefox users from the test. Obviously if the test is successful, the final roll-out will need to work in all browsers, but this setting can be useful as a short term fix.
Where are we testing?
This is a pretty straightforward one: you just need to specify which page or set of pages you're testing. You may choose to test just one product page or a set of similar products at once. One thing to mention here is that if you're testing multiple pages at once, you should be aware of how the buying cycles for those products may differ. If you're testing two product pages with a single test and one of those products is a $500 garden shed while the other is a $10 garden ornament, the results of the test may be skewed.
When you list the pages that you're testing, it is also a good time to run through a simple checklist to make sure that tracking code has been added to those pages correctly. Again, this is pretty basic but can be easily forgotten.
Goals of the discovery phase:
- You've gathered data from customers, the website, and the company
- You've used this data to form a hypothesis on what to test
- You've identified who you're targeting with this test and what pages it applies to
- You've checked that tracking code is set up correctly on those pages
This stage is where we start testing! Again, this is a step that people can jump to straight away without data to back up their tests. Make sure that isn't you!
Step 3: Wireframe test designs
This step is likely to vary depending on your specific circumstances. It may not even be necessary for you to do wireframing! If you're in a position where you don't need sign-off on new test designs, you can make changes to your website directly using a tool like Optimizely or Visual Website Optimizer.
Having said that, there are benefits to taking some time to plan the changes that you're going to make so that you can double check that they are in line with steps 1 and 2 above. Here are a few questions to ask yourself as you're going through this step.
Are the changes directly testing my hypothesis?
This sounds basic; of course they should! However, it can be easy to get off-track when doing this kind of work, so it's good to take a step back and ask yourself this question. You can easily do too much and end up testing more than you intended to.
Are the changes keeping the design on-brand?
This is likely to be more of an issue if you're working on a very large website with multiple stakeholders, such as UX teams, design teams, and marketing teams. This can make getting things signed off harder, but there are often good reasons for it. If you suggest a design that involves fundamental changes to page layout and design, it's less likely to get sign-off unless you've already built up a serious amount of trust.
Are the changes technically doable?
At Distilled, we've sometimes run into issues where our changes have been a bit tricky to implement and have required a bit of development time to get working. This is fine if you have the development time available, but if you don't, this could limit the complexity of the tests that you run. So you need to bear this in mind when designing tests and choosing which hypotheses to test.
Step 4: Implement design
As mentioned above, the more complex your design, the more work it may take to put it live. It is really important at this point to test the design across different browsers before launching. Visual elements can change quite dramatically, and the last thing you want is to skew your results because a certain browser doesn't render the design properly.
It is also at this stage that you can choose a few options in terms of who should see the test. This is how this looks in Optimizely:
You can also choose what proportion of your traffic will be sent to the testing pages. If you have high traffic numbers, this can help offset the risk of a test resulting in conversion rates dropping (it does happen!). Sending only 10% of your traffic to the test means that the remaining 90% will carry on as normal.
This is what this setting looks like if you're using Optimizely:
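If you were implementing a split yourself rather than relying on a tool like Optimizely, traffic allocation is commonly done with deterministic hashing. Here is a minimal sketch; the function name, test name, and 10% fraction are hypothetical:

```python
import hashlib

def assign_bucket(visitor_id, test_name, traffic_fraction=0.10):
    """Deterministically assign a visitor to the test or to the
    untouched original page. Hashing the visitor ID means the same
    visitor sees the same version on every repeat visit."""
    key = f"{test_name}:{visitor_id}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % 100
    return "test" if bucket < traffic_fraction * 100 else "original"

# Roughly 10% of visitors land in the test; the rest carry on as normal.
print(assign_bucket("visitor-123", "trust-signals-test"))
```

The design choice worth noting is determinism: assigning by hash rather than by a fresh random draw keeps each visitor's experience consistent across sessions, which avoids contaminating the test groups.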
You should also connect Optimizely to your Google Analytics account so that you're able to determine the average order value for each group of visitors in your conversion tests. Sometimes the raw conversion rate for a test may not increase, but the average order value may, which is obviously a win that you don't want to overlook.
Goals of the experiments phase:
- Test variations are live and getting traffic
- Cross-browser testing is complete
- Design has been signed off by client / stakeholders if applicable
- Correct customer segments / traffic allocation has been set
Now it's time to see if our work has paid off!
Step 5: Was the hypothesis correct?
Was statistical significance reached?
Before diving in and assessing whether your hypothesis was correct, you need to make sure that statistical significance has been reached. I like this short definition by Chris Goward, which helps explain what it is and its importance. If you want to go a bit deeper and see some examples, this post by Will on the Distilled blog is a great read.
Many split testing tools will actually tell you if significance has been reached or not so this takes some of the hard work out of the process. Having said that, it's still a good idea to understand the theories behind it so you can spot problems if they occur.
In terms of how long it could take to reach statistical significance, it can be hard to predict, but this is a cool tool which helps with that. Evan Miller has another tool which allows you to determine how order value differs across two different test groups. This is one of the key reasons to connect Optimizely to Google Analytics, as mentioned above.
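Many testing tools report significance for you, but the underlying check is worth understanding. Here is a minimal two-proportion z-test sketch; the conversion numbers are illustrative:

```python
import math

def conversion_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion
    rates; returns the z-score and an approximate p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control: 200 conversions from 10,000 visitors (2.0%)
# Variant: 260 conversions from 10,000 visitors (2.6%)
z, p = conversion_significance(200, 10_000, 260, 10_000)
print(round(z, 2), round(p, 4))  # p is below 0.05, so significant at 95%
```

Note how large the samples are even for a clearly visible lift; with only a few hundred visitors per variant, the same 2.0% vs. 2.6% difference would not reach significance.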
Was the hypothesis correct?
Yes? Great! If your test was a success and increased conversions, what's next? Firstly, you need to look at how to roll out the successful design to the website properly, i.e. not relying on Optimizely or Visual Website Optimizer to display the design to visitors. In the short term, you can send 100% of your traffic to the successful design (if you haven't already) and keep an eye on the numbers. At some point, though, you'll probably need help from developers to deploy the changes on the website directly.
When the hypothesis isn't correct
This is going to happen; most conversion rate experts don't talk about their failed tests, but they do happen. One person who did talk about this is Peep Laja, in this article, and he went into even more detail in this case study, where he said it took six tests before a positive result was reached.
The important thing here is not to give up, and to make sure you've learned something from the process. There are always things to learn from failed tests; you can iterate on them and feed the learnings into future tests. Alongside this, make sure you're keeping track of all the data you've gathered from failed tests so that you have a log of every test to refer back to in the future.
Goals of the review stage:
- Know whether a hypothesis was correct or not
- If it was correct, roll out widely
- If it wasn't correct, what did we learn?
- On to the next test!
That's about it! Conversion rate optimization should be an ongoing process because there are always things that can be improved across your business. Look for the opportunities to test everything, follow a good process and you can make a big difference to the bottom line.
A few resources to leave you with which I'd highly recommend:
- Peep Laja's blog
- Conversion Rate Experts articles
- Wider Funnel blog
- Michael Aagaard's blog
- PRWD list of CRO resources
- Unbounce blog
If you have any feedback or comments, feel free to leave them below!