Ever wonder if changing a title tag would have a positive or negative impact on your organic search visibility? Have you wondered how changing the meta description would affect click-through rates? Using A/B testing, we can confirm whether a change we made had a positive or negative impact. In this video, I'm going to walk you through how to set up and run an SEO A/B test using this powerful technique.
Let's walk through how to run an SEO A/B test. Before we get into running the test, we need to do a little bit of background work so we understand the differences between SEO A/B tests and what we typically see in digital marketing, which is known as CRO, or conversion rate optimization. In conversion rate optimization, you make changes on one page and then you show the variant of that one page to a specific group of visitors. Then you look at the different outcomes and see whether the test was good or bad.
When it comes to SEO, we are making changes on many pages and we have two different groups. We have an experiment group and we have a control group. Based on the interactions or the results we see from those groups, then we can prove whether or not the test was good or bad. Those are the two main differences between CRO and SEO A/B testing.
Before we get into the next step, let's cover two quick definitions. The two main terms you need to understand are variant and control. A variant is the set of pages where you make your changes. The control is the set of pages where no changes are made. Now, when you're creating a test, you want to make sure it's organized and that you have one clear objective for what you're trying to do.
It's easy to go off on tangents and try to test multiple things, but that's going to make the results muddy, and you're not going to know whether the changes you made had an impact. The first thing you need is an objective. You need to know what you're trying to achieve as the result of running this specific test. For example, you could say, "I want to increase the amount of organic traffic that lands on our location pages, our product pages, or our blog pages." Then you need to come up with a hypothesis. Like a science fair project, you make an educated guess about what you believe will happen: "I believe that changing the title tags on our location pages to include the actual city name will help improve local organic traffic to those pages," something along those lines.
Next, you need to create your experiment groups. These are your controls and your variants, or your control pages and your experiment pages: the pages you're going to keep the same and the pages you're going to change. Then you're going to see if there was a difference between the two. Next, you need to set a duration period. This is important. You want to make sure you run the test long enough to collect data over a few weeks, and search engines can take time to index your pages depending on how large your site is and how often the crawlers come. It can take as long as two weeks to get a page indexed, so build that into the test; that way you don't pull it before your pages were indexed and the change was seen by users. So run your test for about four to six weeks. That's a good benchmark.
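The grouping step above can be sketched in code. This is a minimal illustration, assuming you have a flat list of comparable URLs (the page paths here are made up); shuffling before splitting helps keep either group from being biased by how the list happened to be ordered.

```python
import random

def split_test_groups(pages, seed=42):
    """Randomly split a list of comparable pages into a control
    group (unchanged) and a variant group (receives the change)."""
    shuffled = pages[:]                    # copy so the input list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # (control, variant)

# Hypothetical location pages, purely for illustration
pages = [f"/locations/city-{i}" for i in range(10)]
control, variant = split_test_groups(pages)
```

In practice you'd also want the two groups to be similar in baseline traffic, not just similar in size, so a stratified split is often worth the extra effort.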
Finally, you need a primary metric. What's the single metric you're going to use to judge the outcome and know whether your test was successful? This could be organic sessions, for example, or clicks, or impressions. You can also have a secondary metric to help validate the result.
Let's say your primary metric is organic sessions, but you also pull data from Search Console to see whether clicks, impressions, and overall rank also improved. What can you test? This is a really interesting question that I get a lot when I'm talking to people. For SEO, your tests should be elements that impact search visibility. You're not going to put a form on some pages and not others, because that's a conversion test; it has nothing to do with search engine optimization.
We want to look at elements that we know have an impact in search. Test things like title tags, which are extremely important. Meta descriptions can impact click-through rate. Heading tags can help with the concepts and connections in your content. You can test different types of content on the page: maybe you extend the content, shorten it, or add video content. Image alt text, optimizing images a little further. Or schema markup: you can add it to some pages and not others and see what happens. This is not an exhaustive list by any means, but it should give you a start. Once you have this broken down, I do have a template that you can use to help organize it. Then I'm going to show you how to run this test and analyze the data in the Distilled split-testing tool.
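The checklist so far, one objective, one hypothesis, explicit groups, a duration, and a single primary metric, can be captured as a simple data structure. Every field value below is illustrative, not taken from the actual template:

```python
# A minimal test-plan record mirroring the checklist above.
# All page paths and wording are hypothetical examples.
test_plan = {
    "objective": "Increase organic traffic to location pages",
    "hypothesis": "Adding the city name to title tags will improve local organic traffic",
    "control_pages": ["/locations/springfield", "/locations/shelbyville"],
    "variant_pages": ["/locations/ogdenville", "/locations/north-haverbrook"],
    "duration_weeks": 6,   # four to six weeks is the benchmark suggested above
    "primary_metric": "organic_sessions",
    "secondary_metrics": ["clicks", "impressions", "average_position"],
}
```

Keeping the plan in one record like this makes it harder to accidentally test two things at once, which is exactly the "muddy results" trap described earlier.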
You can see whether you had a positive or negative result. We created this SEO testing template to help make your tests run a little smoother; it's something that we use internally. At the top here, we've got our goal: what's the objective of the test we're running? What's our hypothesis? What are our experiment groups going to be? You can break those down. Maybe it's going to be our location pages, maybe our product pages. You can set the duration, and then your primary metric and any secondary metrics you want to track. This is what it would look like once it's filled out. The goal of this test was to improve the ranking of pages for core service terms. The hypothesis was that if we wrote better title tags with more targeted keywords, we'd improve the organic visibility of those pages.
For the experiment groups, we had two different groups here. You can see the experiment pages, and then the control pages we decided to break them into. Our primary metric was going to be organic traffic, and our secondary was ranking. We also listed the existing title tags, and then the new title tags we were going to use for this specific test. We initially planned to run this test for two weeks to do some initial testing, but we ended up letting it run for about six weeks before we had enough information to decide whether the results were meaningful.
In order to know whether our test was successful, we need data. This is where analytics comes in. We can go back to our template and see that we wanted our primary metric to be organic traffic. We can use Google Analytics to create some segments for us. Creating a segment isn't very hard. You can go into any part of Analytics, maybe an overview section. Up at the top, you'll see something right here: "All Users," then "Add Segment." By clicking "Add Segment," you can see that you can create tons of different ones. There are segments right out of the box, but you can also create segments around specific groups of pages that you want to see. For instance, one of the ones I've created is a split test.
You can see there's an SEO split-test control group and an experiment group. If we look at the control group, which was one of the groups we had in that sheet, we can see how it was set up. The first thing we want is the default channel grouping, set to exactly match Organic Search. This is important because this is the only type of traffic we're testing. Then we want our landing pages: the pages people came in on, which means they actually saw our site in search, clicked, and entered on these pages. The landing page needs to be one of the pages we had in this group. Once we're done with that, we can go ahead and hit save.
Now you've got the control group. Creating the experiment group works the exact same way. Once again, we'll have the split-test group, and I can show it here and edit it. Just like before, the default channel grouping exactly matches Organic Search, because we want that traffic specifically. Then the landing page needs to be one of those three pages in our experiment group; hit save. Right here, we have a DIY split tester from Distilled. In here, we can take our control data and our variant data and let it run a forecast. In short, what this tool is doing is leveraging CausalImpact, which looks at a time series; in this case, it's going to project what the traffic would have been had no changes been made.
That's what it's going to use as a forecast. Then it's going to take the data, the experiment data, and say, here's the data of what happened.
Then we can compare what would likely have happened with what actually happened, and see whether there was a positive impact, an impact with causation. CausalImpact is designed to show causation, not just correlation. That's where we sometimes get confused in a lot of data tests: just because something is correlated doesn't mean it had a direct impact on the other event. With CausalImpact, we can get much closer to causation, which lets us know, "Hey, this made an impact and we should make this change to other pages as well, because we know it had a positive effect."
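The real CausalImpact package (from Google, available for R with Python ports) fits a Bayesian structural time-series model to build the counterfactual. The toy sketch below only illustrates the core idea under a big simplifying assumption: that the variant's pre-test relationship to the control is a constant ratio. All numbers are made up.

```python
def naive_counterfactual(control_pre, variant_pre, control_post, variant_post):
    """Toy version of the counterfactual idea behind CausalImpact:
    estimate what the variant *would* have done from the control series,
    then compare it against what the variant actually did."""
    # Pre-period: learn the average variant-to-control ratio.
    ratio = sum(variant_pre) / sum(control_pre)
    # Post-period forecast ("blue line"): scale the control by that ratio.
    forecast = [c * ratio for c in control_post]
    expected = sum(forecast)
    actual = sum(variant_post)       # the "red line"
    lift = (actual - expected) / expected
    return expected, actual, lift

# Made-up daily sessions: the variant roughly tracks the control
# before the change, then outperforms the forecast afterwards.
control_pre  = [100, 110, 90, 105]
variant_pre  = [50, 55, 45, 52]
control_post = [100, 100, 100]
variant_post = [60, 62, 58]

expected, actual, lift = naive_counterfactual(
    control_pre, variant_pre, control_post, variant_post)
# lift > 0 means red above blue: a positive test
```

A constant ratio ignores trend, seasonality, and uncertainty, which is exactly what the real model handles; this is only meant to make the forecast-versus-actual comparison concrete.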
There's a ton of math behind CausalImpact. It really helps strip out all the other noise that can happen within a marketing campaign and lets us see the things that matter most. You don't have to know all the math; you just need to know how to copy and paste and how to format your data correctly. Once you have these groups in Analytics, you can pull the data straight from Analytics and export it. Now, when you do this, you need a hundred days of data prior to the test in order for this to work correctly. If your test was 14 days long, you need 114 days' worth of data: the 14 days of the test plus a hundred days before. An easy way to find the date a hundred days before is to use Google: search for "a hundred days before" plus your start date.
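Rather than googling "a hundred days before," you can compute the pull window directly with a few lines of date arithmetic. The dates here are illustrative, matching the February 4th start mentioned later:

```python
from datetime import date, timedelta

test_start = date(2020, 2, 4)   # illustrative test start date
test_length_days = 14           # a two-week test
pre_period_days = 100           # history required before the test

pull_from = test_start - timedelta(days=pre_period_days)  # start of the export window
pull_to = test_start + timedelta(days=test_length_days)   # end of the export window
total_days = (pull_to - pull_from).days                   # 100 + 14 = 114 days of data
```

Exporting exactly this window means the tool gets its full hundred-day baseline plus every day of the test, with no manual counting.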
Now that we've got our two groups in Google Analytics, we're going to want to pull the data we can use in the split tester to know whether our test had a positive or negative result. The best way I've found to do this is the Google Analytics add-on for Sheets. You want to go ahead and add that: just go to "Get add-ons," add it, then click "Create report." There are a few things you'll want to do to set this up. We can go ahead and call this "SEO A/B testing." We're going to select the account that we're working on.
Then we're going to choose a metric. In this case, we're looking at sessions, and then we've got our segments, which we just created. You can easily find both of those groups in here; hit "Create report." Once you have the dates in, you have GA sessions and GA date. The date dimension is an important one. I didn't add it in with the pane over here, but if we don't have it, the report won't split the groups out by date. We're going to go ahead and remove the limit on our report, and everything should be set up the way we want it. Now we'll go back to Add-ons and run the report one more time. If the report runs successfully, you'll see all the information right here: the control group data all the way through, and then the experiment data all the way through. To add this into the Distilled tool, we go to where the control group data starts, pull all the data, and paste.
We want to make sure the start date of the data is set correctly. Here, we're going to enter our date, which was February 4th. Now we've got to pull our variant data and paste that in here as well. Then we click forecast. This runs the model on the hundred days before the test, builds out the forecast, and compares it against the actual results we've seen. Looking at this test, which ran for honestly quite some time, we can see the overall results of the data. According to the forecast, which is this blue line, there was a downward trend. The red is the actual traffic we saw over that time. The way this test works, you want the red line higher than the blue line. This tool isn't going to show you a statistical significance number.
It's not going to give you that 95% confidence number. It's going to give you the basic information to let you know, yes, this test was positive or no, this test was not positive. Now, as you can see from this graph, it's kind of a mixed bag. There's not a clear difference between the blue line and the red line. Now there are times where the red line's higher, but it's not consistent enough. So honestly, this is a test that we would go back and try again because it wasn't something that we looked at and said, yes, this is pretty awesome.
If we check out this blog post by Distilled themselves, where they show how this tool works, you can read a little deeper into what you're looking for and how it all fits together. You can also take a look at some of the data they pulled together, so you can see how it works and the kinds of results you'd want to see.
Here's some sample data that they have. You've got the control group and you've got the variant group; I think it runs for about 134 days. Again, copy this information in. We've got the control group here, and then we've got the experiment, or the variant, here. 134 days before today would be July 3rd, so we'll go ahead and set the start date to July 3rd in this case, hit forecast, and see how this test performed.
Again, it's doing the same process, and we can see whether the results are different. In this case, we would say the change was negative, because the forecast, the blue line over here, is much higher than the red line. We would revert those changes. You're taking the control group and the variant group: if the red line is higher, it was positive; if the blue line is higher, it was negative. It's not something where you can say, "I have 95% confidence that this worked," but you can say, "Hey, there was a positive change. We used CausalImpact, and we know the change we made had a specific effect on our site and our visibility."
This is how you run an SEO A/B test yourself, DIY, with no extra costs. In the future, we're going to do some other videos on A/B testing and how we can use it in SEO to perform better, and even share some other tools that are getting ready to hit the market, which I think will be extremely helpful and really democratize some of the things we're doing as SEOs. If you have any questions about what we covered today, please comment below. Until next time, happy marketing.