SEO Disasters by Steven van Vessum (Conductor)
Steven van Vessum, co-founder of ContentKing (recently acquired by Conductor), joined the Clearscope webinar for a discussion on SEO disasters and how you can prevent them.
Here are our biggest takeaways from Steven’s talk:
Many SEO issues are self-inflicted (e.g., changing a perfectly crafted page title, deleting an article, redirecting a sold-out product). These are avoidable with an SEO QA policy.
Test thoroughly. Steven recommends testing before, during, and after releases. (Sidenote: don’t push releases on a Friday to save yourself and your team a headache from having to troubleshoot over the weekend.)
Create a backup plan with redundancies in your CMS and your team. Steven shared that they have a team on call, including an engineer, an SEO, and a content marketer. If something goes awry, they're ready.
Watch the full webinar
And check out the resources Steven shared below:
About Steven van Vessum:
Steven fell in love with SEO back in 2006. He started his professional SEO career in-house, moved agency-side, co-founded his own agency, and then co-founded ContentKing, where he was responsible for SEO and content marketing as VP of Community. After the acquisition by Conductor in early 2022, Steven now serves as director of organic marketing.
Follow Steven on Twitter: https://twitter.com/Stevenvvessum
About ContentKing:
ContentKing is a cloud-based service that provides real-time SEO auditing and change management to improve and maintain search engine visibility. Earlier this year, ContentKing was acquired by the enterprise SEO platform Conductor.
Follow ContentKing on Twitter: https://twitter.com/contentking
Read the transcript
Travis:
We can go ahead and dive on in. So without further ado, today we have Steven, the co-founder of ContentKing, which was recently acquired by Conductor. Steven fell in love with SEO back in 2006, started his professional SEO career in-house, moved agency-side, then co-founded his own agency, and finally co-founded ContentKing, where he was responsible for SEO and content marketing as VP of Community. After the acquisition by Conductor in early 2022, Steven now serves as director of organic marketing. Steven, the floor is yours if you want to share your screen.
Steven:
Cool. Thank you very much, Travis. Can you see my screen? It should be shared now.
Travis:
Yes.
Steven:
Cool, all right. So welcome everyone. Today we're going to be talking about SEO quality assurance, and how we can prevent SEO disasters from happening. Travis already gave a brief introduction about me. So yeah, obviously I'm Steven, I'm the director of organic marketing at Conductor. You can find me on Twitter, Stevenvvessum, with U-M at the end. Otherwise, Google me, I'm not a hard man to find. So I am the co-founder of ContentKing and as Travis mentioned, we were acquired by Conductor earlier this year. I've been in SEO most of my adult life. I was looking at this and thought, shit, it's been 17 years in SEO. Getting gray hairs and definitely losing hair. A little bit about ContentKing, for those that don't know. ContentKing is a real-time SEO auditing and monitoring platform, which means it's running 24/7 in the background, keeping track of content changes and technical changes, and in case of trouble, it alerts you.
Steven:
So for example, if someone pushes a robots.txt change, you're going to be alerted about it. If someone noindexes important pages, like your homepage or your most important money pages, you're going to be alerted about that as well. Needless to say, SEO disasters are a super relevant topic to us. It's something that I have a lot of interest in, so I'm very excited to share everything I've learned about that topic with you today. Here you have an example alert. It's like, hey, a bunch of pages became non-indexable. And you can configure these alerts, you can reroute them to make sure that they go to different teams, et cetera, so it's super flexible. At the end of the day, you just want to make sure that SEO disasters don't happen, and when they do happen, you want to jump in and quickly fix them. All right, without further ado, let's jump in.
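To make that concrete, here's a minimal sketch of what a robots.txt change monitor boils down to. This is an illustrative toy, not how ContentKing actually works, and the site URL and polling interval are placeholders:

    import hashlib
    import time
    import urllib.request

    SITE = "https://www.example.com"  # placeholder; your production site

    def fetch_robots() -> bytes:
        # Download the current robots.txt.
        with urllib.request.urlopen(SITE + "/robots.txt") as response:
            return response.read()

    last_hash = hashlib.sha256(fetch_robots()).hexdigest()
    while True:
        time.sleep(300)  # poll every 5 minutes
        current_hash = hashlib.sha256(fetch_robots()).hexdigest()
        if current_hash != last_hash:
            # Wire this up to email, Slack, PagerDuty, etc.
            print("ALERT: robots.txt changed, review the new version now")
            last_hash = current_hash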
Steven:
SEO quality assurance defined. Before we really tie it to SEO, let's look at quality assurance in general. It's a way of preventing mistakes and defects in manufactured products and avoiding problems when delivering products or services to customers, according to Wikipedia. So if you apply this to SEO, SEO quality assurance is a way of preventing mistakes and defects that negatively impact your site's SEO performance. And a negative impact on your site's SEO performance can obviously mean rankings tanking, which in turn hurts organic traffic, which in turn hurts leads, sales, and revenue. And at the end of the day, it's the dollars that count. So you've got to make sure that your bottom line's protected, when it comes to SEO or anything really. And a problem we're facing is that there is a lack of SEO quality assurance. It's one of the biggest factors holding back your SEO performance.
Steven:
It's something I'm confident about, but don't take my word for it. We actually ran a massive survey about this. So we hired a research firm to do this survey with us, and last year, we asked over 1,250 respondents about their SEO disasters. What caused them, how often they happen, how expensive they are, et cetera, et cetera. And it turned out that 85% of the respondents had at least one moderate to high SEO incident in the past 12 months, and 57% of these incidents lasted longer than seven days. So this means that a noindex on a homepage or one of your money pages, or both, was there for at least seven days. That's bad. And 45% of these SEO incidents had an impact of $10,000 or more. And I was looking at the data at the time and I saw that some of these disasters had caused millions of dollars in lost revenue. So it can add up. 24% of the respondents said that they spend more than one and a half days a week, let that sink in, one and a half days per week, detecting and fixing SEO issues.
Steven:
So the process of auditing and keeping track of all the changes and whether or not there are SEO issues, is just very inefficient. 40% of the respondents said that they find it difficult to detect these SEO issues as well. So, super interesting info, and we're going to look at this throughout the presentation. Something I always say is that search engines never sleep. So you could have a crawler, you run it once a month on the site and you find some SEO issues that you're looking to tackle, you go in and fix them, et cetera, et cetera. That's nice and all, but ideally you have a monitoring system in place, which keeps track of your site 24/7, because search engines are doing the exact same thing. Search engines aren't coming by your site once a month to look for new content or updated content. They're hitting your site continuously.
Steven:
When you go to bed, Google's crawling your site, and if there's issues or changes that shouldn't have been pushed, Google is picking up on that. It's processing it, and it's going to have an impact on your rankings. So search engines never sleep, and Murphy's law applies to SEO as well. SEO mistakes are made, it's just a fact. And when they are made, and it happens to me as well, you've just got to make sure that you fix them before they impact your rankings, and your bottom line, that revenue we were talking about. And there's lots of moving parts in SEO, and as time continues, there's just more and more moving parts. And this is a quote from three years ago, from Gary Illyes, and he said, "Google has a collection of millions of tiny algorithms that work in unison to spit out a ranking score."
Steven:
The point here is that there's a whole bunch of stuff going on in Google's algorithms, there's many algorithms, and we know a lot of it is AI-driven. So there's a ton of stuff going on and it's only going to increase from there. So it's only going to get more and more complex. And they're keeping busy. So they're pushing out update after update. This summer alone, we've seen multiple confirmed updates, plus several unconfirmed ones. So they're busy, and there's just so much stuff going on, and you don't want to be algo-chasing. We always say, look at the big picture, look at where the ball's going, not where it's at, and make sure you have all your processes and tooling in place, and you stick to all of the major things that need to be in good shape for your SEO performance to be all right. So look at your technical foundation, create content, build links and authority, and get that organic traffic. Digital teams are basically managing a perfect web storm.
Steven:
Sites are only getting more complex, and there's agile cycles pushing updates all the time. There's even multiple teams pushing changes to multiple sites. So teams are growing and site structures are becoming more and more complex. And we as SEOs need to adapt, because otherwise we're going to go extinct. That may be overstating it a little, but Google, they have so much stuff going on and they're evolving, and there's so many updates, and we as SEOs, we need to adapt to that speed, and we need to make sure we are prepared for all the complexity that we're seeing right now and that's headed towards us in the future. All right, now let's jump into the fun stuff. A couple of SEO disasters we've seen recently. Clients or colleagues going rogue. It could be that the CMS was telling them to run updates, and so they did that on a live production environment. They were updating the theme, all of the plugins, et cetera, and for anyone that's worked on sites and has run these updates, these often do not go well, and rolling back these updates is a pain in the ass.
Steven:
So you've always got to make sure this is only done on staging environments, testing environments. And only if everything's working properly there, you do it on the production environment, and you need to have a backup plan so you can roll it back. So stuff like this is super tricky. Back in my agency days, we always tried to educate our clients. Don't hit these update buttons. We tried to hide them, but every once in a while, we'd forget and yeah, a customer would just update everything and stuff would break. And at the end of the day, when something like that happens, as an agency, you're always going to be the one that caused the issue, because your client is going to say, "Hey, why didn't you tell me this?" Or, "Hey, why is the button even there?" Blah, blah, blah. It's a discussion you're always going to lose. So you don't want to be in that position.
Steven:
And another one is clients or colleagues tweaking page titles on key pages, without running them past the SEO department. So they're just like, "Oh hey, these page titles, they don't look great. I'm just going to update these." And while doing so, they may really decimate your SEO content strategy. You meticulously crafted these page titles and someone just goes to town on them. Obviously, yeah, that's not great for your rankings and organic traffic. And this is one of my all-time favorites: folks going into the CMS and they're like, "Oh, this looks like an old page, doesn't look important." Click, delete. It's stuff like this that just shouldn't happen, but it does. And when it does, you've just got to make sure that you're notified as soon as possible, so you can revert this and restore those pages.
Steven:
Or this one, where people will go to town installing plugins, and really slow down the site. WordPress especially is well known for this. You add a bunch of plugins and the site nearly grinds to a halt. So again, this is something that's really going to have an impact. And nowadays, with Core Web Vitals being very important, this is something you want to know about as soon as possible, and you want to jump in, roll back these changes, educate people on it and implement additional processes, if needed.
Steven:
And for the e-commerce folks out there, this happens so often. It's like folks just deleting product pages because they're all sold out. It's like, hey, we don't have these products anymore. Why would we need this page? And then the next day a fresh batch of these products comes in and they're like, "Oh shit, I deleted that page yesterday." So if you're unlucky, Google's already picked up on that, and the page is well on its way out of the search engine result pages. And it's really stuff like this that's... You've got your hands full fighting the competition and earning those top spots in Google. You don't want to have all this stuff going on within your own company. So the competition is fierce enough as it is, you don't need more of it. So this is a good example of something you want to tackle head-on.
Steven:
And if it happens, you want to know about it right away, jump in and fix it. Then there's releases gone bad, when folks prepare new page templates. This is one of my favorites, when it comes to canonical URLs. So folks were working on rolling out some new templates and they had everything prepared, but what they did is they hard-coded the canonicals, and when they pushed those, the hard-coded canonicals were still pointing to the development environment, which obviously wasn't accessible to search engines. So for search engines, this is super confusing, because you're serving them these pages, but then when they go out and check them, they were canonicalized to a dev environment that they can't even access. So obviously this isn't helping you get these new pages indexed in Google. So it's little stuff like this that's really not that hard to detect, but you need to have the right processes and tooling in place.
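A check like that is easy to script into a release pipeline. Here's a minimal sketch; the hostnames and paths are placeholders, and the regex is deliberately naive:

    import re
    import urllib.request

    PROD_HOST = "www.example.com"  # placeholder production host
    DEV_HOST = "dev.example.com"   # placeholder development host

    def canonical_of(url: str) -> str:
        # Fetch the page and pull out the canonical URL.
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        # Naive regex: assumes rel appears before href inside the tag.
        match = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html)
        return match.group(1) if match else ""

    # A sample of key page templates to spot-check after a release.
    for path in ["/", "/products/", "/blog/"]:
        canonical = canonical_of("https://" + PROD_HOST + path)
        if DEV_HOST in canonical or PROD_HOST not in canonical:
            print("ALERT: " + path + " canonicalizes to " + repr(canonical))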
Steven:
And then robots.txt. The sort of gatekeeper to keep bots out of certain sections or even certain URLs. Oftentimes when releases are made, the robots.txt file ends up being copied from the staging environment to the production environment. And while a robots.txt with Disallow: / isn't a great way to keep bots out of your staging environment, it doesn't keep users out of it either. So if you're doing important stuff on your staging environment and it's accessible to the world, that's a big no-go. But going back to the bots, if you're carrying over that robots.txt file from your staging environment to your production environment and it stays on there long enough for search engines to re-crawl it and process it, it's going to be a bad day for your SEO traffic. Google caches your robots.txt file for up to 24 hours, so depending on when they last came by, this can really mess up your organic traffic and revenue.
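This one is also straightforward to verify automatically after a deploy. A minimal sketch using Python's built-in robots.txt parser, with placeholder URLs:

    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://www.example.com/robots.txt")  # placeholder URL
    parser.read()  # fetch and parse the live file

    # Pages that must always stay crawlable on production.
    critical_pages = [
        "https://www.example.com/",
        "https://www.example.com/products/",
    ]
    for page in critical_pages:
        if not parser.can_fetch("Googlebot", page):
            print("ALERT: robots.txt blocks Googlebot from " + page)

Run the same check against staging with staging's expectations and it catches the reverse mistake too.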
Steven:
This was an interesting one as well, where product prices were dropped to $1 because of an error, but people didn't know. And this is stuff that you as an SEO, are you responsible for this? Maybe not completely, but there's this intersection of product category managers and digital marketers and SEOs who work together on driving traffic and making those sales, and SEOs often have the tools in place to catch stuff like this. So when this happened and the SEOs jumped in, they were able to fix it. And not only did the price drop, the currency was also switched, because it should have actually been euros. The product schema markup changed as well. So a ton of stuff went wrong here. For a machine, this is super easy to detect, but for the human eye, this is something you can easily miss, especially if you're working on a site that you know very well, and you kind of create these blind spots for yourself.
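Here's a minimal sketch of the kind of machine check Steven is describing, validating Product structured data against expected bounds. The JSON-LD snippet, price floor, and expected currency are made up for illustration:

    import json

    # Product JSON-LD as it might appear on a product page (made-up example).
    jsonld = json.loads("""
    {
      "@type": "Product",
      "name": "Example Widget",
      "offers": {"@type": "Offer", "price": "1.00", "priceCurrency": "USD"}
    }
    """)

    offer = jsonld["offers"]
    price = float(offer["price"])
    if price < 10.0:  # assumed price floor for this catalog
        print("ALERT: suspiciously low price of " + str(price) + " on " + jsonld["name"])
    if offer["priceCurrency"] != "EUR":  # the shop in this example sells in euros
        print("ALERT: unexpected currency " + offer["priceCurrency"])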
Steven:
Then there's CMS plugins that contain bugs. So there are popular SEO plugins out there that sometimes contain bugs, and if there is something like a vulnerability in one of them, they sometimes push a forced update, and when these contain bugs, that's not good. So over the last couple of years, there have been a couple of issues like these, where forced updates caused a lot of SEO issues because the plugins contained bugs. Stuff just happens, but at the very least you want to know about it, so you can go in and take a look. Okay, this update, is it okay? Is it causing any issues? Et cetera, et cetera. In this particular case, I think it opened up all of the image attachment URLs on the site. So for instance, for a company blog with a couple hundred pages, that created a couple thousand extra URLs, which isn't great.
Steven:
So now you're wondering, "But hey Steve, there's Google Search Console, there's Google Analytics, there's crawlers. How do they help in tackling these issues?" Well, I'm glad you asked. When it comes to Google Search Console, you've got to realize that the notifications they're sending are limited and they're often delayed. You cannot rely on this to provide you with real-time input on what's going on on your site. So when these Google alerts hit your inbox, it's already too late. And the same goes for Google Analytics, because if your analytics system is reporting that there is a decrease in your conversions and revenue, the shit has already hit the fan, and the room is covered. So this is sort of like a fail-safe all the way at the end of the road, but you've got to be ahead of these issues. So Google Analytics, yeah, it shouldn't be used in this case to catch SEO disasters before they happen.
Steven:
Rank trackers, basically the same thing. So a lot of rank trackers do not update daily, but even if they do, if you see that your rankings are affected, the issue has already occurred. So something happened on your site, Google picked up on it and your rankings were lowered. So this is already going to have an impact on your bottom line, and ideally you want to get in front of it, before this happens.
Steven:
All right, so what about legacy crawlers? All of the crawlers we've been using for the last couple of years. Well, those are basically the pink lines. They are often manually activated and they basically take a snapshot. It's like, hey, this is what your site looked like at this point. These are infrequent, and they're great for spot checks or ad hoc audits, but they shouldn't be used for continuous monitoring. Then the orange lines are search engines. They come in and they check your site often. They look for updates, they look for new content, et cetera, et cetera. And then there's the blue line, in this case ContentKing, a 24/7 real-time monitoring solution that just keeps a watchful eye on your site, and looks out for anything that's out of the ordinary. And in case of trouble or anything that may require your attention, it sends out alerts, so you can quickly jump in and fix it.
Steven:
So that means you're kind of like Tom Cruise. You can get ahead of the game and prevent these issues from really becoming an issue and having a negative impact on your bottom line. So how do you implement SEO quality assurance within your organization? One of the first things we usually start with is defining an SEO quality assurance policy. So for example, not everyone within the organization needs CMS admin rights, and you've got to make sure that among the team it's super clear what is allowed, and what isn't. For example, do you want developers touching your carefully crafted page titles? Probably not. But if you turn it around, do you want SEOs touching code? Probably not, because most SEOs don't really know how to write code. So it has to be clear to everyone on the team what is allowed and what isn't, on the site.
Steven:
And please, for the love of God, don't do releases at 5:00 PM on Fridays. So you push out a release, everyone cracks open a beer or whatever, logs off, goes to the beach, and then the site's just, there's issues everywhere, and no one's available, because everyone's started the weekend already. I've seen this happen so often. Within ContentKing, we stopped doing big releases after Wednesday midday, so we knew that we had two full days and a little bit of spare time to catch big issues before the weekend, because we didn't want people to work through the weekend because of some issues. So this is something very basic, but yeah, we highly recommend you include it in your SEO QA policy.
Steven:
Something else that's super important, when for example rolling out redesigns or pushing out new page templates, is that you set up performance budgets, and you do this per page template. You, for example, define that the Largest Contentful Paint may take X seconds, CLS can only be Y, and First Contentful Paint has to be Z or lower, and the overall page performance has to be something like, I don't know, above 60. So this way you ensure that only pages that perform within these budgets make it to the site. And this way, you guarantee a great page experience for your users. And we all know that page experience is part of Google's ranking algorithms. And this is a great way to implement that within the team: make people aware of these performance budgets and how they work, and what passes the smell test and what doesn't.
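A minimal sketch of what enforcing such a budget could look like in a release pipeline, assuming you already generate a standard Lighthouse JSON report per page template. The metric IDs match Lighthouse's audit names; the thresholds are example values, not recommendations:

    import json

    # Example budgets per metric; values are illustrative.
    BUDGETS = {
        "largest-contentful-paint": 2500,  # milliseconds
        "first-contentful-paint": 1800,    # milliseconds
        "cumulative-layout-shift": 0.1,    # unitless
    }

    # A Lighthouse JSON report generated for one page template.
    with open("lighthouse-report.json") as f:
        report = json.load(f)

    failed = False
    for audit_id, budget in BUDGETS.items():
        value = report["audits"][audit_id]["numericValue"]
        if value > budget:
            print("FAIL: " + audit_id + " = " + str(value) + " (budget " + str(budget) + ")")
            failed = True

    score = report["categories"]["performance"]["score"] * 100
    if score < 60:  # the overall floor from the talk
        print("FAIL: overall performance score " + str(score) + " is below 60")
        failed = True

    raise SystemExit(1 if failed else 0)  # a non-zero exit blocks the release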
Steven:
You've got to communicate well. SEO is a people business, because at the end of the day, there's people working on the site, there's people from different teams, and you've got to make sure that everyone on the team is kept in the loop about changes and updates going on. There's no such thing as over-communicating. I can't stress this enough, there's no such thing as over-communicating. When people are telling you to shut up, you know you've done well. You've got to involve specialists early on. As an SEO, I've been pulled into too many meetings at the last minute, and they told me, "Hey Steve, we're planning a migration." I was like, "Wait, we have all these other projects lined up. How does that work together with the migration?" It doesn't. Or they tell you, "Hey, we're changing domain names. We're rebranding."
Steven:
Whenever big changes are coming, you've got to involve everyone that's going to be working on this as early as possible, so they can plan, and so they can make sure that they have enough resources available to help you through these projects. The bottom line is, don't create situations where you need to compromise on quality. And when you push out changes and when you implement your SEO quality assurance policy, et cetera, et cetera, you've got to have real-time monitoring in place to make sure that everything's running smoothly, that everyone's sticking to what you all agreed on, and that no one's going rogue.
Steven:
And in case of trouble, you've got to be alerted. Then the next step is having solid test processes. So you've got to test before, during and after releases, and you've got to do this religiously. And I know how easy it is to want to skip some tests when there's a lot of stress and there's high pressure on getting things done, but once you start doing that, you're going to see that you're going to make mistakes, and these mistakes are going to bite you later on. So test before, during and after releases. And make sure to have a multidisciplinary crisis team on standby. So for example, have an SEO, a content marketing person, a developer and a DevOps person available. Make sure that they know about releases and things like that, and make sure there is a plan, so in case something happens, you can easily roll back, go in, kind of reconvene, fix the issues, and make a plan for rolling out the release on the second attempt.
Steven:
So to recap, because I've been unloading a lot of information. Lack of SEO quality assurance is hurting SEO performance. It's one of the biggest factors holding back SEOs at the moment. 85% of the respondents had at least one moderate to high SEO incident in the last year, and nearly 60% of the SEO incidents lasted longer than seven days. Nearly half had an impact of $10,000 or more, and nearly a quarter of the respondents report that they're spending more than one and a half days a week detecting and fixing SEO issues. 40% of the respondents find it difficult to detect these SEO issues. We as SEOs need to evolve and make sure that we're keeping up with everything that's going on at Google. And we've got to make sure we keep a watchful eye on all the updates being pushed out, and we need to have the right processes and tooling in place, the right policies, et cetera, et cetera. SEOs need to evolve.
Steven:
The question isn't if something is going to go wrong, the question is when, so make sure that you plan accordingly. Existing tooling often falls short. You can't rely on Google Search Console, Google Analytics, rank trackers, or legacy crawlers to update you, to ping you about things going wrong. And I'm willing to take bets on this: 99% of the time, you're going to be way too late and it's already going to have an impact on your bottom line. Do all that you can to prevent issues from making their way into production. And when SEO trouble finds its way into your production environment, act swiftly and minimize its impact. Grab that fire extinguisher and put out the fire as quickly as possible. ContentKing is here to help, obviously you know that by now. And we've got a lot of time left for questions, comments, feedback, ideas. I'd love to hear from you, what you think, what kind of SEO disasters you've gone through, any questions you may have, et cetera, et cetera. Shoot.
Travis:
Awesome. That was great, Steven. We do have a quick question on the survey. Of these 60% of incidents lasting longer than seven days, was that due to the sites not knowing there was an issue, that something was broken, or was it just difficult to fix?
Steven:
Let me pull up the slide. Bear with me, folks. So, the 57% of SEO incidents lasting longer than seven days. It's actually a combination of finding out about these issues, and then fixing them. And especially within larger organizations that operate slower, fixing the actual issue takes longer. So I would estimate that this was like 50/50 in terms of the time it lasted. Meaning that 50% was how long it took to discover the issue, and 50% was how long it took to actually get the fix in production.
Travis:
Awesome. And then kind of piggybacking off of that, say you're ranking in position one for a term and something happens, you drop off the rankings. How long does it take for you to regain that ranking, after you resolve the issue?
Steven:
Yeah, that's a great question. It really depends on what went wrong and the extent of the impact it had. Unfortunately, what we're seeing is, if you fix the issue, it's not like nothing happened; you don't always get back to your previous ranking. So you can end up with a lower position, driving less organic traffic. So this is one of those cases where it really depends on what happened on your site, and of course the competition is doing stuff as well. So there's a lot of moving parts. But yeah, to answer your question, there's no guarantee of getting back that ranking that you lost, unfortunately. That's where it gets tricky.
Travis:
Got you. And we have a question from Jesse. Could you speak more about performance budgeting? Is that based on Chrome Lighthouse scores?
Steven:
Exactly. Sorry, I should have mentioned that. Yeah, that's absolutely based on Chrome Lighthouse scores, and obviously you don't have RUM (real user monitoring) data on your test environment, so you've got to make do with Lighthouse scores. It's not perfect, but it's a good approach to implementing these performance budgets, yes.
Travis:
And then Bernard has a question he's just shot over. It's pretty common knowledge that technical SEO issues lead to decreased SEO performance. How quickly do you need to fix an issue before you get dinged by Google?
Steven:
It depends on-
Travis:
Is there a grace period?
Steven:
Yes there is, depending on what went wrong. Take a robots.txt issue, for example. If you now push a change to your robots.txt file that basically keeps everyone out, including Google and Bing, et cetera, et cetera, none of the crawlers have access to the site anymore. It's going to take a couple of days, maybe even a week, for you to really see that impact. So there's a lag to the impact it's going to have. Say you fix it on day four, you can still see the issue continue to play out longer within Google's systems. It's not like the moment you fix it, the issue's over. At the end of the day, there's a ton of signals being pumped into Google's systems, and it's going to be processing, and it's going to spit out results at the end of the day.
Steven:
So there's that one. But for instance, if you start noindexing pages, that's usually going to be picked up super quick, as it's such a clear signal from webmasters and SEOs to search engines that you want to get this page out of the index. So that's going to be picked up very quickly.
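Because a stray noindex is picked up so quickly, it's worth checking for explicitly. A minimal sketch with placeholder URLs; it checks the meta robots tag and the X-Robots-Tag header, nothing more:

    import re
    import urllib.request

    # Placeholder list of pages that must stay indexable.
    money_pages = [
        "https://www.example.com/",
        "https://www.example.com/pricing/",
    ]
    for url in money_pages:
        response = urllib.request.urlopen(url)
        html = response.read().decode("utf-8", "replace")
        header = response.headers.get("X-Robots-Tag", "") or ""
        # Naive regex: assumes name appears before content inside the tag.
        match = re.search(r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)', html)
        directives = ((match.group(1) if match else "") + " " + header).lower()
        if "noindex" in directives:
            print("ALERT: " + url + " is noindexed")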
Travis:
Awesome. And then how slow is too slow for page loads?
Steven:
So the interesting thing here, the right answer would be: it's too slow if your target audience becomes really frustrated and leaves your site, and that differs per vertical. So for instance, if you're in travel, everything's got to be lightning fast, but if you are in the B2B space and you're selling, say, I don't know, construction materials or something, and people aren't in such a hurry, you have a lot of leeway compared to the travel vertical. So it really depends on the vertical you're in, the target audience and how forgiving they are, basically, when it comes to page speed. Of course, at the end of the day, you always want to be the best and the quickest. But yeah, this is one of those situations where there's a bunch of stuff to consider. So I hope that context makes sense.
Travis:
Yeah, that's helpful. And then this is kind of tied to the e-commerce product pages, where you mentioned deleting product pages. What's the best practice for e-commerce SEO where a product is sold out or discontinued? How should you handle that page?
Steven:
So what I always recommend in these cases, and that's actually a great question, is to keep the page up. If you're not discontinuing the product, if you don't have any plans to get rid of it, and it's just temporarily sold out, just keep it up on the site. Because once a new shipment comes in and you're basically restocked, you want to start selling right away.
Steven:
Now having said that, say we're talking about a product page for a product that's been discontinued, or a newer version of the product, like a version 2.0, has been launched, you can see if you can redirect that page to the new version, for example. Or if there are alternative products, you can even push those. You can redirect that product page to a blog article where you make a comparison. It's like, hey, this product isn't available anymore, it's been discontinued, but consider these alternatives. So there's multiple approaches to it. I wrote a very detailed article about that. Travis, I'm not sure, but we're probably going to send an email to all of the attendees later on, so let's include it there. There is a variety of approaches to tackling this, to make sure that you make the most of what you've built up.
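In practice this often comes down to maintaining a redirect map for retired products. A minimal sketch of that logic, with hypothetical URLs and rules:

    # Hypothetical redirect rules for discontinued products.
    redirect_map = {
        "/products/widget-v1": "/products/widget-v2",             # newer version exists
        "/products/old-gadget": "/blog/old-gadget-alternatives",  # comparison article
    }

    def handle_discontinued(path):
        # Return (status_code, location) for a discontinued product URL.
        if path in redirect_map:
            return 301, redirect_map[path]  # permanent redirect keeps the equity
        return 410, ""  # gone for good, nothing sensible to redirect to

    print(handle_discontinued("/products/widget-v1"))      # (301, '/products/widget-v2')
    print(handle_discontinued("/products/no-successor"))   # (410, '')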
Travis:
Awesome. Definitely can include that in the show notes that go out tomorrow. And we have a question from Jonas. How does ContentKing display Core Web Vitals scores at the page level? Are they the most recent snapshot or an average of all visits to that page?
Steven:
So there's basically two types of Core Web Vitals scores. There is Lighthouse scores, which is synthetic data. It's called lab data, so everything's done automatically and it's not based on user signals. So you have those, and you can continuously run those. That's what ContentKing does: continuously running those tests and reporting on them. And then there is the RUM data, real user data, that's being pulled in from public sources. And those are updated, I think, once a month, off the top of my head. So yeah, depending on how quickly the data is available, it's being pulled into ContentKing.
Travis:
Awesome, thank you. And then, what's the right way to prune or declutter useless pages? Should you do a 404, a 410, a redirect, or are there any absolute don'ts, maybe?
Steven:
Yeah, so content pruning is interesting. For those that aren't very familiar with it, the idea of content pruning is that you either really prune outdated content, you get rid of it completely, or you choose to update it, or maybe you want to redirect it. So there's a couple of options, but the idea is that by tending to the content, you're making sure that the overall site is performing better. It's just like you're growing watermelons or whatever, and you're pruning a couple of branches that aren't doing too well, so all of the energy is being sent to the watermelons that you're really growing, and you're going to end up with really good watermelons. You apply that to your site: you go through your pages and say, hey, which pages aren't performing that well? And then there's a couple of options. If there's absolutely no future for a page and it's not getting a lot of traffic and there's no way you can really redirect it, it doesn't have any links, I would just get rid of it and return HTTP status code 410.
Steven:
But oftentimes you're seeing that you push out an article and it's just not performing that well, but when you really dive into Google Search Console or a keyword discovery platform, you're finding that it ranks pretty well for certain queries, like top of page two, and that there is potential for you to break through to page one. And that's when I would say: hey, can you improve this page? Can you extend it, update it, add more context and more useful information for your visitors, so you can break into page one and start pulling in more organic traffic? And again, I've written a lot about this topic, Travis. Hopefully we can include that article in the show notes as well.
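That triage between removing, improving, and redirecting can be roughed out in a few lines. A minimal sketch with made-up per-page stats and thresholds, the kind of data you might export from Google Search Console:

    # Hypothetical per-page stats; thresholds are illustrative only.
    pages = [
        {"url": "/blog/old-post", "monthly_clicks": 2, "avg_position": 45.0, "backlinks": 0},
        {"url": "/blog/almost-there", "monthly_clicks": 40, "avg_position": 11.3, "backlinks": 4},
    ]

    for page in pages:
        if page["avg_position"] <= 20 or page["backlinks"] > 0:
            action = "improve or redirect"  # near page one, or carries link equity
        elif page["monthly_clicks"] < 5:
            action = "remove and serve HTTP 410"
        else:
            action = "keep and monitor"
        print(page["url"] + " -> " + action)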
Travis:
Definitely.
Steven:
Cool, great question.
Travis:
Awesome, and then we have a question from Heather. She shared that they just migrated off a legacy WordPress platform and they're discovering hundreds of 404 pages from random links. These were content uploads and CSS files. What's the best practice, should you redirect these pages, or just allow them to 404?
Steven:
Yeah, that's such an interesting question, Heather. It looks like these are assets, like JavaScript and CSS files. Those do not need to be indexed, and if they're deliberately no longer used, then I would just leave those as 404s and look for the bigger fish that you have to fry. Almost every time, there's bigger issues going on on the site, so I wouldn't worry about this. Now having said that, if your current CSS files and JS files are not accessible to search engines, that is an issue, because Google and other search engines need to be able to render the page like we do when we pull up a page in our browser. Search engines are doing the same thing, and after rendering, they're analyzing the pages, looking at where the content sits, et cetera, et cetera. Without those assets, the CSS files and JS files, they can't. So if they're unavailable when they shouldn't be, then it is an issue to fix. Hope that makes sense, the difference between the two cases.
Travis:
Awesome, yeah. Super helpful. And then Bernard sent over another question. What do you do if you're in a Google black hole of low indexation, where either site: searches or Google Search Console indexation numbers are very low compared to how many pages you have? How do you go about diagnosing which pages Google doesn't like, and how do you fix that?
Steven:
Yeah, that was a great question, Bernard. I would compare it to when you start going to the gym. If you do a massive workout, you're going to be hurting the next day. It's going to be the worst. So it's best to always start with some low-intensity workouts. The same goes for your site. Push out a bit of content and then feel out what users and search engines find, and then just start experimenting. And once you see that you're building traction with certain types of content or certain topics, build out more of those. So I would highly encourage everyone to just run experiments, try stuff, not go all in from the get-go and do massive content pieces, because you could end up writing for a week, and what if the article doesn't perform, and Google is like, this is mediocre content, there's a lot of content out there that's better, that provides more value? Then, yeah, it's going to be a bad investment. So I would start doing little things, doing lots of experiments, to see what's working and what's not.
Travis:
Cool, super helpful. And we have two more questions. The next one is, do you have a process for updating plugin versions? Most WordPress sites have 20 to 30, and I know you're not a big fan of auto-updating plugins, but do you have a process, like adding them to a staging environment to do your own testing? Or do you wait for specific analysts to do testing and share their findings? And if so, who do you follow for that?
Steven:
Got you. Yeah, that was a good question. So what I would recommend is having a recurring window where you reserve time to go over plugins and other things that need updating. Something like once a month, you make sure to go check everything, and you first do everything on a staging environment, of course. And then when you see that everything's working, you push to production. Now, there could be plugins that require immediate attention due to vulnerabilities, things like that. So you need to have a process for those updates as well. So you need to build in some slack in your development pipeline, for example, or if you're working with an agency, make sure that they have some time reserved for stuff like that, so when something comes up, they can tackle it. As for go-to people, I don't know the names of the sites, but our team would keep track of sites that report on vulnerabilities within popular web platforms, and they send newsletters. You can just go over those and see, hey, is anything I'm using vulnerable? Yes or no.
Travis:
Awesome. And then another question: do you believe in log file analysis, and what do you specifically pay attention to for Googlebot visits? And do you try to look at the frequency of Googlebot visits to specific pages, and draw correlations?
Steven:
Yeah, I totally believe in log file analysis. The way I see it, there's two types of log file analyses. One is a very thorough analysis that you do maybe once or twice a year, where you analyze overall crawl behavior, and you look for fundamental issues within your site, and you tackle those. It could be that it's too hard for search engines to discover content because it's just too far down in your site architecture, for example. Or it could be that your pagination isn't working very well, or that you have crawler traps, stuff like that, the larger issues that you want to tackle. And then there is more real-time log file analysis. What I mean by that is, nowadays with CDNs, content delivery networks that a lot of sites use, they have this option where you can basically plug and play: get the log files from the CDNs and pull them into a system of your choice.
Steven:
For example, ContentKing supports this, and that way you can show in real time to other people than SEOs how search engines are crawling your site. So for instance, for content teams, this is super useful. They're pumping out content, but rarely do they know when they get crawled, when they get indexed, when they start to rank, et cetera, et cetera, and this way they get a much bigger, better picture of the crawling, indexing and ranking process, and it's very motivating for them to see as well. And they can start to put two and two together as well, because for instance, if a certain content piece really takes off on social media, once they start sharing it, it gets a lot of engagement, a lot of social traffic, you usually see a one-to-one correlation with it being indexed super quickly, and it's just ranking super high as well. And sharing those insights with people beyond SEOs is super useful, because the more knowledge within the organization, the better. And you can create some really cool competitions around this as well.
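For the ad hoc kind of analysis, even a short script goes a long way. A minimal sketch that counts Googlebot hits per URL in a combined-format access log; the log path is a placeholder, and verifying real Googlebot traffic would need a reverse DNS check that's skipped here:

    import re
    from collections import Counter

    hits = Counter()
    # Pull the request path out of a combined-format log line.
    request_re = re.compile(r'"(?:GET|POST) (\S+)')

    with open("access.log") as f:  # placeholder log file
        for line in f:
            # Naive match; the user agent string can be spoofed,
            # so real verification needs a reverse DNS lookup.
            if "Googlebot" in line:
                match = request_re.search(line)
                if match:
                    hits[match.group(1)] += 1

    for path, count in hits.most_common(20):
        print(str(count).rjust(6) + "  " + path)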
Travis:
Awesome, yeah. And we have two more questions that just came in. One's from Nicholas. Is it worthwhile to use A/B testing for the situation of low-ranking content, to see what will work?
Steven:
I guess it depends on the scale you're operating at. So for example, if you're a site like Booking.com and you're dealing with issues like this, you've got a ton of traffic that you can leverage to run these tests, so yeah, definitely. But if you run a small site, maybe you only have a thousand or a couple thousand visitors a month, it's really hard to do these tests. But if you have the traffic and there's a lot at stake, I would definitely recommend looking into testing and doing things like A/B tests to build a business case and to get resources, to roll it out across the whole site or maybe even more sites.
Travis:
Cool, that makes sense. Last question. Heather asked, what is your feeling on Wikipedia pages? Are they still relevant for SEO?
Steven:
Yeah, they are definitely relevant, because the way I look at it, Heather, is: is a Wikipedia page relevant to users? I think in a lot of cases they are, so that's why they are important to search engines as well. And despite Google saying that they don't use nofollow links within their ranking systems, et cetera, et cetera, having a Wikipedia page with links really helps you in increasing your authority. And not sure if you heard about this, Google's Knowledge Graph, where they really try to piece together all of the entities on the internet. So for example, if we're talking about ContentKing, ContentKing doesn't have a Wikipedia page, but it does have social media profiles. We have press releases. We have a lot of content, people are linking to us. So Google's using all of that information to kind of get an image of what ContentKing is and what we've been up to, what vertical we're in, how cool we are, how authoritative we are, et cetera, et cetera. So say we had that Wikipedia page, it would just provide more context to search engines. So long story short: yes.
Travis:
Cool, and she's got a follow-up question. Is there a difference between that and a knowledge base?
Steven:
You could see the Knowledge Graph as a knowledge base. Just basically look at it as a big database with a ton of information in it. So everything Google knows about entities, an entity being a person or a company or a content page, whatever, it's all stored in that database. So I think we're talking about the same thing. The Knowledge Graph is really just a big knowledge base, or a database.
Travis:
Awesome. Well, thanks so much for spending so much time with us Steven, and answering all of our questions.
Steven:
Sure thing, those were good questions.
Travis:
And before we jump off, make sure you hop on Twitter and let Steven know how much you appreciate his time in today's webinar. We'll also send out the recording and all the links that were discussed in today's webinar from Steven. And then Steven, do you have anything you want to share before we jump off?
Steven:
Actually, I do. I would love to help you on your journey to fewer SEO issues. So let me pull up the last slide. Oh, there we go. So you can get a free 45-day trial of ContentKing. If you want to know what ContentKing is all about and you want to experience it yourself, head over to contentkingapp.com, set up a trial account and message our support team that you listened in on the webinar, and they will upgrade your trial to a 45-day trial. So you can play around with it and see for yourself. See if it helps you catch SEO issues and changes, or maybe your colleagues or clients going rogue.