SEO Basics Guide 2017 – SEO Crawling
In part 4 of this SEO Basics Guide 2017, we will be looking at SEO crawling. We will also focus on dealing with crawlers and other concepts closely related to SEO crawling.
SEO crawling refers to the process by which Google explores your website. Every time you add a new page or update an existing one, Google crawls your website and takes the changes into account. This is done by bots – often called Google Spider Bots – that regularly scan each and every website on the internet and update Google's records. Submitting an up-to-date sitemap to Google Webmaster Tools helps ensure your website is crawled. Data such as the number of inbound and outbound links, dead links, and any other changes to the website is collected.
Restrict crawling where it’s not needed.
A robots.txt file tells search engines whether they can access, and therefore crawl, parts of your site.
You may not want every webpage crawled, as some pages might not be useful to users if found on search engines. You can use Google Webmaster Tools to generate a robots.txt file, which will tell Google which parts of your website to crawl.
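As a minimal sketch, a robots.txt file lives at the root of your domain (e.g. example.com/robots.txt) and might look like the following; the /admin/ path and sitemap URL are hypothetical examples:

```txt
# Apply these rules to all crawlers
User-agent: *

# Block crawling of the (hypothetical) admin area
Disallow: /admin/

# Point crawlers at the sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Note that Disallow blocks crawling, not indexing: a blocked page can still appear in results if other sites link to it, which is why the methods below exist.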
There are a handful of other ways to prevent content from appearing in search results, such as adding "NOINDEX" to your robots meta tag, using .htaccess to password-protect directories, and using Google Webmaster Tools to remove content that has already been crawled.
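For example, the "NOINDEX" option mentioned above is a standard robots meta tag placed in the page's <head>; a minimal sketch:

```html
<!-- In the <head> of a page you do not want indexed -->
<meta name="robots" content="noindex">
```

Unlike a robots.txt Disallow rule, this tag lets Google crawl the page but tells it not to show the page in search results.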
For more information on robots.txt, we suggest this Webmaster Help Center guide on using robots.txt files.
Combat comment spam with "nofollow"
Setting the value of the “rel” attribute of a link to “nofollow” will tell Google that certain links on your site shouldn’t be followed.
On sites that have a blog with public commenting turned on, links posted within the comments may point to pages that you are not comfortable being associated with. Additionally, comment sections are highly susceptible to spam. Adding "nofollow" to these links ensures that you're not passing your hard-earned reputation to a spammy site.
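In markup, this is just the rel attribute on the anchor tag; a minimal sketch (the URL is a placeholder):

```html
<!-- A user-submitted link in a comment, marked so it passes no reputation -->
<a href="http://www.example.com/" rel="nofollow">Visit my site</a>
```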
Automatically add "nofollow" to comments.
The best way to avoid comment spam is to automatically add the rel="nofollow" attribute to links in your comments section. You can also do this on other parts of your website that accept user-generated input, such as guestbooks, forums, shout-outs, etc. Linking to sites that Google considers spammy can harm the reputation of your website.
Using “nofollow” for individual content, whole pages, etc.
Another use of nofollow is when you’re writing content and wish to reference a website, but don’t want to pass your reputation on to it. For example, if you want to warn your reader base of spammy sites or any other untrustworthy websites, this would be the ideal time to use nofollow.
You can also use "nofollow" in a robots meta tag, which is placed inside the <head> tags. This will cause all links on the page to be nofollowed. The Webmaster Central Blog provides a helpful post on using the robots meta tag.
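As a minimal sketch, the page-wide version looks like this:

```html
<!-- In the page's <head>: treat every link on this page as nofollow -->
<meta name="robots" content="nofollow">
```

This is a blunter tool than per-link nofollow, so it is best reserved for pages made up almost entirely of untrusted links.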
Adding "nofollow" attributes to certain sections, or even certain pages, of your website can meaningfully protect your SEO. There is no point in producing high-quality content and having it undermined by spammy comments or links to spammy websites. Google will not look favorably on your website if the sites you link to are low quality. Always be mindful of the links coming in to and going out from your website.
Join me next time in SEO Basics 2017 Guide – Part 5!