CRAWLABILITY & INDEXING
When you enter a search query, search engines use automated programs known as “bots” to crawl the web, scanning websites for information relevant to the keywords or phrase searched for. If your website is not easy for these bots to find and read, it will not appear in the search listings.
It is essential that these search engine “bots” can quickly and easily find and read your website. If they cannot, you could be missing out on a wide market of potential clients.
Use of XML Sitemaps
Website Redirects
Google Webmaster Tools
Broken Links
Some of the more common factors that can prevent this include the lack of a recognised sitemap, a robots.txt file blocking important content, redirect chains, and broken or incorrect canonical links built into your website.
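As a brief illustration of the robots.txt point, the sketch below shows a safe configuration that blocks only a private area and points crawlers at the sitemap; the domain and the /admin/ path are placeholders, not taken from any real site:

```text
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```

By contrast, a single blanket rule of “Disallow: /” under “User-agent: *” would block every crawler from the entire site, which is one of the most common causes of pages vanishing from the index.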
Also, as Google does not like websites with duplicate content, another issue arises if your domain works on both the www. and non-www. versions. If your website opens when using either of these URLs, the chances are that Google will treat them as two separate websites, and so as duplicated content. This can often be rectified through your hosting company’s set-up, by adding specific rules to an .htaccess file, or via Google Webmaster Tools.
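On Apache hosting, one common way to fix the www/non-www issue is a permanent (301) redirect in the .htaccess file. The snippet below is a minimal sketch assuming mod_rewrite is enabled and using example.com as a placeholder domain; your hosting set-up may differ:

```apache
RewriteEngine On
# Catch requests for the bare (non-www) domain...
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
# ...and permanently redirect them to the www. version
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

Because the redirect is permanent (301), search engines consolidate the two versions into one, removing the duplicate-content problem.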
There are a number of free Google tools that will improve your crawlability, and correct use of a sitemap will improve your indexing.
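For reference, a minimal sitemap following the sitemaps.org XML protocol looks like the sketch below; the URLs and date are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/services</loc>
  </url>
</urlset>
```

Once the file is uploaded to the site root, submitting it through Google Webmaster Tools tells Google exactly which pages to crawl and index.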