Thursday 26 June 2014

Welcome to our blog

I hope to offer more information on making one's content more findable on the Web.

Finding information by crawling

We use software known as “web crawlers” to discover publicly available webpages. The most well‐known crawler is called “Googlebot.” Crawlers look at webpages and follow links on those pages, much like you would if you were browsing content on the web. They go from link to link and bring data about those webpages back to Google’s servers.
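To make the link-following step concrete, here is a minimal sketch (not Googlebot's actual code) of how a crawler might extract the links on a page, using only Python's standard library. The page content and URLs are made-up examples.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag it sees on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

# A made-up page with one relative and one absolute link.
page = '<p>See <a href="/about">about</a> and <a href="https://example.org/">example.org</a>.</p>'
collector = LinkCollector("https://example.com/")
collector.feed(page)
print(collector.links)
# ['https://example.com/about', 'https://example.org/']
```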

The crawl process begins with a list of web addresses from past crawls and sitemaps provided by website owners. As our crawlers visit these websites, they look for links to other pages to visit. The software pays special attention to new sites, changes to existing sites and dead links.
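A hedged sketch of that overall loop, under simplifying assumptions: a frontier seeded with known addresses, a visited set, and newly discovered links queued for later visits. The seed URL is hypothetical, the link extraction is deliberately crude, and a real crawler would also handle robots.txt, politeness delays, and proper sitemap parsing.

```python
import re
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen

HREF = re.compile(r'href="([^"]+)"')  # crude link extraction, for brevity

def crawl(seed_urls, max_pages=10):
    frontier = deque(seed_urls)   # addresses waiting to be visited
    visited = set()               # addresses already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue              # dead link: skip it and move on
        visited.add(url)
        for href in HREF.findall(html):
            absolute = urljoin(url, href)   # resolve relative links
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)
    return visited

if __name__ == "__main__":
    # Hypothetical seed list; a real crawl would combine addresses
    # from past crawls with entries from site owners' sitemaps.
    print(crawl(["https://example.com/"]))
```

The frontier-plus-visited-set structure is the essential design: it lets the crawler go from link to link without fetching the same page twice, and the cap on pages stands in for the scheduling a real crawler uses.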
