Search engines such as Google discover information about your site by employing software known as “spiders” to crawl the web. Once the spiders find a site, they follow links within the site to gather information about all the pages. The spiders periodically revisit sites to find new or changed content.
Google Sitemaps is an experiment in web crawling. By using Sitemaps to inform and direct our crawlers, we hope to expand our coverage of the web and speed up the discovery and addition of pages to our index.
If your site has dynamic content or pages that aren’t easily discovered by following links, you can use a Sitemap file to provide information about the pages on your site. This helps the spiders know which URLs are available on your site and roughly how often they change.
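As a sketch of what such a file might look like (the URLs are placeholders, and the namespace shown is the published Sitemap protocol namespace, which may differ from the schema version your tools expect):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-06-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example.com/about.html</loc>
    <changefreq>monthly</changefreq>
  </url>
</urlset>
```

Only `<loc>` is required for each URL; `<lastmod>`, `<changefreq>`, and `<priority>` are optional hints that the crawler may take into account.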
A Sitemap provides an additional view into your site (just as your home page and HTML site map do). This program does not replace our normal methods of crawling the web. Google still searches and indexes your site the same way it always has, whether or not you use this program. A Sitemap simply gives Google additional information that we may not otherwise discover. Sites are never penalized for using this service. Because this is a beta program, we cannot make any predictions or guarantees about when or whether your URLs will be crawled or added to our index. Over time, we expect both coverage and time-to-index to improve as we refine our processes and better understand webmasters’ needs.
You can submit an updated Sitemap whenever your URLs change, but you don’t have to: the spiders periodically revisit your site to look for new pages, and they use the change-frequency information you provide in your Sitemap as one of the factors in deciding how often to return.
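If your URL list changes often, it is common to regenerate the Sitemap file automatically rather than edit it by hand. A minimal sketch in Python, assuming you keep a list of (URL, change frequency) pairs somewhere; the `build_sitemap` helper and the namespace constant are illustrative names, not part of any official tool:

```python
from datetime import date
from xml.sax.saxutils import escape

# Published Sitemap protocol namespace (assumed here; verify against
# the schema version your crawler documentation specifies).
SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(entries):
    """Build a Sitemap XML string from (url, changefreq) pairs.

    Each entry gets a <loc>, today's date as <lastmod>, and the
    given <changefreq> hint.
    """
    today = date.today().isoformat()
    lines = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        f'<urlset xmlns="{SITEMAP_NS}">',
    ]
    for url, changefreq in entries:
        lines.append("  <url>")
        lines.append(f"    <loc>{escape(url)}</loc>")   # escape &, <, > in URLs
        lines.append(f"    <lastmod>{today}</lastmod>")
        lines.append(f"    <changefreq>{changefreq}</changefreq>")
        lines.append("  </url>")
    lines.append("</urlset>")
    return "\n".join(lines)

# Example: regenerate the file from the current URL list.
sitemap = build_sitemap([
    ("http://www.example.com/", "daily"),
    ("http://www.example.com/about.html", "monthly"),
])
print(sitemap)
```

A script like this can run whenever your content changes, writing the result to a file at your site’s root so the crawler picks up the fresh list on its next visit.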