Web pages can be optimized for search engines in several ways:
Meta Tags: Tags store search engine descriptors (i.e., keywords and a
description) for the site and/or an individual page. These tags help match
the text entered into a search engine (such as Google) against the keywords
you've associated with your Web site or page. Additionally, a robots meta
tag lets you include or exclude the site or individual pages from being
indexed, and can prevent hyperlinks to other pages from being explored
(crawled by "spiders"). Example meta tags are shown after this list.
Robots: Pages (or folders) can be excluded from search-engine indexing by
using a robots file. This works in the same way as the robots meta tag but
uses a text file (robots.txt) to instruct robots or spiders what not to
index. The file simply lists the excluded Web site page/folder references;
see the example after this list.
Sitemaps: The opposite of the "robots" concept; pages can be listed
explicitly to aid and optimize intelligent crawling/indexing. Web site page
references are stored in a dedicated sitemap file (sitemap.xml); an example
appears after this list.
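
For illustration only: the meta tags described above are standard HTML tags
placed in the <head> of a page. The keyword and description text below is
invented, and real values would come from your own site or page settings:

    <meta name="keywords" content="handmade pottery, ceramics, studio">
    <meta name="description" content="A small studio selling handmade pottery.">
    <meta name="robots" content="noindex, nofollow">

In the robots meta tag, "noindex" excludes the page from indexing and
"nofollow" prevents its hyperlinks from being crawled.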
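
As a sketch of the robots.txt format (the folder and page paths are
illustrative, not taken from any particular site), a file that excludes one
folder and one page might read:

    User-agent: *
    Disallow: /private/
    Disallow: /drafts/old-page.html

The "User-agent: *" line applies the exclusions to all robots/spiders; each
"Disallow" line lists one page or folder reference to leave out of indexing.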
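
A sitemap.xml file lists page references in a standard XML structure. A
minimal, purely illustrative example (using a placeholder domain) might look
like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/index.html</loc>
      </url>
      <url>
        <loc>http://www.example.com/products.html</loc>
      </url>
    </urlset>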
Whether you are using meta tags, robots, or sitemaps independently or in
combination, WebPlus makes configuration simple. As these settings can be
established or modified for the whole site (Site Properties; Search Engine
tab), any newly created page will adopt the site's search engine settings. If
you change the site settings, all Web pages will update to the new settings
automatically. However, you can override the site's settings on a specific
Web page (Page Properties; Search Engine tab) at any time. Once a page has
been overridden, subsequent changes to the site settings are ignored for
that page.