4/12/2008

Google Fills Out Forms & Crawls Results

One of the biggest search challenges has long been that major search engines like Google cannot crawl material that can only be retrieved by filling out forms. Now Google is filling out those forms to obtain the previously hidden information, the company has announced.

Google says that for the past few months, it has been filling in forms on a "small number" of "high-quality" web sites to get back information. What words has it been entering into those forms? Words automatically selected from the site itself, with check boxes and drop-down menu options also being selected:

In the past few months we have been exploring some HTML forms to try to discover new web pages and URLs that we otherwise couldn't find and index for users who search on Google. Specifically, when we encounter a <FORM> element on a high-quality site, we might choose to do a small number of queries using the form. For text boxes, our computers automatically choose words from the site that has the form; for select menus, check boxes, and radio buttons on the form, we choose from among the values of the HTML.

Results returned are then crawled. Ironically, it was just over a year ago that Google warned against getting search results like these indexed. Now it's actually generating and crawling those results itself.
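To make the mechanics concrete, here is a rough sketch of the idea in Python. Everything in it (the form action, the field names, and the candidate values) is invented for illustration; it is not Google's actual code, just the general pattern of combining a form's fields into crawlable GET URLs.

from urllib.parse import urlencode

# Rough sketch only: the action URL, field names, and candidate values below
# are all hypothetical, chosen to illustrate the pattern described above.
form_action = "http://example.com/search"

# For text boxes, words are picked from pages on the same site; for select
# menus, check boxes, and radio buttons, the values come straight from the
# form's own HTML.
text_box_words = ["widgets", "blue widgets"]
category_options = ["books", "music"]

crawl_urls = []
for word in text_box_words:
    for category in category_options:
        query = urlencode({"q": word, "category": category})
        crawl_urls.append(form_action + "?" + query)

# Each generated URL can then be fetched and, if useful, indexed like any
# other page.
for url in crawl_urls:
    print(url)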

Don't want Google doing this to your site? Google says that if a form is blocked through robots.txt or meta robots instructions, it won't be accessed. In addition, other forms won't be touched if they fit certain technical criteria:

We only retrieve GET forms and avoid forms that require any kind of user information. For example, we omit any forms that have a password input or that use terms commonly associated with personal information such as logins, userids, contacts, etc.
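For site owners who want to block this explicitly, the standard robots.txt approach applies. A minimal sketch, assuming the form in question submits to a hypothetical /search path on your site:

User-agent: Googlebot
Disallow: /search

Blocking the path the form submits to keeps the generated result URLs out of Google's crawl; a meta robots noindex instruction on the result pages works as well.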

The move is potentially good for searchers, in that it will open up material often referred to as part of the "deep web" or "invisible web" because it is hidden behind forms. Search Engine Land executive editor Chris Sherman actually co-authored a book on the topic. He and fellow author Gary Price didn't coin the term invisible web, but they certainly helped popularize it.

It should be noted that Google's not the first to do something like this. Companies like Quigo, BrightPlanet and WhizBang Labs were doing this type of work years ago. But it never translated over to the major search engines. Now chapter two of surfacing deep web material is opening, and this time a major search player -- Google -- is being a pioneer.

by Danny Sullivan

1 comment:

Ashi said...

Search engine optimization is a time-consuming and tricky business. It requires a lot of effort and hard work to rank at the top. But the key phrases that rank well on one search engine may totally fail, or be less effective, on other search engines. All the major search engines differ from each other in some form or another. It is for this reason that some people create web pages for one particular search engine while the rest of the pages are created for other search engines. Usually there is only a slight difference between these pages, so when indexing takes place the search engine crawlers might find that slight difference and mark them as spam. To overcome these difficulties a robots.txt file is created, which is a simple text file that is uploaded to the root folder of your site.

Write the following
User-Agent: (Spider Name)
Disallow: (File Name)

To disallow all engines from indexing a file, you simply use the * character where the engine's name would usually be. However, be aware that the * character won't work on the Disallow line. Palcomonline.com
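For example, with a hypothetical file name, the wildcard form described above would look like this:

User-Agent: *
Disallow: /private-file.html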
