May 31, 2005
Googlebots - Waiting for the next Deep Scan
So, I'm trying to generate more hits for my miserable little website. It doesn't get a lot of hits, not because it's a disconnected series of discrete, rambling, inchoate thoughts, but because of some problem inherent in the Google search engine.
The Google spiders (googlebots) crawl through the web's links, cataloging, ranking, and indexing the pages they find. You need as many inbound links (links pointing to your web page) from high-ranking sites as possible. As a webmaster, you also have to direct the activity of these software robots, steering the spiders to your site's sitemap.
There are a few different types of spiders. Some crawl just the main index, and others perform a "deep crawl" where they crawl through every page and link on the web site. If you have a page that you don't want the robots (spiders) to crawl, you can specify this in the robots.txt file.
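For example, a bare-bones robots.txt might look something like this (the /private/ and /cgi-bin/ paths are just placeholders for whatever you don't want crawled):

    # These rules apply to every crawler
    User-agent: *
    # Keep the spiders out of these paths
    Disallow: /private/
    Disallow: /cgi-bin/

The file sits at the root of your site (e.g. www.example.com/robots.txt), and a Disallow line with nothing after it means the whole site is fair game.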
New web pages get stuck in the sandbox, sort of like a penalty box for new sites. There are various theories as to whether this is due to too many one-way links, too many reciprocal links, or some combination thereof. So that's kinda what I'm doing: slowly attempting to crawl out of the sandbox.
One of the best ways to get a lot of one-way links is reportedly to submit your site at DMOZ. So, I submitted my site here, and to Google.
1) In your HTML, set <meta name="robots" content="index,follow"> (see the sketch after this list).
2) If you don't have a site map, then build one (a minimal example also follows this list).
3) Make sure your robots.txt does not ban any further crawling.
4) Memorize the information at www.google.com/webmasters
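For step 1, the robots tag goes in the head of each page. A minimal sketch (the title is just filler):

    <head>
      <title>My miserable little website</title>
      <!-- Tell the robots to index this page and follow its links -->
      <meta name="robots" content="index,follow">
    </head>

For step 2, a sitemap can be as simple as an XML file listing your URLs. A sketch using the standard sitemap schema, with www.example.com standing in for your own domain and pages (check www.google.com/webmasters for the exact format Google wants):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- One <url> entry per page you want the deep crawl to find -->
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2005-05-31</lastmod>
      </url>
      <url>
        <loc>http://www.example.com/archives/googlebots.html</loc>
      </url>
    </urlset>

Drop the sitemap file at the root of your site; www.google.com/webmasters has the details on telling Google where it lives.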
You can create a spider map for your web site here.
http://www.webmasterworld.com/forum3/23566.htm
Posted by Peenie Wallie on May 31, 2005 at 2:19 PM