SEO Tips and Tactics – Deep Crawling
Last time, we talked about getting people to link to your site and add it to their favorites. This time, we're going to discuss something called “deep crawling.” What's deep crawling? It's when a search engine comes to your site and begins reading through all of its pages. The spider follows each link it encounters and eventually finds every page on the site. All of that content is then cataloged, so when someone searches for something that matches it, your site is listed in the search results. Now, that's the perfect scenario. A more realistic scenario looks like this…
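To make that picture concrete, here is a rough sketch in Python of what a deep crawl looks like from the spider's side: start at the home page, pull the links out of each page, and keep following them until every reachable page on the site has been visited. This is only an illustration using the standard library, not how any real search engine works, and the starting URL is a placeholder.

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class LinkParser(HTMLParser):
        """Collects the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def deep_crawl(start_url):
        """Visit every page reachable from start_url on the same site."""
        site = urlparse(start_url).netloc
        to_visit = [start_url]
        seen = set()
        while to_visit:
            url = to_visit.pop()
            if url in seen:
                continue
            seen.add(url)
            try:
                with urllib.request.urlopen(url) as response:
                    html = response.read().decode("utf-8", errors="replace")
            except OSError:
                continue  # broken link or failing script: the spider moves on
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                if urlparse(absolute).netloc == site:  # stay on this site
                    to_visit.append(absolute)
        return seen

    # Placeholder address -- substitute your own site's home page.
    pages = deep_crawl("http://www.example.com/")
    print(len(pages), "pages found")

Notice the except clause: when the spider hits a dead link or a script that errors out, it simply skips that page and everything linked only from it, which is exactly the failure mode described next.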
A search engine comes to your site, reads through a couple of pages, gets disgusted for one reason or another, and leaves. You can have a site up for years and never have some of your pages spidered by the search engines. Why do they read some pages and not others? It can be caused by many different factors. Sometimes a search engine dislikes your bloated code, so it leaves. Sometimes it encounters a broken link or a malfunctioning script. So it leaves.

Another very common issue is the use of too many directories or folders. If you save a file in a folder, enclosed in another folder, enclosed in another folder, the search engines will likely ignore that page. It's simply buried too deep. It doesn't seem too tough to open a virtual folder, but Google doesn't like doing it. I've seen unranked pages go from not being found at all to the top of the results page within a couple of days, just by moving them out of several nested folders and saving them at the root directory. This easy access pleases Google. Despite what you may read, search engines don't like having to crawl deep. Don't make them struggle to find your page. Keep it simple and they'll reward you.
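If you want to check how deeply your own pages are buried, counting the folders in each URL is enough. Here's a quick sketch in Python; the sample URLs and the threshold of two folders are assumptions chosen for illustration, not a figure Google publishes.

    from urllib.parse import urlparse

    def folder_depth(url):
        """Count how many folders sit between the root and the file."""
        path = urlparse(url).path
        parts = [p for p in path.split("/") if p]
        if parts and "." in parts[-1]:
            parts = parts[:-1]  # the last piece is a file, not a folder
        return len(parts)

    # Example URLs -- replace these with pages from your own site.
    urls = [
        "http://www.example.com/index.html",
        "http://www.example.com/products/widgets/blue/specs.html",
    ]
    for url in urls:
        depth = folder_depth(url)
        flag = "  <-- buried too deep?" if depth > 2 else ""
        print(depth, url, flag)

The second example page sits three folders down, which is exactly the kind of address worth moving up to the root directory.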
Chadd Bryant
Internet Building Codes