The ways in which people search for information on the Web have changed.
Web content (text, images and audio) is being created and uploaded faster than search engines can index it. As a consequence, it is becoming more and more difficult to be “found” in an ever-larger mass of unindexed and unconnected pages.
Moreover, people become discouraged when a search engine returns thousands of results for the keywords they type. As a result, it is increasingly vital that a site’s listing appear within the first few pages of search results, because most people will not take the time to scroll further than that.
This two-part article outlines 10 strategies that can help Web sites not only get indexed, but also position themselves for success. None of these points is more valuable than the others, but in combination they can produce results. Focusing on improving your site by addressing just one or two points might not lead to measurable gains, but trying to address four or five points in concert should help you improve your search engine ranking.
1. Proactively Submit the Site URL to a Search Engine
The business model of many search engines is similar to the advertising-based model that periodicals use: Assemble great content to attract the viewer, then sell that demographic to advertisers. Search engines obviously should allow content producers to submit URLs, because doing so strengthens the value of the search engine’s collection of indexed pages, which in turn attracts users, giving the search engine a stronger demographic to sell to advertisers.
For a small company, proactively submitting the URL of several of its Web pages is a relatively easy process that begins with simply clicking on a “submit URL” link. Such links usually can be found in one form or another on the sites of the more popular search engines. However, submitting individual URLs to multiple search engines can be time consuming, so, in a Darwinian way, third-party sites have evolved that let users enter information about a particular Web page and click a button to submit that URL to several search engines simultaneously. Some of these sites provide multiple submission services for a fee, while others do it for free and rely on advertising to reap revenue.
On a cautionary note, be suspicious of sites that claim to mass-submit your URL to many dozens or hundreds of search engines. There are fewer than 10 major search engines you should care about being listed in; the rest are highly specific to particular industry sectors and probably are not relevant to your business.
2. Consider Paid Submission
While cost-conscious firms may shy away from paid submission services, there are valid reasons to make such an investment. Dennis Buchheim, director of paid inclusion at Inktomi’s head office in Sunnyvale, California, explains that paid inclusion not only helps provide consumers with more relevant results, but also helps businesses ensure their presence in algorithmic search results. For example, the service lets a client receive reports showing which keywords searchers entered before clicking on the client’s site in a list of search results. This information is valuable because it allows site owners to make adjustments and refinements to improve their ranking. Buchheim says that by analyzing key reporting metrics and measuring the quality of their metadata, businesses can optimize ROI and increase conversion rates.
Having your site indexed and catalogued by a spider shortly after changes are uploaded can also give you a competitive advantage. Paid inclusion allows a company to have its pages indexed and catalogued quickly, rather than waiting days or weeks for an updated page to be found by a Web crawler. Some variations of paid inclusion services guarantee “recrawling” of particular pages so that changes will be noted and subsequently available in the search engine’s database.
3. Tailor Content for Spiders
Search engine indexing software programs, nicknamed “spiders” to reflect how they crawl through the Web’s pages, record several aspects of a page, including its text. In the record that is created and indexed, spiders identify the frequency of particular words on a page, and this feeds a complicated algorithm that calculates the page’s value and, ultimately, its rank. A spider might work like this: If one page contained the word “cancer” 4 times and another contained it 12 times, and both had “cancer” in their metatags and page titles, the second page would rank higher. Spidering algorithms also calculate and rank how words are connected to other words, such as “car parts,” “car finance” or “car warranty” on the Web page of an auto dealership.
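As a minimal sketch, the markup below shows the kinds of signals a spider would tally on such a page. The page and its wording are invented for illustration:

    <html>
    <head>
      <!-- A spider counts "cancer" here in the title... -->
      <title>Cancer Treatment Options</title>
    </head>
    <body>
      <!-- ...and again in the body text, along with connected phrases
           such as "cancer research" and "cancer support". -->
      <h1>Understanding Cancer Treatment</h1>
      <p>Cancer treatment varies with the type of cancer diagnosed.
      Our cancer research library and cancer support groups can help.</p>
    </body>
    </html>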
Because word count traditionally has been part of the indexing algorithm, webmasters often have tried to deceive spiders by adding additional words to their pages to artificially inflate the page rank calculation. Some conniving Web authors even have been known to add dozens or hundreds of keywords at the bottom of a page in white font on a white background. A surfer looking at such a page would see only blank space, while spiders are color blind and would record all of these words as part of the word count. Although some search engines have tried to create algorithms that cannot be tricked in this way, the method still can be somewhat effective and is far simpler than spending management time developing reciprocal links. (A warning, though: Such behavior can get a site banned entirely from a search engine if its underhanded tactics are discovered.)
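For the record, the trick looks like the fragment below. It is shown only so the tactic can be recognized, not recommended:

    <!-- Hidden keyword stuffing: white text on a white background.
         A visitor sees blank space; a spider counts every word. -->
    <body bgcolor="#FFFFFF">
      <font color="#FFFFFF">cancer cancer cure cancer treatment cancer clinic</font>
    </body>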
Of course, content that spiders evaluate for ranking is not just limited to text, but also includes HTML code referencing image files and audio files. This means that naming your image files woodbridge.jpg and stonebridge.jpg is much more useful than naming them image1.jpg and image2.jpg. It is also helpful to make sure ALT text tags for images include some words that can assist in supporting the overall theme of the page.
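Using the file names above, the difference looks like this (the ALT wording is invented for illustration):

    <!-- Descriptive file names and ALT text give the spider indexable words. -->
    <img src="woodbridge.jpg" alt="Wooden footbridge over the creek">
    <img src="stonebridge.jpg" alt="Historic stone bridge at the park entrance">

    <!-- By contrast, these names tell the spider nothing about the page. -->
    <img src="image1.jpg">
    <img src="image2.jpg">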
4. Remember: Page Title Is Vital
A page’s title is often confused with its name. To clear things up, the page name is equivalent to the file name — i.e., abcdefg.htm — whereas the page title is the word or words that show up in a browser’s title bar. The page title should be crafted carefully. AltaVista suggests it is what search engine users see first when they scan a list of query results, and Inktomi’s Buchheim notes that it is not enough to rank high in a search engine. “You also have to … be enticing enough to click.” Indeed, enticement to click is based on both an attractively worded title and the accompanying description, which comes from the META description tags that we will discuss in point 5.
Also, consider that many medium- and large-size corporate pages are flush with images, Flash, frames and other features that are not easily recognized by spiders. AltaVista suggests page title is even more important when a particular page (such as one with frames) has little text content.
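As a sketch, a frames-based page can carry almost no indexable text, leaving the title to do the work. The company name and wording here are hypothetical:

    <html>
    <head>
      <!-- On a frameset page, the title may be nearly all a spider can read. -->
      <title>Smith Auto: Discount Car Parts, Car Finance and Car Warranties</title>
    </head>
    <frameset cols="20%,80%">
      <frame src="menu.htm">
      <frame src="main.htm">
    </frameset>
    </html>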
5. Mind Your Metatags
There are several kinds of metatags, but from a managerial perspective, only two are critical: Meta Description tags and Meta Keyword tags.
Meta Description tags are the carefully crafted phrases and short sentences that can appear under the page title in a listing of search results. Because the attractiveness of these words can determine whether or not a searcher decides to click on a company’s link, it is important to craft this text to be compelling. Some webmasters creating pages for highly competitive consumer product companies hire consultants to write Meta Descriptions in hopes that viewers will be enticed to visit the company Web site.
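A hypothetical example of such a tag, placed in the HTML header of the page:

    <!-- The description that can appear under the page title in search results. -->
    <meta name="description" content="Smith Auto stocks discount car parts,
      with financing and warranties available on every order.">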
Meta Keyword tags contain the key words and phrases that webmasters place in the HTML header at the top of the Web page. In the late 1990s, spiders often used these tags to pick up “clues” about page content, perhaps akin to reading song titles on an album cover. However, because so many Web page authors misrepresented their site content by including misleading keywords, Meta Keyword tags now play a far smaller role in determining page value and ranking. Inktomi’s Buchheim says his company’s search engine pays little attention to Meta Keyword tags, which are considered supplementary to other factors, such as title and number of links. Likewise, www.webrankinfo.com advises that Google no longer relies on Meta Keyword tags either.
In addition to Google, other search engines such as AOL Search also do not use software that responds to Meta Keyword tags; rather, employees visit submitted URLs and determine whether or not they should be included. While using people in the screening process is more expensive than relying exclusively on spidering algorithms, the resulting index has greater value and consequently attracts a discriminating demographic that can be sold to deep-pocketed advertisers.
However, since it seems simple to “pack in” a lot of words in the Meta Keyword section of a page, many Web authors still rely on this technique even though its Golden Age was in 1999, 2000 and 2001. “Many people incorrectly believe that good Meta Keyword tags are all that is needed to achieve good listings in the search engines,” cautions www.submit-it.com. The tags still contribute somewhat to site ranking, but by themselves they are not of significant value, considering how the search engines of 2003 operate.
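For completeness, the tag itself looks like the fragment below (keywords invented for illustration). As noted above, it should supplement, not replace, the other techniques in this article:

    <!-- Meta Keyword tags: still read by some engines, but low weight on their own. -->
    <meta name="keywords" content="car parts, car finance, car warranty, discount auto parts">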
Go on to Part 2.
Prof. W. Tim G. Richardson is a full-time Professor at Seneca College, and concurrently teaches part-time at Centennial College. He is also a Lecturer in the Division of Management at the University of Toronto in Canada. He can be reached through his Web site, www.witiger.com.