
Promotion of the site step by step


Table of contents:
Introduction
1. General information about search engines
1.1 History of the development of search engines
1.2 General principles of search engines
2. Internal ranking factors
2.1 Textual design of web pages
2.1.1 The amount of text per page
2.1.2 Number of keywords per page
2.1.3 Keyword Density
2.1.4 Location of keywords on the page
2.1.5 Stylistic design of the text
2.1.6 The "TITLE" tag
2.1.7 Keywords in the text of links
2.1.8 "ALT" Image Tags
2.1.9 Description Meta-tag
2.1.10 Keywords meta tag
2.2 Structure of the site
2.2.1. Number of pages on the site
2.2.2. Navigation menu
2.2.3 Keyword in the title of the page
2.2.4 Avoid subdirectories
2.2.5 One page - one key phrase
2.2.6 Homepage of the site
2.3 Common Errors
2.3.1 Graphical title
2.3.2 Graphical navigation menu
2.3.3 Navigating through scripts
2.3.4 Session identifier
2.3.5 Redirects
2.3.6 The Hidden Text
2.3.7 One-Pixel Links
3 External ranking factors
3.1 Why external links to the site are taken into account
3.2 Importance of references (citation index)
3.3 Link text (anchor text)
3.4 Relevance of referring pages
3.5 Google PageRank - the theoretical basis
3.6 Google PageRank - practical use
3.7 TIC and WIC of Yandex
3.8 Increasing Link Popularity
3.8.1 Submission to general-purpose directories
3.8.2 The DMOZ Directory
3.8.3 The Yandex Directory
3.8.4 Link Exchange
3.8.5 Press releases, news feeds, thematic resources
4 Indexing the site
5 Keyword Selection
5.1 Initial choice of keywords
5.2 High-frequency and low-frequency queries
5.3 Assessing the level of competition of search queries
5.4 Sequential refinement of search queries
6 Various information about search engines
6.1 Google SandBox
6.2 Google LocalRank
6.3. Features of the various search engines
6.4 Tips, assumptions, observations
6.5 Creating the right content
6.6 Choosing a domain and hosting
6.7 Change of website address
7. Semonitor - a package of programs for promotion and optimization of the site
7.1 Position Determination module
7.2 External Links module
7.3 Site Indexing module
7.4 Log Analyzer module
7.5 Page Rank Analyzer module
7.6 Keyword Selection module
7.7 HTML Analyzer module
7.8 Site registration programs AddSite and Add2Board
8. Useful resources
Instead of the conclusion - promotion of the site step by step
----------------------------------------

Introduction
This course is intended for authors and site owners who want to take a closer look at search engine optimization and promotion of their resources. It is aimed mainly at beginners, although an experienced webmaster will, I hope, learn something new from it as well. A large number of articles on search engine optimization can be found on the Internet; this tutorial attempts to combine all that information into a single, consistent course.
The information presented in this textbook can be divided into several parts:
- Clear, specific recommendations, practical guidance for action;
- theoretical information which, in our opinion, any SEO specialist should possess;
- advice, observations, recommendations obtained on the basis of experience, study of various materials, etc.

1. General information about search engines
1.1 History of the development of search engines
In the early period of the Internet's development, the number of its users was small and the amount of available information was relatively small. For the most part, only staff of universities and research laboratories had access to the Internet, and the network was used mainly for scientific purposes. At that time, the task of finding information on the Internet was far less relevant than it is now.
One of the first ways to organize access to the network's information resources was the creation of site catalogs, in which links to resources were grouped by topic. The first such project was the Yahoo site, which opened in April 1994. As the number of sites in the Yahoo directory grew significantly, the ability to search the catalog was added. This, of course, was not yet a search engine in the full sense, since the search covered only the resources listed in the directory rather than all Internet resources.
Link catalogs were widely used in the past but have practically lost their popularity today. The reason is simple: even modern catalogs containing a huge number of resources cover only a very small part of the Internet. The largest directory on the net, DMOZ (the Open Directory Project), contains information on about 5 million resources, while the Google database consists of more than 8 billion documents.
The first full-fledged search engine was the WebCrawler project, which appeared in 1994.
In 1995, the search engines Lycos and AltaVista appeared. The latter was for many years the leader in the field of information retrieval on the Internet.
In 1997, Sergey Brin and Larry Page created Google as part of a research project at Stanford University. At the moment Google is the most popular search engine in the world.
On September 23, 1997, the Yandex search engine, the most popular in the Russian-speaking part of the Internet, was officially announced.
Currently, there are three main international search engines - Google, Yahoo and MSN Search - which have their own databases and search algorithms. Most other search engines (and there are a great many of them) use, in one form or another, the results of the three listed above. For example, AOL search (search.aol.com) and Mail.ru use the Google database, while AltaVista, Lycos and AllTheWeb use the Yahoo database.
In Russia, the main search engine is Yandex, followed by Rambler, Google.ru, Aport, Mail.ru and KM.ru.

1.2 General principles of search engines
The search system consists of the following main components:
Spider - a browser-like program that downloads web pages.
Crawler - a "traveling" spider, a program that automatically follows all the links found on a page.
Indexer - a program that analyzes the web pages downloaded by the spider.
Database - the store of downloaded and processed pages.
Search engine results engine - retrieves search results from the database.
Web server - the server that handles interaction between the user and the other components of the search engine.
The exact implementations of search engines may differ (for example, the Spider + Crawler + Indexer bundle may be implemented as a single program that downloads known web pages, analyzes them and follows links to find new resources), but all search engines share the components described above.
Spider. A spider is a program that downloads web pages in the same way as a user's browser. The difference is that the browser displays the information contained on the page (text, graphics, etc.), while the spider has no visual components and works directly with the HTML text of the page (use "View source" in your browser to see the "raw" HTML text).
Crawler. The crawler extracts all the links present on a page. Its task is to determine where the spider should go next, based either on those links or on a predefined list of addresses. By following the links it finds, the crawler discovers new documents that are not yet known to the search engine.
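To make the division of labor between the spider and the crawler more concrete, here is a minimal Python sketch (our own illustration, not the code of any real search engine): it downloads the raw HTML of a page, as a spider does, and collects the links a crawler would then follow. The start address example.com is just a placeholder.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects the href targets of all <a> tags found on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # resolve relative links against the page address
                    self.links.append(urljoin(self.base_url, value))

url = "http://example.com/"          # hypothetical start page
html_text = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
parser = LinkCollector(url)
parser.feed(html_text)
print(parser.links)                  # the addresses the crawler would visit next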
Indexer. The indexer parses the page into parts and analyzes them. Various elements of the page are selected and analyzed, such as text, headings, structural and style features, special service html tags, etc.
Database. The database is the storehouse of all the data that the search engine downloads and analyzes. Sometimes a database is called a search engine index.
Search Engine Results Engine. The results engine is responsible for ranking pages. It decides which pages satisfy the user's query and in what order they should be sorted. This happens according to the search engine's ranking algorithms. This information is the most valuable and interesting for us - it is with this component of the search engine that the optimizer interacts, trying to improve the site's position in the search results, so later on we will examine in detail all the factors that affect ranking.
Web server. Typically, the server hosts an HTML page with an input field in which the user can enter the search term of interest. The web server is also responsible for delivering the results to the user as an HTML page.

2. Internal ranking factors
All factors affecting a site's position in search engine results can be divided into external and internal. Internal ranking factors are those that are under the control of the site owner (text, layout, etc.).
2.1 Textual design of web pages
2.1.1 The amount of text per page
Search engines value sites that are rich in information content. In general, you should strive to increase the text content of the site.
Optimal pages contain 500-3,000 words, or 2-20 KB of text (2,000 to 20,000 characters).
A page consisting of just a few sentences is less likely to reach the top of the search results.
In addition, more text on a page increases its visibility in search engines for rare or unexpected search phrases, which in some cases can bring a good influx of visitors.
2.1.2 Number of keywords per page
Keywords (phrases) should occur in the text at least 3-4 times. The upper limit depends on the total volume of the page - the larger the total volume, the more repetitions can be made.
The situation with search phrases, that is, combinations of several keywords, deserves separate consideration. The best results are observed when the phrase occurs in the text several times exactly as a phrase (i.e., all the words together in the right order) and, in addition, the individual words of the phrase appear in the text several more times on their own. There should also be some difference (imbalance) between the numbers of occurrences of the individual words making up the phrase.
Consider the situation with an example. Let's say we optimize the page for the phrase "dvd player". A good option - the phrase "dvd player" occurs 10 times in the text, in addition, the word "dvd" occurs separately 7 more times, the word "player" 5 more times. All the figures in the example are conditional, but they show a general idea well.
2.1.3 Keyword Density
Keyword density on a page is the relative frequency of the word in the text, measured as a percentage. For example, if a given word occurs 5 times on a page of 100 words, its density is 5%. Too low a density means the search engine will not attach proper significance to the word. Too high a density can trigger the search engine's spam filter (that is, the page will be artificially lowered in the search results because of excessively frequent use of the key phrase).
The optimal keyword density is 5-7%. For phrases consisting of several words, you should calculate the combined density of all the keywords making up the phrase and make sure it stays within the specified limits.
Practice shows that a keyword density above 7-8%, while it does not usually lead to any negative consequences, rarely makes much sense either.
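As an illustration of the calculation just described, here is a minimal Python sketch (our own helper, with page.txt as a hypothetical file containing the plain text of a page): it counts exact occurrences of a phrase, occurrences of its individual words, and the combined density of those words.

import re

def keyword_density(text, phrase):
    """Count occurrences of a key phrase and its words, and the combined density."""
    words = re.findall(r"\w+", text.lower())
    total = len(words)
    phrase_words = phrase.lower().split()
    # exact occurrences of the whole phrase (all words together, in order)
    phrase_count = sum(
        1 for i in range(total - len(phrase_words) + 1)
        if words[i:i + len(phrase_words)] == phrase_words
    )
    word_counts = {w: words.count(w) for w in phrase_words}
    # combined density of all the words making up the phrase, in percent
    density = 100.0 * sum(word_counts.values()) / total if total else 0.0
    return phrase_count, word_counts, round(density, 1)

text = open("page.txt", encoding="utf-8").read()   # plain text of the page (hypothetical file name)
print(keyword_density(text, "dvd player"))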
2.1.4 Location of keywords on the page
A very short rule - the closer the keyword or phrase to the beginning of the document, the more weight they get in the eyes of the search engine.
2.1.5 Stylistic design of the text
Search engines give special importance to the text, one way or another selected on the page. The following recommendations can be made:
- use keywords in headings (text marked up with "H" tags, in particular "h1" and "h2"). Nowadays CSS allows you to redefine the appearance of text highlighted by these tags, so "H" tags matter less than they used to, but they should not be neglected entirely;
- highlight keywords in bold (not throughout the whole text, of course, but doing so 2-3 times on a page does not hurt). For this purpose we recommend the "strong" tag instead of the more traditional "B" (bold) tag.
2.1.6 The "TITLE" tag
This is one of the most important tags, to which search engines attach great significance. Be sure to use keywords in the TITLE tag.
In addition, the link to your site in the search engine output will contain text from the TITLE tag, so this is, in some way, the business card of the page.
It is by this link that the visitor of the search engine navigates to your site, so the TITLE tag should not only contain keywords, but be informative and attractive.
Search engines typically display 50-80 characters from the TITLE tag in their results, so it is preferable to keep the title within this length.
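A quick way to check the 50-80 character guideline is sketched below; it is a rough illustration using a regular expression, so a production check would need a proper HTML parser for edge cases.

import re

def check_title(html_text, limit=80):
    """Extract the TITLE tag and warn if it exceeds the suggested length limit."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html_text, re.IGNORECASE | re.DOTALL)
    if not match:
        return "No TITLE tag found"
    title = " ".join(match.group(1).split())   # collapse whitespace
    if len(title) > limit:
        return f"TITLE is {len(title)} characters, consider shortening it: {title!r}"
    return f"TITLE is fine ({len(title)} characters): {title!r}"

print(check_title("<html><head><title>DVD players - reviews and prices</title></head></html>"))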
2.1.7 Keywords in the text of links
Another very simple rule: use keywords in the text of outgoing links from your pages (both to other internal pages of your site and to other network resources); this can give you a small ranking advantage.
2.1.8 "ALT" Image Tags
Any image on a page has a special "alternative text" attribute, specified in the "ALT" tag. This text is displayed on the screen if the image could not be downloaded or if images are disabled in the browser.
Search engines remember the value of the ALT tag when parsing (indexing) a page but do not use it when ranking search results.
At the moment, it is reliably known that Google takes into account the text in the ALT tags of images that serve as links to other pages, while the remaining ALT tags are ignored. There is no exact data on other search engines, but one can assume something similar.
In general, the advice is this: you can and should use keywords in ALT tags, although this is not of fundamental importance.
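The sketch below (our own helper based on the standard html.parser module) lists the images on a page that have an empty or missing ALT attribute, so you can add descriptive text with keywords where appropriate.

from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collects the src of <img> tags whose ALT attribute is empty or missing."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "(no src)"))

checker = AltChecker()
checker.feed('<img src="logo.gif"><img src="player.jpg" alt="dvd player">')
print(checker.missing_alt)   # ['logo.gif']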
2.1.9 Description Meta-tag
The Description Meta tag is specifically designed for specifying the page description. This tag does not affect the ranking in any way, but, nevertheless, it is very important. Many search engines (and, in particular, the largest Yandex and Google) display information from this tag in search results, if this tag is present on the page and its content corresponds to the content of the page and the search query.
It is safe to say that a high position in the search results does not always guarantee a large number of visitors. If your competitors' descriptions in the results are more attractive than your site's, search engine users will choose them rather than your resource.
Therefore, the proper composition of the Description meta-tag is of great importance. The description should be brief, but informative and attractive, contain keywords specific to this page.
2.1.10 Keywords meta tag
This meta-tag was originally intended to indicate the keywords of this page. However, at present it is almost not used by search engines.
However, you should fill this tag "just in case". When filling out, the following rule should be adhered to: add only those keywords that are actually present on the page.

2.2 Structure of the site
2.2.1. Number of pages on the site
The general rule is that the more, the better. The increase in the number of pages of the site improves its visibility in the search engines.
In addition, the gradual addition of new information materials to the site is perceived by search engines as the development of the site, which can give additional advantages in ranking.
Thus, try to post more information on the site - news, press releases, articles, useful tips and so on.
2.2.2. Navigation menu
As a rule, any site has a navigation menu. Use keywords in the menu links; this will give additional weight to the pages the links point to.
2.2.3 Keyword in the title of the page
There is an opinion that the use of keywords in the name of the html-file of the page can positively affect its place in the search results. Naturally, this applies only to English-language requests.
2.2.4 Avoid subdirectories
If your site has a moderate number of pages (several dozen), then it is better that they are in the root directory of the site. Search engines consider such pages to be more important.
2.2.5 One page - one key phrase
Try to optimize each page for its own key phrase. Sometimes you can choose 2-3 related phrases, but you should not optimize a single page for 5-10 phrases at once - most likely you will get no result at all.
2.2.6 Homepage of the site
Optimize the main page of the site (domain name, index.html) for the phrases that are most important to you. This page has the greatest chance of reaching the top of the search results.
According to my observations, the main page can account for up to 30-40% of a site's total search traffic.

2.3 Common Errors
2.3.1 Graphical title
Site designs very often use a graphic header (banner), that is, an image spanning the full width of the page, which usually contains the company logo, name and some other information.
You should not do that! The top of the page is a very valuable place where the most important keywords could be placed. With a graphic image, that space is wasted.
Sometimes you come across quite absurd situations: the header contains textual information, but for the sake of visual appeal it is rendered as an image (and so the text cannot be taken into account by search engines).
It is best to use a combined option: keep a graphic logo at the top of the page, but do not let it span the full width. The rest of the header is given over to a text heading containing keywords.
2.3.2 Graphical navigation menu
The situation is similar to the previous point - internal links on your site should also contain keywords, which gives an additional ranking advantage. If the navigation menu is made as graphics for the sake of attractiveness, search engines will not be able to take the link text into account.
If you cannot avoid a graphical menu, at least remember to give all the images proper ALT text.
2.3.3 Navigating through scripts
In some cases, site navigation is implemented with scripts. You should understand that search engines cannot read or execute scripts. Thus, a link provided through a script will not be available to the search engine, and the crawler will not follow it.
In such cases, be sure to duplicate the links in the usual way, so that site navigation is accessible to everyone - both to your visitors and to search engine robots.
2.3.4 Session identifier
Some sites use session IDs - that is, each visitor receives a unique parameter of the form &session_id=..., which is appended to the address of every page they visit on the site.
Using the session identifier allows you to more conveniently collect statistics about the behavior of site visitors and can be used for some other purposes.
However, from the search robot's point of view, the page with the new address is a new page. At each visit to the site the search robot will receive a new session ID and, visiting the same pages as before, will treat them as new pages of the site.
Strictly speaking, search engines have algorithms for "splicing" mirrors and pages with the same content, so sites that use session IDs will still be indexed. However, the indexing of such sites is difficult and in some cases can go wrong. Therefore, the use of session IDs on the site is not recommended.
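To see why session IDs multiply page addresses, here is a small sketch that strips such parameters from a URL, turning the many per-visitor addresses back into one canonical address. The parameter names in SESSION_PARAMS are assumptions taken from the example above and from common engines; adapt them to whatever your site actually uses.

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

SESSION_PARAMS = {"session_id", "sid", "phpsessid"}   # assumed parameter names, adjust as needed

def canonical_url(url):
    """Remove session-style parameters so identical pages get identical addresses."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() not in SESSION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(query), ""))

print(canonical_url("http://example.com/catalog.php?page=2&session_id=a81f3c"))
# -> http://example.com/catalog.php?page=2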
2.3.5 Redirects
Redirects hamper the analysis of the site by search robots. Do not use redirects if there are no clear reasons for this.
2.3.6 The Hidden Text
The last two points are not so much errors as deliberate attempts to deceive search engines, but they still need to be mentioned.
Using hidden text (the color of the text coincides with the background color, for example, white on white) allows you to "pump" the page with the necessary keywords without disrupting the logic and design of the page. Such text is invisible to visitors, however it is perfectly read by search robots.
Using such "gray" optimization methods can lead to a site ban - that is, the forced exclusion of a site from the index (database) of the search engine.
2.3.7 One-Pixel Links
The use of 1x1 pixel graphic links (that is, links effectively invisible to visitors) is also treated by search engines as an attempt at deception and can lead to a site ban.

3 External ranking factors
3.1 Why external links to the site are taken into account
As can be seen from the previous section, almost all of the factors affecting ranking are under the control of the page's author. This makes it impossible for a search engine to distinguish a genuinely high-quality document from a page created specifically for a given search phrase, or even a page generated by a robot that carries no useful information at all.
Therefore, one of the key factors in the ranking of pages is the analysis of external links to each page being evaluated. This is the only factor that is beyond the control of the author of the site.
It is logical to assume that the more external links point to a site, the more interesting that site is to visitors. If the owners of other sites on the network have placed a link to the resource in question, it means they consider it to be of sufficient quality. Following this criterion, the search engine can decide how much weight to give to a particular document.
Thus, there are two main factors by which the pages in the search engine's database are sorted when results are served: relevance (that is, how closely the page relates to the subject of the query - the factors described in the previous section) and the number and quality of external links. The latter factor is also known as link citation, link popularity or citation index.
3.2 Importance of references (citation index)
It's easy to see that a simple count of the number of external links does not give us enough information to evaluate the site. Obviously, the link from www.microsoft.com should mean much more than the link from the home page www.hostingcompany.com/~myhomepage.html, so you can not compare the popularity of sites by the number of external links - you must also take into account the importance of links.
To evaluate the number and quality of external links to a site, search engines introduce the concept of citation index.
The citation index (CI) is a general term for numerical indicators that estimate the popularity of a particular resource, that is, some absolute measure of a page's importance. Each search engine uses its own algorithms to calculate its citation index; as a rule, these values are not published anywhere.
In addition to the usual citation index, which is an absolute indicator (that is, a specific number), the term weighted citation index is introduced, which is a relative value, that is, it shows the popularity of this page relative to the popularity of other pages on the Internet. The term "weighted citation index" (WIC) is usually used in relation to the search engine Yandex.
A detailed description of citation indexes and algorithms for their calculation will be presented in the following sections.
3.3 Link text (anchor text)
The text of external links to a site is given great importance when ranking search results.
The link text (also called the anchor text) is the text between the "A" and "/A" tags, that is, the text you click on in the browser to go to a new page.
If the link text contains the necessary keywords, the search engine sees this as an additional and very important recommendation, a confirmation that the site does contain valuable information relevant to the topic of the search query.
3.4 Relevance of referring pages
In addition to the reference text, the general information content of the referring page is also taken into account.
Example. Suppose we are promoting a resource that sells cars. In this case, a link from a car repair site will mean much more than a similar link from a gardening site. The first link comes from a thematically related resource and will therefore be valued more highly by the search engine.
3.5 Google PageRank - the theoretical basis
Google was the first to patent a system of accounting for external links. The algorithm was called PageRank. In this chapter we will discuss this algorithm and how it can influence the ranking of search results.
PageRank is calculated for each web page separately, and is determined by the PageRank (citation) of the pages referring to it. A kind of vicious circle.
The main task is to find a criterion expressing the importance of a page. In the case of PageRank, theoretical page traffic was chosen as that criterion.
Consider a model of a user travelling through the network by clicking links. It is assumed that the user starts browsing from some randomly chosen page and then follows links to other resources. There is a chance that the visitor will leave the current site and again start browsing from a random page (in the PageRank algorithm, the probability of this at each step is 0.15). Accordingly, with probability 0.85 he continues the journey by clicking one of the links on the current page (all links are equal in this respect). Continuing the journey indefinitely, he will visit popular pages many times and little-known ones less often.
Thus, the PageRank of a web page is defined as the probability of finding the user on that page; the sum of the probabilities over all web pages is one, since the user is always on exactly one page.
Since it is not always convenient to operate with probabilities, after a series of transformations PageRank can be expressed in concrete numbers (as, for example, we are used to seeing in the Google ToolBar, where each page has a PageRank from 0 to 10).
According to the model described above, we get that:
- every page on the network (even if it does not have external links) initially has a nonzero PageRank (albeit very small);
- every page that has outbound links transfers a portion of its PageRank to pages it refers to. In this case, the transmitted PageRank is inversely proportional to the number of links on the page - the more links, the smaller PageRank is transmitted for each;
- PageRank is not transferred in full: at each step there is damping (the 15% probability that the user starts browsing again from a new, randomly chosen page).
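The random-surfer model just described can be written down in a few lines. The sketch below is a simplified illustration of the classic formula, not Google's actual code: it iterates PageRank over a tiny hand-made link graph using the damping factor of 0.85 mentioned above.

# links[p] = list of pages that page p links to (a made-up miniature web)
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
d = 0.85                      # probability of following a link (damping factor)
n = len(links)
pr = {p: 1.0 / n for p in links}

for _ in range(50):           # a fixed number of iterations is enough for a toy example
    new_pr = {}
    for p in links:
        # PageRank flowing in from every page q that links to p
        incoming = sum(pr[q] / len(links[q]) for q in links if p in links[q])
        new_pr[p] = (1 - d) / n + d * incoming
    pr = new_pr

print({p: round(v, 3) for p, v in pr.items()})   # the values still sum to 1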
Now let us consider how PageRank can influence the ranking of search results (we say "can" because in its pure form PageRank has long ceased to participate in the Google algorithm as it once did - more on this below). The influence of PageRank is very simple: after the search engine has found a number of relevant documents (using text criteria), they can be sorted by PageRank, since it is logical to assume that a document with more high-quality external links contains the most valuable information.
Thus, the PageRank algorithm pushes up in the search results those documents that are the most popular even outside the search engine.
3.6 Google PageRank - practical use
Currently, PageRank is not used directly in the Google algorithm. This is understandable, since PageRank characterizes only the number and quality of external links to a site but takes into account neither the link text nor the information content of the referring pages - and it is precisely these factors that carry the most weight in ranking. It is assumed that Google uses a so-called thematic PageRank for ranking (that is, one that considers only links from thematically related pages), but the details of this algorithm are known only to Google's developers.
You can find the PageRank value of any web page using the Google ToolBar, which shows PageRank as a value from 0 to 10. Note that the ToolBar does not show the exact PageRank value but only the range into which it falls, and the range number (0 to 10) is determined on a logarithmic scale.
To explain with an example: each page has an exact PageRank value known only to Google. To derive the range displayed in the ToolBar, a logarithmic scale is used (an example is shown in the table):
Real PR value -> ToolBar value
1-10 -> 1
10-100 -> 2
100-1,000 -> 3
1,000-10,000 -> 4
etc.
All figures are relative, but they clearly demonstrate that the PageRank ranges shown in Google ToolBar are not equivalent to each other. For example, raising PageRank from 1 to 2 is easy, and from 6 to 7 it is much more difficult.
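Using the illustrative figures from the table above (which, as noted, are conditional rather than real), the mapping from a "real" PageRank value to a ToolBar number could look roughly like this:

import math

def toolbar_value(real_pr, base=10, max_value=10):
    """Map a hypothetical real PageRank value onto the 0-10 ToolBar scale (logarithmic)."""
    if real_pr < 1:
        return 0
    return min(int(math.log(real_pr, base)) + 1, max_value)

for pr in (5, 50, 500, 5000):
    print(pr, "->", toolbar_value(pr))   # 1, 2, 3, 4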
In practice, PageRank is used mainly for two purposes:
1. Rapid assessment of the level of website promotion. PageRank does not provide accurate information about referring pages, but allows you to quickly and easily "estimate" the level of development of the site. For English-language sites, you can adhere to the following gradation: PR 4-5 - the most typical PR for most sites of average promotion. PR 6 - very well promoted site. PR 7 - a value that is almost unattainable for a regular webmaster, but sometimes occurs. PR 8, 9, 10 - are found only at sites of large companies (Microsoft, Google, etc.). PageRank knowledge can be used when exchanging links, in order to evaluate the quality of the page proposed for the exchange and in other similar situations.
2. Assessing the level of competition for a search query. Although PageRank is not used directly in the ranking algorithms, it does allow you to indirectly evaluate the competitiveness of a given query. For example, if the search results contain pages with PageRank 6-7, a site with PageRank 4 has very little chance of getting to the top.
Another important note: the PageRank values shown in the Google ToolBar are recalculated rather infrequently (every few months), so the ToolBar shows somewhat outdated information. That is, the Google search engine itself takes changes in external links into account much faster than those changes are reflected in the ToolBar.
3.7 TIC and WIC of Yandex
WIC - the weighted citation index - is an analogue of PageRank used by the Yandex search engine. WIC values are not published anywhere and are known only to Yandex. Since you cannot find out the WIC, you should simply remember that Yandex has its own algorithm for assessing the "importance" of pages.
TIC - the thematic citation index - is calculated for the site as a whole and shows the authority of the resource relative to other thematically close resources (rather than all Internet sites in general). TIC is used for ranking sites in the Yandex catalog and does not affect the search results in Yandex itself.
TIC values are shown in Yandex.Bar. Just remember that TIC is calculated for the site as a whole, not for each individual page.
In practice, TIC can be used for the same purposes as PageRank: assessing a site's popularity and assessing the level of competition for a given search query. Thanks to Yandex's coverage of the Russian-language Internet, TIC is well suited for evaluating Russian-language sites.
3.8 Increasing Link Popularity
3.8.1 Submission to general-purpose directories
On the Internet there are a large number of directory sites (catalogs) that collect links to other network resources, grouped by topic. The process of adding your site's information to them is called submission.
Such directories can be paid or free, and they may or may not require a backlink from your site. Their traffic is very low, so they will not bring a real influx of visitors. However, search engines do take links from such directories into account, which can raise your site in the search results.
Important! Bear in mind that only directories that place a direct link to your site have any real value. It is worth dwelling on this in more detail. There are two ways to place a link. A direct link is placed with the standard HTML construct ("A href=..." etc.). Links can also be placed through various kinds of scripts, redirects and so on. Search engines only understand direct links specified directly in the HTML code. Therefore, if a directory does not provide a direct link to your site, its value is close to zero.
Do not submit to FFA (free-for-all) directories. Such directories automatically accept links on any subject and are ignored by search engines. The only thing submission to FFAs will get you is more spam to your e-mail address; in fact, that is their main purpose.
Be wary of promises from various programs and services to add your resource to hundreds of thousands of search engines and directories. There are no more than a few hundred genuinely useful directories on the net, and that is the figure to build on. Professional submission services work with roughly that many directories. If huge numbers in the hundreds of thousands of resources are promised, the submission database consists mainly of the FFA archives mentioned above and other useless resources.
Give preference to manual or semi-automatic submission and do not trust fully automated processes. As a rule, submission under human control gives a much better return than a fully automatic one.
Whether to pay for listing in a particular directory, or to place a reciprocal backlink from your site, should be decided separately for each directory. In most cases this does not make much sense, but there may be exceptions.
Submitting a site to directories does not have a dramatic effect, but it does improve the site's visibility in search engines. This option is generally available and does not require much time or money, so do not forget about it when promoting your project.
3.8.2 The DMOZ Directory
The DMOZ directory (www.dmoz.org), or Open Directory Project, is the largest directory on the Internet. In addition, there are a large number of copies of the main DMOZ site. Thus, by getting your site into the DMOZ directory, you receive not only a valuable link from the catalog itself, but also a few dozen links from related resources. The DMOZ directory is therefore of great value to a webmaster.
Getting into the directory is not easy; to some extent it depends on your luck. The site may appear in the catalog a few minutes after being added, or it may wait many months for its turn.
If your site does not appear in the directory for a long time, but you are sure that everything was done correctly and the site is suitable for the catalog in terms of its parameters, you can try to write to the editor of your category with a question about your application (the DMOZ site provides such an opportunity). No guarantees, of course, are given, but it can help.
Adding to the DMOZ directory is free, including for commercial sites.
3.8.3 The Yandex Directory
Presence in the Yandex catalog gives you a valuable thematic link to your site, which can improve its position in the search engine. In addition, the Yandex catalog itself can bring a little traffic to your site.
There are paid and free options for adding information to the Yandex catalog. Naturally, with the free option, neither the timing nor the inclusion of the site is guaranteed in any way.
In conclusion, a couple of recommendations for submitting to such important directories as DMOZ and Yandex. First of all, carefully read the requirements for sites and descriptions, so as not to break the rules when applying (otherwise your application may not be considered).
Secondly, presence in these catalogs is desirable but not mandatory. If you cannot get into these directories, do not despair - high positions in the search results can be achieved without them; most sites do exactly that.
3.8.4 Link Exchange
Link exchange means that you link to other sites from a dedicated page and receive similar links from them in return. In general, search engines do not welcome link exchange, since in most cases it aims to manipulate search results and brings nothing useful to Internet users. Nevertheless, it is an effective way to increase link popularity if you observe a few simple rules.
- Exchange links with thematically related sites; exchange with unrelated sites is ineffective;
- before exchanging, make sure that your link will be placed on a "good" page. That is, the page should have some PageRank (preferably 3-4 or higher), be available for indexing by search engines, link to you directly, contain no more than 50 links in total, etc.;
- do not create link directories on your site. The idea of such a directory looks attractive - it becomes possible to exchange with a large number of sites on any subject, since any site can find a suitable category in the directory. However, in our case quality matters more than quantity, and there are a number of pitfalls here. No webmaster will place a quality link to you if in return he receives a worthless link from your directory (the PageRank of pages in such directories, as a rule, leaves much to be desired). In addition, search engines take an extremely negative view of such directories; there have been cases of sites being banned for using them;
- set aside a separate page on the site for link exchange. It should have some PageRank, be indexed by search engines, etc. Do not place more than 50 links on it (otherwise some of them may not be taken into account by search engines). This will help you find exchange partners more easily;
- search engines try to track reciprocal links, so if possible host the reciprocal links on a different domain/site from the one being promoted. For example, you promote site1.com but place the reciprocal links on site2.com - this is the optimal arrangement;
- exercise some caution when exchanging. Quite often, not entirely honest webmasters remove your links from their resources, so you need to check from time to time that your links are still in place (a simple check is sketched below).
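A simple way to perform the periodic check mentioned in the last point is sketched below. The partner page addresses and the promoted domain are placeholders; the script merely downloads each partner page and reports whether a link to your site is still present.

import urllib.request

MY_DOMAIN = "site1.com"                                # the domain you are promoting (placeholder)
partners = [
    "http://partner-one.example/links.html",           # hypothetical partner pages
    "http://partner-two.example/resources.html",
]

for page in partners:
    try:
        html_text = urllib.request.urlopen(page, timeout=10).read().decode("utf-8", errors="ignore")
        status = "link present" if MY_DOMAIN in html_text else "LINK MISSING"
    except OSError as error:
        status = f"could not check ({error})"
    print(page, "-", status)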
3.8.5 Press releases, news feeds, thematic resources
This section relates more to site marketing than to pure SEO. There are a large number of information resources and news feeds that publish press releases and news on various topics. Such sites can not only bring you visitors directly, but also increase the link popularity that we need so much.
If you find it difficult to write a press release or news item yourself, involve journalists - they will help you find or create a newsworthy angle.
Look for thematically related resources. There are a huge number of projects on the Internet that are not your competitors but are devoted to the same topic as your site. Try to find an approach to the owners of these resources; it is quite likely that they will be happy to publish information about your project.
And finally - and this applies to all methods of obtaining external links - try to vary the link text somewhat. If all the external links to your site have identical link text, search engines may treat this as an attempt at spam.

4 Indexing the site
Before the site appears in the search results, it must be indexed by the search engine. Indexing means that the crawler visited your site, analyzed it and entered information into the database of the search engine.
If some page is entered in the search engine index, then it can be displayed in the search results. If there is no page in the index, then the search engine knows nothing about it, and therefore can not use the information from this page in any way.
Most medium-sized sites (that is, containing dozens or hundreds of pages) usually do not experience any problems with the correct indexing by search engines. However, there are a number of points that should be considered when working on the site.
The search engine can learn about the newly created site in two ways:
- Manually add the site address through the appropriate form of the search engine. In this case, you yourself tell the search engine about the new site and its address falls into the queue for indexing. Add only the main page of the site, the rest will be found by the search robot by links;
- let the search robot find the site on its own. If your new resource has at least one external link from resources already indexed by the search engine, the crawler will visit and index your site fairly quickly. In most cases this is the recommended option: get a few external links to the site and simply wait for the robot to arrive. Manually adding the site may even lengthen the wait for the robot.
The time needed to index a site is usually between 2-3 days and 2 weeks, depending on the search engine. Google indexes sites fastest of all.
Try to make the site friendly to search engines. For this, consider the following factors:
- Try to ensure that every page of your site can be reached within no more than 3 clicks from the main page. If the site structure does not allow this, create a so-called site map that satisfies this rule;
- Do not repeat common mistakes. Session IDs make indexing difficult. If you use navigation through scripts, be sure to duplicate links in the usual way - search engines do not know how to read scripts (for more information about these and other errors, see Chapter 2.3);
- remember that search engines index no more than 100-200 KB of text per page. For larger pages, only the beginning (the first 100-200 KB) will be indexed. Hence the rule: do not use pages larger than 100 KB if you want them indexed in full.
You can control the behavior of search robots with the robots.txt file, in which you can explicitly allow or forbid the indexing of particular pages. There is also a special "NOINDEX" tag that lets you close individual parts of the page text to indexing, but it is supported only by Russian search engines.
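To verify what your robots.txt actually allows, you can use the standard robotparser module, as in this sketch (the site address and robot name are placeholders; real robot names differ between search engines):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://example.com/robots.txt")   # placeholder site
rp.read()                                               # downloads and parses robots.txt

for url in ("http://example.com/", "http://example.com/admin/stats.php"):
    print(url, "->", "allowed" if rp.can_fetch("Googlebot", url) else "disallowed")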
Databases of search engines are constantly updated, records in the database can be changed, disappear and appear again, so the number of indexed pages of your site can change periodically.
One of the most common reasons for a page disappearing from the index is that the server was unavailable, that is, the crawler could not reach it when trying to reindex the site. After the server is restored, the site should reappear in the index after a while.
Note also that the more external links your site has, the faster it is reindexed.
You can track the indexing process by analyzing the server log files, in which all visits by search robots are recorded. In the corresponding section we will look in detail at the programs that make this possible.
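As a minimal illustration of such log analysis (assuming a typical web server access log and the hypothetical file name access.log), the sketch below counts visits by the best-known search robots, identified by substrings of their user-agent strings:

from collections import Counter

ROBOTS = {"Google": "googlebot", "Yandex": "yandex", "MSN": "msnbot"}   # substrings to look for
visits = Counter()

with open("access.log", encoding="utf-8", errors="ignore") as log:
    for line in log:
        for name, marker in ROBOTS.items():
            if marker in line.lower():
                visits[name] += 1

for name, count in visits.most_common():
    print(name, count)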
5 Keyword Selection
5.1 Initial choice of keywords
Keyword selection is the first step in building a site. By the time the texts for the site are being prepared, the set of keywords should already be known.
To determine keywords, you should first of all use the services offered by the search engines themselves.
For English-language sites this is www.wordtracker.com and inventory.overture.com
For Russian-language adstat.rambler.ru/wrds/, direct.yandex.ru and stat.go.mail.ru
When using these services, keep in mind that their figures can differ considerably from the real picture. When using Yandex Direct, also remember that this service shows not the expected number of queries but the expected number of ad impressions for a given phrase. Since search engine visitors often view more than one results page, the actual number of queries is necessarily smaller than the number of impressions for the same query.
The Google search engine does not provide information about the frequency of requests.
After the list of keywords is approximately determined, you can analyze your competitors, in order to find out which key phrases they are targeting, you will probably be able to learn something new.
5.2 High-frequency and low-frequency queries
When optimizing a site, two strategies can be distinguished: optimization for a small number of highly popular keywords, or for a large number of less popular ones. In practice, the two are usually combined.
The drawback of high-frequency queries is, as a rule, the high level of competition for them. A young site cannot always climb to the top for such queries.
For low-frequency queries it is often enough simply to mention the desired word combination on the page, or to perform minimal text optimization. Under certain conditions, low-frequency queries can bring very good search traffic.
The goal of most commercial sites is to sell one or another product or service, or in some other way to make money on their visitors. This should be taken into account in search engine optimization and in the selection of keywords. It is necessary to strive to receive targeted visitors to the site (that is, ready to buy the proposed product or service), rather than just to a large number of visitors.
Example. The query "monitor" is much more popular and at times more competitive than the query "monitor samsung 710N" (the exact name of the model). However, for the monitor vendor the second visitor is much more valuable, and getting it is much easier, since the level of competition for the second request is small. This is another possible difference between high-frequency and low-frequency queries, which should be taken into account.
5.3 Assessing the level of competition of search queries
After the set of keywords is approximately known, it is necessary to determine the main core of words under which optimization will be carried out.
Low-frequency queries are, for obvious reasons, set aside immediately (for the time being). In the previous section we described the advantages of low-frequency queries, but precisely because they are low-frequency they do not require special optimization, so we do not consider them in this section.
For very popular phrases the level of competition is usually very high, so you need to assess your site's capabilities realistically. To assess the level of competition, you should calculate a number of indicators for the top ten sites in the search results:
- the average PageRank of the pages in the results;
- the average TIC of the sites whose pages appear in the results;
- the average number of external links to the sites in the results, according to various search engines;
Additional parameters:
- the number of pages on the Internet that contain the specified search term (in other words, the number of search results);
- the number of pages on the Internet that contain the exact match of the specified phrase (as when searching in quotation marks).
These additional parameters help to indirectly assess how difficult it will be to bring a site into the top results for the given phrase.
In addition to the parameters described, you can also check how many of the sites in the results are listed in the main directories, such as DMOZ, Yahoo and the Yandex catalog.
Analyzing all of the above parameters and comparing them with those of your own site will allow you to predict fairly clearly the prospects of bringing your site to the top for the given phrase.
Having assessed the level of competition for all the selected phrases, you can choose a number of reasonably popular phrases with an acceptable level of competition, on which the main bet for promotion and optimization will be placed.
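The calculation itself is simple arithmetic. Here is a sketch with hand-collected (entirely made-up) figures for the top ten results, which you would replace with real values gathered from toolbars and backlink reports:

# (PageRank, TIC, external links) for each of the top ten pages - made-up figures
top10 = [(5, 400, 350), (4, 250, 120), (6, 900, 800), (4, 300, 200), (5, 550, 430),
         (3, 150, 90),  (4, 275, 160), (5, 500, 380), (4, 325, 240), (3, 200, 110)]

my_site = (3, 120, 60)   # your own site's figures (also made-up)

averages = tuple(round(sum(values) / len(values), 1) for values in zip(*top10))
for label, avg, mine in zip(("PageRank", "TIC", "external links"), averages, my_site):
    print(f"average {label} in the top ten: {avg}, your site: {mine}")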
5.4 Sequential refinement of search queries
As mentioned above, search engine services often provide very imprecise data. Therefore it is rarely possible to pick the ideal set of keywords for your site at the first attempt.
Once your site is up and some promotion work has been done, you have additional keyword statistics at your disposal: you know your site's position in the search results for a particular phrase, and you know the number of visits to your site for that phrase.
With this information you can clearly identify successful and unsuccessful phrases. Often you do not even need to wait for the site to reach the top in all search engines for the phrases in question - one or two engines are enough.
Example. Suppose your site ranks first in Rambler for a given phrase, while it does not yet appear in the Yandex or Google results for that phrase. Knowing the share of visits your site gets from the different search engines (for example, Yandex 70%, Google 20%, Rambler 10%), you can already predict the approximate traffic for the phrase and decide whether it suits your site.
Besides identifying unsuccessful phrases, you can discover new successful ones. For example, you may see that some phrase you never promoted brings good traffic, even though your site appears only on the second or third page of results for it.
You thus end up with a new, refined set of keywords. After that, you should reorganize the site accordingly: rewrite texts for the more successful phrases, create new pages for the newly found phrases, and so on.
After a while, you will be able to arrive at the best set of keywords for your site and substantially increase its search traffic.
A few more tips. According to statistics, the main page of a site accounts for up to 30-50% of all search traffic. It is the most visible page in search engines and has the most external links. Therefore, the main page should be optimized for the most popular and competitive queries. Each page of the site should be optimized for 1-2 basic phrases (and possibly for a number of low-frequency queries). This increases the chances of reaching the top of the search results for those phrases.
6 Various information about search engines
6.1 Google SandBox
In early 2004, a mysterious new concept appeared in the optimizer community: Google SandBox, or Google's sandbox. This name was given to a new Google spam filter aimed at excluding young, newly created sites from the search results.
The SandBox filter manifests itself in the fact that newly created sites are absent from the search results for practically all phrases. This happens despite the presence of high-quality, unique content and proper promotion (without the use of spam techniques).
At the moment SandBox affects only the English-language segment; sites in Russian and other languages are not affected by this filter. However, it is quite possible that the filter will expand its reach.
It can be assumed that the purpose of the SandBox filter is to keep spam sites out of the results - indeed, no search spammer will wait months for results to appear. Along with them, however, many perfectly normal, newly created sites suffer.
There is still no precise information about what exactly the SandBox filter is. There are a number of assumptions based on experience, which we give below:
- SandBox is a filter on young sites. A newly created site falls into the "sandbox" and stays there indefinitely, until the search engine transfers it to the "regular" category;
- SandBox is a filter on new links placed to newly created sites. Note the fundamental difference from the previous assumption: the filter applies not to the age of the site but to the age of the links to the site. In other words, Google has no complaints about the site itself but refuses to take external links to it into account if less than X months have passed since they appeared. Since external links are one of the main ranking factors, ignoring them is equivalent to the site being absent from the search results. It is hard to say which of the two assumptions is closer to the truth; quite possibly both are true;
- a site can stay in the sandbox from 3 months to a year or more. There is also an observation that sites leave the sandbox en masse. That is, the sandbox period is determined not for each site individually but for large groups of sites at once (sites created within a certain time range fall into the same group). The filter is then lifted for the whole group at once, so sites from the same group end up spending different amounts of time in the sandbox.
Typical signs that your site is in the sandbox:
- your site is indexed normally by Google and regularly visited by the search robot;
- your site has PageRank; the search engine knows about and correctly displays external links to your site;
- a search for the site's address (www.site.com) returns correct results, with the correct title, snippet (resource description), etc.;
- your site is found normally for rare and unique phrases contained in the text of its pages;
- your site is not visible in the first thousand results for any other queries, even those it was originally created for. Sometimes there are exceptions and the site appears at position 500-600 for some queries, which, of course, changes nothing.
There are practically no ways to get around the filter. There are a number of suggestions as to how it might be done, but they are nothing more than suggestions and, moreover, of little use to a regular webmaster. The main approach is to keep working on the site and wait for the filter to end.
Once the filter is lifted, rankings jump sharply by 400-500 or more positions.
6.2 Google LocalRank
On February 25, 2003, Google patented a new page ranking algorithm called LocalRank. It is based on the idea of ranking pages not by their global link citation but by their citation within a group of pages thematically related to the query.
The LocalRank algorithm is not used in practice (at least not in the form described in the patent); however, the patent contains a number of interesting ideas that we believe every optimizer should be familiar with. Taking the topic of the referring pages into account is done by almost all search engines, although apparently via somewhat different algorithms; studying the patent makes it possible to understand the general ideas of how this can be implemented.
When reading this chapter, consider that it contains theoretical information, and not a practical guide to action.
The main idea of the LocalRank algorithm is expressed by the following three points:
1. Using a certain algorithm, a number of documents relevant to the search query are selected (call this number N). These documents are initially sorted by some criterion (this could be PageRank, a relevance score, some other criterion, or a combination of them). Denote the numerical expression of this criterion OldScore.
2. Each of the N pages goes through a new ranking procedure, as a result of which each page receives a new rank. Denote it LocalScore.
3. At this step the OldScore and LocalScore values are multiplied, resulting in a new value, NewScore, by which the final ranking of the pages is done.
The key in this algorithm is a new ranking procedure, as a result of which each page is assigned a new rank LocalScore. Let us describe this procedure in more detail.
0. Using a certain ranking algorithm, N pages are selected that match the search query. The new ranking algorithm will only work with these N pages. Each page in this group has some OldScore rank.
1. When calculating LocalScore for a given page, all the pages from N that have external links to this page are selected. Denote this set of pages M. The set M does not include pages from the same host (filtering is done by IP address) or pages that are mirrors of the given page.
2. The set M is split into subsets Li. Pages are grouped into these subsets by the following criteria:
- belonging to one (or similar) hosts. Thus, in one group there will be pages where the first three octets of the IP address are the same. That is, the pages whose IP address belongs to the range
xxx.xxx.xxx.0
xxx.xxx.xxx.255
will be deemed to belong to the same group;
- pages that have the same or similar content (mirrors);
- pages of one site (domain).
3. Each page in every set Li has some rank (OldScore). From each set, the single page with the highest OldScore is selected and the rest are excluded from consideration. This gives us a set K of pages that link to the given page.
4. The pages in the set K are sorted by the OldScore parameter, after which only the first k pages are kept in K (k is some predefined number); the remaining pages are excluded from consideration.
5. At this step LocalScore is calculated by summing the OldScore values of the remaining k pages. This can be expressed by the following formula:
LocalScore(i) = OldScore(1)^m + OldScore(2)^m + ... + OldScore(k)^m
Here m is a predefined parameter that can vary from 1 to 3 (unfortunately, the information contained in the patent does not give a detailed description of this parameter).
Once LocalScore has been calculated for every page in the set N, the NewScore values are calculated and the pages are re-sorted according to this new criterion. NewScore is calculated using the following formula:
NewScore(i) = (a + LocalScore(i)/MaxLS) * (b + OldScore(i)/MaxOS)
i is the page for which the new rank value is being calculated.
a and b are certain numbers (the patent gives no further details about these parameters).
MaxLS is the maximum of the calculated LocalScore values.
MaxOS is the maximum of the OldScore values.
Now let us step back from the mathematics and restate all of the above in plain language.
At the first stage, a certain number of pages matching the query is selected. This is done using algorithms that do not take the topic of links into account (for example, relevance and overall link popularity).
Once the group of pages has been determined, the local link popularity of each page is calculated. All of these pages are in one way or another related to the topic of the search query and therefore have somewhat similar topics. By analyzing the links they place to one another within the selected group (ignoring all other pages on the Internet), we obtain the local (topical) link popularity.
After this step we have OldScore (the rating of a page based on relevance, overall link popularity and other factors) and LocalScore (the rating of a page among thematically related pages). The final rating and ranking of the pages is based on the combination of these two factors.
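To make the procedure above easier to follow, here is a minimal Python sketch of the LocalRank re-ranking idea. It is an illustration based purely on the description above, not Google's actual code: the page data model, the grouping key (host name plus the first three octets of the IP address), and the values chosen for k, m, a and b are all assumptions.

from typing import Dict, List

K_TOP = 10        # k: how many linking pages to keep per target (assumed value)
M = 2             # m: the exponent from the patent, somewhere between 1 and 3
A, B = 1.0, 1.0   # a, b: constants of the NewScore formula (unknown, assumed to be 1)

def subnet(ip: str) -> str:
    """Group key: the first three octets of the IP address (xxx.xxx.xxx.*)."""
    return ".".join(ip.split(".")[:3])

def local_score(target: Dict, pages: List[Dict]) -> float:
    # Step 1: pages from the result set N that link to the target,
    # excluding pages from the same subnet as the target itself.
    linking = [p for p in pages
               if target["url"] in p["links_to"] and subnet(p["ip"]) != subnet(target["ip"])]
    # Steps 2-3: within each (subnet, site) group keep only the page with the highest OldScore.
    best = {}
    for p in linking:
        key = (subnet(p["ip"]), p["host"])
        if key not in best or p["old_score"] > best[key]["old_score"]:
            best[key] = p
    # Step 4: sort the survivors by OldScore and keep the top k of them.
    top_k = sorted(best.values(), key=lambda p: p["old_score"], reverse=True)[:K_TOP]
    # Step 5: LocalScore is the sum of OldScore^m over the remaining pages.
    return sum(p["old_score"] ** M for p in top_k)

def rerank(pages: List[Dict]) -> List[Dict]:
    scores = {p["url"]: local_score(p, pages) for p in pages}
    max_ls = max(scores.values()) or 1.0
    max_os = max(p["old_score"] for p in pages) or 1.0
    # NewScore(i) = (a + LocalScore(i)/MaxLS) * (b + OldScore(i)/MaxOS)
    return sorted(pages,
                  key=lambda p: (A + scores[p["url"]] / max_ls) * (B + p["old_score"] / max_os),
                  reverse=True)

Each page here is a dictionary with the keys url, ip, host, old_score and links_to (a set of URLs it links to); the list returned by rerank() corresponds to the final ordering by NewScore.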
6.3. Features of the various search engines
All of the ideas given above on text optimization and increasing link popularity apply to all search engines equally. The more detailed treatment of Google is explained by the larger amount of freely available information about this search engine; however, the ideas expressed with respect to Google apply to a large extent to other search engines as well.
In general, I am not a proponent of searching for "secret knowledge" about how exactly the algorithms of different search engines work. They all follow common rules to one degree or another, and competent work on a site (without taking any particular peculiarities into account) leads to good positions in almost all search engines.
Nevertheless, here are some peculiarities of the various search engines:
Google - very fast indexing; great importance is attached to external links. The Google database is used by a very large number of other search engines and portals.
MSN - a greater emphasis, compared with other search engines, on the informational content of the site.
Yandex - the largest Russian search engine. It handles (according to various estimates) 60% to 80% of all Russian-language search queries. It pays particular attention to topical links (non-topical external links also have an effect, but to a lesser degree than in other search engines). Indexing is slower than Google's, but still within acceptable timeframes. It lowers the rating of, or excludes from the index, sites that engage in non-topical link exchange (that contain catalogs of non-topical links created only to raise the site's rating), as well as sites participating in automatic link exchange systems. During database updates, which last several days, Yandex's results change constantly; in such periods you should refrain from any work on the site and wait for the search engine's results to stabilize.
Rambler - the most mysterious search engine. It holds second place (third after Google, according to other data) in popularity among Russian users. According to available observations, it lowers the rating of sites that engage in aggressive promotion (a rapid growth in the number of external links). It values the presence of search terms in the plain text of the page (without highlighting by various stylistic tags).
Mail.ru - a search engine that is gaining popularity. It uses the results of the Google search engine after some additional processing. Optimizing for Mail.ru comes down to optimizing for Google.
6.4 Tips, assumptions, observations
This chapter presents information drawn from the analysis of various articles, discussions among optimizers, practical observations, and so on. This information is not exact or verified - it is just assumptions and ideas, though interesting ones. Treat the data in this section not as exact instructions but as food for thought.
- outgoing links. Link to authoritative resources in your field using the right keywords. Search engines value links to other resources on the same topic;
- outgoing links. Do not link to FFA sites and other sites excluded from the search engine's index. This can lower the rating of your own site;
- outgoing links. A page should not contain more than 50-100 outgoing links. This does not lower the page's rating, but links beyond that number will not be taken into account by the search engine;
- outgoing site-wide links, that is, links placed on every page of the site. It is believed that search engines view such links negatively and do not take them into account when ranking. There is also another opinion that this applies only to large sites with thousands of pages;
- the ideal keyword density. This question comes up very often. The answer is that there is no ideal keyword density; rather, it is different for every query, that is, it is calculated by the search engine dynamically depending on the search term. Our advice is to analyze the first sites in the search engine's results, which will allow you to roughly assess the situation (see the small sketch after this list);
- the age of the site. Search engines give preference to older sites as more stable;
- site updates. Search engines give preference to developing sites, that is, those to which new information and new pages are periodically added;
- domain zone (applies to Western search engines). Preference is given to sites located in the .edu, .mil, .gov, etc. zones. Only the corresponding organizations can register such domains, so they are trusted more;
- search engines track what percentage of visitors return to the search after visiting a particular site from the results. A high percentage of returns means off-topic content, and such a page is lowered in the search results;
- search engines track how often a particular link is chosen in the search results. If a link is rarely chosen, the page is of no interest, and such a page is lowered in the rating;
- use synonyms and related forms of your keywords; search engines will appreciate this;
- a too-rapid growth in the number of external links is perceived by search engines as artificial promotion and leads to a lower rating. This is a very controversial claim, first of all because such a method could be used to lower a competitor's rating;
- Google does not count external links if they are located on the same (or similar) hosts, that is, pages whose IP addresses belong to the range xxx.xxx.xxx.0 - xxx.xxx.xxx.255. This opinion most likely stems from the fact that Google expressed this idea in its patents. However, Google employees state that no restrictions by IP address are imposed on external links, and there is no reason not to trust them;
- search engines check information about the domain owner. Accordingly, links from sites belonging to the same owner carry less weight than ordinary links. This information is presented in a patent;
- the term for which the domain is registered. The longer the term, the more preference is given to the site;
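As a small illustration of the keyword density tip above, here is a naive Python sketch. It assumes the page text has already been fetched and stripped of HTML; the tokenization is deliberately simple and only handles single-word keywords, while real analyzers (and the search engines themselves) are far more sophisticated.

import re

def keyword_density(text: str, keyword: str) -> float:
    """Percentage of words on the page that match the keyword."""
    words = re.findall(r"\w+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)

# Idea of use: run this over your own page and over the pages currently in
# the top 10 for the target query, and compare the resulting density ranges.
sample = "Site promotion step by step: how to approach site promotion correctly."
print(round(keyword_density(sample, "promotion"), 1), "%")   # -> 18.2 %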
6.5 Creating the right content
Content (the informational content of a site) plays a crucial role in site promotion. There are many reasons for this, which we will cover in this chapter, along with advice on how to fill a site with information properly.
- uniqueness of content. Search engines value new information that has not been published anywhere before. Therefore, rely on your own texts when building a site. A site built on other people's materials has a much lower chance of reaching the top of the search engines. As a rule, the original source always ranks higher in the search results;
- when building a site, do not forget that it is created first of all for visitors, not for search engines. Bringing a visitor to the site is only the first and not the most difficult step. Keeping the visitor on the site and turning them into a buyer is the really hard task. This can only be achieved through competent informational content that is interesting to a human being;
- try to update the information on the site regularly and add new pages. Search engines value developing sites. Besides, more text means more visitors. Write articles on the topic of your site, publish visitors' reviews, create a forum for discussing your project (the latter only if the site's traffic is enough to support an active forum). Interesting content is the key to attracting interested visitors;
- a site created for people rather than for search engines has a better chance of getting into important directories such as DMOZ, Yandex and others;
- an interesting topical site has a much better chance of receiving links, reviews, write-ups and so on from other topical sites. Such reviews can in themselves bring a decent flow of visitors; in addition, external links from topical resources will be duly appreciated by search engines.
In conclusion, one more piece of advice. As the saying goes, shoes should be made by a shoemaker, and texts should be written by a journalist or a technical writer. If you can create engaging materials for your site, that is very good. However, most of us have no special talent for writing attractive texts. In that case it is better to entrust this part of the work to professionals. It is a more expensive option, but in the long run it pays off.
6.6 Choosing a domain and hosting
Nowadays anyone can create their own page on the Internet at no cost. There are companies that provide free hosting and will host your page in exchange for the right to show their advertising on it. Many Internet providers will also give you space on their server if you are their client. However, all of these options have very significant drawbacks, so when creating a commercial project you should approach these questions with greater responsibility.
First of all, you should buy your own domain. This gives you the following advantages:
- a project without its own domain is perceived as a fly-by-night site. Indeed, why should anyone trust a resource whose owners are not prepared to spend even a token amount to create a minimal image? Placing free materials on such resources is possible, but an attempt to build a commercial project without its own domain is almost always doomed to failure;
- your own domain gives you freedom in choosing hosting. If your current company no longer suits you, you can move your site at any time to another, more convenient or faster platform.
When choosing a domain, keep the following points in mind:
- try to make the domain name memorable, and its pronunciation and spelling unambiguous;
- for promoting international English-language projects, domains with the .com extension are the most suitable. You can also use domains from the .net, .org, .biz, etc. zones, but this option is less preferable;
- for promoting national projects, you should always take a domain in the corresponding national zone (.ru for Russian-language projects, .de for German ones, and so on);
- in the case of bilingual (or multilingual) sites, you should allocate a separate domain for each language. National search engines appreciate this approach more than subsections in different languages on the main site.
A domain costs (depending on the registrar and the zone) $10-20 per year.
When choosing hosting, rely on the following factors:
- access speed;
- server availability (uptime);
- the cost of traffic per gigabyte and the amount of prepaid traffic;
- it is desirable that the hosting be located in the same geographical region as most of your visitors.
Hosting for small projects costs around $5-10 per month.
When choosing a domain and hosting, avoid "free" offers. You can often see hosting companies offering free domains to their clients. As a rule, in this case the domains are registered not to you but to the company, that is, your hosting provider is the actual owner of the domain. As a result, you will not be able to change hosting for your project, or you will be forced to buy back your own, already promoted domain. Also, in most cases you should follow the rule of not registering your domains through a hosting company, since this can complicate a possible move of the site to other hosting (even though you are the full owner of your domain).
6.7 Change of website address
Sometimes, for a number of reasons, it may be necessary to change the address of a project. Some resources that started on free hosting and a free address grow into full-fledged commercial projects and require a move to their own domain. In other cases, a better name for the project is found. In any of these situations, the question arises of how to move the site to the new address correctly.
Our advice here is this: create a new site at the new address with new, unique content. On the old site, place prominent links to the new resource so that visitors can go to your new site, but do not remove the old site and its content altogether.
With this approach you will be able to receive search visitors to both the new and the old resource. At the same time, you get the opportunity to cover additional topics and keywords, which would be difficult to do within a single resource.
Moving a project to a new address is a difficult and not very pleasant task (since in any case the promotion of the new address has to start practically from scratch), but if the move is necessary, you should get the maximum benefit out of it.
7. Semonitor - a package of programs for promotion and optimization of the site
In the previous chapters we described how to build your site properly and the methods by which it can be promoted. This last chapter is devoted to software tools that allow you to automate a significant part of the work on the site and achieve better results. We will talk about the Semonitor software package, which you can download from our site (www.semonitor.ru).
7.1 Module Definition of Positions
Checking the site's positions in the search engines is practically an everyday task for every optimizer. Positions can be checked manually, but if you have several dozen keywords and 5-7 search engines to monitor, the process becomes very tedious.
The Definition of Positions module does all this work automatically. You can get information about the ratings of your site for all keywords in various search engines, see the dynamics and history of positions (the rise and fall of your site for the given keywords), and view the same information in a clear graphical form.
7.2 External links module
The program queries all available search engines on its own and compiles the most complete, duplicate-free list of external links to your resource. For each link you will see such important parameters as the PageRank of the referring page and the link text (in previous chapters we described the value of these parameters in detail).
In addition to the overall list of external links, you can track their dynamics, that is, see newly appeared and outdated links.
7.3 Module Indexing the site
Shows all the pages indexed by a given search engine. An essential tool when launching a new resource. The PageRank value is shown for each indexed page.
7.4 Log Analyzer Module
All the information about your visitors (which sites they came from, which keywords they used, which country they are in, and much more) is contained in the server log files. The Log Analyzer module presents all this information in convenient and clear reports.
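As a toy illustration of the kind of data such reports are built from, the Python sketch below pulls the visitor's IP address, the referring page and the search phrase out of a single access-log line. The sample line in the common "combined" log format and the referrer parameters checked (q for Google, text for Yandex) are assumptions made for the example; a real log analyzer handles many formats and engines.

import re
from urllib.parse import urlparse, parse_qs

LINE = ('203.0.113.5 - - [10/Oct/2005:13:55:36 +0300] "GET /page.html HTTP/1.1" '
        '200 2326 "http://www.google.com/search?q=site+promotion" "Mozilla/4.0"')

m = re.match(r'(\S+) \S+ \S+ \[(.*?)\] "(.*?)" (\d{3}) \S+ "(.*?)" "(.*?)"', LINE)
ip, timestamp, request, status, referrer, user_agent = m.groups()

# Search engines pass the query in the referrer URL; pick it out if present.
params = parse_qs(urlparse(referrer).query)
keywords = params.get("q") or params.get("text")   # q - Google, text - Yandex

print(ip, status, referrer, keywords)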
7.5 Module Page Rank Analyzer
Gathers a large amount of competitive information about a given list of sites. It automatically determines for each site in the list such parameters as Google PageRank, Yandex TIC, the number of external links, and the presence of the site in the main directories: DMOZ, the Yandex catalog and the Yahoo catalog. An ideal tool for analyzing the level of competition for a particular search query.
7.6 Keyword Selection Module
Selects relevant keywords for your site, reports data on their popularity (the number of queries per month), and estimates the level of competition for a particular phrase.
7.7 HTML analyzer module
Analyzes the HTML code of a page, calculates the weight and density of keywords, and creates a report on the correctness of the site's text optimization. It is used at the stage of creating your own site, as well as for analyzing competitors' sites. It can analyze both local HTML pages and online projects. It supports the specifics of the Russian language and can therefore be used successfully with both English and Russian sites.
7.7 Site Registration Programs AddSite and Add2Board
The AddSite (directory registration) and Add2Board (bulletin boards) programs help raise the link popularity of your project. You can learn more about these programs on our partners' site, www.addsite.ru.
8. Useful resources
There are a great many resources on the Web devoted to site promotion. Here are the main ones, the most valuable in our opinion:
www.searchengines.ru - the largest site on optimization in the Russian-speaking Internet
forum.searchengines.ru - the place where optimizers, both professionals and beginners, communicate
http://www.optimization.ru/subscribe/list.html - the archive of newsletters by the Ashmanov and Partners company, a basic course in search engine optimization. The best newsletter on the topic of SEO; here you will find the latest news, articles, etc.
http://www.optimization.ru - a site devoted to the annual conference "Search Engine Optimization and Site Promotion on the Internet". You can purchase the proceedings of previous conferences, which contain very valuable information.
http://seminar.searchengines.ru/ - a collection of materials on the topic of SEO. The collection is paid, but the price is low and the selection of materials is very good.

Instead of the conclusion - promotion of the site step by step
In this chapter I will tell you how I promote my own sites. It is something like a short step-by-step guide that briefly repeats what was described in the previous sections. In my work I use the Semonitor program, so it will serve as the example.
1. To start working on a site you need some basic knowledge, and this course provides it in sufficient volume. Let me stress that you do not need to be an optimization guru; this basic knowledge is acquired quite quickly. After that we start working, experimenting, bringing sites to the top, and so on. This is where software tools become necessary.
2. We draw up a rough list of keywords and check the level of competition for them. We assess our capabilities and settle on reasonably popular but moderately competitive words. Keyword selection is done with the corresponding module, and a rough competition check is done there as well. For the queries that interest us most, we carry out a detailed analysis of the search engine results in the Page Rank Analyzer module and make the final decisions on keywords.
3. We start writing texts for the site. Some of the texts I write myself; the most important ones I hand over to journalists. In other words, my opinion is that content comes first. With good informational content it is easier to obtain external links and visitors. At this same stage we start using the HTML Analyzer module to create the required keyword density. Each page is optimized for its own phrase.
4. Registering the site in directories. The AddSite program (www.addsite.ru) serves this purpose.
5. Once the first steps have been taken, we wait and check indexing to make sure the site is being picked up properly by the various search engines.
6. At the next step you can already start checking the site's positions for the target keywords. The positions will most likely not be very good at first, but this will provide food for thought.
7. We continue working on increasing link popularity, tracking the process with the External Links module.
8. We analyze the site's traffic with the Log Analyzer and work on increasing it.