
Site promotion step by step

Table of contents:
1. General information about search engines
1.1 The history of the development of search engines
1.2 General principles of search engines
2. Internal ranking factors
2.1 Text Design of Web Pages
2.1.1 The amount of text on the page
2.1.2 Number of keywords per page
2.1.3 Keyword Density
2.1.4 The location of keywords on the page
2.1.5 Stylistic design of the text
2.1.6 TITLE Tag
2.1.7 Keywords in the link text
2.1.8 ALT tags of images
2.1.9 Meta-tag Description
2.1.10 Keywords Meta Tags
2.2 Site structure
2.2.1 The number of pages of the site
2.2.2 Navigation menu
2.2.3 Keyword in the page title
2.2.4 Avoid Subdirectories
2.2.5 One page - one key phrase
2.2.6 Homepage of the site
2.3 Common mistakes
2.3.1 Graphic Header
2.3.2 Graphic Navigation Menu
2.3.3 Navigation through scripts
2.3.4 Session ID
2.3.5 Redirects
2.3.6 Hidden text
2.3.7 Single Pixel Links
3 External ranking factors
3.1 Why external links to a site are taken into account
3.2 Importance of links (citation index)
3.3 Link text
3.4 Relevance of referring pages
3.5 Google PageRank - Theoretical Foundations
3.6 Google PageRank - practical use
3.7 TIC and VIC of Yandex
3.8 Increase link popularity
3.8.1 General Directory Submission
3.8.2 DMOZ Catalog
3.8.3 Yandex Catalog
3.8.4 Link Exchange
3.8.5 Press releases, news feeds, thematic resources
4 Site Indexing
5 Selection of keywords
5.1 Initial Keyword Selection
5.2 High-frequency and low-frequency queries
5.3 Assessing the level of competition for search queries
5.4 Sequential refinement of search queries
6 Various information about search engines
6.1 Google SandBox
6.2 Google LocalRank
6.3 Features of the various search engines
6.4 Tips, Assumptions, Observations
6.5 Creating the right content
6.6 Choosing a domain and hosting
6.7 Changing the site address
7. Semonitor - software package for promotion and site optimization
7.1 Position Definition Module
7.2 External Links Module
7.3 Site Indexing Module
7.4 Log Analyzer Module
7.5 Page Rank Analyzer Module
7.6 Keyword Selection Module
7.7 HTML Analyzer Module
7.8 AddSite and Add2Board Registration Programs
8. Useful Resources
Instead of a conclusion - website promotion step by step

This course is intended for authors and site owners who want to look more closely at search engine optimization and the promotion of their resource. It is aimed mainly at beginners, although an experienced webmaster will, I hope, also learn something new from it. A large number of articles on search engine optimization can be found on the Internet; this tutorial attempts to combine all that information into a single, consistent course.
The information presented in this tutorial can be divided into several parts:
- clear, specific recommendations, a practical guide to action;
- theoretical information which, in our opinion, any specialist in the field of SEO should possess;
- advice, observations, recommendations obtained on the basis of experience, the study of various materials, etc.

1. General information about search engines
1.1 The history of the development of search engines
In the initial period of development of the Internet, the number of its users was small, and the amount of available information was relatively small. In most cases, employees of various universities and laboratories had access to the Internet, and in general the Network was used for scientific purposes. At this time, the task of searching for information on the Internet was far from being as urgent as it is now.
One of the first ways to organize access to information resources of the network was the creation of site directories in which links to resources were grouped according to the subject. The first such project was Yahoo, which opened in April 1994. After the number of sites in the Yahoo directory increased significantly, the ability to search for information in the directory was added. This, of course, was not a search engine in the full sense, since the search area was limited only by the resources present in the directory, and not all the resources of the Internet.
Link catalogs were widely used before, but have practically lost their popularity at the present time. The reason for this is very simple - even modern directories that contain a huge amount of resources provide information only about a very small part of the Internet. The largest directory of the DMOZ network (or the Open Directory Project) contains information on 5 million resources, while the Google search engine database consists of more than 8 billion documents.
The first full-fledged search engine was the WebCrawler project which appeared in 1994.
In 1995, the search engines Lycos and AltaVista appeared. The latter was for many years the leader in the field of information retrieval on the Internet.
In 1997, Sergey Brin and Larry Page created Google as part of a research project at Stanford University. Google is currently the most popular search engine in the world.
On September 23, 1997, the Yandex search engine, the most popular in the Russian-language part of the Internet, was officially announced.
Currently there are three main international search engines - Google, Yahoo and MSN Search - which have their own databases and search algorithms. Most of the remaining search engines (and there are a great many of them) use the results of the three listed in one form or another. For example, AOL search uses the Google database, while AltaVista, Lycos and AllTheWeb use the Yahoo database.
In Russia, Yandex is the main search engine, followed by Rambler, Aport and others.

1.2 General principles of search engines
The search engine consists of the following main components:
Spider - a browser-like program that downloads web pages.
Crawler - a "traveling" spider; a program that automatically follows all the links found on a page.
Indexer - a program that analyzes the web pages downloaded by the spiders.
Database - the repository of downloaded and processed pages.
Search engine results engine - retrieves search results from the database.
Web server - the server that handles the interaction between the user and the rest of the search engine components.
The detailed implementation of different search engines may differ (for example, the Spider + Crawler + Indexer bundle can be implemented as a single program that downloads known web pages, analyzes them and searches for new resources by following links), but the common traits described are inherent to all search engines.
Spider. The spider is a program that downloads web pages in the same way as a user's browser does. The difference is that while the browser displays the information contained on the page (text, graphics, etc.), the spider has no visual components and works directly with the HTML text of the page (choose "view source" in your browser to see the "raw" HTML text).
Crawler. The crawler extracts all links present on the page. Its task is to determine where the spider should go next, based either on these links or on a predefined list of addresses. Following the found links, the crawler searches for new documents that are still unknown to the search engine.
Indexer. The indexer parses the page into its component parts and analyzes them. Various page elements are identified and analyzed, such as text, headings, structural and stylistic features, special service HTML tags, and so on.
Database. The database is the repository of all the data that the search engine downloads and analyzes. Sometimes the database is called a search engine index.
Search Engine Results Engine. The results engine handles page ranking. It decides which pages satisfy the user's query and in what order they should be sorted, according to the search engine's ranking algorithms. This information is the most valuable and interesting for us: it is this component of the search engine that the optimizer interacts with when trying to improve a site's position in the results, so later on we will consider in detail all the factors affecting the ranking of results.
Web server. As a rule, the server hosts an HTML page with an input field in which the user can type the search term he is interested in. The web server is also responsible for delivering the results to the user in the form of an HTML page.
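The component breakdown above can be sketched as a toy pipeline in Python. Everything here is illustrative (the class names, the sample page, and an in-memory dictionary standing in for the database); real search engines are vastly more complex:

```python
from html.parser import HTMLParser
from collections import defaultdict

class LinkExtractor(HTMLParser):
    """Crawler part: collects the href attributes of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

class TextExtractor(HTMLParser):
    """Indexer part: collects the visible text of the page."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def index_page(url, html, database):
    """Spider + indexer: takes a 'downloaded' page (here passed in as a
    string), adds its words to the inverted index, returns found links."""
    text = TextExtractor()
    text.feed(html)
    for word in " ".join(text.chunks).lower().split():
        database[word].add(url)
    links = LinkExtractor()
    links.feed(html)
    return links.links

database = defaultdict(set)   # word -> set of pages: the "index"
page = ('<html><title>DVD players</title><body>'
        'Cheap dvd player <a href="/reviews">reviews</a></body></html>')
found = index_page("/home", page, database)
print(found)                   # ['/reviews'] - where the crawler goes next
print(sorted(database["dvd"])) # ['/home']
```

The results engine would then answer a query by looking the query words up in `database` and ranking the pages found.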

2. Internal ranking factors
All factors affecting the position of a site in search engine results can be divided into external and internal. Internal ranking factors are those under the control of the website owner (text, layout, and so on).
2.1 Text Design Web Pages
2.1.1 The amount of text on the page
Search engines value content-rich sites, so in general you should strive to increase the text content of your site.
The best pages are considered to be those containing 500-3,000 words, or 2-20 KB of text (2,000 to 20,000 characters).
A page consisting of just a few sentences is less likely to get into the top search engines.
In addition, more text on the page increases the visibility of the page in the search engines due to rare or random search phrases, which in some cases can give a good influx of visitors.
2.1.2 Number of keywords per page
Key words (phrases) should appear in the text at least 3-4 times. The upper limit depends on the total page size - the larger the total volume, the more repetitions you can do.
Separately, you should consider the situation with search phrases, that is, phrases of several keywords. The best results are observed if the phrase is found in the text several times exactly as a phrase (i.e., all the words together in the correct order), and in addition, the words from the phrase come across in the text several times one by one. There should also be some difference (unbalance) between the number of occurrences of each of the words that make up the phrase.
Consider an example. Suppose we are optimizing a page for the phrase "dvd player". A good option: the phrase "dvd player" occurs in the text 10 times; in addition, the word "dvd" occurs separately 7 more times and the word "player" 5 more times. All the numbers in this example are arbitrary, but they illustrate the general idea well.
2.1.3 Keyword Density
The density of a keyword on a page is the relative frequency of that word in the text, measured in percent. For example, if a given word occurs 5 times on a page of 100 words, its density is 5%. Too low a density means the search engine will not give proper weight to the word. Too high a density can trigger the search engine's spam filter (that is, the page will be artificially lowered in the search results because of the excessively frequent use of a key phrase).
The optimal keyword density is 5-7%. For phrases of several words, calculate the total density of all the keywords that make up the phrase and make sure it fits within the specified limits.
Practice shows that a keyword density above 7-8%, while not leading to any negative consequences, has little point in most cases.
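Both the repetition advice in 2.1.2 and the density formula in 2.1.3 are easy to compute. Below is a naive sketch (real engines use far more elaborate text analysis, including word stemming):

```python
import re

def keyword_stats(text, phrase):
    """Counts exact phrase occurrences, per-word occurrences, and the
    total density (all phrase-word occurrences / all words, in percent)."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    # exact phrase occurrences: all the words together, in order
    phrase_count = sum(
        1 for i in range(len(words) - n + 1) if words[i:i + n] == phrase_words
    )
    word_counts = {w: words.count(w) for w in phrase_words}
    density = 100.0 * sum(word_counts.values()) / len(words)
    return phrase_count, word_counts, round(density, 1)

text = "dvd player sale: every dvd player ships with free dvd discs"
print(keyword_stats(text, "dvd player"))
# (2, {'dvd': 3, 'player': 2}, 45.5)
```

The density in this tiny sample is of course far above the recommended 5-7%; on a real page of 500+ words the same counts would give a much lower figure.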
2.1.4 The location of keywords on the page
A very short rule - the closer a keyword or phrase is to the beginning of the document, the more weight they gain in the eyes of the search engine.
2.1.5 Stylistic design of the text
Search engines give extra weight to text that is highlighted on the page in one way or another. The following recommendations can be made:
- use keywords in headings (text marked with "H" tags, in particular "h1" and "h2"). Nowadays CSS allows you to override the appearance of text highlighted by these tags, so the "H" tags are less important than they used to be, but you should not neglect them;
- highlight keywords in bold (not the whole text, of course, but making such a selection 2-3 times on a page will not hurt). For this purpose it is recommended to use the "strong" tag instead of the more traditional "B" (bold) tag.
2.1.6 TITLE Tag
One of the most important tags to which search engines attach great importance. Be sure to use keywords in the TITLE tag.
In addition, the link to your site in the search engine will contain text from the TITLE tag, so this is, in some ways, the business card of the page.
It is through this link that the search engine visitor goes to your site, so the TITLE tag should not only contain keywords, but also be informative and attractive.
As a rule, 50-80 characters from the TITLE tag appear in the search engine output, so it is advisable to limit the length of the title accordingly.
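This guideline can be checked mechanically. The 80-character limit below is simply the upper figure from the text, not an official constant of any search engine:

```python
def trim_title(title, limit=80):
    """Trims a page TITLE at a word boundary so that it fits within
    the assumed 80-character display limit of the results page."""
    if len(title) <= limit:
        return title
    return title[:limit].rsplit(" ", 1)[0]

short = trim_title("DVD players: reviews, prices and shop ratings")
long_title = trim_title("DVD players " * 12)  # 144 characters before trimming
print(short)                  # unchanged: it already fits
print(len(long_title) <= 80)  # True
```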
2.1.7 Keywords in the link text
This is also a very simple rule: use keywords in the text of outgoing links from your pages (both links to other internal pages of your site and links to other network resources); this may give you a slight advantage in ranking.
2.1.8 ALT tags of images
Any image on a page has a special "alternative text" attribute, specified in the "ALT" tag. This text is displayed if the image fails to download or if the display of images is blocked in the browser.
Search engines remember the value of the ALT tag when parsing (indexing) a page, but most do not use it when ranking search results.
At the moment it is reliably known that the Google search engine takes into account the text in the ALT tags of images that serve as links to other pages; the remaining ALT tags are ignored. For other search engines there is no exact data, but we can assume something similar.
In general, the advice is this: using keywords in ALT tags is possible and worthwhile, although it is not of fundamental importance.
2.1.9 Meta-tag Description
The Description meta tag is specifically designed to set the page description. It does not affect ranking, but is nevertheless very important. Many search engines (in particular the largest ones, Yandex and Google) display information from this tag in the search results if the tag is present on the page and its content matches the page content and the search query.
It is safe to say that a high place in the search results does not by itself guarantee a large number of visitors. If your competitors' descriptions in the results look more attractive than your site's, search engine visitors will choose them rather than your resource.
Therefore, competent writing of the meta tag Description is of great importance. The description should be brief, but informative and attractive, contain keywords specific to this page.
2.1.10 Keywords Meta Tags
This meta tag was originally intended to list the keywords of a given page. However, at present it is hardly used by search engines.
Nevertheless, it is worth filling in this tag "just in case". When filling it in, adhere to the following rule: add only those keywords that are actually present on the page.

2.2 Site structure
2.2.1 The number of pages of the site
The general rule: the more, the better. Increasing the number of pages on a site improves its visibility in search engines.
In addition, the gradual addition of new information materials to the site is perceived by search engines as the development of the site, which may provide additional benefits when ranking.
Thus, try to place more information on the site: news, press releases, articles, useful tips, and so on.
2.2.2 Navigation menu
As a rule, any site has a navigation menu. Use keywords in the menu links; this will give additional weight to the pages the links lead to.
2.2.3 Keyword in the page title
There is an opinion that the use of keywords in the name of the html-file of the page can positively affect its place in the search results. Naturally, this applies only to English-language queries.
2.2.4 Avoid Subdirectories
If your site has a moderate number of pages (a few dozen), it is better for them to be located in the root directory of the site. Search engines consider such pages more important.
2.2.5 One page - one key phrase
Try to optimize each page for its own key phrase. Sometimes you can choose 2-3 related phrases, but you should not optimize one page for 5-10 phrases at once; most likely there will be no result.
2.2.6 Homepage of the site
Optimize the main page of the site (the one at the domain name, index.html) for your most important phrases. This page has the greatest chance of reaching the top of the search results.
According to my observations, the main page may account for up to 30-40% of a site's total search traffic.

2.3 Common mistakes
2.3.1 Graphic Header
Very often the site design uses a graphic header: a picture the full width of the page containing, as a rule, the company logo, name and some other information.
You should not do that! The top of the page is a very valuable place where you can put the most important keywords; in the case of a graphic image, this place is wasted.
In some cases quite absurd situations arise: the header contains textual information, but to make it more visually attractive it is rendered as a picture (so the displayed text cannot be taken into account by search engines).
It is best to use a combined option: a graphic logo is present at the top of the page but does not occupy its entire width, and the remaining space contains a text header with keywords.
2.3.2 Graphic Navigation Menu
The situation is similar to the previous item - internal links on your site should also contain keywords, this will give an additional advantage when ranking. If the navigation menu for the purpose of greater attractiveness is made in the form of graphics, then search engines will not be able to take into account the text of links.
If you cannot do without a graphic menu, at least remember to provide all its images with correct ALT tags.
2.3.3 Navigation through scripts
In some cases site navigation is implemented through scripts. You should understand that search engines cannot read and execute scripts; a link created through a script will be inaccessible to the search engine, and the search robot will not follow it.
In such cases, be sure to duplicate the links in the usual way, so that site navigation is accessible to all - both for your visitors and search engine robots.
2.3.4 Session ID
On some sites it is customary to use a session identifier: each visitor, on entering the site, receives a unique parameter of the form &session_id=... that is appended to the address of every page visited.
Using the session identifier makes it more convenient to collect statistics on the behavior of visitors to the site and can be used for some other purposes.
However, from the point of view of a search robot, a page with a new address is a new page. On each visit to the site the search robot receives a new session identifier and, visiting the same pages as before, perceives them as new pages of the site.
Strictly speaking, search engines have algorithms for “gluing together” mirrors and pages with the same content, therefore sites using session identifiers will still be indexed. However, the indexing of such sites is difficult and in some cases may not work properly. Therefore, the use of session identifiers on the site is not recommended.
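From the robot's point of view, two addresses differing only in the session parameter name the same page. Crawlers therefore normalize URLs before comparing them; a sketch using the Python standard library (the parameter name session_id is just an example):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_session(url, param="session_id"):
    """Removes the session parameter so that identical pages
    end up with identical, comparable URLs."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

# Two "different" addresses of the same page, as two visits would see them:
a = strip_session("http://example.com/page?id=7&session_id=abc123")
b = strip_session("http://example.com/page?id=7&session_id=zzz999")
print(a)       # http://example.com/page?id=7
print(a == b)  # True: after normalization they are the same page
```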
2.3.5 Redirects
Redirects make it difficult for search engines to analyze the site. Do not use redirects if there are no clear reasons for this.
2.3.6 Hidden text
The last two points relate not to mistakes but to deliberate deception of search engines, yet they should still be mentioned.
Using hidden text (the color of the text coincides with the background color, for example, white on white) allows you to “pump up” the page with the necessary keywords without disturbing the logic and design of the page. Such text is invisible to visitors, but it is perfectly readable by search robots.
The use of such "gray" optimization methods can lead to a ban: the compulsory exclusion of the site from the search engine index (database).
2.3.7 Single Pixel Links
The use of graphic link images 1*1 pixel in size (that is, actually invisible to the visitor) is also perceived by search engines as an attempt at deception and may lead to a site ban.

3 External ranking factors
3.1 Why external links to a site are taken into account
As can be seen from the previous section, almost all factors affecting ranking are under the control of the page author. This makes it impossible for a search engine to distinguish a really high-quality document from a page created specifically for a given search phrase, or even a page generated by a robot that carries no useful information at all.
Therefore, one of the key factors in ranking pages is the analysis of external links to each page being evaluated. This is the only factor that is beyond the control of the author of the site.
It is logical to assume that the more external links point to a site, the more interesting that site is to visitors. If the owners of other sites on the network have put a link to the resource in question, it means they consider it to be of sufficient quality. Following this criterion, the search engine can decide how much weight to give a particular document.
Thus, there are two main factors by which the pages in the search engine database are sorted in the results. These are relevance (that is, how closely the page relates to the subject of the query; the factors described in the previous section) and the number and quality of inbound links. The latter factor is also known as link citation, link popularity, or citation index.
3.2 Importance of links (citation index)
It is easy to see that simply counting the number of external links does not give us enough information to evaluate a site. Obviously, a link from a large, authoritative portal should mean much more than a link from somebody's home page, so the popularity of sites cannot be compared by the number of external links alone: the importance of the links must also be taken into account.
To assess the number and quality of external links to the site, search engines introduce the concept of citation index.
The citation index, or CI, is a general term for numerical indicators that evaluate the popularity of a particular resource, that is, some absolute measure of page importance. Each search engine uses its own algorithms to calculate its own citation index; as a rule, these values are not published anywhere.
In addition to the ordinary citation index, which is an absolute indicator (a specific number), there is the weighted citation index, a relative value showing the popularity of a given page relative to the popularity of other pages on the Internet. The term "weighted citation index" (VIC) is usually used in relation to the Yandex search engine.
A detailed description of citation indices and their calculation algorithms will be presented in the following sections.
3.3 Link text
Great importance in ranking search results is attached to the text of external links to the site.
The link text (also called anchor text) is the text between the <A> and </A> tags, that is, the text you can click on in the browser to go to a new page.
If the link text contains the necessary keywords, the search engine sees this as an additional and very important recommendation, confirming that the site really contains valuable information relevant to the topic of the search query.
3.4 Relevance of referring pages
In addition to the reference text, the general information content of the referring page is also taken into account.
Example: suppose we promote a car sales resource. A link from a car repair site will then mean much more than a similar link from a gardening site. The first link comes from a thematically related resource, so the search engine will value it more.
3.5 Google PageRank - Theoretical Foundations
Google was the first to patent a system of accounting for external links, with an algorithm called PageRank. In this chapter we will describe this algorithm and how it can affect the ranking of search results.
PageRank is calculated for each web page separately and is determined by the PageRank (citation) of the pages linking to it. A kind of vicious circle.
The main task is to find a criterion expressing the importance of the page. In the case of PageRank, the theoretical page traffic was chosen as such a criterion.
Consider a model of a user traveling through the network by clicking links. It is assumed that the user starts browsing from a randomly selected page and then follows links to other resources. There is a probability that the visitor leaves the current site and starts browsing again from a random page (in the PageRank algorithm, the probability of this action is 0.15 at each step). Accordingly, with probability 0.85 he continues the journey by clicking one of the links on the current page (all links being equivalent). Continuing the journey to infinity, he will visit popular pages many times and little-known ones less often.
Thus, the PageRank of a web page is defined as the probability of finding the user on that page; the sum of probabilities over all web pages of the network equals one, since the user is always on some page.
Since it is not always convenient to operate with probabilities, after a series of transformations, you can work with PageRank in the form of specific numbers (like, for example, we are used to seeing it in the Google ToolBar, where each page has a PageRank from 0 to 10).
According to the model described above, we obtain that:
- each page in the network (even if there are no external links to it) initially has a non-zero PageRank (although very small);
- each page that has outgoing links, sends a part of its PageRank to the pages to which it refers. In this case, the transferred PageRank is inversely proportional to the number of links on the page - the more links, the smaller PageRank is transmitted for each;
- PageRank is not fully transmitted, at each step there is a fading (that same probability of 15%, when the user starts browsing from a new, randomly selected page).
Let us now consider how PageRank can influence the ranking of search results (we say "can" because in its pure form PageRank has long since ceased to be part of the Google algorithm; more on that below). The influence of PageRank is very simple: after the search engine has found a number of relevant documents (using text criteria), they can be sorted by PageRank, since it is logical to assume that a document with a larger number of quality external links contains the most valuable information.
Thus, the PageRank algorithm "pushes" upward in the search for those documents that are most popular even without a search engine.
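The random-surfer model above corresponds to the classic textbook formula PR(p) = (1-d)/N + d * sum(PR(q)/outlinks(q)) over all pages q linking to p, with d = 0.85, and can be computed by simple iteration. This is a teaching sketch, not Google's actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to.
    Returns the probability of finding the 'random surfer' on each page."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}                  # uniform starting guess
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}   # the 15% random jump
        for page in pages:
            targets = links.get(page, [])
            if targets:                               # pass PR on in equal shares
                share = damping * pr[page] / len(targets)
                for t in targets:
                    new[t] += share
            else:                                     # dangling page: spread evenly
                for t in pages:
                    new[t] += damping * pr[page] / n
        pr = new
    return pr

# Toy network: A and B both link to C, C links back to A.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(round(sum(ranks.values()), 3))         # 1.0: probabilities sum to one
print(ranks["C"] > ranks["A"] > ranks["B"])  # True
```

Note how the sketch reproduces the three properties listed above: every page gets a small non-zero base value, each page divides the transmitted PageRank among its outgoing links, and only 85% of it is passed on at each step.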
3.6 Google PageRank - practical use
Currently PageRank is not used directly in the Google algorithm. This is understandable: PageRank characterizes only the quantity and quality of external links to a site, but takes no account of the link text or the information content of the referring pages, and it is precisely these factors that carry maximum weight in ranking. It is assumed that Google uses so-called thematic PageRank for ranking (that is, taking into account only links from thematically related pages), but the details of this algorithm are known only to Google's developers.
You can find out the PageRank of any web page using the Google ToolBar, which shows a PageRank value in the range 0 to 10. Note that the ToolBar does not show the exact PageRank value, only the range the page falls into, and the range number (0 to 10) is determined on a logarithmic scale.
Let us explain with an example. Each page has an exact PageRank value, known only to Google. To determine the displayed range, a logarithmic scale is used (an example is shown in the table):
Real PR value -> ToolBar value
1-10 -> 1
10-100 -> 2
100-1,000 -> 3
1,000-10,000 -> 4
All figures are conditional, but they demonstrate clearly that the PageRank ranges shown in the Google ToolBar are not equivalent: raising PageRank from 1 to 2 is easy, for example, while raising it from 6 to 7 is much harder.
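Taking the conditional figures from the table literally, the "real" value maps onto the ToolBar value with a base-10 logarithm. The function below is purely illustrative; the actual scale, base and boundaries used by Google are not public:

```python
import math

def toolbar_pr(real_pr):
    """Maps a hypothetical 'real' PageRank onto the 0-10 ToolBar scale,
    following the conditional table above."""
    if real_pr < 1:
        return 0
    return min(10, int(math.log10(real_pr)) + 1)

print(toolbar_pr(5))     # 1  (falls in the 1-10 range)
print(toolbar_pr(50))    # 2  (10-100)
print(toolbar_pr(5000))  # 4  (1,000-10,000)
```

With such a scale, each further ToolBar point requires roughly ten times the "real" PageRank of the previous one, which is exactly why moving from 6 to 7 is so much harder than from 1 to 2.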
In practice, PageRank is mainly used for two purposes:
1. A quick assessment of a site's level of promotion. PageRank does not give accurate information about the referring pages, but it lets you quickly and easily estimate the level of development of a site. For English-language sites the following gradation can be used: PR 4-5 is typical for most moderately promoted sites; PR 6 indicates a very well-promoted site; PR 7 is practically unattainable for a regular webmaster, though it does happen; PR 8, 9 and 10 are found only on the sites of large companies (Microsoft, Google, etc.). Knowledge of PageRank is useful when exchanging links, to assess the quality of the page offered for exchange, and in similar situations.
2. Evaluation of the level of competition for a search query. Although PageRank is not used directly in the ranking algorithms, it nevertheless allows you to indirectly evaluate the competitiveness of a given query. For example, if the search results contain sites with PageRank 6-7, a site with PageRank 4 has very little chance of reaching the top.
Another important note: the PageRank values shown in the Google ToolBar are recalculated rarely (every few months), so the ToolBar shows somewhat outdated information. The Google search engine itself takes changes in external links into account much faster than those changes appear in the ToolBar.
3.7 TIC and VIC of Yandex
VIC - the weighted citation index - is the analogue of PageRank used by the Yandex search engine. VIC values are not published anywhere and are known only to Yandex. Since you cannot find out the VIC, you should simply remember that Yandex has its own algorithm for estimating the "importance" of pages.
TIC - the thematic citation index - is calculated for the site as a whole and shows the authority of the resource relative to other, thematically close resources (rather than all Internet sites in general). TIC is used for ranking sites in the Yandex catalog and does not affect search results in Yandex itself.
TIC values are shown in the Yandex.Bar. Remember only that TIC is calculated for the site as a whole, not for each specific page.
In practice, TIC can be used for the same purposes as PageRank: assessing a site's level of promotion and the level of competition for a given search query. Owing to Yandex's coverage of the Russian Internet, TIC is well suited for evaluating Russian-language sites.
3.8 Increase link popularity
3.8.1 General Directory Submission
There are a large number of directory sites on the Internet that contain links to other network resources, grouped by topic. The process of adding your site's information to them is called submission.
Such directories can be paid or free, and may or may not require a reciprocal link from your site. Their traffic is very small, so they bring no real influx of visitors. However, search engines take links from such directories into account, which can raise your site in the search results.
Important! Keep in mind that only directories placing a direct link to your site have real value. This point is worth dwelling on. There are two ways to place a link: a direct link is created with the standard HTML construction (<a href="...">...</a>), while links can also be placed through various kinds of scripts, redirects, and so on. Search engines understand only direct links specified explicitly in the HTML code. Therefore, if a catalog does not give a direct link to your site, its value is close to zero.
Do not submit to FFA (free-for-all) directories. Such directories automatically accept links on any subject and are ignored by search engines. The only thing FFA submission leads to is more spam in your email inbox; in fact, that is the main purpose of FFAs.
Be wary of promises by various programs and services to add your resource to hundreds of thousands of search engines, directories and catalogs. There are no more than a few hundred genuinely useful directories on the net, and that is the figure to start from. Professional submission services work with roughly that number of directories. If huge numbers in the hundreds of thousands of resources are promised, then the submission database consists mainly of the FFA archives and other useless resources mentioned above.
Give preference to manual or semi-automatic submit - do not trust fully automated processes. As a rule, submitting under the control of a person gives a much better return than a fully automatic submit.
Whether to add a site to paid directories, or to place a reciprocal backlink from your site, should be decided separately for each directory. In most cases it makes little sense, but there may be exceptions.
Submitting a site to directories does not have a very significant effect, but it somewhat improves the site's visibility in search engines. This option is widely available and does not require much time or money, so do not forget about it when promoting your project.
3.8.2 DMOZ Catalog
The DMOZ directory, or the Open Directory Project, is the largest directory on the Internet. In addition, there are a large number of copies of the main DMOZ site on the net. Thus, by listing your site in the DMOZ directory, you receive not only a valuable link from the directory itself, but also a few dozen links from related resources. The DMOZ directory is therefore of great value to a webmaster.
Getting into the directory is not easy; or rather, it is a matter of luck. The site may appear in the directory a few minutes after being added, or it may wait its turn for many months.
If your site does not appear in the catalog for a long time, but you are sure that everything was done correctly and the site is suitable for the catalog in terms of its parameters, you can try to write to the editor of your category with a question about your application (the DMOZ website provides such an opportunity). Of course, there are no guarantees, but this can help.
Adding a site to the DMOZ directory is free, including for commercial sites.
3.8.3 Yandex Catalog
Presence in the Yandex catalog gives you a valuable thematic link to your site, which can improve your site's position in the search engine. In addition, the Yandex catalog itself can bring some traffic to your site.
There are paid and free options for adding information to the Yandex catalog. Naturally, with the free option neither the timing nor inclusion itself is guaranteed.
In conclusion, a couple more recommendations for submitting to such important directories as DMOZ and Yandex. First of all, carefully read the requirements for sites, descriptions, etc., so as not to violate the rules when submitting an application (this may lead to the fact that your application will not be considered).
And the second - the presence in these directories is a desirable requirement, but not mandatory. If you are unable to get into these directories, do not despair - to achieve high positions in the search results is possible without these directories, most sites do just that.
3.8.4 Link Exchange
Link exchange means that you place links to other sites on a dedicated page and in return receive similar links from them. In general, search engines do not welcome link exchange, since in most cases it is intended to manipulate search results and brings nothing useful to Internet users. However, it is an effective way to increase link popularity if you follow a few simple rules.
- exchange links with thematically related sites. Sharing with non-thematic sites is ineffective;
- before the exchange, make sure your link will be placed on a "good" page. That is, the page should have some PageRank (preferably 3-4 or higher), be available for indexing by search engines, the link should be direct, the total number of links on the page should not exceed 50, etc.;
- do not create link directories on your site. The idea of such a directory looks attractive: it becomes possible to exchange with a large number of sites on any subject, since for any site there is a corresponding category in the directory. However, in our case quality matters more than quantity, and there are a number of pitfalls here. No webmaster will give you a quality link if in return he gets a worthless link from your directory (the PageRank of pages in such directories, as a rule, leaves much to be desired). In addition, search engines view such directories extremely negatively, and there have been cases of sites being banned for using them;
- set aside a separate page on the site for link exchange. It should have some PageRank, be indexed by search engines, etc. Do not place more than 50 links on one page (otherwise some of them may not be taken into account by search engines). This will also make it easier to find exchange partners;
- search engines try to track reciprocal links, so if possible, host the reciprocal links on a different domain/site from the one being promoted. For example, you promote one resource and place the reciprocal links on another resource – this is the optimal arrangement;
- exercise some caution when exchanging. Quite often you will find that not entirely honest webmasters remove your links from their resources, so check from time to time that your links are still in place.
3.8.5 Press releases, news feeds, thematic resources
This section relates more to site marketing than to pure SEO. There are a large number of information resources and news feeds that publish press releases and news on various topics. Such sites can not only bring visitors to you directly, but also increase the link popularity we need so much.
If you find it difficult to write a press release or news item yourself, engage journalists – they will help you find or create a newsworthy story.
Look for thematically related resources. There are a huge number of projects on the Internet that are not your competitors but are devoted to the same subject as your site. Try to find an approach to the owners of these resources; it is quite possible that they will be glad to publish information about your project.
And finally – this applies to all methods of obtaining external links – try to vary the link text somewhat. If all external links to your site have identical link text, search engines may interpret this as an attempt at spam.

4 Site Indexing
Before a site appears in search results, it must be indexed by a search engine. Indexing means that the search robot visited your site, analyzed it and entered the information into the search engine database.
If a page is listed in a search engine index, then it can be displayed in the search results. If the page in the index is missing, the search engine does not know anything about it, and, therefore, can not use the information from this page.
Most medium-sized sites (that is, containing several tens or hundreds of pages) usually do not experience any problems with proper indexing by search engines. However, there are a number of points that should be taken into account when working on the site.
The search engine can learn about the newly created site in two ways:
- manual addition of the website address through the appropriate search engine form. In this case, you yourself inform the search engine about the new site and its address is queued for indexing. You should add only the main page of the site, the rest will be found by the search robot on the links;
- let the search robot find your site on its own. If your new resource has at least one external link from resources already indexed by the search engine, the search robot will visit and index your site within a short time. In most cases this option is recommended: get a few external links to the site and simply wait for the robot to arrive. Adding the site manually may even lengthen the wait for the robot.
The time required to index a site usually ranges from 2-3 days to 2 weeks, depending on the search engine. Google indexes sites fastest of all.
Try to make the site search engine friendly. To do this, consider the following factors:
- Try to ensure that any pages of your site can be accessed via links from the main page in no more than 3 transitions. If the structure of the site does not allow this, then make a so-called sitemap that will allow you to follow the rule;
- Do not repeat common mistakes. Session IDs make indexing difficult. If you use scripted navigation, be sure to duplicate the links in the usual way - search engines cannot read scripts (for more information about these and other errors, see Chapter 2.3);
- Remember that search engines index no more than 100-200 kb of text per page. For larger pages, only the top of the page will be indexed (the first 100-200 kb.). The rule follows from this: do not use pages larger than 100 kb if you want them to be fully indexed.
You can control the behavior of search robots using a robots.txt file, in which you can explicitly enable or disable certain pages for indexing. There is also a special tag “NOINDEX” that allows you to close parts of the page for indexing, but this tag is supported only by Russian search engines.
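For illustration, a robots.txt might look like the fragment below (the paths are hypothetical; `Disallow` rules work by prefix matching, and a section addressed to a specific robot overrides the general `*` section for that robot):

```
# Applies to all robots
User-agent: *
Disallow: /cgi-bin/
Disallow: /admin/

# A robot-specific section replaces the general one for that robot
User-agent: Yandex
Disallow: /cgi-bin/
Disallow: /admin/
Disallow: /drafts/
```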
The databases of search engines are constantly updated, the records in the database may be subject to change, disappear and reappear, so the number of indexed pages of your site may periodically change.
One of the most common reasons for a page to disappear from an index is server inaccessibility, that is, the search robot, while attempting to index a site, could not access it. After the server is restored, the site should appear in the index again after some time.
It should also be noted that the more external links your site has, the faster it will be re-indexed.
You can track the indexing process by analyzing the server log files, in which all visits by search robots are recorded. In the appropriate section we will describe in detail the programs that allow you to do this.
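As a rough illustration of the idea, the sketch below scans "combined"-format access-log lines for well-known robot user-agents (the log format and the bot signature list are assumptions; real logs and bot names vary):

```python
import re

# Substrings that identify some well-known search robots (illustrative list)
BOT_SIGNATURES = ["Googlebot", "YandexBot", "msnbot"]

# In the common "combined" log format, the user-agent is the last quoted field
UA_RE = re.compile(r'"([^"]*)"\s*$')

def count_bot_visits(log_lines):
    """Return a dict: bot signature -> number of requests seen."""
    counts = {sig: 0 for sig in BOT_SIGNATURES}
    for line in log_lines:
        match = UA_RE.search(line)
        if not match:
            continue
        agent = match.group(1)
        for sig in BOT_SIGNATURES:
            if sig in agent:
                counts[sig] += 1
    return counts

log = [
    '1.2.3.4 - - [01/Jan/2005:10:00:00] "GET / HTTP/1.0" 200 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '5.6.7.8 - - [01/Jan/2005:10:05:00] "GET /page.html HTTP/1.0" 200 1024 "-" '
    '"Mozilla/4.0 (compatible; MSIE 6.0)"',
]
print(count_bot_visits(log))  # {'Googlebot': 1, 'YandexBot': 0, 'msnbot': 0}
```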
5 Selection of keywords
5.1 Initial Keyword Selection
The selection of keywords is the first step with which the construction of the site begins. At the time of the preparation of texts on the site a set of keywords should already be known.
To determine keywords, you should first of all use the services offered by the search engines themselves, both for English-language and for Russian-language sites.
When using these services, you need to remember that their data can be very different from the real picture. When using Yandex Direct, it should also be remembered that this service does not show the expected number of requests, but the expected number of ad impressions for a given phrase. Since search engine visitors often view more than one page, the actual number of requests is necessarily less than the number of ad impressions for the same request.
The Google search engine does not provide information about the frequency of requests.
Once the list of keywords is roughly defined, you can analyze your competitors' sites to find out which key phrases they target – you may learn something new.
5.2 High-Frequency and Low-Frequency Queries
When optimizing a site, two strategies can be distinguished: optimization for a small number of highly popular keywords, or for a large number of less popular keywords. In practice, both are usually combined.
The drawback of high-frequency queries is usually the high level of competition for them. A young site cannot always climb to the top for such queries.
For low-frequency queries, it is often sufficient to mention the desired phrase on the page, or minimal textual optimization. Under certain conditions, low-frequency queries can give very good search traffic.
The purpose of most commercial sites is to sell some product or service, or to make money from their visitors in some other way. This should be taken into account during optimization and keyword selection. You should strive to attract targeted visitors to the site (that is, visitors ready to buy the product or service offered) rather than simply a large number of visitors.
Example. The query "monitor" is much more popular and many times more competitive than the query "monitor samsung 710N" (the exact model name). However, the second visitor is much more valuable to a monitor seller, and attracting him is much easier, since the level of competition for the second query is low. This is another possible difference between high-frequency and low-frequency queries that should be taken into account.
5.3 Assessing the level of competition for search queries
After the set of keywords is approximately known, it is necessary to determine the main core of words under which the optimization will be carried out.
Low-frequency queries are, for obvious reasons, set aside immediately (for the time being). The previous section described the advantages of low-frequency queries, but being low-frequency, they require no special optimization, so we do not consider them in this section.
For very popular phrases the level of competition is usually very high, so you need to assess your site's capabilities realistically. To assess the level of competition, calculate a number of indicators for the top ten sites in the search results:
- the average PageRank of the pages in the results;
- the average TIC of the sites whose pages made it into the results;
- the average number of external links to the sites in the results, according to various search engines;
Extra options:
- the number of pages on the Internet containing a given search term (in other words, the number of search results);
- the number of pages on the Internet containing the exact match of the given phrase (as when searching in quotes).
These additional parameters will help to indirectly assess the complexity of the site output to the top for a given phrase.
In addition to the described parameters, you can also check how many sites from the issue are present in the main directories, such as DMOZ, Yahoo and Yandex catalogs.
Analysis of all the above parameters and comparing them with the parameters of your own site will allow you to quite clearly predict the prospects for bringing your site to the top for the specified phrase.
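The comparison described above boils down to simple averaging. A toy sketch, where the per-site figures are invented for illustration (real values would come from the engines' own tools):

```python
def competition_profile(top10):
    """Average PageRank, TIC and backlink counts over the top results."""
    n = len(top10)
    return {
        "avg_pagerank": sum(s["pagerank"] for s in top10) / n,
        "avg_tic": sum(s["tic"] for s in top10) / n,
        "avg_backlinks": sum(s["backlinks"] for s in top10) / n,
    }

# Invented figures for the first three results of some query
top10 = [
    {"pagerank": 6, "tic": 1200, "backlinks": 3400},
    {"pagerank": 5, "tic": 800,  "backlinks": 2100},
    {"pagerank": 4, "tic": 500,  "backlinks": 900},
]
profile = competition_profile(top10)
print(profile["avg_pagerank"])  # 5.0
```

Comparing this profile with your own site's numbers gives a rough idea of whether the phrase is within reach.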
Assessing the level of competition for all the selected phrases, you can choose a number of fairly popular phrases with an acceptable level of competition, which will be the main stake during promotion and optimization.
5.4 Sequential refinement of search queries
As mentioned above, search engine services often provide very inaccurate information. Therefore, it is quite rare to determine the ideal set of keywords for your site from the first time.
After your site has been established and certain steps have been taken to promote it, additional statistics on your keywords are in your hands: you know the ranking of your site in search engines for a particular phrase and you also know the number of visits to your site for this phrase .
With this information you can quite clearly identify successful and unsuccessful phrases. Often you do not even need to wait until the site reaches the top for the evaluated phrases in all search engines – one or two are enough.
Example. Suppose your site ranked first in the search engine Rambler for this phrase. At the same time, neither in Yandex nor in Google is it yet in issue for this phrase. However, knowing the percentage of visits to your site from various search engines (for example, Yandex - 70%, Google - 20%, Rambler - 10%), you can already predict the approximate traffic for this phrase and decide whether it is suitable for your site or not.
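The estimate in this example is plain arithmetic. A sketch using the share figures assumed above (the 30 visits/month number is invented for illustration):

```python
def estimate_total_traffic(observed_visits, observed_share, shares):
    """Extrapolate per-engine traffic for a phrase from one engine's visits.

    observed_visits -- visits already coming from one engine
    observed_share  -- that engine's share of your search traffic (0..1)
    shares          -- traffic shares of all engines you expect to rank in
    """
    per_share_unit = observed_visits / observed_share
    return {engine: round(per_share_unit * share)
            for engine, share in shares.items()}

# Rambler brings 30 visits/month for the phrase and accounts for 10% of traffic
shares = {"Yandex": 0.7, "Google": 0.2, "Rambler": 0.1}
forecast = estimate_total_traffic(30, 0.1, shares)
print(forecast)  # {'Yandex': 210, 'Google': 60, 'Rambler': 30}
```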
In addition to highlighting unsuccessful phrases, you can find new successful variations. For example, to see that a certain phrase, under which no promotion was done, brings good traffic, even though your site is on the second or third page for this phrase.
Thus, in your hands is a new, refined set of keywords. After that, you should proceed to the restructuring of the site - changing the texts for more successful phrases, creating new pages for new phrases found, etc.
Thus, after a while you will be able to find the best set of keywords for your site and significantly increase search traffic.
A few more tips. According to statistics, the main page of a site accounts for up to 30%-50% of all search traffic. It is the most visible page in search engines and has the most external links. Therefore, the main page of the site should be optimized for the most popular and competitive queries. Each page of the site should be optimized for 1-2 main phrases (and possibly for a number of low-frequency queries). This will increase the chances of reaching the top of the search engines for the given phrases.
6 Various information about search engines
6.1 Google SandBox
At the beginning of 2004, a mysterious new concept emerged among optimizers: Google SandBox, or the Google sandbox. This name was given to a new Google spam filter aimed at excluding young, newly created sites from the results.
The SandBox filter manifests itself in the fact that newly created sites are absent from the search results for practically all phrases. This happens despite the presence of high-quality, unique informational content and properly conducted promotion (without the use of spam methods).
At the moment, SandBox applies only to the English segment, sites in Russian and other languages ​​are not exposed to this filter. However, it is likely that this filter can expand its influence.
It can be assumed that the goal of the SandBox filter is to exclude spam sites from the results – indeed, no search spammer will wait months for results to appear. However, a huge number of normal, newly created sites suffer along with them.
There is still no exact information about what exactly the SandBox filter is. There are a number of assumptions obtained on the basis of experience, which we present below:
- SandBox is a filter on young sites. A newly created site is placed in the sandbox and stays there for an indefinite time until the search engine transfers it to the "ordinary" category;
- SandBox is a filter on new links to newly created sites. Note the fundamental difference from the previous assumption: the filter applies not to the age of the site but to the age of links to the site. In other words, Google has no complaints about the site itself but refuses to take external links to it into account if less than X months have passed since they appeared. Since external links are one of the main ranking factors, ignoring external links is tantamount to the site's absence from the search results. It is hard to say which of the two assumptions is closer to the truth; quite possibly both are true;
- a site can remain in the sandbox from 3 months to a year or more. There is also the observation that sites leave the sandbox in batches. That is, the sandbox term is determined not individually for each site but for large groups of sites (sites created within a certain time range fall into one group). The filter is then lifted for the whole group at once; thus, sites from the same group will have spent different amounts of time in the sandbox.
Typical signs that your site is in a sandbox:
- your site is normally indexed by Google, regularly visited by a search robot;
- your site has PageRank, the search engine knows and correctly displays external links to your site;
- a search for the site's address gives correct results, with the correct title, snippet (resource description), etc.;
- your site ranks normally for rare and unique phrases contained in the text of its pages;
- your site is not visible in the first thousand results for any other requests, even for those for which it was originally created. Sometimes there are exceptions and the site for some requests appears on 500-600 positions, which, of course, does not change the essence.
There are practically no ways around the filter. There are a number of suggestions as to how it might be done, but they are nothing more than suggestions, and not very practical for an ordinary webmaster besides. The main method is to keep working on the site and wait for the filter to end.
After the filter is lifted, there is a sharp rise in rankings of 400-500 or more positions.
6.2 Google LocalRank
On February 25, 2003, Google patented a new page ranking algorithm called LocalRank. It is based on the idea of ranking pages not by their overall link citation, but by citation within a group of pages thematically related to the query.
The LocalRank algorithm is not used in practice (at least not in the form described in the patent); however, the patent contains a number of interesting ideas that we believe every optimizer should be familiar with. Taking the topic of referring pages into account is used by almost all search engines. Although this apparently happens via somewhat different algorithms, studying the patent makes it possible to understand the general ideas of how it can be implemented.
When reading this chapter, keep in mind that it contains theoretical information, and not a practical guide to action.
The main idea of ​​the LocalRank algorithm is expressed by the following three points:
1. Using some algorithm, a certain number of documents relevant to the search query is selected (denote this number N). These documents are initially sorted by some criterion (this may be PageRank, a relevance score, some other criterion, or a combination of them). Denote the numerical expression of this criterion as OldScore.
2. Each of the N pages goes through a new ranking procedure, as a result of which each page gets some new rank. We denote it as LocalScore.
3. At this step, the values ​​of OldScore and LocalScore are multiplied, resulting in a new value of NewScore, according to which the final ranking of pages takes place.
The key to this algorithm is the new ranking procedure, as a result of which each page is assigned a new rank of LocalScore. We describe this procedure in more detail.
0. Using a ranking algorithm, N pages that match the search query are selected. The new ranking algorithm will only work with these N pages. Each page in this group has some rank OldScore.
1. When calculating LocalScore for a given page, all pages in N that have external links to this page are selected. We denote this set of pages by M. The set M does not include pages from the same host (filtering is done by IP address) or pages that are mirrors of the given page.
2. The set M is divided into subsets Li. These subsets group together pages with the following features:
- belonging to one (or similar) hosts: pages whose IP addresses share the same first three octets are considered to belong to one group;
- pages that have the same or similar content (mirrors);
- pages of one site (domain).
3. Each page in each set Li has a certain rank (OldScore). From each set, the one page with the highest OldScore is selected and the rest are excluded from consideration. We thus obtain a set K of pages linking to the given page.
4. The pages in the set K are sorted by the OldScore parameter; then only the first k pages (k is some given number) remain in K, and the rest are excluded from consideration.
5. At this step LocalScore is calculated by summing the OldScore values of the remaining k pages. This can be expressed by the following formula:
LocalScore(i) = Σ j=1..k OldScore(j)^m
Here m is some given parameter, which can vary from 1 to 3 (unfortunately, the information contained in the patent for the described algorithm does not give a detailed description of this parameter).
Once the LocalScore calculation is finished for every page in the set N, the NewScore values are calculated and the pages are re-sorted by the new criterion. The following formula is used to calculate NewScore:
NewScore(i)= (a+LocalScore(i)/MaxLS)*(b+OldScore(i)/MaxOS)
i – the page for which the new rank value is calculated.
a and b – some numbers (the patent gives no further information about these parameters).
MaxLS – the maximum of the calculated LocalScore values.
MaxOS – the maximum of the OldScore values.
Now let us step back from the mathematics and restate all of the above in plain language.
At the first stage, a certain number of pages matching the query are selected. This is done using algorithms that do not take the topic of links into account (for example, relevance and overall link popularity).
Once the group of pages has been determined, the local link popularity of each page is calculated. All the pages are in one way or another related to the topic of the search query and therefore have somewhat similar subject matter. By analyzing the links to one another within the selected group of pages (ignoring all other pages on the Internet), we obtain the local (thematic) link popularity.
After this step we have OldScore values (the page's rating based on relevance, overall link popularity and other factors) and LocalScore values (the page's rating among thematically related pages). The final rating and ranking of the pages is based on a combination of these two factors.
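For readers who prefer code, here is a toy sketch of the recombination step from the patent. The OldScore values and the constants a, b, k and m are invented for illustration; the patent does not fix them:

```python
def local_score(old_scores_of_linking_pages, k=3, m=2):
    """LocalScore: sum of the k largest OldScore values, each raised to m."""
    top_k = sorted(old_scores_of_linking_pages, reverse=True)[:k]
    return sum(score ** m for score in top_k)

def new_scores(pages, a=0.5, b=0.5):
    """NewScore(i) = (a + LocalScore(i)/MaxLS) * (b + OldScore(i)/MaxOS)."""
    max_ls = max(p["local"] for p in pages)
    max_os = max(p["old"] for p in pages)
    return {
        p["id"]: (a + p["local"] / max_ls) * (b + p["old"] / max_os)
        for p in pages
    }

pages = [
    {"id": "A", "old": 10, "local": local_score([4, 3, 2, 1])},  # 16+9+4 = 29
    {"id": "B", "old": 8,  "local": local_score([5, 5])},        # 25+25 = 50
]
ranks = new_scores(pages)
# B's stronger thematic neighbourhood outweighs A's higher OldScore
print(sorted(ranks, key=ranks.get, reverse=True))  # ['B', 'A']
```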
6.3 Features of the various search engines
All the ideas given above on text optimization and increasing link popularity apply to all search engines equally. The more detailed coverage of Google is explained by the greater amount of freely available information about that search engine; however, the ideas expressed with respect to Google largely apply to other search engines as well.
In general, I am not a proponent of searching for "secret knowledge" about how the algorithms of various search engines work in detail. They all obey general rules to one degree or another, and competent work on a site (without regard to any particular peculiarities) leads to good positions in almost all search engines.
Nevertheless, here are some features of various search engines:
Google – very fast indexing; great importance is attached to external links. The Google database is used by a very large number of other search engines and portals.
MSN – a greater emphasis than other search engines on the informational content of the site.
Yandex – the largest Russian search engine. It handles (according to various estimates) from 60% to 80% of all Russian-language search queries. It pays special attention to thematic links (non-thematic external links also have an effect, but to a lesser degree than in other search engines). Indexing is slower than Google's, but still within acceptable time frames. It lowers the ranking of, or excludes from the index, sites engaged in non-thematic link exchange (containing directories of non-thematic links created solely to raise the site's rating), as well as sites participating in automatic link exchange systems. During database updates, which last several days, Yandex's results change constantly; in such periods you should refrain from any work on the site and wait for the search engine's results to stabilize.
Rambler – the most mysterious search engine. It holds second place (according to other data, third after Google) in popularity among Russian users. According to available observations, it lowers the ranking of sites that engage in aggressive promotion (a rapid increase in the number of external links). It values the presence of search terms in the plain text of a page (without emphasis by various stylistic tags). Another search engine gaining popularity uses the results of Google after some additional processing; optimizing for it comes down to optimizing for Google.
6.4 Tips, assumptions, observations
This chapter presents information that has emerged from the analysis of various articles, discussions among optimizers, practical observations, and so on. This information is not exact or reliable – it is merely assumptions and ideas, but interesting ones. Treat the data in this section not as precise guidance, but as food for thought.
- outbound links. Link to resources that are authoritative in your field, using the right keywords. Search engines value links to other resources on the same topic;
- outbound links. Do not link to FFA sites and other sites excluded from a search engine's index. This can lower the rating of your own site;
- outbound links. A page should not contain more than 50-100 outbound links. This does not lower the page's rating, but links beyond this number will not be taken into account by the search engine;
- outbound site-wide links, that is, links that appear on every page of the site. It is believed that search engines view such links negatively and do not take them into account when ranking. There is also another opinion, that this applies only to large sites with thousands of pages;
- ideal keyword density. This question comes up very often. The answer is that an ideal keyword density does not exist; rather, it differs for each query, i.e. it is calculated by the search engine dynamically depending on the search term. Our advice is to analyze the top sites in the search engine's results, which will let you assess the situation approximately;
- site age. Search engines prefer old sites as more stable;
- site updates. Search engines prefer developing sites, that is, those to which new information and new pages are periodically added;
- domain zone (applies to Western search engines). Preference is given to sites located in the .edu, .mil, .gov and similar zones. Only the corresponding organizations can register such domains, so such sites enjoy more trust;
- search engines track what percentage of visitors return to the search after visiting a given site from the results. A high percentage of returns means non-thematic content, and such a page is lowered in the search;
- search engines track how often a given link is selected in the search results. If a link is rarely selected, the page is of no interest and is lowered in the rating;
- use synonyms and related forms of keywords; search engines will appreciate this;
- too rapid a growth in the number of external links is perceived by search engines as artificial promotion and leads to a lower rating. This is a very controversial claim, above all because such a method could be used to lower competitors' ratings;
- Google does not count external links if they are located on the same (or similar) hosts, that is, on pages whose IP addresses belong to the same range. This opinion most likely stems from Google having expressed the idea in its patents. However, Google employees state that no IP-address restrictions are imposed on external links, and there is no reason not to believe them;
- search engines check information about the domain owner. Accordingly, links from sites belonging to the same owner carry less weight than ordinary links. This information is presented in a patent;
- the period for which a domain is registered. The longer the period, the more preference is given to the site.
6.5 Creating the right content
Content (the informational substance of a site) plays a crucial role in site promotion. There are many reasons for this, which we cover in this chapter, along with advice on how to fill a site with information properly.
- content uniqueness. Search engines value new information that has not been published anywhere before, so base your site on your own texts. A site built on other people's material has far lower chances of reaching the top of the search results. As a rule, the original source always ranks higher;
- when building a site, remember that it is created first and foremost for visitors, not for search engines. Bringing a visitor to the site is only the first and not the hardest step. Keeping the visitor on the site and turning them into a customer is the genuinely difficult task, and it can only be achieved with well-crafted content that is interesting to a human reader;
- try to update the information on the site regularly and add new pages. Search engines value developing sites. Besides, more text means more visitors. Write articles on your site's topic, publish visitor reviews, and create a forum for discussing your project (the latter only if traffic is sufficient to sustain an active forum). Interesting content is the key to attracting interested visitors;
- a site created for people rather than search engines has better chances of getting into important directories such as DMOZ, Yandex and others;
- an interesting topical site has far better chances of receiving links, mentions, reviews and so on from other sites on the same topic. Such reviews can bring a decent stream of visitors by themselves, and inbound links from topical resources will be duly valued by search engines.
One piece of advice in closing. As the saying goes, shoes should be made by a shoemaker, and texts should be written by a journalist or technical writer. If you can produce engaging material for your site yourself, excellent. Most of us, however, have no particular gift for writing compelling texts, in which case it is better to entrust this part of the work to professionals. It is the more expensive option, but it pays off in the long run.
6.6 Choosing a domain and hosting
Nowadays anyone can create a page on the Internet at no cost. There are companies offering free hosting that will publish your page in exchange for the right to show their advertising on it. Many Internet providers will also give you space on their server if you are their customer. However, all these options have very significant drawbacks, so when creating a commercial project you should treat these matters with greater responsibility.
First of all, you should buy your own domain. This gives you the following advantages:
- a project without its own domain is perceived as a fly-by-night site. Indeed, why should we trust a resource whose owners are not prepared to spend even a token amount to establish a minimal image? Publishing free materials on such resources is acceptable, but an attempt to build a commercial project without its own domain is almost always doomed to failure;
- your own domain gives you freedom in choosing a host. If your current company no longer suits you, you can move your site to a more convenient or faster platform at any time.
When choosing a domain, keep the following points in mind:
- try to make the domain name memorable, with unambiguous spelling and pronunciation;
- for promoting international English-language projects, domains with the .com extension are the most suitable. You can also use domains from the .net, .org, .biz and similar zones, but this option is less preferable;
- for promoting national projects, always take a domain in the corresponding national zone (.ru for Russian-language projects, .de for German ones, and so on);
- for bilingual (or multilingual) sites, allocate a separate domain to each language. National search engines will appreciate this approach far more than subsections in different languages on the main site.
A domain costs about $10-20 per year, depending on the registrar and the zone.
When choosing a host, consider the following factors:
- access speed;
- server availability (uptime);
- the cost of traffic per gigabyte and the amount of prepaid traffic;
- preferably, the servers should be located in the same geographic region as most of your visitors.
Hosting for small projects costs around $5-10 per month.
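Uptime figures are easy to misjudge: the difference between "99%" and "99.9%" sounds small but is an order of magnitude of downtime. A minimal sketch of the arithmetic (the function name is ours, not part of any tool mentioned here):

```python
# Convert an advertised uptime percentage into the downtime it still allows.
# E.g. a "99.9% uptime" guarantee permits roughly 43 minutes of downtime
# in a 30-day month.

def downtime_minutes_per_month(uptime_percent: float, days: int = 30) -> float:
    """Minutes of allowed downtime per month for a given uptime percentage."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_percent / 100)

for uptime in (99.0, 99.9, 99.99):
    print(f"{uptime}% uptime -> {downtime_minutes_per_month(uptime):.1f} min/month of downtime")
```

So when comparing hosting offers, ask for the uptime guarantee as a number and translate it into minutes per month before deciding whether it is acceptable.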
When choosing a domain and a host, avoid "free" offers. Hosting companies often offer their clients free domains. As a rule, such domains are registered not to you but to the company; that is, the actual owner of the domain is your hosting provider. As a result, you will be unable to change the host for your project, or you will be forced to buy back your own, already promoted domain. In most cases you should also follow the rule of not registering your domains through a hosting company, since this can complicate a possible move of the site to another host (even though you remain the full owner of your domain).
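You can verify who a domain is actually registered to by looking at its WHOIS record (any WHOIS client will produce one, e.g. `whois example.com` on most Unix systems). A minimal sketch of checking the registrant fields; the sample record and company name below are invented, and real field names vary between registries:

```python
# Given raw WHOIS output, pull out the "Registrant ..." lines to check
# whether the domain is registered to you or to your hosting company.

def registrant_lines(whois_text: str) -> list[str]:
    """Return the lines of a WHOIS record that describe the registrant."""
    return [line.strip() for line in whois_text.splitlines()
            if line.strip().lower().startswith("registrant")]

# A made-up WHOIS excerpt: here the hosting company, not the client,
# owns the domain.
sample = """\
Domain Name: EXAMPLE.COM
Registrar: Example Registrar, Inc.
Registrant Name: Some Hosting Ltd.
Registrant Organization: Some Hosting Ltd.
"""
print(registrant_lines(sample))
```

If the registrant lines name your hosting provider instead of you, you have exactly the problem described above.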
6.7 Changing the site address
Sometimes, for various reasons, a project has to change its address. Some resources that started on free hosting with a free address grow into full-fledged commercial projects and need to move to their own domain. In other cases a better name is found for the project. In any such scenario, the question arises of how to move the site to the new address correctly.
Our advice here is this: create a new site at the new address with new, unique content. On the old site, place prominent links to the new resource so that visitors can move to your new site, but do not remove the old site and its content altogether.
With this approach you can receive search visitors on both the new and the old resource, and you gain the opportunity to cover additional topics and keywords, which would be hard to do within a single resource.
Moving a project to a new address is a difficult and rather unpleasant task (since in any case the promotion of the new address has to start practically from scratch), but if the move is necessary, you should extract the maximum benefit from it.
7. Semonitor - software package for promotion and site optimization
In the previous chapters we described how to build your site properly and what methods can be used to promote it. This final chapter is devoted to software tools that automate a significant part of the work on a site and help achieve better results. We will talk about the Semonitor package, which you can download from our site.
7.1 The Position Checking module
Checking a site's positions in the search engines is practically a daily task for every SEO. You can check positions by hand, but if you have several dozen keywords and 5-7 search engines to monitor, the process becomes very tedious.
The Position Checking module does all this work automatically. You can get information on your site's rankings for all keywords in various search engines, see the dynamics and history of its positions (the rise and fall of your site for given keywords), and view the same information in a clear graphical form.
7.2 The External Links module
The program itself queries all available search engines and compiles the most complete, duplicate-free list of inbound links to your resource. For each link you will see such important parameters as the PageRank of the referring page and the link text (the value of these parameters was discussed in detail in the previous chapters).
In addition to the overall list of inbound links, you can track their dynamics, i.e. see newly appeared and stale links.
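Tracking link dynamics as just described boils down to a set difference between two crawls: links present now but not before are new, links present before but not now have gone stale. A minimal sketch with hypothetical URLs:

```python
# Compare two snapshots of inbound links to find new and stale ones.
previous_links = {
    "http://blog.example/review",
    "http://catalog.example/listing",
    "http://forum.example/thread-12",
}
current_links = {
    "http://catalog.example/listing",
    "http://forum.example/thread-12",
    "http://news.example/press-release",
}

new_links = current_links - previous_links      # appeared since last check
stale_links = previous_links - current_links    # disappeared since last check
print("new:", sorted(new_links))
print("stale:", sorted(stale_links))
```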
7.3 The Site Indexing module
Shows all pages indexed by a given search engine, an essential tool when launching a new resource. The PageRank value is shown for each indexed page.
7.4 The Log Analyzer module
All the information about your visitors (which sites they came from, which keywords they used, which country they are in, and much more) is contained in the server log files. The Log Analyzer module presents all this information in convenient, clear reports.
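A minimal sketch of what such an analyzer does with referrers: pull the referrer field out of an Apache "combined"-format log line and, when it is a search-engine URL, decode the search phrase. The log line below is invented, and the regular expression is a simplification of the real format:

```python
# Extract the search keywords a visitor used, from the referrer field
# of an Apache combined-format log line.
import re
from urllib.parse import urlparse, parse_qs

LOG_RE = re.compile(r'"(?P<request>[^"]*)" \d+ \S+ "(?P<referrer>[^"]*)"')

def search_keywords(log_line: str, query_params=("q", "text")):
    """Return the search phrase from the referrer, or None."""
    m = LOG_RE.search(log_line)
    if not m:
        return None
    ref = urlparse(m.group("referrer"))
    qs = parse_qs(ref.query)
    for param in query_params:  # "q" for Google, "text" for Yandex
        if param in qs:
            return qs[param][0]
    return None

line = ('1.2.3.4 - - [10/Oct/2005:13:55:36 +0300] "GET /page.html HTTP/1.0" '
        '200 2326 "http://www.google.com/search?q=site+promotion" "Mozilla/4.0"')
print(search_keywords(line))  # prints: site promotion
```

Aggregating such phrases over a whole log file gives exactly the "which keywords they used" report mentioned above.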
7.5 The Page Rank Analyzer module
Gathers a huge amount of competitive information about a given list of sites. It automatically determines, for each site on the list, parameters such as Google PageRank, Yandex TIC, the number of inbound links, and the site's presence in the main directories: DMOZ, the Yandex catalog and the Yahoo catalog. An ideal tool for analyzing the level of competition for a given search query.
7.6 The Keyword Selection module
Selects relevant keywords for your site, reports data on their popularity (the number of queries per month), and estimates the level of competition for a given phrase.
7.7 The HTML Analyzer module
Analyzes the HTML code of a page, calculates the weight and density of keywords, and produces a report on the correctness of the site's on-page text optimization. It is used both when building your own site and for analyzing competitors' sites. It can analyze local HTML pages as well as online projects, and it supports the specifics of the Russian language, so it works equally well with English and Russian sites.
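The density calculation at the heart of such an analyzer is straightforward: strip the HTML tags, then divide the keyword's occurrence count by the total word count. A minimal sketch using the standard-library parser (the page snippet is made up, and real tools also weight words by the tags they appear in, which this sketch ignores):

```python
# Compute keyword density (percentage of words) in the visible text of a page.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def keyword_density(html: str, keyword: str) -> float:
    parser = TextExtractor()
    parser.feed(html)
    words = " ".join(parser.chunks).lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words) * 100

page = ("<html><body><h1>Promotion tips</h1>"
        "<p>Site promotion step by step: promotion basics.</p></body></html>")
print(f"{keyword_density(page, 'promotion'):.1f}%")
```

As section 2.1.3 noted, there is no universal "good" value to aim for; the number is mainly useful for comparing your page against the pages that already rank well for the query.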
7.8 The AddSite and Add2Board site registration programs
The AddSite (directory registration) and Add2Board (message boards) programs help raise the link popularity of your project. More details about these programs are available on our partners' site.
8. Useful resources
There are a great many resources on site promotion on the Web. Here are the main ones, the most valuable in our view:
- the largest optimization site in the Russian-language Web, a place where SEOs communicate, professionals and beginners alike;
- the newsletter archive of Ashmanov and Partners, a basic course in search engine optimization. The best newsletter on SEO, where you will find the latest news, articles and so on;
- the site of the annual conference "Search Engine Optimization and Site Promotion on the Internet", where you can purchase the papers of previous conferences, which contain very valuable information;
- a collection of materials on SEO. The collection is paid, but the price is low and the selection of materials is very good.

Instead of conclusion - website promotion step by step
In this chapter, I will talk about how I promote my own websites. It is something like a small step-by-step instruction that briefly repeats what was described in the previous sections. In my work I use the Semonitor package, so it will serve as the example.
1. To start working on a site you need some basic knowledge, and this course covers it sufficiently. Let me stress that you do not need to be an optimization guru; this basic knowledge is acquired fairly quickly. After that we start working: experimenting, getting sites into the top, and so on. This is where software becomes necessary.
2. We draw up an approximate list of keywords and check the level of competition for them. We assess our capabilities and settle on fairly popular but moderately competitive phrases. The keyword selection is done in the corresponding module, along with a rough competition check. For the queries that interest us most, we carry out a detailed analysis of the search results in the Page Rank Analyzer module and make the final decisions on keywords.
3. We start writing texts for the site. I write some of the texts myself and give the most important ones to journalists. My view is: content first. With good content it is easier to obtain inbound links and visitors. At the same stage we begin to use the HTML Analyzer module to achieve the desired keyword density. Each page is optimized for its own phrase.
4. We register the site in directories, using the AddSite program for this.
5. After the first steps are taken, we wait and check the indexing to make sure the site is being picked up normally by the various search engines.
6. At the next step you can already begin checking the site's positions for the desired keywords. The positions will probably not be very good at first, but they will give you food for thought.
7. We continue working on increasing link popularity and track the process with the External Links module.
8. We analyze site traffic with the help of the Log Analyzer module and work to increase it.