Search Engine Optimisation
History
In the mid-1990s, the first search engines began cataloguing the early World Wide Web, or more simply, the Web.
Consider the following terms:
Webmaster - The person responsible for the upkeep of a website, often also its creator or author. The typical role of a webmaster includes making sure that the web servers, hardware and software are performing correctly, designing and redesigning the website, creating and revising web pages, answering questions from users, and monitoring web traffic through the site.
URL - In computing, a URL, or Uniform Resource Locator, is a string of characters that identifies the address of a resource on the Internet; it is a specific form of the more general Uniform Resource Identifier (URI).
Spider - A Web crawler, otherwise known as an ant, bot, worm or web spider, is a programme that browses the Web automatically and in a methodical manner; the process is referred to as Web crawling or spidering. Search engines commonly use spidering to keep their data up to date. A crawler's primary task is to create a copy of the pages it visits, which the search engine then processes and classifies so that searches can be answered quickly. Crawlers can also be used for website maintenance tasks, such as checking for broken links or validating HTML code.
Indexing – Search engine indexing is the process whereby data is collected and stored so that information can be retrieved quickly and accurately. When the data consists of web pages gathered from the Internet, the process is also known as Web indexing.
It was at this time that webmasters and website content providers began optimising their sites to make them more attractive to search engines when the engines came to view them.
All the webmaster had to do was submit the address of the page, its URL, to the various search engines, each of which would direct a spider to "crawl" that page, extract links to other pages from it, and return the information found on the page so that it could be indexed.
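To make the crawling step concrete, here is a minimal sketch in Python of what a spider does with a single submitted URL: it downloads the page, extracts the links it contains, and gathers the page text so it can be passed on for indexing. The URL at the bottom is only a placeholder, and a real crawler would also queue the extracted links, respect robots.txt and revisit pages on a schedule.

# A minimal, illustrative crawler sketch, not a production spider.
# It fetches one page, extracts its links and visible text, and
# returns both so an indexer could process them.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects href links and text content from one HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text_parts.append(data.strip())


def crawl_page(url):
    """Download one page and return (absolute links, page text)."""
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkAndTextParser()
    parser.feed(html)
    absolute_links = [urljoin(url, link) for link in parser.links]
    return absolute_links, " ".join(parser.text_parts)


if __name__ == "__main__":
    # Placeholder URL used purely for illustration.
    links, text = crawl_page("https://example.com/")
    print(len(links), "links found")
    print(text[:200])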
The process of indexing requires a search engine spider to download a page and store it on the search engine's own server. At this point, a computer program, known as an indexer, extracts various pieces of information concerning the page, such as:
All the words that it contains and where they are located
Any particular emphasis or weighting given to specific words
All the links the page contains
At this point, the data is placed into a scheduler, which determines when the page will need to be crawled again. The purpose of storing an index is to maximise the speed of finding relevant documents for a search query; without an index, the search engine would have to scan every document, which would be impracticable.
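As a rough illustration of why an index makes searching fast, the following Python sketch builds a tiny inverted index that records, for each word, the documents it appears in and the positions where it occurs, much as the indexer described above stores words and their locations. The sample documents are invented for the example; a real search engine index would also store weighting and link information.

# A small sketch of an inverted index, assuming documents are plain strings.
# Each word maps to the documents that contain it and the positions where
# it occurs, so a query becomes a dictionary lookup rather than a scan of
# every document.

from collections import defaultdict


def build_index(documents):
    """Map each word to {document id: [positions]}."""
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in documents.items():
        for position, word in enumerate(text.lower().split()):
            index[word][doc_id].append(position)
    return index


def search(index, word):
    """Return the documents (and positions) containing the word."""
    return dict(index.get(word.lower(), {}))


if __name__ == "__main__":
    docs = {
        "page-1": "search engines index the web",
        "page-2": "a spider crawls the web and a program indexes each page",
    }
    index = build_index(docs)
    print(search(index, "web"))  # {'page-1': [4], 'page-2': [4]}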
According to the industry analyst Danny Sullivan, editor-in-chief of Search Engine Land, the term "search engine optimisation" most likely came into use around 1997.
Articles – How To Succeed
Peter Radford writes articles and websites on a wide range of subjects. His articles cover Background, Online Marketing, Writing Articles, and Search Engine Optimisation.
His website contains a total of 118 articles, written by others and carefully selected.
View his Website at: articles-how-to-succeed.com
View his Blog at: articles-how-to-succeed.blogspot.com