Transactions of the Karelian Research Centre of the Russian Academy of Sciences (Sep 2016)

DEVELOPMENT OF A PROGRAM FOR COLLECTION OF WEBSITE STRUCTURE DATA

  • Andrey Pechnikov
  • Alexandr Lankin

DOI
https://doi.org/10.17076/mat381
Journal volume & issue
No. 8
pp. 81–90

Abstract


The web graph is the most common mathematical model of a website. Constructing the web graph of a real site requires data about the structure of that site: the HTML pages and/or documents it contains (in particular, their URLs) and the hyperlinks connecting them. Web servers often use aliases and redirections, and they can also generate the same page dynamically in response to different URL requests. As a result, distinct URLs may return identical content, so the resulting web graph can contain several vertices that correspond to pages of the site with the same content. The paper describes a crawler, RCCrawler, that collects information about websites in order to build their web graphs. The crawler largely resolves the above problem, as confirmed by a series of experiments.
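The abstract does not specify how RCCrawler recognizes duplicates, so the sketch below is only an illustration of one common approach: hashing normalized page content so that different URLs serving the same page collapse into a single web graph vertex. The function names, the normalization step, and the toy data are assumptions made for this example, not a description of RCCrawler itself.

# Illustrative sketch only (assumed approach, not the RCCrawler algorithm):
# collapse URLs that return identical content into one web graph vertex.
import hashlib
from collections import defaultdict

def normalize(html: str) -> str:
    """Crude normalization (assumption): collapse whitespace so trivially
    different renderings of the same page produce the same hash."""
    return " ".join(html.split())

def content_key(html: str) -> str:
    """SHA-256 of the normalized body; identical content gives an identical key."""
    return hashlib.sha256(normalize(html).encode("utf-8")).hexdigest()

def build_web_graph(pages: dict, links: list):
    """pages maps URL -> HTML, links is a list of (source URL, target URL) pairs.
    Returns (vertices, edges) with one vertex per distinct page content."""
    url_to_vertex = {}
    key_to_urls = defaultdict(list)

    for url, html in pages.items():
        key = content_key(html)
        key_to_urls[key].append(url)
        # Every URL with this content is represented by the first URL seen with it.
        url_to_vertex[url] = key_to_urls[key][0]

    edges = set()
    for src, dst in links:
        if src in url_to_vertex and dst in url_to_vertex:
            u, v = url_to_vertex[src], url_to_vertex[dst]
            if u != v:
                edges.add((u, v))
    return set(url_to_vertex.values()), edges

if __name__ == "__main__":
    # Toy data: index.html and index.php serve identical content (hypothetical URLs).
    pages = {
        "http://example.org/index.html": "<html><body>Home</body></html>",
        "http://example.org/index.php":  "<html><body>Home</body></html>",
        "http://example.org/about":      "<html><body>About</body></html>",
    }
    links = [
        ("http://example.org/index.html", "http://example.org/about"),
        ("http://example.org/index.php",  "http://example.org/about"),
    ]
    vertices, edges = build_web_graph(pages, links)
    print(len(vertices), "vertices,", len(edges), "edges")  # 2 vertices, 1 edge

Without the content-key step, the two index URLs would become two separate vertices with duplicate content, which is exactly the distortion the paper sets out to avoid.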

Keywords