Web Crawlers: Translated Foreign-Language References


The crawling process, even if multithreading is used, will be insufficient for large-scale engines that need to fetch large amounts of data rapidly. When a single centralized crawler is used, all the fetched data passes through a single physical link. Distributing the crawling activity across multiple processes helps build a scalable, easily configurable, fault-tolerant system. Splitting the load decreases hardware requirements and at the same time increases the overall download speed and reliability. Each task is performed in a fully distributed fashion, that is, no central coordinator exists.
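As an illustration of how the load can be split without a central coordinator, the sketch below partitions URLs across independent crawler processes by hashing the host name; the names owner_worker, route, and NUM_WORKERS are assumptions for illustration, not part of the original text. Each process owns a disjoint set of sites and forwards URLs it does not own to the responsible peer.

```python
import hashlib
from urllib.parse import urlparse

NUM_WORKERS = 4  # assumed number of independent crawler processes

def owner_worker(url: str, num_workers: int = NUM_WORKERS) -> int:
    """Map a URL to the worker responsible for it by hashing its host.

    Hash-partitioning by host gives every worker a disjoint set of sites,
    so no central coordinator is needed: a worker that discovers a URL it
    does not own simply forwards it to the owning worker.
    """
    host = urlparse(url).netloc.lower()
    digest = hashlib.md5(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_workers

def route(urls, my_id: int):
    """Split newly discovered URLs into 'crawl locally' vs 'forward to a peer'."""
    local, remote = [], {}
    for url in urls:
        w = owner_worker(url)
        if w == my_id:
            local.append(url)
        else:
            remote.setdefault(w, []).append(url)
    return local, remote

# Example: worker 0 routing a freshly extracted link set
local, remote = route(
    ["http://example.com/a", "http://example.org/b", "http://example.com/c"],
    my_id=0,
)
print(local, remote)
```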

Ⅵ. PROBLEM OF SELECTING MORE “INTERESTING” PAGES

A search engine is aware of hot topics because it collects user queries. The crawling process prioritizes URLs according to an importance metric such as similarity (to a driving query), back-link count, PageRank, or their combinations/variations. Recently, Najork et al. showed that breadth-first search collects high-quality pages first and suggested a variant of PageRank. However, at the moment, search strategies are unable to select exactly the “best” paths because their knowledge is only partial. Due to the enormous amount of information available on the Internet, a complete crawl is currently impossible; thus, pruning strategies must be applied. Focused crawling and intelligent crawling are techniques for discovering Web pages relevant to a specific topic or set of topics.
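A minimal sketch of such a priority-driven frontier is given below. The PriorityFrontier class and the 50/50 weighting of back-link count and query similarity are illustrative assumptions, not the scoring used by any particular engine; real crawlers may use PageRank or other combinations, as noted above.

```python
import heapq

class PriorityFrontier:
    """Crawl frontier ordered by an importance score (illustrative sketch)."""

    def __init__(self):
        self._heap = []
        self._seen = set()

    @staticmethod
    def score(backlinks: int, query_similarity: float) -> float:
        # Assumed weighting: cap and normalize back-links, then mix 50/50
        # with similarity to the driving query.
        return 0.5 * min(backlinks / 100.0, 1.0) + 0.5 * query_similarity

    def push(self, url: str, backlinks: int, query_similarity: float) -> None:
        if url in self._seen:
            return
        self._seen.add(url)
        # heapq is a min-heap, so negate the score to pop the best URL first.
        heapq.heappush(self._heap, (-self.score(backlinks, query_similarity), url))

    def pop(self) -> str:
        return heapq.heappop(self._heap)[1]

frontier = PriorityFrontier()
frontier.push("http://example.com/hot-topic", backlinks=80, query_similarity=0.9)
frontier.push("http://example.com/unrelated", backlinks=5, query_similarity=0.1)
print(frontier.pop())  # the higher-scoring URL is fetched first
```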

CONCLUSION

In this paper we conclude that complete Web crawling coverage cannot be achieved, due to the vast size of the whole WWW and to limited resource availability. Usually some kind of threshold is set up (number of visited URLs, level in the website tree, compliance with a topic, etc.) to limit the crawling process over a selected website. This information allows search engines to store and refresh the most relevant and up-to-date web pages, thus improving the quality of retrieved content while reducing stale content and missing pages.
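A minimal sketch of such a threshold-limited crawl over a single site is shown below; bounded_crawl, fetch_links, max_pages and max_depth are hypothetical names, with the page-count and depth cut-offs standing in for the thresholds mentioned above.

```python
from collections import deque

def bounded_crawl(seed_url, fetch_links, max_pages=500, max_depth=3):
    """Breadth-first crawl of one site, stopped by simple thresholds.

    fetch_links(url) is a placeholder for the function that downloads a
    page and returns its out-links; max_pages and max_depth correspond to
    the visited-URL count and site-tree depth limits mentioned above.
    """
    visited = set()
    queue = deque([(seed_url, 0)])
    while queue and len(visited) < max_pages:
        url, depth = queue.popleft()
        if url in visited or depth > max_depth:
            continue
        visited.add(url)
        for link in fetch_links(url):
            if link not in visited:
                queue.append((link, depth + 1))
    return visited

# Usage with a stub link graph standing in for real page fetches:
site = {
    "http://site.example/": ["http://site.example/a", "http://site.example/b"],
    "http://site.example/a": [],
    "http://site.example/b": [],
}
print(bounded_crawl("http://site.example/", lambda u: site.get(u, []),
                    max_pages=10, max_depth=2))
```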
