Visit Half a Million Pages with Perl

Currently I'm using Mechanize and the get() method to fetch each site, and checking each main page with the content() method for something. I have a very fast computer and a 10 Mbit connection, and it still took 9 hours to check 11K sites, which is not acceptable. The problem is the speed of the get() function, which obviously needs to fetch the page. Is there any way to make it faster, maybe by disabling something, since I only need the main page HTML to be checked?
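
Roughly, the current loop looks like this (@urls and the /something/ pattern are placeholders):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my @urls = @ARGV;                                  # placeholder: the 11K site list
    my $mech = WWW::Mechanize->new( autocheck => 0 );  # don't die on HTTP errors

    for my $url (@urls) {
        $mech->get($url);                              # one page at a time, serially
        next unless $mech->success;
        print "$url matches\n"
            if $mech->content =~ /something/;          # placeholder check on the main page
    }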

Thanks,


Make queries in parallel instead of serially. If I needed to do this, I'd fork off a process to grab each page. Something like Parallel::ForkManager, LWP::Parallel::UserAgent or WWW::Curl may help. I tend to favor Mojo::UserAgent.
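
A minimal sketch of that approach with Parallel::ForkManager, keeping the existing Mechanize code; the worker count of 20 and the /something/ pattern are arbitrary placeholders:

    use strict;
    use warnings;
    use WWW::Mechanize;
    use Parallel::ForkManager;

    my @urls = @ARGV;                              # placeholder: the list of sites
    my $pm   = Parallel::ForkManager->new(20);     # 20 concurrent workers; tune for your link

    URL:
    for my $url (@urls) {
        $pm->start and next URL;                   # parent forks a child and moves on
        my $mech = WWW::Mechanize->new( autocheck => 0 );
        $mech->get($url);
        print "$url matches\n"
            if $mech->success && $mech->content =~ /something/;
        $pm->finish;                               # child exits here
    }
    $pm->wait_all_children;

Each child fetches one page, so a slow site only ties up one worker instead of stalling the whole run.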


Use WWW::Curl (and specifically WWW::Curl::Multi). I'm using it to crawl 100M+ pages per day. The module is a thin binding on top of libcurl, so it feels a bit C-ish, but it's fast and does almost anything libcurl is capable of doing.
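
A rough sketch of the WWW::Curl::Multi pattern, adapted from the module's synopsis; @urls and the /something/ check are placeholders, and all handles are queued up front here for brevity (a real crawler would add handles as transfers finish and wait on the multi handle instead of spinning):

    use strict;
    use warnings;
    use WWW::Curl::Easy;
    use WWW::Curl::Multi;

    my @urls  = @ARGV;                         # placeholder: the list of sites
    my $curlm = WWW::Curl::Multi->new;
    my (%easy, %body);
    my ($id, $active) = (0, 0);

    for my $url (@urls) {
        $id++;
        my $curl = WWW::Curl::Easy->new;
        $curl->setopt(CURLOPT_PRIVATE, $id);   # tag the handle so info_read can identify it
        $curl->setopt(CURLOPT_URL, $url);
        open my $fh, '>', \$body{$id};         # collect the response body in memory
        $curl->setopt(CURLOPT_WRITEDATA, $fh);
        $easy{$id} = $curl;                    # keep a reference or the handle is destroyed
        $curlm->add_handle($curl);
        $active++;
    }

    while ($active) {
        my $running = $curlm->perform;         # drive all transfers
        next if $running == $active;
        while (my ($done_id, $retcode) = $curlm->info_read) {
            last unless $done_id;
            $active--;
            print "handle $done_id matched\n"
                if $retcode == 0 && ($body{$done_id} // '') =~ /something/;
            delete $easy{$done_id};            # let the easy handle be garbage collected
        }
    }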

I would not recommend using LWP::Parallel::UA, as it's kind of slow and the module itself is not very well thought out. When I started out writing a crawler, I originally considered forking LWP::Parallel::UA, but I decided against that when I looked into its internals.

Disclaimer: I'm the current maintainer of the WWW::Curl module.
