Web crawler: Using Perl's MozRepl module to deal with JavaScript

I am trying to save a couple of web pages using a web crawler. Usually I prefer doing this with Perl's WWW::Mechanize module. However, as far as I can tell, the site I am trying to crawl relies on a lot of JavaScript, which seems hard to avoid. Therefore I looked into the following Perl modules:

  • WWW::Mechanize::Firefox
  • MozRepl
  • MozRepl::RemoteObject

The Firefox MozRepl extension itself works perfectly. I can use the terminal to navigate the web site just the way it is shown in the developer's tutorial, at least in theory. However, I have no idea about JavaScript and am therefore having a hard time using the modules properly.

So here is the source I would like to start from: Morgan Stanley

For a couple of the firms listed beneath 'Companies - as of 10/14/2011' I would like to save their respective pages. E.g. when clicking on the first listed company (i.e. '1-800-Flowers.com, Inc'), a JavaScript function is called with two arguments, dtxt('FLWS.O','2011-10-14'), which produces the desired new page. It is this page I would now like to save locally.

With Perl's MozRepl module I had something like this in mind:

use strict;
use warnings;
use MozRepl;

my $repl = MozRepl->new;
# Load the Repl::Enter plugin so that repl_enter() below is available.
$repl->setup({ plugins => { plugins => [qw(Repl::Enter)] } });

# Open the coverage page in the browser.
$repl->execute('window.open("http://www.morganstanley.com/eqr/disclosures/webapp/coverage")');

# Switch the REPL's context into the page content and call the site's
# own JavaScript function, just as clicking the company link would.
$repl->repl_enter({ source => "content" });
$repl->execute('dtxt("FLWS.O", "2011-10-14")');

Now I would like to save the HTML page this produces.

So again, the desired code should, for a couple of firms, visit their HTML pages and simply save each one. (Here are e.g. three firms: MMM.N, FLWS.O, SSRX.O)

  1. Is it correct that I cannot get around the page's JavaScript functions and therefore cannot use WWW::Mechanize?
  2. Following on from question 1, are the mentioned Perl modules a plausible approach to take?
  3. And finally, if the first two questions can be answered with yes, it would be really nice if you could help me out with the actual coding. E.g. in the above code, the essential part that is missing is a 'save' command. (Maybe using Firefox's saveDocument function?)


The web works via HTTP requests and responses.

If you can discover the proper request to send, then you will get the proper response.
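For instance, fetching the coverage page itself is just one GET request and one response; a minimal sketch using LWP::UserAgent:

use strict;
use warnings;
use LWP::UserAgent;

# One request in, one response out; this is all a crawler ever does.
my $ua  = LWP::UserAgent->new;
my $res = $ua->get('http://www.morganstanley.com/eqr/disclosures/webapp/coverage');
print $res->is_success ? $res->decoded_content : $res->status_line;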

If the target site uses JS to form the request, then you can either execute the JS, or analyse what it does so that you can do the same in the language that you are using.
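Executing the JS is what WWW::Mechanize::Firefox (built on MozRepl) is for. A rough sketch along those lines, assuming the page finishes redrawing within a few seconds (the fixed sleep is a crude stand-in for proper synchronization):

use strict;
use warnings;
use WWW::Mechanize::Firefox;

my $mech = WWW::Mechanize::Firefox->new();
$mech->get('http://www.morganstanley.com/eqr/disclosures/webapp/coverage');

# Call the page's own JavaScript function, then wait for the redraw.
$mech->eval_in_page(q{dtxt('FLWS.O','2011-10-14')});
sleep 5;

# $mech->content returns the current DOM as HTML; write it to disk.
open my $fh, '>:encoding(UTF-8)', 'FLWS.O.html' or die "FLWS.O.html: $!";
print {$fh} $mech->content;
close $fh;

That also covers the missing 'save' command from question 3: no special save function is needed, writing $mech->content to a file is enough.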

An even easier approach is to use a tool that captures the resulting request for you, whether or not it was created by JS. You can then craft your scraping code to reproduce that request.

The "Web Scraping Proxy" from AT&T is such a tool.

You set it up, then navigate the website as normal to get to the page you want to scrape, and the WSP will log all requests and responses for you.

It logs them in the form of Perl code, which you can then modify to suit your needs.
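Once the log has revealed the actual URL and parameters behind dtxt(), replaying the request for each firm is a plain loop. The endpoint and form field names below are made up for illustration; substitute whatever the proxy actually recorded:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;

for my $ticker (qw(MMM.N FLWS.O SSRX.O)) {
    # Placeholder request; the real URL and form fields come from
    # the proxy's log.
    my $res = $ua->post(
        'http://www.morganstanley.com/eqr/disclosures/webapp/coverage',
        { symbol => $ticker, date => '2011-10-14' },
    );
    next unless $res->is_success;

    open my $fh, '>:encoding(UTF-8)', "$ticker.html" or die "$ticker.html: $!";
    print {$fh} $res->decoded_content;
    close $fh;
}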
