Copyright Notice

This text is copyright by CMP Media, LLC, and is used with their permission. Further distribution or use is not permitted.

This text has appeared in an edited form in WebTechniques magazine. However, the version you are reading here is as the author originally submitted the article for publication, not after their editors applied their creativity.

Please read all the information in the table of contents before using this article.

Web Techniques Column 7 (October 1996)

One of the problems in maintaining a good web site is making sure your links stay valid: pointers to places that offer further relevant information, or perhaps just some nifty thing you've discovered.

Discovering the links isn't very difficult, usually. After all, any of the big web search engines or web indexing services can probably give you more links on a given topic than you can visit in a lifetime.

The concern is that once you've copied that URL faithfully into your ``hey, cool links here'' page, things tend to move around, or even go away. Then you end up with a bad link.

How do you discover this bad link? Well, you could spend a lot of time browsing your own pages, following all the links to verify that the link is still good. Or, you could just sit back and wait for a visitor to email you, telling you that ``this link is broken''. (Be sure your email address is prominent on the page... I've visited too many pages with no apparent owners, and it's frustrating trying to report a bad link.)

However, you're reading this column, so I presume you'd like to hear about a simple tool I've written to follow these links automatically. With the easy-to-use LWP library (by Gisle Aas), you can write a program that fetches a page, looks for all of its links, then tries each link in turn.
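To give you the flavor before we dig into the full program, the basic fetch-and-check step boils down to something like this minimal sketch (just the idea, not the finished tool):

        use LWP::UserAgent;
        use HTTP::Request;

        $ua = new LWP::UserAgent;
        $request = new HTTP::Request('GET', "http://www.teleport.com/~merlyn/");
        $response = $ua->request($request);        # go get it
        if ($response->is_success) {
          print "ok: got ", length($response->content), " bytes\n";
        } else {
          print "Cannot fetch: ", $response->code, " ", $response->message, "\n";
        }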

In fact, having noticed those links, the program can then also examine the content of those pages for additional links, and so on. By recursively traversing the tree, you'll end up visiting everything reachable.

Now, if you do this with wild abandon, you'll end up visiting everything on the entire Web! After all, this is how Alta Vista and Lycos and friends discover all these sites.

So, our program has to be a little more selective, restricting its fetches to a particular area of interest to us, rather than being a general web wanderer.

The program, which I've called ``hverify'' (for hierarchical verify), is found in Listing 1 [below].

Line 3 extends the built-in library search path to include the location of my locally downloaded CPAN items. I need this for the LWP module, available from the CPAN. Remember, the nearest CPAN can be located using Tom Christiansen's wonderful service at http://www.perl.com/CPAN/.
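That line is roughly equivalent to the following sketch (use lib also handles architecture-specific subdirectories for you, so prefer the real thing):

        BEGIN { unshift @INC, "/home/merlyn/CPAN/lib" }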

Line 5 grabs the UserAgent module from LWP, providing an interface between this program and the web servers.

Line 6 pulls in the HTML Parser routines from LWP. This sets up a class that I'll be subclassing below.

Line 7 pulls in the URL routines from LWP, useful for performing operations on URLs.

Lines 9 through 23 form a configuration section. I've hardwired this script to perform a specific task. You may instead choose to get some of this stuff from the command-line. Feel free to rewrite the code.

Lines 11 and 12 give a list of starting URLs. Here, I'm checking starting at my index page.

Lines 13 through 17 define a subroutine that will be called repeatedly with full absolute URLs in the first parameter. This subroutine must return true only for those URLs that we will fetch and then parse for further URLs.

The routine I've defined here checks every page underneath my homepage, except for the one called ``refindex''. I have a special script to check the links in refindex, because it takes too long to perform a general parse on this page.

Lines 18 to 21 define another subroutine. If the previous subroutine returns false for a URL, the URL is also checked here. If this subroutine returns true, then the URL is verified for existence (usually via HEAD), but the content is not parsed for further URLs.

For this configuration, I've decided to verify all web, FTP, and Gopher sites. (It's a little hard to verify telnet: sites programmatically, for example.)
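If you'd rather give the starting point on the command-line, a configuration section along these lines would do. This is a rough, untested sketch; the regular expressions are just placeholders for whatever site you're checking:

        @CHECK = @ARGV
          or die "usage: $0 start-url ...\n";
        ($host) = $CHECK[0] =~ m!^http://([^/]+)!;
        sub PARSE {                     # parse only pages on the starting host
          $_[0] =~ m!^http://\Q$host\E!;
        }
        sub PING {                      # but ping anything else that looks fetchable
          $_[0] =~ m!^(http|ftp|gopher):!;
        }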

Lines 25 to 43 subclass the HTML::Parser class to create a specialized parser that knows how to extract ``A'' and ``IMG'' links. While the details are probably a little too long to go into here, I'll try to hit the highlights.

First, the braces around the whole shebang cause the package directive in line 26 to be temporary. If I were doing this right, I'd put the whole thing into a separate require'd file, and the block braces would not be necessary, because the package directive would only be active until the end of that file.
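Here's a tiny illustration of that scoping (a made-up package, nothing to do with the listing): the package directive only lasts until the closing brace.

        {
          package Temporary;
          sub greet { return "hello from Temporary" }
        }
        # out here we're back in package main
        print Temporary::greet(), "\n";    # still callable by its full name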

Line 26 gives the name of this new class, ParseLink.

Line 27 declares that this class inherits from HTML::Parser, by setting @ParseLink::ISA. That means that any routines called against an object of type ParseLink might possibly be found in the HTML::Parser class instead.
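In other words, @ISA tells Perl where else to look for a method. A stripped-down picture, using made-up classes rather than the ones in the listing:

        package Animal;
        sub speak { print "some generic noise\n" }

        package Dog;
        @ISA = qw(Animal);

        package main;
        Dog->speak;     # no speak() in Dog, so Perl finds Animal::speak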

According to the documentation for HTML::Parser, it calls a method called ``start'' at the beginning of each noted tag. My routine here in lines 29 to 37 determines if the tag is an ``A'' or ``IMG'' tag. If so, the corresponding ``HREF'' or ``SRC'' attributes are folded into the ``links'' instance variable. These attributes are the URL links referenced by the page being examined.

Lines 39 to 42 define another method called get_links to fetch the current value of the ``links'' instance variable. I call this after the parse is over to get everything seen there.
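To see the class in action by itself, you could feed it a scrap of HTML. This is a hypothetical test, assuming the ParseLink package above has already been compiled:

        $p = ParseLink->new;
        $p->parse('<A HREF="foo.html">foo</A> and <IMG SRC="pics/bar.gif">');
        $p->parse(undef);               # no more text: end of "document"
        foreach $link ($p->get_links) {
          print "$link\n";              # prints foo.html, then pics/bar.gif
        }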

Line 45 creates a new UserAgent object, which will be my interface between this program and the web.

Line 46 sets the user agent attribute for this script. While you can set it to pretty much anything you want, it's nice to set it to something that the web server's administrator will be able to distinguish. Here, I'm using ``hverify/1.0'', which is the name of the program and the version number.

Line 47 enables the proxy settings from the appropriate environment variables. While I don't need this at Teleport (it becomes a no-op), it's pretty good practice to include this in scripts that others might use.

Line 49 disables buffering on standard output. There's not a lot of output to begin with, and I wanna see it as it comes out.

Line 51 labels the main outer loop as MAINLOOP. In some of the inner loops, it's helpful to abandon all hope on a particular URL, so I need a loop label to do that.

Lines 52 to 91 process each URL in the @CHECK array. Initially, the value comes from the configuration section above. But as new URLs are found, they are shoved onto the end of this array.

Line 53 patches up the URL, turning %7e back into ~. Yes, %7e is the official way to write a tilde, but everyone types a tilde instead. So, to make sense in the output, I just patch them up myself.

Line 54 prevents visiting the same URL twice. Each URL is used as a key in the %did hash. If the value comes up as non-zero before incrementing, then we've done this one before.
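That ``$did{$thisurl}++'' test is worth a second look: the post-increment hands back the old value, so the test is false exactly once per URL. In isolation, with made-up data:

        foreach $url (qw(a b a c b a)) {
          next if $did{$url}++;         # false the first time, true ever after
          print "first visit to $url\n";
        }
        # prints one "first visit" line each for a, b, and c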

Lines 55 to 77 are executed for each URL that we want to examine both for existence and additional links, as noted by an affirmative return from the PARSE subroutine (defined in the configuration section).

Lines 57 and 58 attempt to fetch the content of the URL. If the response is bad (as determined by the call to ``is_success'' in line 60), then an error message is generated, and the URL ignored.

For a successful fetch, line 66 next weeds out the pages that aren't HTML. There's no point in parsing a page as HTML if it isn't tagged that way.

Line 67 saves the base URL of the response. This base URL might be different from the original absolute URL if there was a BASE attribute in the HTTP header or HTML header, or perhaps a server redirection. This base URL is needed to resolve any relative URLs in the same manner that a browser would resolve them.

Line 68 creates a new ParseLink object. Actually, the ``new'' method is not found in the ParseLink class, but in the parent class (HTML::Parser). The returned object is still of type ParseLink, though, which is what allows the additional methods defined there (like ``start'' and ``get_links'') to be found.

Line 69 passes the content of the HTML page to the parsing routines, via a method call on the ParseLink object. Once again, this method is not actually found in the ParseLink class, but in the HTML::Parser class. During this invocation, the ``start'' method in ParseLink will be called one time for each start tag. Some of those will result in links being captured into the ``links'' instance variable in the ParseLink object.

Line 70 calls the parse routine again, passing it undef. This is required by the protocol defined in the HTML::Parser documentation to signal the end of the file.

Lines 71 to 75 walk through the resulting list of links. This list is obtained by calling the get_links method on the ParseLink object, which returns a sorted list of the links seen during the parse.

Line 72 transforms a possibly relative URL into an absolute URL. The ``url'' routine was imported from the URI::URL module above. The $base value is needed for proper relative URL interpretation.
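As a small illustration (not part of the listing, and the filenames are made up), a relative link resolved against the base of my index page would come out like this:

        use URI::URL;

        $base = "http://www.teleport.com/~merlyn/index.html";
        $abs = url("pictures/me.gif", $base)->abs;
        print "$abs\n";    # http://www.teleport.com/~merlyn/pictures/me.gif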

Line 73 is a debugging statement to help me understand what links are being seen.

Line 74 tacks this URL onto the end of the list of things to be checked. It doesn't matter if we've already seen this URL, because the check at the top of the loop (in line 54) will keep us from revisiting that URL.

Lines 78 to 89 perform a similar check for the ``ping-only'' URLs. Here, we try to fetch the URL first with the HEAD method, and then with GET if that fails. (I found there were some HTTP servers that didn't understand HEAD, in spite of the documented standards. Sigh.)

And that's really all there is to it. Just drop the script into a convenient place, change the configuration (unless you want to verify my pages at Teleport, which I don't recommend), and then run it.

Bad pages will show up as ``Cannot fetch'' and the URL and reason, so you can grep the output for those words. Or, just get rid of all the other output statements.

Note that this program does not respect the ``Robot Exclusion Protocol'', as documented at http://info.webcrawler.com/mak/projects/robots/norobots.html. I presume that if I am testing my own pages, I have permission to do so. Nor does it pause between fetching pages, as a good spider would. Therefore, this program would not be a good general spider without some modifications.
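If you do want to point it at pages you don't own, one polite modification (a sketch, not something tested here) is to swap the plain UserAgent for LWP::RobotUA, which honors robots.txt and pauses between requests:

        use LWP::RobotUA;

        $ua = LWP::RobotUA->new("hverify/1.0", 'merlyn@stonehenge.com');
        $ua->delay(1);                  # minutes to wait between requests
        $ua->env_proxy;
        ## ...and then use $ua->request($request) exactly as before.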

There you have it. A little tool to verify that your links are linked. Now there's no excuse for having a bad link. Enjoy.

Listing 1

        =1=     #!/usr/bin/perl
        =2=     
        =3=     use lib "/home/merlyn/CPAN/lib";
        =4=     
        =5=     use LWP::UserAgent;
        =6=     use HTML::Parser;
        =7=     use URI::URL;
        =8=     
        =9=     ## begin configure
        =10=    
        =11=    @CHECK =                        # list of initial starting points
        =12=      qw(http://www.teleport.com/~merlyn/);
        =13=    sub PARSE {                     # verify existance, parse for further URLs
        =14=      ## $_[0] is the absolute URL
        =15=      $_[0] =~ m!^http://www\.(teleport|stonehenge)\.com/~merlyn! and not
        =16=        $_[0] =~ /refindex/;
        =17=    }
        =18=    sub PING {                      # verify existence, but don't parse
        =19=      ## $_[0] is the absolute URL
        =20=      $_[0] =~ m!^(http|ftp|gopher):!;
        =21=    }
        =22=    
        =23=    ## end configure
        =24=    
        =25=    {
        =26=      package ParseLink;
        =27=      @ISA = qw(HTML::Parser);
        =28=    
        =29=      sub start {                   # called by parse
        =30=        my $this = shift;
        =31=        my ($tag, $attr) = @_;
        =32=        if ($tag eq "a") {
        =33=          $this->{links}{$attr->{href}}++;
        =34=        } elsif ($tag eq "img") {
        =35=          $this->{links}{$attr->{src}}++;
        =36=        }
        =37=      }
        =38=    
        =39=      sub get_links {
        =40=        my $this = shift;
        =41=        sort keys %{$this->{links}};
        =42=      }
        =43=    }                               # end of ParseLink
        =44=    
        =45=    $ua = new LWP::UserAgent;
        =46=    $ua->agent("hverify/1.0");
        =47=    $ua->env_proxy;
        =48=    
        =49=    $| = 1;
        =50=    
        =51=    MAINLOOP:
        =52=      while ($thisurl = shift @CHECK) {
        =53=        $thisurl =~ s/%7e/~/ig;     # ugh :-)
        =54=        next if $did{$thisurl}++;
        =55=        if (PARSE $thisurl) {
        =56=          warn "fetching $thisurl\n";
        =57=          $request = new HTTP::Request('GET',$thisurl);
        =58=          $response = $ua->request($request); # fetch!
        =59=          
        =60=          unless ($response->is_success) {
        =61=            warn
        =62=              "Cannot fetch $thisurl (status ",
        =63=              $response->code, " ", $response->message,")\n";
        =64=            next MAINLOOP;
        =65=          }
        =66=          next MAINLOOP unless $response->content_type =~ /text\/html/i;
        =67=          $base = $response->base;
        =68=          my $p = ParseLink->new;
        =69=          $p->parse($response->content);
        =70=          $p->parse(undef);
        =71=          for $link ($p->get_links) {
        =72=            $abs = url($link, $base)->abs;
        =73=            warn "... $link => $abs\n";
        =74=            push(@CHECK, $abs);
        =75=          }
        =76=          next MAINLOOP;
        =77=        }
        =78=        if (PING $thisurl) {
        =79=          warn "verifying $thisurl\n";
        =80=          for $method (qw(HEAD GET)) {
        =81=            $request = new HTTP::Request($method,$thisurl);
        =82=            $response = $ua->request($request); # fetch!
        =83=            next MAINLOOP if $response->is_success; # ok
        =84=          }
        =85=          warn
        =86=            "Cannot fetch $thisurl (status ",
        =87=            $response->code, " ", $response->message,")\n";
        =88=          next MAINLOOP;
        =89=        }
        =90=        warn "[skipping $thisurl]\n";
        =91=      }

Randal L. Schwartz is a renowned expert on the Perl programming language (the lifeblood of the Internet), having contributed to a dozen top-selling books on the subject, and over 200 magazine articles. Schwartz runs a Perl training and consulting company (Stonehenge Consulting Services, Inc of Portland, Oregon), and is a highly sought-after speaker for his masterful stage combination of technical skill, comedic timing, and crowd rapport. And he's a pretty good Karaoke singer, winning contests regularly.

Schwartz can be reached for comment at merlyn@stonehenge.com or +1 503 777-0095, and welcomes questions on Perl and other related topics.