
Crawling the Web with Java


Are you playing with the possibilities of Java? This article explores in detail how to build a Web crawler in Java, walking through a complete SearchCrawler class and its methods. It is excerpted from chapter six of The Art of Java, written by Herbert Schildt and James Holmes (McGraw-Hill, 2004; ISBN: 0072229713).

Author Info:
By: McGraw-Hill/Osborne
June 09, 2005
TABLE OF CONTENTS:
  1. Crawling the Web with Java
  2. Fundamentals of a Web Crawler
  3. An Overview of the Search Crawler
  4. The SearchCrawler Class part 1
  5. The SearchCrawler Class part 2
  6. SearchCrawler Variables and Constructor
  7. The search() Method
  8. The showError() and updateStats() Methods
  9. The addMatch() and verifyUrl() Methods
  10. The downloadPage(), removeWwwFromUrl(), and
  11. An Overview of Regular Expression Processing
  12. A Close Look at retrieveLinks()
  13. The searchStringMatches() Method
  14. The crawl() Method
  15. Compiling and Running the Search Web Crawler


Crawling the Web with Java - A Close Look at retrieveLinks()
(Page 12 of 15)

The retrieveLinks( ) method uses the regular expression API to obtain the links from a page. It begins with these lines of code:

// Compile link matching pattern.
Pattern p =
  Pattern.compile("<a\\s+href\\s*=\\s*\"?(.*?)[\"|>]", 
    Pattern.CASE_INSENSITIVE);
Matcher m = p.matcher(pageContents);

The regular expression used to obtain links can be broken down as a series of steps, as shown in the following table:

Character Sequence   Explanation
------------------   -----------
<a                   Look for the characters "<a".
\\s+                 Look for one or more space characters.
href                 Look for the characters "href".
\\s*                 Look for zero or more space characters.
=                    Look for the character "=".
\\s*                 Look for zero or more space characters.
\"?                  Look for zero or one quote character.
(.*?)                Look for zero or more of any character, matching as few as
                     possible, until the next part of the pattern is matched,
                     and place the result in a group.
[\"|>]               Look for a quote ("), a pipe (|), or a greater-than (">")
                     character. (Inside a character class, | is a literal
                     character, not alternation, so it is matched too.)

Notice that Pattern.CASE_INSENSITIVE is passed to the pattern compiler. As mentioned, this indicates that the pattern should ignore case when searching for matches.

Next, a list to hold the links is created, and the search for the links begins, as shown here:

// Create list of link matches.
ArrayList linkList = new ArrayList();
while (m.find()) {
  String link = m.group(1).trim();

Each link is found by cycling through m with a while loop. The find( ) method of Matcher returns true until no more matches are found. Each match (link) found is retrieved by calling the group( ) method defined by Matcher. Notice that group( ) takes 1 as an argument. This specifies that the first group from the matching sequences be returned. Notice also that trim( ) is called on the return value from the group( ) method. This removes any unnecessary leading or trailing space from the value.
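
To see the pattern and the matching loop working together, here is a minimal, self-contained sketch. The HTML snippet and the class name LinkExtractorDemo are invented for illustration, and a parameterized ArrayList<String> is used in place of the raw ArrayList shown above (the book predates Java generics):

import java.util.ArrayList;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkExtractorDemo {
  public static void main(String[] args) {
    // Invented sample page contents.
    String pageContents =
      "<A HREF=\"http://osborne.com/books.html\">Books</A>" +
      "<a href = contact.html>Contact</a>" +
      "<a href=\"#top\">Top</a>";

    // The same pattern retrieveLinks() compiles.
    Pattern p =
      Pattern.compile("<a\\s+href\\s*=\\s*\"?(.*?)[\"|>]",
        Pattern.CASE_INSENSITIVE);
    Matcher m = p.matcher(pageContents);

    ArrayList<String> linkList = new ArrayList<String>();
    while (m.find()) {
      // group(1) returns just the (.*?) group, not the whole <a href=... match.
      linkList.add(m.group(1).trim());
    }

    // Prints: [http://osborne.com/books.html, contact.html, #top]
    System.out.println(linkList);
  }
}

Notice that the first tag is matched even though it is written in uppercase; that is Pattern.CASE_INSENSITIVE at work. The third result, "#top", survives this stage and is dealt with by the filters described next.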

Many of the links found in Web pages are not suited for crawling. The following code filters out several kinds of links that Search Crawler is not interested in:

// Skip empty links.
if (link.length() < 1) {
  continue;
}
// Skip links that are just page anchors.
if (link.charAt(0) == '#') {
  continue;
}
// Skip mailto links.
if (link.indexOf("mailto:") != -1) {
  continue;
}
// Skip JavaScript links.
if (link.toLowerCase().indexOf("javascript") != -1) {
  continue;
}

First, empty links are skipped so as not to waste any more time on them. Second, links that are simply anchors into a page are skipped by checking to see if the first character of the link is a hash (#).

Page anchors allow for links to be made to a certain section of a page. Take, for example, this URL:

  http://osborne.com/#contact

This URL has an anchor to the “contact” section of the page located at http://osborne.com. Links inside the page at http://osborne.com can reference the section relatively as just “#contact”. Since anchors are not links to “new” pages, they are skipped over.

Next, “mailto” links are skipped. Mailto links are used for specifying an e-mail link in a Web page. For example, the link

  mailto:books@osborne.com

is a mailto link. Since mailto links don’t point to Web pages and cannot be crawled, they are skipped over. Finally, JavaScript links are skipped. JavaScript is a scripting language that can be embedded in Web pages for adding interactive functionality to the page. Additionally, JavaScript functionality can be accessed from links. Similar to mailto links, JavaScript links cannot be crawled; thus they are overlooked.
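
Taken together, these four checks amount to a simple crawlability test. Purely as an illustration, they could be collected into a helper method; the name isCrawlableLink is invented here and is not part of SearchCrawler:

// Hypothetical helper gathering the four filters above; not the book's code.
private static boolean isCrawlableLink(String link) {
  if (link.length() < 1) return false;               // empty link
  if (link.charAt(0) == '#') return false;           // page anchor only
  if (link.indexOf("mailto:") != -1) return false;   // e-mail link
  if (link.toLowerCase().indexOf("javascript") != -1)
    return false;                                    // JavaScript link
  return true;
}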

As you’ve just seen, the links in Web pages can take many formats, such as mailto and JavaScript formats. Additionally, traditional links inside Web pages can take a few different formats as well. Following are the three formats that traditional links can take:

  http://osborne.com/page.html
  /page.html
  page.html

The first of the three links shown here is a fully qualified URL. The second example is a shortened version of the first URL, omitting the “host” portion. Notice the slash (/) at the beginning of the URL: the slash indicates that the URL is what’s called “absolute.” Absolute URLs are URLs that start at the root of a Web site. The third example is again a shortened version of the first URL, omitting the “host” portion, but this time without the leading slash. Since the leading slash is absent, the URL is considered “relative.” Relative, in the realm of URLs, means that the URL address is relative to the URL of the page on which the link is found.

The lines of code in the next section handle converting absolute and relative links into fully qualified URLs:

// Prefix absolute and relative URLs if necessary.
if (link.indexOf("://") == -1) {
  if (link.charAt(0) == '/') {
    // Handle absolute URLs.
    link = "http://" + pageUrl.getHost() + link;
  } else {
    // Handle relative URLs.
    String file = pageUrl.getFile();
    if (file.indexOf('/') == -1) {
      link = "http://" + pageUrl.getHost() + "/" + link;
    } else {
      String path =
        file.substring(0, file.lastIndexOf('/') + 1);
      link = "http://" + pageUrl.getHost() + path + link;
    }
  }
}

First, the link is checked to see whether or not it is fully qualified by looking for the presence of "://" in the link. If these characters exist, the URL is assumed to be fully qualified. However, if they are not present, the link is converted to a fully qualified URL. As discussed, links beginning with a slash (/) are absolute, so this code adds "http://" and the current page’s URL host to the link to fully qualify it. Relative links are converted here in a similar fashion.

For relative links, the current page URL’s filename is taken and checked to see if it contains a slash (/). A slash in the filename indicates that the file is in a directory hierarchy. For example, a file may look like this:

  dir1/dir2/file.html

or simply like this:

  file.html

In the latter case, "http://", the current page’s URL host, and "/" are added to the link since the current page is at the root of the Web site. In the former case, the “path” (or directory) portion of the filename is retrieved to create the fully qualified URL. This case concatenates "http://", the current page’s URL host, the path, and the link together to create a fully qualified URL.
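
The following sketch traces both cases. The resolve() method, the class name, and the example URLs are inventions of this sketch, but the string-building logic is the same as in the code above:

import java.net.MalformedURLException;
import java.net.URL;

public class LinkResolutionDemo {
  // The same logic as the snippet above, wrapped in a method for illustration.
  static String resolve(URL pageUrl, String link) {
    if (link.indexOf("://") == -1) {
      if (link.charAt(0) == '/') {
        // Absolute URL: attach it to the host.
        link = "http://" + pageUrl.getHost() + link;
      } else {
        // Relative URL: attach it to the current page's directory.
        String file = pageUrl.getFile();
        if (file.indexOf('/') == -1) {
          link = "http://" + pageUrl.getHost() + "/" + link;
        } else {
          String path = file.substring(0, file.lastIndexOf('/') + 1);
          link = "http://" + pageUrl.getHost() + path + link;
        }
      }
    }
    return link;
  }

  public static void main(String[] args) throws MalformedURLException {
    URL pageUrl = new URL("http://osborne.com/dir1/dir2/file.html");
    // Prints: http://osborne.com/page.html
    System.out.println(resolve(pageUrl, "/page.html"));
    // Prints: http://osborne.com/dir1/dir2/page.html
    System.out.println(resolve(pageUrl, "page.html"));
  }
}

Incidentally, the two-argument java.net.URL(URL context, String spec) constructor performs this sort of resolution as well; the book builds the string by hand, which keeps each case explicit.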

Next, page anchors and "www" are removed from the fully qualified link:

// Remove anchors from link.
int index = link.indexOf('#');
if (index != -1) {
  link = link.substring(0, index);
}
// Remove leading "www" from URL's host if present.
link = removeWwwFromUrl(link);

For the same reason that anchor-only links are skipped over, anchors tacked on to the end of links are removed. The leading "www" is also removed from links so that duplicate links can be recognized and skipped over later in this method.
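
For example (removeWwwFromUrl() is covered earlier in this article; the values below simply illustrate its described effect alongside anchor removal):

// Suppose the matching loop extracted this link:
String link = "http://www.osborne.com/books.html#contact";

int index = link.indexOf('#');
if (index != -1) {
  link = link.substring(0, index);  // "http://www.osborne.com/books.html"
}
link = removeWwwFromUrl(link);      // "http://osborne.com/books.html"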

Next, the link is verified to make sure it is a valid URL:

// Verify link and skip if invalid.
URL verifiedLink = verifyUrl(link);
if (verifiedLink == null) {
  continue;
}
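
verifyUrl() is examined earlier in this article; in essence, it returns a URL object when the link parses as a valid URL, and null otherwise. A simplified stand-in, an assumption rather than the book's exact code, might look like this:

// Simplified stand-in for verifyUrl(); an assumption, not the book's exact code.
private static URL verifyUrl(String url) {
  // Assume the crawler only wants HTTP URLs.
  if (!url.toLowerCase().startsWith("http://"))
    return null;
  try {
    return new URL(url);  // throws MalformedURLException for malformed URLs
  } catch (Exception e) {
    return null;
  }
}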

After validating that the link is a URL, the following code checks to see if the link’s host is the same as the one specified by Start URL and checks to see if the link has already been crawled:

/* If specified, limit links to those
   having the same host as the start URL. */
if (limitHost &&
    !pageUrl.getHost().toLowerCase().equals(
      verifiedLink.getHost().toLowerCase()))
{
  continue;
}
// Skip link if it has already been crawled.
if (crawledList.contains(link)) {
  continue;
}
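
As a quick illustration of why both hosts are lowercased before comparison (the URLs here are invented):

URL pageUrl = new URL("http://osborne.com/index.html");
URL verifiedLink = new URL("http://OSBORNE.COM/books.html");

// Host names are case-insensitive; without toLowerCase() this test
// would wrongly treat OSBORNE.COM as a different host.
boolean sameHost = pageUrl.getHost().toLowerCase()
    .equals(verifiedLink.getHost().toLowerCase());  // true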

Finally, the retrieveLinks( ) method ends by adding each link that passes all filters to the link list.

// Add link to list.
linkList.add(link);
}
return (linkList);

After the while loop finishes and all links have been added to the link list, the link list is returned.

