gocrawl vs wombat

gocrawl: BSD-3-Clause license, 2,029 stars, 58.1 thousand downloads/month, Nov 20 2016 (3 years ago)
wombat: MIT license, 1,308 stars, 1.6 thousand downloads/month, Dec 27 2011, latest version 3.0.0 (1 year, 10 months ago)

Gocrawl is a polite, slim and concurrent web crawler library written in Go. It is designed to be simple and easy to use, while still providing a high degree of flexibility and control over the crawling process.

One of the key features of gocrawl is its politeness: it obeys a site's robots.txt file and respects the crawl-delay specified there. It can also take a page's Last-Modified date into account, when available, to avoid refetching content that has not changed. Together these behaviors reduce the load the crawler places on a site. Gocrawl is also highly concurrent, using one worker goroutine per crawled host so that many pages are fetched in parallel, which speeds up large crawls considerably.
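
Politeness is also programmable: gocrawl's Extender interface exposes a ComputeDelay hook that is consulted before each request to a host. Here is a minimal sketch, assuming the hook signature documented in gocrawl's README; the PoliteExtender type is hypothetical:

import (
    "time"

    "github.com/PuerkitoBio/gocrawl"
)

// PoliteExtender keeps gocrawl's default behavior for everything
// except the per-host delay computation.
type PoliteExtender struct {
    gocrawl.DefaultExtender
}

// ComputeDelay is called before each request to a host; the duration
// it returns overrides the delay gocrawl would otherwise apply.
func (p *PoliteExtender) ComputeDelay(host string, di *gocrawl.DelayInfo, last *gocrawl.FetchInfo) time.Duration {
    return 2 * time.Second // be extra conservative, regardless of options
}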

The library also offers a high degree of flexibility in customizing the crawling process. It lets you supply custom callbacks for the different stages of a crawl (filtering URLs, fetching, visiting pages, handling errors, and so on), so pages can be processed however your application requires. Gocrawl also takes care of details such as setting the user agent, normalizing URLs, and automatically discovering the links on each visited page.
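
For example, fetch failures can be handled by overriding the Error callback of the DefaultExtender. A minimal sketch, assuming the Error signature from gocrawl's Extender interface; the LoggingExtender type is hypothetical:

import (
    "log"

    "github.com/PuerkitoBio/gocrawl"
)

// LoggingExtender uses the default implementation of every callback
// except Error.
type LoggingExtender struct {
    gocrawl.DefaultExtender
}

// Error is invoked by gocrawl whenever a crawling error occurs
// (fetch failures, robots.txt errors, and so on).
func (l *LoggingExtender) Error(err *gocrawl.CrawlError) {
    log.Printf("crawl error: %v", err)
}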

Wombat is a Ruby gem that makes it easy to scrape websites and extract structured data from HTML pages. It is built on top of Nokogiri, a popular Ruby gem for parsing and searching HTML and XML documents, and it provides a simple and intuitive API for defining and running web scraping operations.

One of the main features of Wombat is its ability to extract structured data from HTML pages using a simple DSL built on CSS and XPath selectors. You define a set of extraction rules, and Wombat applies them to the page's HTML and returns the matched data as a structured hash. This makes it easy to extract data from even complex pages without writing much custom code.
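
Besides the block form shown under Example Use below, the same rules can be packaged into a class. A sketch based on Wombat's Wombat::Crawler mixin; the class name and selectors are illustrative:

require 'wombat'

class HeadlineScraper
  include Wombat::Crawler

  base_url "https://www.github.com"
  path "/"

  # Each property declares a selector; the crawl returns a hash
  # keyed by property name.
  headline xpath: "//h1"
end

HeadlineScraper.new.crawl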

In addition to its data extraction capabilities, Wombat simplifies other parts of the scraping workflow. Because pages are fetched through Mechanize, it can follow links to scrape multiple pages and handles cookies and HTTP authentication along the way, and its DSL includes list and iterator modes for extracting repeated elements such as menus or paginated results.
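
As a sketch of the iterator mode, which repeats a group of rules over every element matched by a selector (the selector and property names below are illustrative, following the pattern in Wombat's README):

require 'wombat'

Wombat.crawl do
  base_url "https://github.com"
  path "/explore"

  # ":iterator" applies the nested rules to each matched element and
  # collects the results into an array of hashes.
  repos "css=article", :iterator do
    name css: "h3"
    description css: "p"
  end
end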

Example Use


A gocrawl example, adapted from the project README; the package clause and imports are added here so the snippet compiles (save it as a _test.go file and run "go test" to execute the testable example):

package main

import (
    "net/http"
    "regexp"
    "time"

    "github.com/PuerkitoBio/gocrawl"
    "github.com/PuerkitoBio/goquery"
)

// Only enqueue the root and paths beginning with an "a"
var rxOk = regexp.MustCompile(`https?://duckduckgo\.com(/a.*)?$`)

// Create the Extender implementation, based on the gocrawl-provided DefaultExtender,
// because we don't want/need to override all methods.
type ExampleExtender struct {
    gocrawl.DefaultExtender // Will use the default implementation of all but Visit and Filter
}

// Override Visit for our need.
func (x *ExampleExtender) Visit(ctx *gocrawl.URLContext, res *http.Response, doc *goquery.Document) (interface{}, bool) {
    // Use the goquery document or res.Body to manipulate the data
    // ...

    // Return nil and true - let gocrawl find the links
    return nil, true
}

// Override Filter for our need.
func (x *ExampleExtender) Filter(ctx *gocrawl.URLContext, isVisited bool) bool {
    return !isVisited && rxOk.MatchString(ctx.NormalizedURL().String())
}

func ExampleCrawl() {
    // Set custom options
    opts := gocrawl.NewOptions(new(ExampleExtender))

    // You should always set your robot name so that gocrawl looks for the
    // most specific rules possible in robots.txt.
    opts.RobotUserAgent = "Example"
    // and reflect that in the user-agent string used to make requests,
    // ideally with a link so site owners can contact you if there's an issue
    opts.UserAgent = "Mozilla/5.0 (compatible; Example/1.0; +http://example.com)"

    opts.CrawlDelay = 1 * time.Second
    opts.LogFlags = gocrawl.LogAll

    // Play nice with ddgo when running the test!
    opts.MaxVisits = 2

    // Create crawler and start at root of duckduckgo
    c := gocrawl.NewCrawlerWithOptions(opts)
    c.Run("https://duckduckgo.com/")

    // Remove "x" before Output: to activate the example (will run on go test)

    // xOutput: voluntarily fail to see log output
}

The equivalent Wombat example, from the project README:

require 'wombat'

Wombat.crawl do
  base_url "https://www.github.com"
  path "/"

  headline xpath: "//h1"
  subheading css: "p.alt-lead"

  what_is({ css: ".one-fourth h4" }, :list)

  links do
    explore xpath: '/html/body/header/div/div/nav[1]/a[4]' do |e|
      e.gsub(/Explore/, "Love")
    end

    features css: '.nav-item-opensource'
    business css: '.nav-item-business'
  end
end
This will result in:
{
  "headline"=>"How people build software",
  "subheading"=>"Millions of developers use GitHub to build personal projects, support their businesses, and work together on open source technologies.",
  "what_is"=>[
    "For everything you build",
    "A better way to work",
    "Millions of projects",
    "One platform, from start to finish"
  ],
  "links"=>{
    "explore"=>"Love",
    "features"=>"Open source",
    "business"=>"Business"
  }
}
