crwlr/crawler vs. Wombat
The crwlr/crawler library provides a framework and a lot of ready-to-use, so-called steps that you can combine as building blocks to build your own crawlers and scrapers.
Some features:

- Crawler politeness (respecting robots.txt, throttling, ...)
- Load URLs using
  - a (PSR-18) HTTP client (default is of course Guzzle)
  - or a headless browser (Chrome) to get the source after JavaScript execution
- Get absolute links from HTML documents
- Get sitemaps from robots.txt and get all URLs from those sitemaps
- Crawl (load) all pages of a website
- Use cookies (or don't)
- Use any HTTP method (GET, POST, ...) and send any headers or body
- Iterate over paginated list pages
- Extract data from:
  - HTML and also XML (using CSS selectors or XPath queries)
  - JSON (using dot notation)
  - CSV (map columns)
- Extract schema.org structured data in JSON-LD format from HTML documents
- Keep memory usage low by using PHP Generators
- Cache HTTP responses during development, so you don't have to load pages again and again after every code change
- Get logs about what your crawler is doing (accepts any PSR-3 LoggerInterface)
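To give a rough idea of how these steps compose, here is a minimal sketch along the lines of the crwlr/crawler documentation; the start URL, the CSS selectors, and the `MyCrawler` user agent are placeholders, not taken from the feature list above:

```php
<?php

require_once 'vendor/autoload.php';

use Crwlr\Crawler\HttpCrawler;
use Crwlr\Crawler\Steps\Html;
use Crwlr\Crawler\Steps\Loading\Http;

// Compose a crawler out of ready-made steps: load a listing page, follow the
// links it contains, load each linked page, and extract a few fields from it.
$crawler = HttpCrawler::make()->withBotUserAgent('MyCrawler');

$crawler->input('https://www.example.com/articles');    // placeholder start URL

$crawler->addStep(Http::get())                           // load the listing page
    ->addStep(Html::getLinks('a.article-link'))          // placeholder selector for detail links
    ->addStep(Http::get())                                // load each linked page
    ->addStep(
        Html::first('article')->extract([                 // placeholder selectors
            'title' => 'h1',
            'date'  => '.publish-date',
        ])
    );

// run() returns a PHP Generator, so results are handled one at a time (low memory usage).
foreach ($crawler->run() as $result) {
    var_dump($result->toArray());
}
```

Each `addStep()` call appends another building block, and the output of one step (for example, the links found by `Html::getLinks()`) becomes the input of the next.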
Wombat is a Ruby gem that makes it easy to scrape websites and extract structured data from HTML pages. It is built on top of Nokogiri, a popular Ruby gem for parsing and searching HTML and XML documents, and it provides a simple and intuitive API for defining and running web scraping operations.
One of the main features of Wombat is its ability to extract structured data from HTML pages using a simple, CSS-like syntax. It allows you to define a set of rules for extracting data from a page, and then automatically applies those rules to the page's HTML to extract the desired data. This makes it easy to extract data from even complex and dynamic pages, without having to write a lot of custom code.
In addition to its data extraction capabilities, Wombat provides a variety of other features that simplify web scraping. It can automatically follow links and scrape multiple pages, handle pagination and AJAX requests, and deal with cookies and authentication. It also provides built-in support for parallelism and queueing to speed up the scraping process.
Example Use
```php
<?php

require_once 'vendor/autoload.php';

use Crwlr\Crawler\HttpCrawler;
use Crwlr\Crawler\Steps\Html;
use Crwlr\Crawler\Steps\Loading\Http;

// Ready-made crawler instance with a custom user agent for all requests.
$crawler = HttpCrawler::make()->withBotUserAgent('webscraping.fyi');

// Load the page and extract the <title> text. To follow links as well,
// add Html::getLinks() followed by another Http::get() step.
$crawler->input('https://example.com');
$crawler->addStep(Http::get())
    ->addStep(Html::root()->extract(['title' => 'title']));

foreach ($crawler->run() as $result) {
    var_dump($result->toArray());
}
```
```ruby
require 'wombat'
Wombat.crawl do
  base_url "https://www.github.com"
  path "/"

  headline xpath: "//h1"
  subheading css: "p.alt-lead"

  what_is({ css: ".one-fourth h4" }, :list)

  links do
    explore xpath: '/html/body/header/div/div/nav[1]/a[4]' do |e|
      e.gsub(/Explore/, "Love")
    end

    features css: '.nav-item-opensource'
    business css: '.nav-item-business'
  end
end

# The crawl above returns the following hash:
{
  "headline"=>"How people build software",
  "subheading"=>"Millions of developers use GitHub to build personal projects, support their businesses, and work together on open source technologies.",
  "what_is"=>[
    "For everything you build",
    "A better way to work",
    "Millions of projects",
    "One platform, from start to finish"
  ],
  "links"=>{
    "explore"=>"Love",
    "features"=>"Open source",
    "business"=>"Business"
  }
}
```