Panther vs Spidr
Panther is a convenient standalone library to scrape websites and to run end-to-end tests using real browsers.
Panther is super powerful. It leverages the W3C's WebDriver protocol to drive native web browsers such as Google Chrome and Firefox.
Panther is very easy to use because it implements Symfony's popular BrowserKit and DomCrawler APIs and contains all the features you need to test your apps. It will sound familiar if you have ever created a functional test for a Symfony app, as the API is exactly the same. Keep in mind that Panther can be used in any PHP project, as it is a standalone library.
Panther automatically finds your local installation of Chrome or Firefox and launches it, so you don't need to install anything else on your computer; a Selenium server is not needed!
In test mode, Panther automatically starts your application using the built-in PHP web server. You can focus on writing your tests or web-scraping scenarios, and Panther will take care of everything else.
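To see how familiar the testing API feels, here is a minimal sketch of a Panther test (the class name, route, and "Welcome" text are placeholders; PantherTestCase and createPantherClient() are Panther's test entry points):

<?php
use Symfony\Component\Panther\PantherTestCase;

class HomepageTest extends PantherTestCase // hypothetical test class
{
    public function testHomepage(): void
    {
        // Boots your app on the built-in PHP web server and launches a real browser
        $client = static::createPantherClient();
        $client->request('GET', '/');

        // The same assertions you would use in a Symfony functional test
        $this->assertSelectorTextContains('h1', 'Welcome'); // placeholder text
    }
}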
Features:
- executes the JavaScript code contained in webpages
- supports everything that Chrome (or Firefox) implements
- allows taking screenshots
- can wait for asynchronously loaded elements to show up
- lets you run your own JS code or XPath queries in the context of the loaded page (see the sketch after this list)
- supports custom Selenium server installations
- supports remote browser testing services including SauceLabs and BrowserStack
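As a sketch of the JS and XPath point above, custom JavaScript and XPath queries run directly against the live page (example.com and the selectors are placeholders):

<?php
use Symfony\Component\Panther\Client;

require __DIR__.'/vendor/autoload.php';

$client = Client::createChromeClient();
$crawler = $client->request('GET', 'https://example.com'); // placeholder URL

// Execute arbitrary JavaScript in the context of the loaded page
$title = $client->executeScript('return document.title;');

// Query the live DOM with XPath through the DomCrawler API
$heading = $crawler->filterXPath('//h1')->text();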
Spidr is a Ruby gem that provides a simple and flexible way to spider and scrape websites. It lets you navigate through a website, following links and scraping data as you go. It is built on top of Nokogiri, a popular Ruby gem for parsing and searching HTML and XML documents, and it exposes an intuitive API for defining and running web-scraping operations.
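Because every fetched page is already parsed with Nokogiri, you can query it directly; a minimal sketch (example.com is a placeholder, and page.doc is the parsed Nokogiri document Spidr exposes):

require 'spidr'

Spidr.start_at('http://example.com') do |spider|
  spider.every_html_page do |page|
    # page.doc is the Nokogiri document Spidr has already parsed
    puts page.doc.css('h1').map(&:text).inspect
  end
end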
One of Spidr's main features is its ability to spider an entire website, following every link it finds and visiting every page. This lets you scrape large amounts of data quickly, without having to manually specify which pages to visit.
In addition to its spidering capabilities, Spidr provides a variety of other features that can simplify the web scraping process. It can automatically filter which links to follow and which pages to visit, it can handle cookies and authentication, and it can store the scraped data in a database or file. It also provides built-in support for parallelism and queueing to speed up the scraping process.
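As a sketch of the filtering rules (example.com and the pattern are placeholders; ignore_links_like is one of Spidr's visit/ignore filters):

require 'spidr'

# Restrict the crawl to a single site and skip unwanted links
Spidr.site('http://example.com/') do |spider|
  # Never follow links whose URL matches this pattern
  spider.ignore_links_like(/logout|sign_out/)

  spider.every_html_page do |page|
    puts "Scraping: #{page.url}"
  end
end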
Example Use

Panther (PHP):
<?php
use Symfony\Component\Panther\Client;
require __DIR__.'/vendor/autoload.php'; // Composer's autoloader
$client = Client::createChromeClient();
// Or, if you care about the open web and prefer to use Firefox:
// $client = Client::createFirefoxClient();
$client->request('GET', 'https://api-platform.com'); // Yes, this website is 100% written in JavaScript
$client->clickLink('Get started');
// Wait for an element to be present in the DOM (even if hidden)
$crawler = $client->waitFor('#installing-the-framework');
// Alternatively, wait for an element to be visible
$crawler = $client->waitForVisibility('#installing-the-framework');
echo $crawler->filter('#installing-the-framework')->text();
$client->takeScreenshot('screen.png'); // Yeah, screenshot!
Spidr (Ruby):

require 'spidr'
require 'nokogiri'

Spidr.start_at("http://example.com") do |spider|
  spider.every_page do |page|
    puts "Visiting: #{page.url}"

    # Extract data from the page using Nokogiri
    doc = Nokogiri::HTML(page.body)
    title = doc.css("title").text
    puts "Title: #{title}"
  end
end