Panther vs Scrapyd
Panther is a convenient standalone library to scrape websites and to run end-to-end tests using real browsers.
Panther is super powerful. It leverages the W3C's WebDriver protocol to drive native web browsers such as Google Chrome and Firefox.
Panther is very easy to use because it implements Symfony's popular BrowserKit and DomCrawler APIs, and it contains all the features you need to test your apps. It will sound familiar if you have ever created a functional test for a Symfony app, as the API is exactly the same! Keep in mind that Panther can be used in any PHP project, as it is a standalone library.
Panther automatically finds your local installation of Chrome or Firefox and launches it, so you don't need to install anything else on your computer: no Selenium server is needed!
In test mode, Panther automatically starts your application using PHP's built-in web server. You can focus on writing your tests or web-scraping scenarios, and Panther will take care of everything else.
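For instance, here is a minimal sketch of such a functional test (it assumes symfony/panther is installed via Composer and that your app serves a page with an h1 on its homepage; the expected text is a placeholder):

<?php
use Symfony\Component\Panther\PantherTestCase;

class HomepageTest extends PantherTestCase
{
    public function testHomepage(): void
    {
        // createPantherClient() starts your app with PHP's built-in web server
        // and drives a real browser against it
        $client = static::createPantherClient();
        $client->request('GET', '/');

        // BrowserKit-style assertion helper; 'Welcome' is a placeholder
        $this->assertSelectorTextContains('h1', 'Welcome');
    }
}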
Features:
- executes the JavaScript code contained in webpages
- supports everything that Chrome (or Firefox) implements
- allows taking screenshots
- can wait for asynchronously loaded elements to show up
- lets you run your own JS code or XPath queries in the context of the loaded page (see the sketch after this list)
- supports custom Selenium server installations
- supports remote browser testing services including SauceLabs and BrowserStack
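As an illustration of the JS and XPath feature above, a short sketch (the URL and the queries are placeholders):

<?php
use Symfony\Component\Panther\Client;
require __DIR__.'/vendor/autoload.php';

$client = Client::createChromeClient();
$crawler = $client->request('GET', 'https://example.com');

// Execute arbitrary JavaScript in the context of the loaded page
$title = $client->executeScript('return document.title;');

// Run an XPath query against the live DOM
$linkCount = $crawler->filterXPath('//a[@href]')->count();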
Scrapyd is a service for running Scrapy spiders. Built in Python, it is meant to be used in a server-client architecture: the Scrapyd server runs on a remote machine, and clients schedule and control spider runs on it through an HTTP JSON API. With Scrapyd you can schedule spider runs on a regular basis or on demand, cancel running jobs, view the status of running spiders, inspect the logs of completed ones, and manage spider settings and configurations.
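For instance, checking on and cancelling jobs looks like this (a sketch that assumes a Scrapyd instance on localhost:6800 with a deployed project named myproject; JOB_ID stands for an id returned when the job was scheduled):

# List pending, running and finished jobs for a project
$ curl http://localhost:6800/listjobs.json?project=myproject

# Cancel a job by its id
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=JOB_ID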
You can install the package via pip by running pip install scrapyd, and then start the service by running the scrapyd command in your command prompt. By default it starts a web server on port 6800; you can specify a different port with the http_port setting in the scrapyd.conf configuration file.
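A minimal scrapyd.conf sketch (http_port and bind_address are documented Scrapyd settings; the values shown here are the defaults):

[scrapyd]
# Port for the HTTP JSON API and web UI
http_port = 6800
# Interface to bind to; 127.0.0.1 restricts access to the local machine
bind_address = 127.0.0.1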
Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine, or if you need to schedule spider runs on a regular basis. It's also useful if you have multiple spiders and need a way to manage and monitor them all in one place.
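To get spiders onto a Scrapyd server in the first place, the usual route is the scrapyd-client package; a sketch, where the URL and project name are placeholders:

# scrapy.cfg in your Scrapy project
[deploy]
url = http://localhost:6800/
project = myproject

# then, from the project directory:
$ pip install scrapyd-client
$ scrapyd-deploy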
For a more feature-rich web interface on top of Scrapyd, see ScrapydWeb.
Example Use
<?php
use Symfony\Component\Panther\Client;
require __DIR__.'/vendor/autoload.php'; // Composer's autoloader
$client = Client::createChromeClient();
// Or, if you care about the open web and prefer to use Firefox
$client = Client::createFirefoxClient();
$client->request('GET', 'https://api-platform.com'); // Yes, this website is 100% written in JavaScript
$client->clickLink('Get started');
// Wait for an element to be present in the DOM (even if hidden)
$crawler = $client->waitFor('#installing-the-framework');
// Alternatively, wait for an element to be visible
$crawler = $client->waitForVisibility('#installing-the-framework');
echo $crawler->filter('#installing-the-framework')->text();
$client->takeScreenshot('screen.png'); // Yeah, screenshot!
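
# Start the Scrapyd server (HTTP API and web UI on port 6800 by default)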
$ scrapyd
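# Schedule a run of spider2 from the deployed project myproject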
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
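Scrapyd replies with a JSON document containing a status and the id of the newly created job; that job id is what you pass to cancel.json or look up in listjobs.json.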