PHPScraper vs Scrapyd
PHPScraper is a versatile web utility for PHP. Its main goal is to get things done instead of getting distracted by selectors, preparing and converting data structures, and so on. Instead, you can simply go to a website and collect the relevant information for your project.
PHPScraper is a minimalistic scraper framework that is built on top of other popular scraping tools.
Features:
- Direct access to basic page data such as meta data, links, images, headings, content, and keywords.
- File downloading.
- RSS, sitemap, and other feed processing (see the sketch after this list).
- CSV, XML, and JSON file processing.
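As a rough sketch of the file and feed features, the snippet below uses helpers from the PHPScraper documentation (`rssUrls`, `sitemap`, and `fetchAsset`); treat the exact property and method names as assumptions to verify against the current docs:

// create scraper object and load a page
$web = new \Spekulatius\PHPScraper\PHPScraper;
$web->go('https://phpscraper.de');

// feed processing: RSS feed URLs announced on the page, plus the site's sitemap
$feedUrls = $web->rssUrls;
$sitemapEntries = $web->sitemap;

// file downloading: fetch a raw asset, e.g. a CSV file, as a string
$csvString = $web->fetchAsset('https://test-pages.phpscraper.de/test.csv');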
Scrapyd is a service for running Scrapy spiders. It allows you to schedule spiders to run at regular intervals or on demand, and to run spiders on remote machines. It is built in Python and is meant to be used in a server-client architecture: the Scrapyd server runs on a remote machine, and clients schedule and control spider runs on it using an HTTP API. Through that API you can schedule spider runs, cancel them, and view the status of running spiders; you can also inspect the logs of completed spiders and manage spider settings and configurations.
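For example, once the server is running, you can drive it entirely through the JSON endpoints of the HTTP API (the project, spider, and job id below are placeholders):

# schedule a spider run; the JSON response contains a job id
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider

# list pending, running, and finished jobs for a project
$ curl "http://localhost:6800/listjobs.json?project=myproject"

# cancel a run, using the job id returned by schedule.json
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=<jobid>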
You can install the package via pip by running `pip install scrapyd`, and then start the service by running the `scrapyd` command in your command prompt.
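For example:

$ pip install scrapyd
$ scrapyd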
By default, it will start a web server on port 6800; you can specify a different port via the `http_port` setting in the `scrapyd.conf` configuration file.
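A minimal `scrapyd.conf` sketch (both keys are documented Scrapyd settings; the values are placeholders to adjust for your setup):

[scrapyd]
http_port = 6801
bind_address = 0.0.0.0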
Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine, or if you need to schedule spider runs on a regular basis. It's also useful if you have multiple spiders and need a way to manage and monitor them all in one place.
For a more complete web interface for managing Scrapyd, see ScrapydWeb.
Example Use
// create scraper object
$web = new \Spekulatius\PHPScraper\PHPScraper;
// go to URL
$web->go('https://test-pages.phpscraper.de/content/selectors.html');
// elements can be found using XPath:
echo $web->filter("//*[@id='by-id']")->text(); // "Content by ID"
// or pre-defined variables covering basic page data:
$web->links; // for all links
$web->headings;
$web->images;
$web->contentKeywords;
$web->orderedLists;
$web->unorderedLists;
$web->paragraphs;
$web->outline; // basic page outline
$web->cleanOutlineWithParagraphs; // cleaned page outline including paragraphs
# start the Scrapyd server (listens on port 6800 by default)
$ scrapyd

# schedule a run of "spider2" in project "myproject" via the HTTP API
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2