Dataflow kit vs ScrapydWeb
Dataflow kit ("DFK") is a Web Scraping framework for Gophers. It extracts data from web pages, following the specified CSS Selectors. You can use it in many ways for data mining, data processing or archiving.
Web-scraping pipeline consists of 3 general components:
- Downloading an HTML web page (Fetch Service);
- Parsing the HTML page and retrieving the data we're interested in (Parse Service);
- Encoding the parsed data to CSV, MS Excel, JSON, JSON Lines, or XML format.
For fetching, Dataflow kit offers two types of page fetchers:
- Base fetcher uses the standard Go HTTP client to fetch pages as is. It works faster than the Chrome fetcher, but it cannot render dynamic JavaScript-driven web pages.
- Chrome fetcher is intended for rendering dynamic JavaScript-based content. It sends requests to Chrome running in headless mode.
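To make the difference concrete, here is a minimal Go sketch of a base-style fetch using the standard net/http client. It is not Dataflow kit's own code, just an illustration: the response contains the page as served, so any content generated by JavaScript will be missing.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// A plain HTTP GET, similar in spirit to the Base fetcher:
	// fast, but it returns the page "as is" without executing JavaScript.
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("https://example.com")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched %d bytes, status %s\n", len(body), resp.Status)
}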
For parsing, Dataflow kit extracts data from the downloaded web page following the rules listed in a JSON configuration file (see the example configuration at the end of this section). Extracted data is returned in CSV, MS Excel, JSON, or XML format.
Some Dataflow kit features:
- Scraping of JavaScript-generated pages;
- Data extraction from paginated websites;
- Processing infinitely scrolled pages;
- Scraping of websites behind a login form;
- Cookies and sessions handling;
- Following links and processing detail pages;
- Managing delays between requests per domain (a sketch of the idea follows this list);
- Following robots.txt directives;
- Saving intermediate data in Diskv or MongoDB; the storage interface is flexible enough to add more storage types easily;
- Encoding results to CSV, MS Excel, JSON (Lines), or XML formats;
- Dataflow kit is fast: it takes about 4-6 seconds to fetch and then parse 50 pages;
- Dataflow kit is suitable for processing quite large volumes of data: our tests show that parsing approximately 4 million pages takes about 7 hours.
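The per-domain delay feature can be pictured as a small rate limiter keyed by host name. The sketch below only illustrates the idea and makes no claims about Dataflow kit's actual implementation; the two-second delay is an arbitrary value.

package main

import (
	"fmt"
	"net/url"
	"sync"
	"time"
)

// domainLimiter enforces a minimum delay between requests to the same host.
// Illustrative only; not Dataflow kit's internal code.
type domainLimiter struct {
	mu    sync.Mutex
	delay time.Duration
	last  map[string]time.Time
}

func newDomainLimiter(delay time.Duration) *domainLimiter {
	return &domainLimiter{delay: delay, last: make(map[string]time.Time)}
}

// Wait blocks until at least `delay` has passed since the previous
// request to the same host, then records the new request time.
func (l *domainLimiter) Wait(rawURL string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	l.mu.Lock()
	wait := l.delay - time.Since(l.last[u.Host])
	if wait < 0 {
		wait = 0
	}
	l.last[u.Host] = time.Now().Add(wait)
	l.mu.Unlock()
	time.Sleep(wait)
	return nil
}

func main() {
	limiter := newDomainLimiter(2 * time.Second)
	for _, page := range []string{"https://example.com/p1", "https://example.com/p2"} {
		if err := limiter.Wait(page); err != nil {
			panic(err)
		}
		fmt.Println("fetching", page, "at", time.Now().Format(time.RFC3339))
	}
}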
ScrapydWeb is a web-based management tool for the Scrapyd service. It is built using the Python Flask framework and allows you to easily manage and monitor your Scrapy spider projects through a web interface.
ScrapydWeb lets you view the status of your running spiders, inspect the logs of completed runs, schedule new spider runs, and manage spider settings and configurations.
It provides a simple way to manage your scraping tasks: you can schedule and run multiple spiders simultaneously, and its user-friendly web interface makes it easy to monitor their progress.
You can install the package via pip by running pip install scrapydweb, and then start it by running the scrapydweb command in your terminal.
It will start a web server that you can access through your web browser, by default at http://127.0.0.1:5000/ (port 6800 is where Scrapyd itself listens).
You will need to have Scrapyd running in order to use ScrapydWeb. Scrapyd is a service for running Scrapy spiders: it lets you schedule spider runs over a simple API and also run spiders on remote machines.
Example use: a Dataflow kit scraping configuration
{
    "name": "collection",
    "request": {
        "url": "https://example.com"
    },
    "fields": [
        {
            "name": "Title",
            "selector": ".product-container a",
            "extractor": {
                "types": ["text", "href"],
                "filters": ["trim", "lowerCase"],
                "params": {
                    "includeIfEmpty": false
                }
            }
        },
        {
            "name": "Image",
            "selector": "#product-container img",
            "extractor": {
                "types": ["alt", "src", "width", "height"],
                "filters": ["trim", "upperCase"]
            }
        },
        {
            "name": "Buyinfo",
            "selector": ".buy-info",
            "extractor": {
                "types": ["text"],
                "params": {
                    "includeIfEmpty": false
                }
            }
        }
    ],
    "paginator": {
        "selector": ".next",
        "attr": "href",
        "maxPages": 3
    },
    "format": "json",
    "fetcherType": "chrome",
    "paginateResults": false
}
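Once the Dataflow kit services are running, a configuration like the one above is submitted to the Parse service over HTTP, and the parsed results come back in the requested format. The Go sketch below assumes the Parse service accepts the payload at http://localhost:8001/parse; that address and route are placeholders chosen for illustration, so substitute the endpoint of your own deployment.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Read the scraping configuration (the JSON shown above) from a file.
	payload, err := os.ReadFile("collection.json")
	if err != nil {
		panic(err)
	}

	// NOTE: the endpoint below is an assumption made for illustration;
	// point it at wherever your Parse service is actually listening.
	resp, err := http.Post("http://localhost:8001/parse", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // parsed results, JSON here since "format" is "json"
}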