
scrapydweb vs splash

scrapydweb: GPL-3.0 license, ~3,048 GitHub stars, ~1.2k downloads/month, first release Sep 30 2018, latest version 1.5.0 (5 months ago)
splash: BSD-3-Clause license, ~4,039 GitHub stars, ~439 downloads/month, first release Apr 25 2014, latest version 3.5 (4 years ago)

ScrapydWeb is a web-based management tool for the Scrapyd service. It is built using the Python Flask framework and allows you to easily manage and monitor your Scrapy spider projects through a web interface.

ScrapydWeb allows you to view the status of your running spiders, view the logs of completed spiders, schedule new spider runs, and manage spider settings and configurations.

ScrapydWeb provides a simple way to manage your scraping tasks and allows you to schedule and run multiple spiders simultaneously. It also provides a user-friendly web interface that makes it easy to view the status of your spiders and monitor their progress.
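
ScrapydWeb drives a plain Scrapyd server under the hood, so the same actions can also be performed directly against Scrapyd's JSON API. A minimal sketch of scheduling a spider run that way, assuming a Scrapyd instance on localhost:6800 and placeholder project/spider names:

# schedule a spider run through Scrapyd's schedule.json endpoint
# (the same operation ScrapydWeb performs from its web UI)
import requests

response = requests.post(
    "http://localhost:6800/schedule.json",
    data={"project": "myproject", "spider": "myspider"},
)

# Scrapyd responds with the job id of the newly scheduled run
print(response.json())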

You can install the package via pip by running pip install scrapydweb, and then launch it with the scrapydweb command in your terminal.

This starts a web server that you can access in your browser at http://localhost:5000/ by default. You will need a running Scrapyd instance to use ScrapydWeb: Scrapyd is a service for running Scrapy spiders which lets you deploy projects and schedule spider runs over a JSON API, including on remote machines.
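
On first launch ScrapydWeb generates a settings file (scrapydweb_settings_v*.py) where you point it at your Scrapyd servers. A minimal sketch, assuming the standard SCRAPYD_SERVERS option and placeholder hosts:

# scrapydweb_settings_v*.py (generated in the working directory on first run)
# list every Scrapyd instance the web UI should manage
SCRAPYD_SERVERS = [
    '127.0.0.1:6800',                        # local Scrapyd instance
    # 'username:password@192.168.0.2:6800',  # remote instance with basic auth
]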

Splash is a JavaScript rendering service with an HTTP API: a lightweight headless browser implemented in Python 3 using Twisted and Qt5.

It is built on top of the QtWebKit library and allows developers to interact with web pages in headless mode, meaning the pages are rendered in the background without being displayed on screen.

Splash is particularly useful for web scraping and web testing tasks, as it allows developers to interact with web pages in a way that is very similar to how a human user would interact with a browser.

It also lets you execute JavaScript and interact with pages that rely heavily on JavaScript.
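
Beyond the plain rendering endpoints, Splash exposes an /execute endpoint that runs a Lua script inside the embedded browser. A small sketch using Splash's standard Lua scripting API (the target URL is a placeholder):

import requests

# Lua script executed by Splash: navigate, wait for JavaScript, return HTML
lua_script = """
function main(splash, args)
    splash:go(args.url)   -- load the page
    splash:wait(2)        -- give JavaScript time to run
    return splash:html()  -- return the rendered HTML
end
"""

response = requests.get(
    "http://localhost:8050/execute",
    params={"lua_source": lua_script, "url": "https://www.example.com"},
)
print(response.text)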

Unlike Selenium or Playwright, Splash is powered by an embedded WebKit browser rather than a real browser like Chrome or Firefox. As a downside, Splash requests are easy to detect and block when scraping websites with anti-scraping features.

One benefit of Splash is that it integrates seamlessly with Scrapy.
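
The integration is provided by the scrapy-splash package, which routes Scrapy requests through the Splash HTTP API. A minimal spider sketch, assuming scrapy-splash is installed and its middlewares are enabled in settings.py (spider and domain names are placeholders):

import scrapy
from scrapy_splash import SplashRequest

class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        # SplashRequest sends the request to Splash, waiting 2 seconds
        # so JavaScript can render before the HTML is returned to Scrapy
        yield SplashRequest(
            "https://www.example.com",
            callback=self.parse,
            args={"wait": 2},
        )

    def parse(self, response):
        # response.text now contains the JavaScript-rendered HTML
        yield {"title": response.css("title::text").get()}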

Example Use


# once the Splash server is started it can be asked to render pages
# through plain HTTP requests:
import requests

url = "http://localhost:8050/render.html"
payload = {
    'url': 'https://www.example.com',  # page to render
    'timeout': 30,                     # overall timeout for the render, in seconds
    'wait': 2                          # seconds to wait for JavaScript to finish
}

response = requests.get(url, params=payload)

# Get the page HTML
print(response.text)
