
Gerapy vs Splash

Gerapy: MIT license, 4.6 thousand downloads/month, first release Jul 04 2017, latest version 0.9.13 (1 year, 3 months ago)
Splash: BSD-3-Clause license, 1.8 thousand downloads/month, first release Apr 25 2014, latest version 3.5 (4 years ago)

Gerapy is a distributed crawler management framework based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django and Vue.js.

It is built on top of the Scrapy framework and provides a simple web interface for managing scraping tasks. Gerapy also supports scheduling and distributed crawling across multiple Scrapyd servers, and includes a built-in web-based dashboard for monitoring and managing those tasks. Additionally, Gerapy is designed to be highly extensible, allowing users to create custom plugins and integrations.
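Under the hood, Gerapy drives Scrapyd's HTTP JSON API to deploy projects and schedule spider runs. As a minimal sketch of that layer (the host, project name and spider name below are illustrative assumptions, not part of Gerapy itself):

import requests

# Scrapyd's schedule.json endpoint starts a spider run on the server.
# Assumes a Scrapyd instance on localhost:6800 with a deployed project
# named "example" containing a spider named "quotes" (hypothetical names).
response = requests.post(
    "http://localhost:6800/schedule.json",
    data={"project": "example", "spider": "quotes"},
)

# Scrapyd replies with a JSON status and the id of the scheduled job,
# e.g. {"status": "ok", "jobid": "..."}
print(response.json())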

Overall, Gerapy is a useful tool for those looking to automate web scraping tasks and extract data from websites.

Splash is a JavaScript rendering service with an HTTP API: a lightweight headless browser implemented in Python 3 using Twisted and Qt5.

It is built on top of the QtWebKit library and allows developers to interact with web pages in headless mode, meaning pages are rendered in the background without being displayed on screen.

Splash is particularly useful for web scraping and web testing, as it lets developers interact with web pages much like a human user would in a browser.

It can also execute JavaScript, so you can interact with pages that rely heavily on JavaScript to render their content.
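For pages that need custom browser-side logic, Splash also exposes an execute endpoint that runs a Lua script against the page. A minimal sketch, assuming a Splash server on the default localhost:8050 and example.com as a stand-in target:

import requests

# Lua script run inside Splash: load the page, wait for JavaScript
# to settle, then return the rendered HTML.
lua_script = """
function main(splash, args)
    splash:go(args.url)
    splash:wait(2)
    return splash:html()
end
"""

response = requests.post(
    "http://localhost:8050/execute",
    json={"lua_source": lua_script, "url": "https://www.example.com"},
)
print(response.text)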

Unlike Selenium or Playwright, Splash is powered by an embedded WebKit browser rather than a real browser such as Chrome or Firefox. As a downside, Splash requests are easy to detect and block when scraping websites with anti-scraping protections.

One benefit of Splash is that it integrates seamlessly with Scrapy via the scrapy-splash plugin.
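With scrapy-splash installed and its middlewares enabled in settings.py (as described in the plugin's README), spiders route requests through Splash using SplashRequest. A minimal sketch; the spider name, start URL and selector are illustrative assumptions:

import scrapy
from scrapy_splash import SplashRequest

class QuotesSpider(scrapy.Spider):
    # Hypothetical spider for illustration.
    name = "quotes"

    def start_requests(self):
        # Render the page through Splash, waiting 2 seconds for
        # JavaScript to finish before returning the HTML.
        yield SplashRequest(
            "https://www.example.com",
            callback=self.parse,
            args={"wait": 2},
        )

    def parse(self, response):
        # response here contains the Splash-rendered HTML.
        title = response.css("title::text").get()
        self.logger.info("Page title: %s", title)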

Example Use


# Once the Splash server is running (e.g. via its Docker image), pages
# can be rendered through simple HTTP requests:
import requests

# render.html returns the page HTML after JavaScript execution
url = "http://localhost:8050/render.html"
payload = {
    'url': 'https://www.example.com',
    'timeout': 30,  # maximum rendering time in seconds
    'wait': 2,      # seconds to wait after page load before returning
}

response = requests.get(url, params=payload)

# Get the rendered page HTML
print(response.text)
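Splash serves sibling endpoints that accept the same parameters, such as render.png for a screenshot of the rendered page and render.json for structured rendering information.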

Alternatives / Similar

