
selenium-driverless vs splash

selenium-driverless: NOASSERTION license, 538 GitHub stars, 8.2k downloads/month, first released Jul 22 2022, latest version 1.9.3.1 (4 months ago)
splash: BSD-3-Clause license, 4,078 GitHub stars, 727 downloads/month, first released Apr 25 2014, latest version 3.5 (4 years ago)

Selenium Driverless is a Selenium-inspired browser automation library focused on bypassing web scraping detection. It shares most of Selenium's API and UX, but implements several extensions that make the scraper more difficult to detect, plus extra usability features:

- Bypass Cloudflare
- Multiple tab scraping
- Multiple context support
- Proxy auth
- Network interception

Splash is a JavaScript rendering service with an HTTP API: a lightweight browser implemented in Python 3 using Twisted and Qt5.

It is built on top of the QtWebKit library and lets developers interact with web pages in headless mode, meaning pages are rendered in the background without being displayed on screen.

Splash is particularly useful for web scraping and web testing tasks, as it lets developers interact with web pages much like a human user would in a browser.

It can also execute JavaScript, making it possible to interact with pages that rely heavily on client-side scripting.
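For example, Splash's /execute endpoint accepts a Lua script that can drive the page and run JavaScript in it. A minimal sketch, assuming a Splash instance listening on localhost:8050:

import requests

# Lua script for Splash's /execute endpoint: load the page, wait for
# rendering, then evaluate JavaScript in the page context.
lua_script = """
function main(splash, args)
    assert(splash:go(args.url))
    splash:wait(1)
    return splash:evaljs("document.title")
end
"""

response = requests.get(
    "http://localhost:8050/execute",
    params={"url": "https://www.example.com", "lua_source": lua_script},
)
print(response.text)  # the evaluated page title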

Unlike Selenium or Playwright, Splash is powered by an embedded WebKit browser rather than a real browser like Chrome or Firefox. As a downside, Splash requests are easy to detect and block when scraping websites with anti-scraping features.

One benefit of Splash is that it integrates seamlessly with Scrapy through the scrapy-splash plugin, as sketched below.
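A minimal sketch of that integration, assuming Splash runs on localhost:8050 and the scrapy-splash package is installed (middleware names and priorities follow the scrapy-splash README):

import scrapy
from scrapy_splash import SplashRequest

class ExampleSpider(scrapy.Spider):
    name = "example"
    # These settings normally live in settings.py; inlined here for brevity.
    custom_settings = {
        "SPLASH_URL": "http://localhost:8050",
        "DOWNLOADER_MIDDLEWARES": {
            "scrapy_splash.SplashCookiesMiddleware": 723,
            "scrapy_splash.SplashMiddleware": 725,
            "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 810,
        },
        "SPIDER_MIDDLEWARES": {
            "scrapy_splash.SplashDeduplicateArgsMiddleware": 100,
        },
        "DUPEFILTER_CLASS": "scrapy_splash.SplashAwareDupeFilter",
    }

    def start_requests(self):
        # SplashRequest routes the request through Splash; 'wait' gives
        # JavaScript time to execute before the HTML is returned.
        yield SplashRequest("https://www.example.com", self.parse, args={"wait": 2})

    def parse(self, response):
        yield {"title": response.css("title::text").get()}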

Example Use


# Selenium Driverless works much like Selenium, just with a different
# import and an asyncio-based API:
import asyncio
from selenium_driverless import webdriver

async def main():
    options = webdriver.ChromeOptions()
    async with webdriver.Chrome(options=options) as driver:
        await driver.get('https://nowsecure.nl', wait_load=True)
        await driver.save_screenshot('screenshot.png')

asyncio.run(main())

# Once the Splash server is running, it can be asked to render pages
# through HTTP requests:
import requests

url = "http://localhost:8050/render.html"
payload = {
    'url': 'https://www.example.com',
    'timeout': 30,
    'wait': 2
}

response = requests.get(url, params=payload)

# Get the page HTML
print(response.text)
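
Splash also exposes render.png and render.json endpoints; a screenshot equivalent of the example above:

png = requests.get(
    "http://localhost:8050/render.png",
    params={'url': 'https://www.example.com', 'wait': 2},
)
with open('screenshot.png', 'wb') as f:
    f.write(png.content)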
