
playwright vs requestium

playwright: Apache-2.0 license · 11,763 GitHub stars · 5.5 million downloads/month · first release Feb 24 2021 · latest version 1.48.0 (6 days ago)
requestium: BSD-3-Clause license · 1,834 GitHub stars · 53.1 thousand downloads/month · first release Dec 28 2012 · latest version 0.4.0 (8 months ago)

playwright is a Python package that allows developers to automate web browsers for end-to-end testing, web scraping, and web performance analysis. It can drive Chromium, Firefox, and WebKit browsers through a single API, and it is designed to be fast, reliable, and easy to use.

playwright is similar to Selenium, but it provides a more modern and powerful API, with features such as automatic waiting for elements, automatic retries, and built-in support for browser contexts, which let you run multiple isolated pages within a single browser instance.
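
To make the browser-context point concrete, here is a minimal sketch using Playwright's sync API; the URL and selector are placeholders chosen for illustration:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()

    # Each context is an isolated session (its own cookies and storage),
    # but all contexts share the same browser process.
    user_a = browser.new_context()
    user_b = browser.new_context()

    page_a = user_a.new_page()
    page_b = user_b.new_page()

    page_a.goto("https://www.example.com")
    page_b.goto("https://www.example.com")

    # Locators wait automatically for the element to be ready,
    # so no explicit sleeps are needed before reading from it.
    print(page_a.locator("h1").inner_text())

    browser.close()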

Playwright also provides an asynchronous API, which makes scaling Playwright-powered web scrapers easier than with alternatives such as Selenium.
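
As a rough illustration of that asynchronous API, the sketch below uses asyncio.gather to fetch several pages concurrently from a single browser; the URL list is hypothetical:

import asyncio
from playwright.async_api import async_playwright

# Hypothetical URLs, purely for illustration.
URLS = [
    "https://www.example.com/page/1",
    "https://www.example.com/page/2",
    "https://www.example.com/page/3",
]

async def fetch_title(browser, url):
    # One page per URL; the pages run concurrently inside the same browser.
    page = await browser.new_page()
    await page.goto(url)
    title = await page.title()
    await page.close()
    return title

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        titles = await asyncio.gather(*(fetch_title(browser, url) for url in URLS))
        print(titles)
        await browser.close()

asyncio.run(main())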

Requestium is a Python library that merges the power of Requests, Selenium, and Parsel into a single integrated tool for automating web actions.

The library was created for web automation scripts that do most of their work with Requests but can seamlessly switch to Selenium for the JavaScript-heavy parts of a website, all while maintaining the session.

Requestium adds independent improvements to both Requests and Selenium, and every new feature is lazily evaluated, so it is useful even for scripts that use only Requests or Selenium.
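
As an illustration of the Requests-to-Selenium hand-off described above, the sketch below uses Requestium's cookie-transfer helpers (transfer_session_cookies_to_driver and transfer_driver_cookies_to_session); the URLs and constructor arguments are placeholders, not taken from the project's docs:

from requestium import Session

session = Session(webdriver_path='./chromedriver', default_timeout=15)

# Plain Requests for the cheap, static parts of the site.
session.get('https://www.samplesite.com/login')

# Hand the current cookies to the Selenium driver and render the
# JavaScript-heavy page in a real browser.
session.transfer_session_cookies_to_driver()
session.driver.get('https://www.samplesite.com/dashboard')

# ... interact with the page via Selenium here ...

# Copy the (possibly updated) cookies back and continue with Requests.
session.transfer_driver_cookies_to_session()
response = session.get('https://www.samplesite.com/api/data')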

Example Use


from playwright.sync_api import sync_playwright

# Start Playwright
with sync_playwright() as playwright:
    # Launch a browser instance
    browser = playwright.chromium.launch()
    # Open a new, isolated browser context
    context = browser.new_context()
    # Create a new page (tab) in the context
    page = context.new_page()

    # Navigate to a website
    page.goto("https://www.example.com")

    # Find an element by its id
    element = page.locator("#example-id")

    # Interact with the element
    element.click()

    # Fill an input form
    page.locator("[name='example-name']").fill("example text")

    # Find and click a button
    page.locator("xpath=//button[text()='Search']").click()

    # Wait for the results to appear
    page.wait_for_selector("#results")

    # Get the page title
    print(page.title())

    # Close the browser
    browser.close()

from requestium import Session, Keys

session = Session(webdriver_path='./chromedriver',
            browser='chrome-headless',
            default_timeout=15)

# The session object can then be used like Requests and Parsel:
title = session.get('http://samplesite.com').xpath('//title/text()').extract_first(default='Default Title')

# Other advanced features like POST requests and proxy settings are also available:
session.post('http://www.samplesite.com/sample', data={'field1': 'data1'})
session.proxies.update({'http': 'http://10.11.4.254:3128', 'https': 'https://10.11.4.252:3128'})

# The session also exposes the full Selenium API through session.driver,
# e.g. for typing keys:
session.driver.find_element('xpath', "//input[@class='user_name']").send_keys('James Bond', Keys.ENTER)
