Requestium vs Selenium
Requestium is a Python library that merges the power of Requests, Selenium, and Parsel into a single integrated tool for automating web actions.
The library was created for writing web automation scripts that are built mostly with Requests but can seamlessly switch to Selenium for the JavaScript-heavy parts of a website, all while maintaining the session.
Requestium also adds independent improvements to both Requests and Selenium, and every new feature is lazily evaluated, so it is useful even in scripts that use only Requests or Selenium.
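That switch is explicit: a script can browse with Requests and then hand the session's cookies to the Selenium driver (or back) when JavaScript is needed. A minimal sketch, using the cookie-transfer helpers described in the Requestium README (the chromedriver path, URLs, and form fields are placeholders):

from requestium import Session

s = Session(webdriver_path='./chromedriver',
            browser='chrome-headless',
            default_timeout=15)

# Log in over plain Requests; the Selenium driver is not started yet (it is lazy)
s.post('http://samplesite.com/login', data={'user': 'name', 'password': 'secret'})

# Copy the Requests cookies into the browser and render a JavaScript-heavy page
s.transfer_session_cookies_to_driver()
s.driver.get('http://samplesite.com/dashboard')

# Copy any cookies the browser set back into the Requests session
s.transfer_driver_cookies_to_session()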
Selenium is a Python package that allows developers to automate web browsers: it provides a programmatic way to simulate user interactions such as clicking links, filling out forms, and navigating between pages. Selenium can be used to automate tasks such as web scraping, testing web applications, and performing repetitive actions on websites.
Selenium is built on top of WebDriver, which is a browser automation API that allows Selenium to interact with web browsers. Selenium supports a wide variety of web browsers, including Chrome, Firefox, Safari, and Internet Explorer.
One of the main advantages of Selenium is that it can be used from many programming languages, not only Python, and it runs on all major operating systems.
The package also provides a set of APIs for interacting with web pages: you can locate elements, interact with them, read their properties, and execute JavaScript, driving the browser much as a human user would.
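As a short illustration of those APIs (using the example.com placeholder site), the sketch below locates an element, reads one of its attributes, and runs a snippet of JavaScript in the page:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://www.example.com")

# Locate the first link on the page and read one of its properties
link = driver.find_element(By.TAG_NAME, "a")
print(link.get_attribute("href"))

# Execute JavaScript inside the page and capture its return value
page_height = driver.execute_script("return document.body.scrollHeight;")
print(page_height)

driver.quit()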
Overall, Selenium is a powerful and versatile tool: because it drives the browser in much the same way a human user would, it is widely used for web scraping, testing web applications, and other automation tasks.
Example Use
Requestium:

from requestium import Session, Keys
from selenium.webdriver.common.by import By

s = Session(webdriver_path='./chromedriver',
            browser='chrome-headless',
            default_timeout=15)

# The session object can be used like Requests and Parsel:
title = s.get('http://samplesite.com').xpath('//title/text()').extract_first(default='Default Title')

# Other Requests features, such as POST requests and proxy settings, are also available:
s.post('http://www.samplesite.com/sample', data={'field1': 'data1'})
s.proxies.update({'http': 'http://10.11.4.254:3128', 'https': 'https://10.11.4.252:3128'})

# The session also exposes all of Selenium's functions through its driver,
# for example to type keys into an input field:
s.driver.find_element(By.XPATH, "//input[@class='user_name']").send_keys('James Bond', Keys.ENTER)
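Continuing with the same session s, Requestium also adds convenience methods on top of Selenium, such as waiting for an element to reach a given state before interacting with it. A minimal sketch, assuming the ensure_element_by_* helpers and the ensure_click method described in the Requestium README (the XPath is a placeholder):

# Wait up to 5 seconds for the button to become clickable, then click it
s.driver.ensure_element_by_xpath("//button[@class='submit']", state='clickable', timeout=5).ensure_click()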
The same kind of workflow written directly with Selenium:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Create an instance of the webdriver
driver = webdriver.Firefox()

# Configure an implicit wait so element lookups retry for up to 10 seconds
driver.implicitly_wait(10)

# Navigate to a website
driver.get("http://www.example.com")

# Find an element by its id and interact with it
element = driver.find_element(By.ID, "example-id")
element.click()

# Find an element by its name and fill in the input form
element = driver.find_element(By.NAME, "example-name")
element.send_keys("example text")

# Find and click a button
driver.find_element(By.XPATH, "//button[text()='Search']").click()

# Get the page title
print(driver.title)

# Quit the browser and end the WebDriver session
driver.quit()
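For waits tied to a specific condition, Selenium also provides explicit waits, which block until an expected condition is met instead of applying a blanket timeout to every lookup. A short sketch (the element id is a placeholder):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://www.example.com")

# Block until the element is present in the DOM, or raise TimeoutException after 10 seconds
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "example-id"))
)
element.click()

driver.quit()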