
requests vs typhoeus

requests: Apache-2.0 license, 51,959 stars, 567.8 million downloads/month, first released Feb 14 2011, latest version 2.32.3 (3 months ago)
typhoeus: MIT license, 4,078 stars, 1.1 million downloads/month, first released Oct 06 2009, latest version 1.4.1 (9 months ago)

The requests package is a popular library for making HTTP requests in Python. It provides a simple, easy-to-use API for sending HTTP/1.1 requests, and it abstracts away many of the low-level details of working with HTTP. One of the key features of requests is its simple API. You can send a GET request with a single line of code:

import requests
response = requests.get('https://webscraping.fyi/lib/requests/')
requests makes it easy to send data along with your requests, including JSON data and files. It automatically handles redirects and cookies, and it supports both basic and digest authentication. It also provides powerful functionality for handling exceptions, managing timeouts and sessions, and decoding a wide range of well-known content encodings. One thing to keep in mind is that requests is a synchronous library, which means your program will block (stop execution) while waiting for a response. In some situations this may not be desirable, and you may want to use an asynchronous library such as httpx or aiohttp instead. You can install the requests package via the pip package manager:
pip install requests
requests is a very popular library with a large and active community, which means that many third-party libraries build on top of it and it sees a wide range of use.
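The timeout and exception handling mentioned above can be sketched as follows; the fetch helper and the URL are hypothetical, and the (connect, read) timeout values are arbitrary:

```python
import requests
from requests.exceptions import RequestException, Timeout

def fetch(url):
    """Hypothetical helper: fetch a page, returning None on any failure."""
    try:
        # separate connect and read timeouts, in seconds
        response = requests.get(url, timeout=(3.05, 10))
        response.raise_for_status()  # raise HTTPError on 4xx/5xx status codes
        return response.text
    except Timeout:
        return None  # server was too slow to connect or respond
    except RequestException:
        return None  # error status, connection failure, etc.

# an unresolvable host raises ConnectionError, caught as RequestException:
print(fetch("http://nonexistent.invalid/"))  # None
```

Because Timeout and HTTPError are both subclasses of RequestException, a single except clause can catch everything requests raises; the Timeout branch is split out here only to show where a retry could go.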

Typhoeus is a Ruby library that allows you to make parallel HTTP requests, which can greatly speed up the process of making multiple requests to different servers. It is built on top of the C library libcurl, which is known for its high performance and reliability.

One of the main features of Typhoeus is its ability to make parallel requests. This means that it can send multiple requests at the same time, and wait for all of them to finish before returning the results. This can greatly reduce the time it takes to make multiple requests, as it eliminates the need to wait for each request to complete before sending the next one.

In addition to its parallelism feature, Typhoeus also provides a convenient and easy-to-use Ruby interface for making HTTP requests. It supports all of the common HTTP methods (GET, POST, PUT, DELETE, etc.) and allows you to set various request options, such as headers, timeouts, and authentication. It also supports streaming responses, which allows you to process large responses piece by piece, rather than loading the entire response into memory at once.

Typhoeus also supports the HTTP/2 protocol, which can provide faster load times and reduced network usage, as well as streaming, an essential feature for large data transfers.

Typhoeus is well-documented, actively maintained, and has a large and active community of users. It is widely used in the Ruby ecosystem and is a popular choice for building high-performance web scraping and data-gathering applications.

Note that Typhoeus can also be used as an adapter for the popular HTTP client library Faraday.

Highlights


requests: sync, ease-of-use, no-http2, no-async, popular
typhoeus: http2, async, uses-curl

Example Use


# requests (Python):
import requests

# get request:
response = requests.get("http://webscraping.fyi/")
response.status_code  # 200
response.text  # "text"
response.content  # b"bytes"

# requests can automatically convert json responses to Python dictionaries:
response = requests.get("http://httpbin.org/json")
print(response.json())
# {'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why <em>WonderWidgets</em> are great', 'Who <em>buys</em> WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}

# for POST request it can ingest Python's dictionaries as JSON:
response = requests.post("http://httpbin.org/post", json={"query": "hello world"})
# or form data:
response = requests.post("http://httpbin.org/post", data={"query": "hello world"})

# Session object can be used to automatically keep track of cookies and set defaults:
from requests import Session
s = Session()
s.headers = {"User-Agent": "webscraping.fyi"}
s.get('http://httpbin.org/cookies/set/foo/bar')
print(s.cookies['foo'])  # bar
print(s.get('http://httpbin.org/cookies').json())
# {'cookies': {'foo': 'bar'}}
# Typhoeus (Ruby):
require 'typhoeus'

# GET request
Typhoeus.get("www.example.com")
# POST request
Typhoeus.post("www.example.com/posts", body: { title: "test post", content: "this is my test" })

# make parallel requests:
# hydra is a request queue manager
hydra = Typhoeus::Hydra.hydra
# create request object
first_request = Typhoeus::Request.new("http://example.com/posts/1")
# add complete callbacks
first_request.on_complete do |response|
  # callbacks can queue new requests
  third_url = response.body
  third_request = Typhoeus::Request.new(third_url)
  hydra.queue third_request
end
second_request = Typhoeus::Request.new("http://example.com/posts/2")
# queue requests:
hydra.queue first_request
hydra.queue second_request

hydra.run # this is a blocking call that returns once all requests are complete
