
hrequests vs Guzzle

|                  | hrequests                 | Guzzle                     |
|------------------|---------------------------|----------------------------|
| License          | MIT                       | MIT                        |
| GitHub stars     | 1,001                     | 23,447                     |
| Downloads/month  | 33.3 thousand             | 536.5 thousand             |
| First release    | Feb 23 2022               | Nov 14 2011                |
| Latest version   | 0.9.2 (2024-12-01)        | 7.10.0 (2025-08-23)        |

hrequests is a feature-rich, modern replacement for the famous requests library for Python. It provides an HTTP client capable of resisting popular scraper-identification techniques:

- Seamless transition between headless-browser and HTTP-client based requests
- Integrated HTML parser
- Mimicking of real browser TLS fingerprints
- JavaScript rendering
- HTTP/2 support
- Realistic browser headers

Guzzle is a PHP HTTP client library that makes it easy to send HTTP requests and trivial to integrate with web services. It allows you to send HTTP/1.1 requests with various methods like GET, POST, PUT, DELETE, and others.

Guzzle also supports sending both synchronous and asynchronous requests, caching, and even has built-in support for OAuth 1.0a. Additionally, it can handle different HTTP errors and follow redirects automatically. It also has built-in support for serializing and deserializing data in formats like JSON and XML, as well as for sending multipart file uploads.

Overall, Guzzle is an easy-to-use and powerful library for working with HTTP in PHP.

Highlights


bypass, http2, tls-fingerprint, http-fingerprint, sync, async

Example Use


hrequests has an almost identical API and UX to requests. Here's a quick overview:

```python
import hrequests

# perform HTTP client requests
resp = hrequests.get('https://httpbin.org/html')
print(resp.status_code)  # 200

# use headless browsers and sessions:
session = hrequests.Session('chrome', version=122, os="mac")

# supports asyncio and easy concurrency
requests = [
    hrequests.async_get('https://www.google.com/', browser='firefox'),
    hrequests.async_get('https://www.duckduckgo.com/'),
    hrequests.async_get('https://www.yahoo.com/'),
    hrequests.async_get('https://www.httpbin.org/'),
]
responses = hrequests.map(requests, size=3)  # max 3 concurrent requests
```
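The `size` argument above caps how many requests run at once. The same bounded fan-out pattern can be sketched with the Python standard library alone; the `fetch` function below is a hypothetical stand-in for an HTTP call (no real network traffic), not part of the hrequests API:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for an HTTP request; a real client would return a response object
    return f"fetched {url}"

def bounded_map(urls, size=3):
    # run at most `size` "requests" concurrently, preserving input order
    with ThreadPoolExecutor(max_workers=size) as pool:
        return list(pool.map(fetch, urls))

results = bounded_map([
    'https://www.google.com/',
    'https://www.duckduckgo.com/',
    'https://www.yahoo.com/',
], size=3)
print(results[0])  # fetched https://www.google.com/
```

Regardless of how many URLs are queued, only `size` worker threads are active at a time, which is the usual way a client library keeps concurrency from overwhelming either the local machine or the target server.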
```php
use GuzzleHttp\Client;
use GuzzleHttp\Promise;

// Create a client session:
$client = new Client();
// can also set session details like headers
$client = new Client([
    'headers' => [
        'User-Agent' => 'webscraping.fyi',
    ]
]);

// GET request:
$response = $client->get('http://httpbin.org/get');
// print all details
var_dump($response);
// or the important bits
printf("status: %s\n", $response->getStatusCode());
printf("headers: %s\n", json_encode($response->getHeaders(), JSON_PRETTY_PRINT));
printf("body: %s", $response->getBody()->getContents());

// POST request
$response = $client->post(
    'https://httpbin.org/post',
    // for JSON use the json argument:
    ['json' => ['query' => 'foobar', 'page' => 2]]
    // or for form data use form_params:
    // ['form_params' => ['query' => 'foobar', 'page' => 2]]
);

// For async requests the getAsync method can be used:
$promise1 = $client->getAsync('https://httpbin.org/get');
$promise2 = $client->getAsync('https://httpbin.org/get?foo=bar');
// await them:
$results = Promise\unwrap([$promise1, $promise2]);
foreach ($results as $result) {
    echo $result->getBody();
}
// or add a promise callback
Promise\each([$promise1, $promise2], function ($response, $index) {
    echo $response->getBody();
});
```

Alternatives / Similar

