
Photon vs Ruia

|                   | Photon             | Ruia               |
| ----------------- | ------------------ | ------------------ |
| License           | GPL-3.0            | Apache-2.0         |
| GitHub stars      | 12,807             | 1,743              |
| Downloads (month) | 1.4 thousand       | 414                |
| First release     | Aug 24, 2018       | Oct 17, 2018       |
| Latest release    | 1.1.9 (2018-10-21) | 0.8.5 (2022-09-06) |

Photon is a Python web crawler designed to be lightweight and fast. It can extract the following data while crawling:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, Amazon S3 buckets, etc.)
  • Files (pdf, png, xml, etc.)
  • Secret keys (auth/API keys & hashes)
  • JavaScript files & endpoints present in them
  • Strings matching custom regex patterns
  • Subdomains & DNS-related data

The extracted information is saved in an organized manner and can also be exported as JSON.
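A typical run looks like `python photon.py -u https://example.com --export=json`. As a minimal sketch of working with that export, assuming Photon's default behavior of writing results into a directory named after the target (the exact `example.com/exports.json` path is an assumption):

```python
import json

# Assumed output path: Photon writes results into a directory named
# after the target, and --export=json adds a JSON dump of all categories.
with open("example.com/exports.json") as f:
    results = json.load(f)

# Each category (urls, intel, files, ...) maps to a list of findings.
for category, findings in results.items():
    print(f"{category}: {len(findings)} item(s)")
```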

Ruia is an async web scraping micro-framework, written with asyncio and aiohttp, that aims to make crawling URLs as convenient as possible.

Ruia is inspired by Scrapy; however, instead of Twisted it is built entirely on asyncio and aiohttp.

It also supports features such as custom cookies, headers, and proxies, which makes it well suited to complex scraping tasks (see the sketch below).
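As a minimal sketch of that configuration, assuming the `aiohttp_kwargs` class attribute on a `Spider` subclass is forwarded unchanged to aiohttp's request call (the commented-out proxy line in the full example further down suggests this; `ConfiguredSpider` and its values are hypothetical):

```python
from ruia import Spider


class ConfiguredSpider(Spider):
    start_urls = ["https://httpbin.org/get"]

    # Forwarded to aiohttp; keys follow aiohttp's
    # ClientSession.request() signature (assumption: headers, cookies,
    # and proxy are all passed through unchanged).
    aiohttp_kwargs = {
        "headers": {"User-Agent": "ruia-demo/0.1"},
        "cookies": {"session": "demo"},
        # "proxy": "http://127.0.0.1:8080",
    }

    async def parse(self, response):
        print(await response.text())


if __name__ == "__main__":
    ConfiguredSpider.start()
```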

Example Use


```python
from photon import Photon

# NOTE: Photon is primarily a command-line crawler; treat this
# object-style API as illustrative rather than a stable interface.

# Create a new Photon instance
ph = Photon()

# Extract data from a specific element of the website
url = "https://www.example.com"
selector = "div.main"
data = ph.get_data(url, selector)

# Print the extracted data
print(data)

# Extract data from multiple websites asynchronously
urls = ["https://www.example1.com", "https://www.example2.com"]
data = ph.get_data_async(urls)
```
```python
#!/usr/bin/env python
"""
Target: https://news.ycombinator.com/
pip install aiofiles
"""
import aiofiles

from ruia import AttrField, Item, Spider, TextField


class HackerNewsItem(Item):
    target_item = TextField(css_select="tr.athing")
    title = TextField(css_select="a.storylink")
    url = AttrField(css_select="a.storylink", attr="href")

    async def clean_title(self, value):
        return value.strip()


class HackerNewsSpider(Spider):
    start_urls = [
        "https://news.ycombinator.com/news?p=1",
        "https://news.ycombinator.com/news?p=2",
    ]
    concurrency = 10
    # aiohttp_kwargs = {"proxy": "http://0.0.0.0:1087"}

    async def parse(self, response):
        async for item in HackerNewsItem.get_items(html=await response.text()):
            yield item

    async def process_item(self, item: HackerNewsItem):
        async with aiofiles.open("./hacker_news.txt", "a") as f:
            self.logger.info(item)
            await f.write(str(item.title) + "\n")


if __name__ == "__main__":
    HackerNewsSpider.start(middleware=None)
```
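In the Ruia example, `target_item` marks each `tr.athing` row as the boundary of one item, the `clean_title` hook post-processes the extracted `title` field before each `HackerNewsItem` is yielded, and `process_item` appends the cleaned titles to `hacker_news.txt`.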
