dude vs scrapyd

dude: AGPL-3.0 license, 424 GitHub stars, 414 downloads/month, first release Feb 20 2022, latest version 0.1.3 (1 year, 2 months ago)

scrapyd: BSD-3-Clause license, 2,955 GitHub stars, 46.7 thousand downloads/month, first release Sep 04 2013, latest version 1.5.0 (25 days ago)

Dude (dude uncomplicated data extraction) is a very simple framework for writing web scrapers using Python decorators. Its design, inspired by Flask, makes it easy to build a web scraper in just a few lines of code, and its syntax is easy to learn.

The simplest web scraper will look like this:

from dude import select


@select(css="a")
def get_link(element):
    return {"url": element.get_attribute("href")}
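
Assuming the snippet above is saved as main.py (a hypothetical filename), it can be run with dude's scrape command; this sketch follows the CLI usage shown in dude's README:

$ dude scrape --url "https://dude.ron.sh" main.py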

Dude supports multiple parser backends:
- playwright
- lxml
- parsel
- beautifulsoup
- pyppeteer
- selenium
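
The backend is chosen with the parser argument to dude.run. Here is a minimal sketch using the BeautifulSoup backend ("bs4" as the parser value and element["href"] for attribute access are assumptions based on dude's docs; each backend hands the decorated function its own element type):

from dude import select


@select(css="a")
def get_link(element):
    # With the BeautifulSoup backend the handler receives a bs4 Tag,
    # so attributes are read by indexing instead of get_attribute().
    return {"url": element["href"]}


if __name__ == "__main__":
    import dude

    # parser selects the backend; other values include "lxml",
    # "parsel", "pyppeteer" and "selenium" (names assumed from the docs).
    dude.run(urls=["https://dude.ron.sh"], parser="bs4")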

Scrapyd is a service for running Scrapy spiders. It is built in Python and designed for a server-client architecture: the Scrapyd server runs on a remote machine, and clients schedule and control spider runs on it through an HTTP API. With Scrapyd, you can schedule spider runs at regular intervals or on demand, run spiders on remote machines, and view the status of running spiders.

You can also view the logs of completed spiders and manage spider settings and configurations. Scrapyd's API lets you schedule spider runs, cancel them, and check the status of running spiders. Install the package with pip install scrapyd, then start it by running the scrapyd command in your command prompt. By default, it starts a web server on port 6800, but you can specify a different port using the `--port` option.
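
For example, against a local instance (a sketch; myproject is a placeholder project name and <jobid> stands for an id returned by schedule.json):

$ curl http://localhost:6800/daemonstatus.json
$ curl "http://localhost:6800/listjobs.json?project=myproject"
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=<jobid>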

Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine, or if you need to schedule spider runs on a regular basis. It's also useful if you have multiple spiders, and you need a way to manage and monitor them all in one place.

For a more full-featured web interface, see scrapydweb.

Example Use


from dude import select

"""
This example demonstrates how to use Parsel + async HTTPX
To access an attribute, use:
    selector.attrib["href"]
You can also access an attribute using the ::attr(name) pseudo-element, for example "a::attr(href)", then:
    selector.get()
To get the text, use the ::text pseudo-element, then:
    selector.get()
"""


@select(css="a.url", priority=2)
async def result_url(selector):
    return {"url": selector.attrib["href"]}


# Option to get url using ::attr(name) pseudo-element
@select(css="a.url::attr(href)", priority=2)
async def result_url2(selector):
    return {"url2": selector.get()}


@select(css=".title::text", priority=1)
async def result_title(selector):
    return {"title": selector.get()}


@select(css=".description::text", priority=0)
async def result_description(selector):
    return {"description": selector.get()}


if __name__ == "__main__":
    import dude

    dude.run(urls=["https://dude.ron.sh"], parser="parsel")

For Scrapyd, start the server and schedule a spider run over the HTTP API:

$ scrapyd
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
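
On success, schedule.json returns the id of the newly created job; the response has this shape (the jobid below is illustrative):

{"status": "ok", "jobid": "6487ec79947edab326d6db28a2d86511e8247444"}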
