

AutoScraper: MIT license, 5,894 stars, 3.9k downloads/month, latest release 1.1.14 (1 year, 8 months ago), first released Jul 26 2019
Scrapyd: BSD license, 2,833 stars, 331.5k downloads/month, latest release 1.4.3 (6 months ago), first released Sep 04 2013

AutoScraper is a project for automatic web scraping, built to make scraping easy. It takes a URL or the HTML content of a web page, plus a list of sample data we want to scrape from that page; the samples can be text, URLs, or any HTML tag value of that page. It learns the scraping rules and returns similar elements. You can then use the learned object with new URLs to get similar content or the exact same elements from those new pages.

AutoScraper is a minimalistic, auto-generative approach to web scraping. For example, here's a scraper that learns to fetch all related question titles on a Stack Overflow page:

from autoscraper import AutoScraper

url = 'https://stackoverflow.com/questions/2081586/web-scraping-with-python'

# We can add one or multiple candidates here.
# You can also put URLs here to retrieve URLs.
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)
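
Once built, the learned rules can be reused on other pages with a similar structure. Here's a minimal sketch using AutoScraper's get_result_similar and get_result_exact methods; the new URL below is just an illustrative example:

# Reuse the learned rules on a new page with a similar layout.
# get_result_similar returns elements similar to the learned samples;
# get_result_exact returns elements matched by the exact same rule.
new_url = 'https://stackoverflow.com/questions/606191/convert-bytes-to-a-string'
print(scraper.get_result_similar(new_url))
print(scraper.get_result_exact(new_url))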

Scrapyd is a service for running Scrapy spiders. It lets you schedule spiders to run at regular intervals and run spiders on remote machines. It is built in Python and is meant to be used in a server-client architecture: the Scrapyd server runs on a remote machine, and clients schedule and control spider runs on the server through an HTTP API. With Scrapyd, you can schedule spider runs on a regular basis or on demand, and view the status of running spiders.

You can also see the logs of completed spiders and manage spider settings and configurations. Scrapyd also provides an API that lets you schedule spider runs, cancel them, and view the status of running spiders. You can install the package via pip by running pip install scrapyd, and then start it by running the scrapyd command in your command prompt. By default, it starts a web server on port 6800, but you can specify a different port using the `--port` option.

Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine or schedule spider runs on a regular basis. It's also useful if you have multiple spiders and need a way to manage and monitor them all in one place.

For a more full-featured web interface, see scrapydweb.



Example Use

$ scrapyd
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
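
The other endpoints work the same way. As a sketch, you can schedule a run and then poll its status from Python, assuming Scrapyd is running locally and a project named myproject has been deployed (schedule.json and listjobs.json are standard Scrapyd endpoints):

import requests

# Schedule a spider run; Scrapyd responds with a job id.
resp = requests.post('http://localhost:6800/schedule.json',
                     data={'project': 'myproject', 'spider': 'spider2'})
job_id = resp.json()['jobid']

# Poll job status: Scrapyd reports pending, running, and finished jobs.
jobs = requests.get('http://localhost:6800/listjobs.json',
                    params={'project': 'myproject'}).json()
print(job_id, jobs['running'], jobs['finished'])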

Alternatives / Similar