Basic setup to run Scrapyd + Django and save crawled data to Django models.
- Basic structure of a Django project.
- Basic structure for Scrapy and Scrapyd.
- Configuration of Scrapy so it can access Django model objects.
- Basic Scrapy pipeline to save crawled objects to Django models.
- Basic spider definition.
- Basic demo from the official Scrapy tutorial that crawls data from http://quotes.toscrape.com (sketches of the configuration, pipeline, and spider follow below).
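The key piece of the configuration is bootstrapping Django inside the Scrapy/Scrapyd process so the pipeline can import and use the models. A minimal sketch is below; the relative paths and the settings module name (django_app.settings) are assumptions, so adjust them to this repo's actual layout.

```python
# scrapy_app/scrapy_app/settings.py (excerpt) -- paths and module names are assumptions
import os
import sys

import django

# Make the Django project importable from the Scrapy/Scrapyd process.
# Assumes the Django project root sits two directories above this file.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", ".."))

# Point Django at its settings module (name assumed here) and initialize it.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_app.settings")
django.setup()
```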
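The pipeline itself can be as small as creating one model instance per item. This is only a sketch: the quotes app, the Quote model, and the item field names are assumptions, not necessarily the names used in this repo.

```python
# scrapy_app/scrapy_app/pipelines.py (sketch)
from quotes.models import Quote  # hypothetical app/model; use the real ones from this project


class DjangoQuotePipeline:
    """Save every crawled item as a Django model instance."""

    def process_item(self, item, spider):
        # Field names mirror the demo spider's output (assumed).
        Quote.objects.create(
            text=item.get("text", ""),
            author=item.get("author", ""),
        )
        return item


# Enable the pipeline in the Scrapy settings, e.g.:
# ITEM_PIPELINES = {"scrapy_app.pipelines.DjangoQuotePipeline": 300}
```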
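The demo spider follows the official Scrapy tutorial; a sketch matching the spider name used later (quote) could look like this.

```python
# scrapy_app/scrapy_app/spiders/quotes_spider.py (sketch, based on the official Scrapy tutorial)
import scrapy


class QuoteSpider(scrapy.Spider):
    name = "quote"  # must match the spider name sent to Scrapyd below
    start_urls = ["http://quotes.toscrape.com"]

    def parse(self, response):
        # One item per quote block on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
                "tags": quote.css("div.tags a.tag::text").getall(),
            }
        # Follow pagination until there are no more pages.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```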
1 - Create venv
$ py -m venv ./.venv (use python or python3 instead of py, depending on your system)
2 - Run venv
On Windows
$ .\.venv\Scripts\Activate.ps1
On Mac/Linux
$ source ./.venv/bin/activate
3 - Install requirements
$ pip install -r requirements.txt
4 - Configure the database
$ python manage.py migrate
5 - Create a superuser to log in to the Django admin
$ python manage.py createsuperuser
In order to start this project you will need to have Django and Scrapyd running at the same time.
In order to run Django
$ python manage.py runserver
In order to run Scrapyd
$ cd scrapy_app
$ scrapyd
The project also ships with a ready-to-use Dockerfile and docker compose file. To run them, launch Docker on your machine and type
$ docker compose up
This command builds the Docker image and starts the compose services. Once it finishes successfully, the Django server will be available at http://localhost:8000 and Scrapyd at http://localhost:6800
Django is running on: http://127.0.0.1:8000
Scrapyd is running on: http://127.0.0.1:6800
At this point you will be able to send job requests to Scrapyd. This project is set up with a demo spider from the official Scrapy tutorial. To run it, send an HTTP request to Scrapyd with the job info:
curl http://127.0.0.1:6800/schedule.json -d project=default -d spider=quote
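The same job can be scheduled from Python instead of curl; this sketch assumes the requests package is installed.

```python
# Schedule the demo spider through Scrapyd's schedule.json endpoint
import requests

response = requests.post(
    "http://127.0.0.1:6800/schedule.json",
    data={"project": "default", "spider": "quote"},
)
print(response.json())  # e.g. {"status": "ok", "jobid": "..."}
```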
Now go to http://127.0.0.1:8000/admin and log in using the superuser you created before. The crawled data will automatically be saved in the Django models.
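Besides the admin, you can also check the results from the Django shell; the Quote model name here is the same assumption used in the pipeline sketch above.

```python
# Run inside `python manage.py shell`
from quotes.models import Quote  # hypothetical model name

print(Quote.objects.count())   # how many quotes were saved
print(Quote.objects.first())   # first stored quote, if any
```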