This project scrapes a list of websites I used to crawl most often. If this project helped you, please give it a star, thanks :)
- douban
- douban_oss
- googleplay
- cnbeta
- ka
- cnblogs
googleplay
Uses CrawlSpider and saves items to MongoDB with pymongo.
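A minimal sketch of how a CrawlSpider paired with a pymongo pipeline could look; the rules, selectors, and database/collection names here are illustrative, not necessarily the ones used in this repo.

```python
# CrawlSpider follows app detail links; a pipeline writes each item to MongoDB.
import pymongo
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class GooglePlaySpider(CrawlSpider):
    name = "googleplay"
    allowed_domains = ["play.google.com"]
    start_urls = ["https://play.google.com/store/apps"]

    rules = (
        # follow app detail pages and parse them (pattern is illustrative)
        Rule(LinkExtractor(allow=r"/store/apps/details"), callback="parse_app", follow=True),
    )

    def parse_app(self, response):
        yield {
            "title": response.css("h1 span::text").get(),
            "url": response.url,
        }


class MongoPipeline:
    """Store every scraped item in a MongoDB collection via pymongo."""

    def open_spider(self, spider):
        self.client = pymongo.MongoClient("mongodb://localhost:27017")
        self.collection = self.client["scrapy"]["googleplay"]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.collection.insert_one(dict(item))
        return item
```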
douban
Uses ImagesPipeline to download images (with custom request headers to avoid being banned); when the crawl finishes it writes the item information to a txt file.
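A sketch of an ImagesPipeline subclass that attaches headers to image requests and appends item information to a txt file; the header values, field names, and output filename are assumptions.

```python
# Subclass ImagesPipeline so every image request carries browser-like headers,
# and record item information in a txt file as items complete.
# Requires IMAGES_STORE to be set in settings.py.
import json

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class DoubanImagesPipeline(ImagesPipeline):
    # header values are illustrative; any realistic User-Agent/Referer works
    HEADERS = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Referer": "https://movie.douban.com/",
    }

    def get_media_requests(self, item, info):
        # attach the headers so the image downloads are not banned
        for url in item.get("image_urls", []):
            yield scrapy.Request(url, headers=self.HEADERS)

    def item_completed(self, results, item, info):
        # keep only the images that were actually downloaded,
        # then append the item information to a txt file
        record = dict(item)
        record["images"] = [x["path"] for ok, x in results if ok]
        with open("items.txt", "a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
        return item
```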
cnbeta
Uses SQLAlchemy to save items to a MySQL database (or any other database SQLAlchemy supports).
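A sketch of a SQLAlchemy item pipeline; the model, table, column names, and connection URL are placeholders (swap the URL for any backend SQLAlchemy supports).

```python
# Pipeline that maps items onto a SQLAlchemy model and commits them to MySQL.
# The mysql+pymysql URL assumes the pymysql driver is installed.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Article(Base):
    # table and column names are illustrative
    __tablename__ = "cnbeta_articles"

    id = Column(Integer, primary_key=True)
    title = Column(String(255))
    url = Column(String(255))


class SQLAlchemyPipeline:
    def open_spider(self, spider):
        # change the URL to sqlite:///items.db, postgresql://..., etc. if needed
        self.engine = create_engine("mysql+pymysql://user:password@localhost/scrapy")
        Base.metadata.create_all(self.engine)
        self.Session = sessionmaker(bind=self.engine)

    def process_item(self, item, spider):
        session = self.Session()
        try:
            session.add(Article(title=item.get("title"), url=item.get("url")))
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()
        return item

    def close_spider(self, spider):
        self.engine.dispose()
```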
ka
Uses Kafka. This is a demo spider showing how to use Scrapy and Kafka together: the spider never closes on its own, and when you push a message containing a URL to Kafka, the spider starts crawling that URL.
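One way to get this behaviour is to poll Kafka from the spider_idle signal and keep the spider alive with DontCloseSpider. This sketch assumes the kafka-python package, a broker at localhost:9092, and a topic named "scrapy-urls"; none of these names come from the repo.

```python
# Long-running spider: on every spider_idle signal it polls a Kafka topic,
# schedules any URL it receives, and raises DontCloseSpider so the crawl
# never ends on its own.
import scrapy
from kafka import KafkaConsumer
from scrapy import signals
from scrapy.exceptions import DontCloseSpider


class KafkaSpider(scrapy.Spider):
    name = "ka"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        spider.consumer = KafkaConsumer(
            "scrapy-urls",
            bootstrap_servers="localhost:9092",
            consumer_timeout_ms=1000,  # stop iterating when no message is waiting
        )
        crawler.signals.connect(spider.spider_idle, signal=signals.spider_idle)
        return spider

    def spider_idle(self, spider):
        # pull pending URLs from Kafka and feed them back into the engine
        for message in self.consumer:
            url = message.value.decode("utf-8")
            request = scrapy.Request(url, callback=self.parse)
            # newer Scrapy takes only the request; older versions use
            # self.crawler.engine.crawl(request, self)
            self.crawler.engine.crawl(request)
        # keep the spider alive even though the scheduler is empty
        raise DontCloseSpider

    def parse(self, response):
        yield {"url": response.url, "title": response.css("title::text").get()}
```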
cnblogs
Uses Scrapy signal handlers.
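A sketch of how signal handlers can be wired up in from_crawler; the handlers just log, and the CSS selectors are illustrative guesses, not taken from the repo.

```python
# Register handlers for spider_opened, item_scraped, and spider_closed signals.
import scrapy
from scrapy import signals


class CnblogsSpider(scrapy.Spider):
    name = "cnblogs"
    start_urls = ["https://www.cnblogs.com/"]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.on_opened, signal=signals.spider_opened)
        crawler.signals.connect(spider.on_item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(spider.on_closed, signal=signals.spider_closed)
        return spider

    def on_opened(self, spider):
        spider.logger.info("spider opened")

    def on_item_scraped(self, item, response, spider):
        spider.logger.info("scraped an item from %s", response.url)

    def on_closed(self, spider, reason):
        spider.logger.info("spider closed: %s", reason)

    def parse(self, response):
        # selectors below are illustrative and may need adjusting
        for post in response.css("article.post-item"):
            yield {
                "title": post.css("a.post-item-title::text").get(),
                "link": post.css("a.post-item-title::attr(href)").get(),
            }
```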
douban_oss
Uses the Aliyun OSS SDK to upload the images downloaded by ImagesPipeline to an OSS bucket.
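A sketch of one way to do this with the oss2 SDK: let ImagesPipeline download the files locally, then upload each one in item_completed. The credentials, endpoint, and bucket name are placeholders.

```python
# After ImagesPipeline has saved the images under IMAGES_STORE, upload each
# downloaded file to an Aliyun OSS bucket with the oss2 SDK.
import os

import oss2
from scrapy.pipelines.images import ImagesPipeline


class OssImagesPipeline(ImagesPipeline):
    def open_spider(self, spider):
        super().open_spider(spider)
        # replace the placeholders with real credentials and bucket details
        auth = oss2.Auth("<access_key_id>", "<access_key_secret>")
        self.bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "<bucket-name>")
        # IMAGES_STORE is where ImagesPipeline writes the downloaded files
        self.images_store = spider.settings.get("IMAGES_STORE", "images")

    def item_completed(self, results, item, info):
        for ok, result in results:
            if not ok:
                continue
            local_path = os.path.join(self.images_store, result["path"])
            # reuse the local relative path as the OSS object key
            self.bucket.put_object_from_file(result["path"], local_path)
        return item
```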
For each project there is a run_spider.py script; just run it and enjoy :)
```bash
python run_spider.py
```
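The contents of the run_spider.py scripts are not shown here; a typical version might start the spider in-process with CrawlerProcess, roughly like this (the spider name "douban" is only an example):

```python
# Hypothetical run_spider.py: run one spider using the project's settings.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl("douban")  # spider name is an example; use the project's spider
process.start()  # blocks until the crawl finishes
```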