Python Async Bangumi Crawler
An asynchronous Python crawler for Bangumi (番组计划/班固米).
This project is one of the author's final course assignments. It is not maintained and is provided for reference only.
Step 1
pip install -r requirements.txt
Step 2
python crawler.py
The two CSV datasets will now be saved in the data folder.
crawler.py: Main crawler script.
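The crawler's core pattern can be sketched as a bounded-concurrency asyncio pipeline that fetches subjects and dumps them to CSV. This is a minimal illustration, not the actual crawler.py: fetch_subject is a hypothetical stub standing in for a real HTTP request (e.g. via aiohttp), and the field names are assumptions.

```python
import asyncio
import csv
import io

async def fetch_subject(sem, subject_id):
    # Stub for a real network request; the semaphore bounds concurrency
    # so the site is not hammered with unlimited simultaneous requests.
    async with sem:
        await asyncio.sleep(0)  # placeholder for an aiohttp GET
        return {"id": subject_id, "name": f"subject-{subject_id}"}

async def crawl(ids, concurrency=8):
    # Launch all fetches at once; the semaphore inside fetch_subject
    # ensures only `concurrency` of them run at the same time.
    sem = asyncio.Semaphore(concurrency)
    return await asyncio.gather(*(fetch_subject(sem, i) for i in ids))

def to_csv(rows):
    # Serialize the collected rows to CSV text (the real script would
    # write to a file under the data folder instead).
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

if __name__ == "__main__":
    rows = asyncio.run(crawl(range(3)))
    print(to_csv(rows))
```

The semaphore-plus-gather combination is a common way to cap concurrent requests in an async crawler while still overlapping network waits.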
MIT License