
Using the Scrapy framework to crawl every image on an image site (over 200,000 images) and packaging it as an exe executable

 文炳春秋 2020-04-02

https://www.cnblogs.com/nmsghgnv/p/12349117.html

Target site: https://www./

The code for this article has been uploaded to Git and Baidu Netdisk; the links are shared at the end.

Site overview

Goal: use the Scrapy framework to crawl all of the site's images and save them locally, organized by category.

1. Create the Scrapy project

scrapy startproject images

2. Create the spider

cd images
scrapy genspider mn52 www.

The directory structure after creation is as follows.
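The original post shows a screenshot of the project tree here. As a rough substitute, this is the layout that scrapy startproject images followed by scrapy genspider mn52 typically produces (the file names other than mn52.py are assumed Scrapy template defaults):

images/                  (outer project folder)
    scrapy.cfg
    images/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            mn52.py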

3. Define the item fields to scrape

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc./en/latest/topics/items.html
import scrapy


class ImagesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_url = scrapy.Field()
    field_name = scrapy.Field()
    detail_name = scrapy.Field()

Three fields are defined here: the image URL, the top-level category name, and the detailed (second-level) category name.

4. Write the spider

# -*- coding: utf-8 -*-
import scrapy
import requests
from lxml import etree
from images.items import ImagesItem


class Mn52Spider(scrapy.Spider):
    name = 'mn52'
    # allowed_domains = ['www.']
    # start_urls = ['http://www./']

    def start_requests(self):
        response = requests.get('https://www./')
        result = etree.HTML(response.text)
        li_lists = result.xpath('//*[@id="bs-example-navbar-collapse-2"]/div/ul/li')
        for li in li_lists:
            url = li.xpath('./a/@href')[0]
            field_name = li.xpath('./a/text()')[0]
            print('https://www.' + url, field_name)
            yield scrapy.Request('https://www.' + url, meta={'field_name': field_name}, callback=self.parse)

    def parse(self, response):
        field_name = response.meta['field_name']
        div_lists = response.xpath('/html/body/section/div/div[1]/div[2]/div')
        for div_list in div_lists:
            detail_urls = div_list.xpath('./div/a/@href').extract_first()
            detail_name = div_list.xpath('./div/a/@title').extract_first()
            yield scrapy.Request(url='https:' + detail_urls, callback=self.get_image,
                                 meta={'detail_name': detail_name, 'field_name': field_name})
        url_lists = response.xpath('/html/body/section/div/div[3]/div/div/nav/ul/li')
        for url_list in url_lists:
            next_url = url_list.xpath('./a/@href').extract_first()
            if next_url:
                yield scrapy.Request(url='https:' + next_url, callback=self.parse, meta=response.meta)

    def get_image(self, response):
        field_name = response.meta['field_name']
        image_urls = response.xpath('//*[@id="originalpic"]/img/@src').extract()
        for image_url in image_urls:
            item = ImagesItem()
            item['image_url'] = 'https:' + image_url
            item['field_name'] = field_name
            item['detail_name'] = response.meta['detail_name']
            # print(item['image_url'], item['field_name'], item['detail_name'])
            yield item

Logic: the spider overrides start_requests to obtain the starting URLs. The starting URLs are the addresses of the first-level image categories (性感美女, 清纯美女, 萌宠图片, and so on), fetched with the requests library and parsed with lxml. Each category URL and its first-level category name are passed to parse, which extracts the second-level category URLs and names and also checks whether a next-page link exists. Scrapy deduplicates requests automatically, so no extra deduplication logic is needed. The detail pages are handed to get_image, which extracts the image URLs, fills the item fields, and yields each item.
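A side note on the automatic deduplication relied on above: Scrapy's scheduler drops any request whose fingerprint it has already seen. The snippet below is a minimal illustration, not part of the project, and assumes a Scrapy version that still provides scrapy.utils.request.request_fingerprint:

# two Requests for the same URL produce the same fingerprint,
# so the scheduler would filter the second one out
from scrapy import Request
from scrapy.utils.request import request_fingerprint

r1 = Request('https://example.com/page/2')
r2 = Request('https://example.com/page/2')
print(request_fingerprint(r1) == request_fingerprint(r2))  # True -> treated as a duplicate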

5. Pipeline file to download the images

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc./en/latest/topics/item-pipeline.html
import scrapy
import os
# Scrapy's built-in pipeline class dedicated to image processing
from scrapy.pipelines.images import ImagesPipeline
# image storage path defined in settings
from images.settings import IMAGES_STORE as images_store


class Image_down(ImagesPipeline):
    def get_media_requests(self, item, info):
        yield scrapy.Request(url=item['image_url'])

    def item_completed(self, results, item, info):
        print(results)
        image_url = item['image_url']
        image_name = image_url.split('/')[-1]
        old_name_list = [x['path'] for t, x in results if t]
        # the path where ImagesPipeline actually stored the image
        old_name = images_store + old_name_list[0]
        image_path = images_store + item['field_name'] + '/' + item['detail_name'] + '/'
        # create the target directory if it does not exist yet
        if not os.path.exists(image_path):
            os.makedirs(image_path)
        # new name: category folder + original file name
        new_name = image_path + image_name
        # rename (move) the downloaded file
        os.rename(old_name, new_name)
        return item

Analysis: Scrapy's built-in ImagesPipeline generates MD5-hashed image file names on its own. Here the os module is used to move each downloaded image into a custom folder under its original name, which requires subclassing ImagesPipeline.
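As an alternative to renaming files in item_completed, Scrapy 2.4 and later pass the item into file_path, so a pipeline can write each image straight into its category folder. The class below is a minimal sketch of that variant, not the author's pipeline; ImageDownDirect is a hypothetical name:

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class ImageDownDirect(ImagesPipeline):
    # hypothetical variant: choose the final path up front instead of using os.rename()
    def get_media_requests(self, item, info):
        yield scrapy.Request(url=item['image_url'])

    def file_path(self, request, response=None, info=None, *, item=None):
        image_name = request.url.split('/')[-1]
        # the returned path is interpreted relative to IMAGES_STORE
        return '{}/{}/{}'.format(item['field_name'], item['detail_name'], image_name)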

6. Set the required parameters in settings (excerpt)

#USER_AGENT = 'images (+http://www.)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36'

# only emit log messages at this level or above
LOG_LEVEL = 'ERROR'
# write log messages to a file instead of the terminal
LOG_FILE = 'log.txt'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

IMAGES_STORE = './mn52/'

ITEM_PIPELINES = {
   # 'images.pipelines.ImagesPipeline': 300,
   'images.pipelines.Image_down': 301,
}

Notes: the settings mainly define the image storage path and the user agent, and enable the download pipeline.
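If the crawl needs to be throttled, a few standard Scrapy options could also be added to settings.py. These are not in the original project; they are just commonly used settings worth considering for a crawl of this size:

# optional throttling (assumption: not used in the original post)
DOWNLOAD_DELAY = 0.5       # wait 0.5 s between requests to the same site
CONCURRENT_REQUESTS = 16   # overall concurrency limit
RETRY_TIMES = 3            # retry failed downloads a few times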

7. Create crawl.py and run the crawler

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
# 'mn52' is the spider name
process.crawl('mn52')
process.start()

The crawler is not launched via scrapy.cmdline here because of the packaging step later on: a script started through cmdline cannot be packaged into a working exe.
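For comparison, this is the usual cmdline-based launcher that is being avoided here. It is equivalent during development, but it does not survive PyInstaller packaging:

# runs "scrapy crawl mn52" as if typed on the command line
from scrapy import cmdline

cmdline.execute('scrapy crawl mn52'.split())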

8. The Scrapy crawler is now complete. Right-click crawl.py and run it to start the crawler, and the images will be downloaded.

Console output:

Open the folders to view the images:

 

 

Once the whole site has been downloaded there are roughly 220,000 images.
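A quick way to check the totals (not part of the original post; it assumes the IMAGES_STORE layout above, i.e. ./mn52/<category>/<sub-category>/):

import os

root = './mn52'
total = 0
for category in sorted(os.listdir(root)):
    path = os.path.join(root, category)
    if not os.path.isdir(path):
        continue
    # count every file below this first-level category folder
    count = sum(len(files) for _, _, files in os.walk(path))
    total += count
    print(category, count)
print('total images:', total)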

9. Package the Scrapy crawler

Even though Scrapy is a framework-based crawler, it can still be packaged into an exe file. Pretty neat, right? 😜

Packaging steps:

1> In your Python installation directory, locate the scrapy package and copy its mime.types and VERSION files into the project directory. The typical path looks like this:
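Instead of hunting for the path by hand, the two files can also be located through the installed package itself. The snippet below is a convenience sketch, not part of the original post, and assumes the installed Scrapy version still ships both files (as the version used in the article does):

import os
import shutil

import scrapy

# directory of the installed scrapy package, e.g. .../site-packages/scrapy
scrapy_dir = os.path.dirname(scrapy.__file__)
for name in ('mime.types', 'VERSION'):
    shutil.copy(os.path.join(scrapy_dir, name), name)  # copy into the current project directory
    print('copied', name)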

2> Create a generate_cfg.py file in the project directory; it is used to generate scrapy.cfg at run time.

data = '''[settings]
default = images.settings
[deploy]
# url = http://localhost:6800/
project = images'''

with open('scrapy.cfg', 'w') as f:
    f.write(data)

After this, the project structure looks as follows:

3> Explicitly import, inside crawl.py, every module the project needs (PyInstaller cannot detect the components Scrapy loads dynamically from its settings, so they have to be imported by name):

import scrapy.spiderloader
import scrapy.statscollectors
import scrapy.logformatter
import scrapy.dupefilters
import scrapy.squeues
import scrapy.extensions.spiderstate
import scrapy.extensions.corestats
import scrapy.extensions.telnet
import scrapy.extensions.logstats
import scrapy.extensions.memusage
import scrapy.extensions.memdebug
import scrapy.extensions.feedexport
import scrapy.extensions.closespider
import scrapy.extensions.debug
import scrapy.extensions.httpcache
import scrapy.extensions.statsmailer
import scrapy.extensions.throttle
import scrapy.core.scheduler
import scrapy.core.engine
import scrapy.core.scraper
import scrapy.core.spidermw
import scrapy.core.downloader
import scrapy.downloadermiddlewares.stats
import scrapy.downloadermiddlewares.httpcache
import scrapy.downloadermiddlewares.cookies
import scrapy.downloadermiddlewares.useragent
import scrapy.downloadermiddlewares.httpproxy
import scrapy.downloadermiddlewares.ajaxcrawl
import scrapy.downloadermiddlewares.chunked
import scrapy.downloadermiddlewares.decompression
import scrapy.downloadermiddlewares.defaultheaders
import scrapy.downloadermiddlewares.downloadtimeout
import scrapy.downloadermiddlewares.httpauth
import scrapy.downloadermiddlewares.httpcompression
import scrapy.downloadermiddlewares.redirect
import scrapy.downloadermiddlewares.retry
import scrapy.downloadermiddlewares.robotstxt
import os
import scrapy.spidermiddlewares.depth
import scrapy.spidermiddlewares.httperror
import scrapy.spidermiddlewares.offsite
import scrapy.spidermiddlewares.referer
import scrapy.spidermiddlewares.urllength
from scrapy.pipelines.images import ImagesPipeline
import scrapy.pipelines
from images.settings import IMAGES_STORE as images_store
import scrapy.core.downloader.handlers.http
import scrapy.core.downloader.contextfactory
import requests
from lxml import etree
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
# 'mn52' is the spider name
process.crawl('mn52')
process.start()

10. Package with the third-party library PyInstaller

If it is not installed, install it with pip:

pip install pyinstaller

Run the packaging command in the project directory:

pyinstaller -F --add-data=mime.types;scrapy --add-data=VERSION;scrapy --add-data=images/*.py;images --add-data=images/spiders/*.py;images/spiders --runtime-hook=generate_cfg.py crawl.py

In total, four --add-data options and one --runtime-hook option are used.

The first --add-data bundles the mime.types file; at run time it is extracted into the scrapy folder inside PyInstaller's temporary directory.

The second adds the VERSION file, which ends up in the same place as mime.types.

The third adds the project code: every .py file under the images folder (* is a wildcard).

The fourth adds the spider code: every .py file under the images/spiders folder (where mn52.py lives).

--runtime-hook registers a runtime hook, here the generate_cfg.py file, so scrapy.cfg is regenerated every time the exe starts.

After the command finishes, the executable is generated in the dist folder. Double-click the executable and the scrapy.cfg deployment file and the mn52 image folder are created alongside it; it runs without problems.

Code repository (Git): https://github.com/terroristhouse/crawler

Exe executable (no Python environment required):

Link: https://pan.baidu.com/s/19TkeDY9EHMsFPULudOJRzw

Python tutorial series:

Link: https://pan.baidu.com/s/10eUCb1tD9GPuua5h_ERjHA
