
    Crawler Tools: An Introduction to Scrapy

    Eric Sheng's Blog, published 2014-08-07 13:35:00
    • Python-based crawlers
      • Criteria for evaluating a vertical crawler
      • Scrapy architecture diagram
      • Scrapy components
      • Frequently Asked Questions
      • References

    Python-based crawlers

    Given the importance of vertical crawling and on-site search, it is worth rethinking the crawler architecture and implementation plan for the project.

    Criteria for evaluating a vertical crawler:

    • High performance: good support for multithreaded, concurrent processing; asynchronous, non-blocking sockets; distributed crawling; an efficient crawl-scheduling algorithm; memory-efficient operation that does not keep running into out-of-memory problems.
    • Clean architecture: a component-based, easily extensible design; an architecture crafted well enough that its design ideas alone are worth studying.
    • Easy extensibility: integrates well with existing frameworks; since a vertical crawler needs crawl rules and logic customized per site, these must be easy to test without constant recompilation, so support for a scripting language such as Python is preferred.
    • Comprehensive features: built-in support for AJAX/JavaScript crawling, login authentication, crawl-depth settings, Heritrix-style crawl filters, page compression, etc.
    • Management features: a crawler-management interface for real-time monitoring and control of crawls.

    Tired of Java-based crawler solutions, and given Python's ease of use for network programming, I decided to evaluate the feasibility of building the next version of the crawler in Python, which is also a good excuse to pick the long-unused language back up.

    A survey of the current Python-based crawlers turned up the following existing projects for reference:

    • Mechanize:http://wwwsearch.sourceforge.net/mechanize/

    • Twill:http://twill.idyll.org/

    • Scrapy:http://scrapy.org

    • HarvestMan:http://www.harvestmanontheweb.com/

    • Ruya:http://ruya.sourceforge.net/

    • psilib:http://pypi.python.org/pypi/spider.py/0.5

    • BeautifulSoup + urllib2:http://www.crummy.com/software/BeautifulSoup/

    After comparing them, I picked Scrapy as the main candidate to study. Although it is less mature than Mechanize or HarvestMan, its architecture looks promising; in particular, being built on the high-performance Twisted framework is very attractive.

    Scrapy architecture diagram:

    [Figure: Scrapy architecture diagram]

    Scrapy includes the following components:

    • Scrapy Engine
      The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data Flow section below for more details.

    • Scheduler
      The Scheduler receives requests from the engine and enqueues them, feeding them back to the engine later when the engine asks for them.

    • Downloader
      The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.

    • Spiders
      Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them, or additional URLs (requests) to follow. Each spider is able to handle a specific domain (or group of domains). For more information see Spiders.

    • Item Pipeline
      The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.

    • Downloader middlewares
      Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests when they pass from the Engine to the downloader, and responses that pass from Downloader to the Engine. They provide a convenient mechanism for extending Scrapy functionality by plugging custom code. For more information see Downloader Middleware.

    • Spider middlewares
      Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests). They provide a convenient mechanism for extending Scrapy functionality by plugging custom code. For more information see Spider Middleware.

    • Scheduler middlewares
      Scheduler middlewares are specific hooks that sit between the Engine and the Scheduler and process requests when they pass from the Engine to the Scheduler and vice-versa. They provide a convenient mechanism for extending Scrapy functionality by plugging custom code.
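    The interplay of these components is easiest to see in a minimal spider. The sketch below is a hypothetical example, not from the original post: it crawls quotes.toscrape.com (Scrapy's public demo site) and uses the current selector API (`.get()`, `response.follow`), which postdates this article. The engine, scheduler, and downloader all run behind the scenes once the spider is launched.

    ```python
    import scrapy

    class QuotesSpider(scrapy.Spider):
        """A minimal spider: the engine feeds downloaded responses to parse()."""
        name = "quotes"
        start_urls = ["http://quotes.toscrape.com/"]

        def parse(self, response):
            # Extract items; each yielded dict flows into the Item Pipeline.
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
            # Yield additional requests; the Scheduler enqueues them.
            next_page = response.css("li.next a::attr(href)").get()
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse)
    ```

    Saved as a standalone file, this can be run with `scrapy runspider quotes_spider.py -o quotes.json`, which writes every yielded item to a JSON feed.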

    Frequently Asked Questions

    How does Scrapy compare to BeautifulSoup or lxml?

    BeautifulSoup and lxml are libraries for parsing HTML and XML. Scrapy is an application framework for writing web spiders that crawl web sites and extract data from them.

    Scrapy provides a built-in mechanism for extracting data (called selectors) but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them. After all, they’re just parsing libraries which can be imported and used from any Python code.

    In other words, comparing BeautifulSoup (or lxml) to Scrapy is like comparing jinja2 to Django.
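    This interchangeability is easy to demonstrate: Scrapy's selectors can be used standalone on any HTML string, just like a plain parsing library. A small sketch (assuming a current Scrapy install; the HTML snippet is made up):

    ```python
    from scrapy.selector import Selector

    html = "<html><body><h1>Scrapy</h1><p class='desc'>A crawling framework</p></body></html>"
    sel = Selector(text=html)

    # CSS and XPath queries both work on the same selector object
    title = sel.css("h1::text").get()
    desc = sel.xpath("//p[@class='desc']/text()").get()
    ```

    Inside a spider callback, `response.css(...)` and `response.xpath(...)` expose exactly this interface, so swapping in BeautifulSoup or lxml is purely a matter of preference.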

    How is BeautifulSoup different from Scrapy?[3]

    Scrapy is a web-spider or web-scraper framework. You give Scrapy a root URL to start crawling from, and you can then specify constraints such as how many URLs to crawl and fetch. It is a complete framework for web scraping and crawling.

    While

    Beautiful Soup is a parsing library that does a pretty good job of extracting content from a page and lets you parse certain parts of it without any hassle. It only parses the contents of the URL you give it and then stops. It does not crawl unless you manually put it inside an infinite loop with some stopping criterion.

    In simple terms, with Beautiful Soup you could build something similar to Scrapy.
    Beautiful Soup is a library, while Scrapy is a complete framework.
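    As a small illustration of that difference, the sketch below (a made-up example, not from the quoted answer) parses a static HTML snippet with Beautiful Soup. Fetching the page and following links would have to be written by hand around it, which is exactly the part Scrapy automates.

    ```python
    from bs4 import BeautifulSoup

    html = """
    <html><body>
      <div class="quote"><span class="text">To be or not to be</span></div>
      <div class="quote"><span class="text">Carpe diem</span></div>
    </body></html>
    """

    # Beautiful Soup only parses; there is no scheduler, downloader, or pipeline.
    soup = BeautifulSoup(html, "html.parser")
    texts = [span.get_text() for span in soup.select("span.text")]
    ```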

    References

    [1] 基于Python的crawler
    [2] doc.scrapy.org
    [3] How-is-BeautifulSoup-different-from-Scrapy


