I have a spider that fetches URLs from a Redis list.
When no URL is found, I want to shut the spider down gracefully. I tried raising the CloseSpider
exception, but it doesn't seem to have the intended effect:
def start_requests(self):
    while True:
        item = json.loads(self.__pop_queue())
        if not item:
            raise CloseSpider("Closing spider because no more urls to crawl")
        try:
            yield scrapy.http.Request(item['product_url'], meta={'item': item})
        except ValueError:
            continue
Even though I raise the CloseSpider exception, I still get the following error:
root@355e42916706:/scrapper# scrapy crawl general -a country=my -a log=file
2017-07-17 12:05:13 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 127, in _next_request
request = next(slot.start_requests)
File "/scrapper/scrapper/spiders/GeneralSpider.py", line 20, in start_requests
item = json.loads(self.__pop_queue())
File "/usr/local/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
I also tried catching the TypeError in the same function, but that didn't work either.
Is there a recommended way to handle this?
Thanks
You need to check whether self.__pop_queue() actually returns something before passing it to json.loads() (or catch the TypeError around the call), for example:
def start_requests(self):
    while True:
        item = self.__pop_queue()
        if not item:
            raise CloseSpider("Closing spider because no more urls to crawl")
        try:
            item = json.loads(item)
            yield scrapy.http.Request(item['product_url'], meta={'item': item})
        except (ValueError, TypeError):  # just in case the 'item' is not a string or buffer
            continue
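For completeness, here is a minimal, self-contained sketch of how the whole spider could look with that fix in place. The __pop_queue implementation is an assumption (redis-py with LPOP, and hypothetical host/key names), since the original helper isn't shown in the question; adjust it to match your actual queue setup:

import json

import redis  # assumption: the queue is read with redis-py; the original helper is not shown
import scrapy
from scrapy.exceptions import CloseSpider


class GeneralSpider(scrapy.Spider):
    name = "general"

    def __init__(self, *args, **kwargs):
        super(GeneralSpider, self).__init__(*args, **kwargs)
        # hypothetical connection details and key name; change these to your setup
        self.redis = redis.Redis(host="localhost", port=6379)
        self.queue_key = "general:start_urls"

    def __pop_queue(self):
        # LPOP returns None when the list is empty, which is exactly the
        # sentinel the start_requests loop checks for before decoding JSON
        return self.redis.lpop(self.queue_key)

    def start_requests(self):
        while True:
            raw = self.__pop_queue()
            if not raw:
                raise CloseSpider("Closing spider because no more urls to crawl")
            try:
                item = json.loads(raw)
                yield scrapy.Request(item["product_url"], meta={"item": item})
            except (ValueError, TypeError, KeyError):
                # skip malformed queue entries instead of killing the whole run
                continue

The key point is that json.loads() is only called after confirming the queue returned something, so the TypeError from passing None can no longer occur; anything malformed that does slip through is skipped by the except clause.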