2012-07-06

Hi, I am using Scrapy to scrape a website, and I want to calculate the total time taken to run the spider.

I wrote a spider for the site that fetches all the information and saves it to a CSV file through a pipeline.

pipeline.py code:

# Uses the old (pre-1.0) Scrapy APIs from the original: scrapy.log,
# scrapy.signals, and the pydispatch dispatcher.
import csv
from datetime import datetime

from scrapy import log, signals
from scrapy.xlib.pydispatch import dispatcher


class Examplepipeline(object):

    def __init__(self):
        dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
        dispatcher.connect(self.spider_closed, signal=signals.spider_closed)

    def spider_opened(self, spider):
        log.msg("opened spider %s at time %s" % (spider.name, datetime.now().strftime('%H-%M-%S')))
        # "-" instead of "/" in the date: "/" is a path separator and would
        # make the filename invalid.
        self.exampledotcomCsv = csv.writer(
            open("csv's/%s(%s).csv" % (spider.name, datetime.now().strftime("%d-%m-%Y,%H-%M-%S")), "wb"),
            delimiter=',', quoting=csv.QUOTE_MINIMAL)
        self.exampledotcomCsv.writerow(['field1', 'field2', 'field3', 'field4'])

    def process_item(self, item, spider):
        log.msg("Processing item " + item['title'], level=log.DEBUG)
        # was self.brandCategoryCsv in the original, a paste error
        self.exampledotcomCsv.writerow([
            item['field1'].encode('utf-8'),
            [i.encode('utf-8') for i in item['field2']],
            item['field3'].encode('utf-8'),
            [i.encode('utf-8') for i in item['field4']],
        ])
        return item

    def spider_closed(self, spider):
        log.msg("closed spider %s at %s" % (spider.name, datetime.now().strftime('%H-%M-%S')))
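One pitfall in the code above is worth isolating: a strftime format containing "/" (as in "%d/%m/%Y") produces a name with path separators, so the `open()` call fails. A minimal sketch of building a filesystem-safe, timestamped CSV filename (the helper name `timestamped_csv_name` is my own, not part of the original):

```python
from datetime import datetime


def timestamped_csv_name(spider_name):
    # Use "-" and "_" so the timestamp contains no path separators.
    stamp = datetime.now().strftime("%d-%m-%Y_%H-%M-%S")
    return "%s(%s).csv" % (spider_name, stamp)


name = timestamped_csv_name("exampledotcom")
print(name)
```
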

In the code above I can get the start time and end time of the spider, but after the spider closes I want to calculate and display the total time the spider took. How can I do this; can we write this functionality in the spider_closed method?

Please let me know.

Answers


Why not:

def spider_opened(self, spider): 
    spider.started_on = datetime.now() 
    ... 

def spider_closed(self, spider): 
    work_time = datetime.now() - spider.started_on 
    ... 
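The pattern above can be exercised outside Scrapy as well: subtracting two datetime objects yields a timedelta, which is what spider_closed would log. A standalone sketch (FakeSpider is a stand-in for the Scrapy spider object, not a Scrapy class):

```python
import time
from datetime import datetime, timedelta


class FakeSpider(object):
    """Stand-in for a Scrapy spider; attributes can be set on it freely."""
    name = "exampledotcom"


spider = FakeSpider()

# spider_opened: remember when work started, on the spider itself.
spider.started_on = datetime.now()

time.sleep(0.1)  # pretend the spider crawled for a while

# spider_closed: elapsed time is a timedelta; str() formats it as H:MM:SS.ffffff.
work_time = datetime.now() - spider.started_on
print("spider %s ran for %s" % (spider.name, work_time))
```
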

Thanks for your reply. How do I access the value set in the spider_opened method from the spider_closed method? – 2012-07-06 12:52:47


If I understood your question correctly, my answer shows it: 'spider.started_on = datetime.now()' - we keep a value on the spider object, and in 'spider_closed' we retrieve that value from the spider. – warvariuc 2012-07-06 13:19:15


Thanks a lot for the idea; I will let you know if anything comes up. – 2012-07-09 05:34:22