2013-08-27 40 views

How do I create a Hadoop runner?

I have the following simple mrjob script, which reads a large file line by line, performs an operation on each line, and prints the output:

#!/usr/bin/env python

from mrjob.job import MRJob

class LineProcessor(MRJob):
    def mapper(self, _, line):
        yield (line.upper(), None)  # toy example: the mapper just uppercases the line

if __name__ == '__main__':
    # mr_job = LineProcessor(args=['-r', 'hadoop', '/path/to/input'])  # error!
    mr_job = LineProcessor(args=['/path/to/input'])
    with mr_job.make_runner() as runner:
        runner.run()
        for line in runner.stream_output():
            key, value = mr_job.parse_output_line(line)
            print key.encode('utf-8')  # don't care about value in my case

(This is just a toy example; in my real case processing each line is expensive, which is why I want to run it distributed.)

It works, but only as a local process. If I try to use '-r', 'hadoop' (see the commented-out line above), I get the following strange error:

File "mrjob/runner.py", line 727, in _get_steps
    'error getting step information: %s', stderr)
Exception: error getting step information:
Traceback (most recent call last):
  File "script.py", line 11, in <module>
    with mr_job.make_runner() as runner:
  File "mrjob/job.py", line 515, in make_runner
    " __main__, which doesn't work." % w)
mrjob.job.UsageError: make_runner() was called with --steps. This probably means you tried to use it from __main__, which doesn't work.

How can I actually run this on Hadoop, i.e. create a HadoopJobRunner?


Is there a reason not to run it as a Hadoop Streaming job? For example: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/ –


Thanks for the link. I prefer mrjob, mainly for convenience and simplicity. I don't want to manually copy data to and from HDFS. I want to be able to easily control the output format. I want everything in a single Python script. And I want to switch easily between running it locally (for testing) and running it on Hadoop. – Frank


Frank, my question seems to be exactly the same. I'm trying to understand how to configure runners for different kinds of hadoop sandboxes/clusters/EMR/(Azure?). Did you get any further with this? – Enzo

Answer


Are you missing

def steps(self):
    return [self.mr(
        mapper_init=...,
        mapper=self.mapper,
        combiner=...,
        reducer=...,
    )]

in LineProcessor?
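For what it's worth, the error message itself points at a different fix: mrjob re-invokes the script with --steps to discover the job's steps, so the class that defines the job must live in its own importable module whose only __main__ behavior is MRJob.run(), while the make_runner() driver goes in a separate file. A minimal sketch of that split (the filenames line_processor.py and driver.py are hypothetical, and this keeps the question's Python 2 style):

```python
# line_processor.py -- job definition only (hypothetical filename)
from mrjob.job import MRJob

class LineProcessor(MRJob):
    def mapper(self, _, line):
        yield (line.upper(), None)  # toy example: uppercase each line

if __name__ == '__main__':
    LineProcessor.run()  # lets mrjob re-invoke this script with --steps

# driver.py -- separate launcher script (hypothetical filename)
from line_processor import LineProcessor

mr_job = LineProcessor(args=['-r', 'hadoop', '/path/to/input'])
with mr_job.make_runner() as runner:
    runner.run()
    for line in runner.stream_output():
        key, value = mr_job.parse_output_line(line)
        print key.encode('utf-8')  # don't care about value in my case
```

Because the job class is no longer defined in __main__, make_runner() can spawn the --steps subprocess cleanly, and switching between local and Hadoop execution is just a matter of changing the '-r' argument in the driver.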