
Understanding Hadoop behavior with gz files

I have a small JSON file in two separate folders in my S3 bucket. I ran the same command, with the same mapper, on each of them separately.

Normal JSON

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -Dmapred.reduce.tasks=0 -file ./mapper.py -mapper ./mapper.py -input s3://mybucket/normaltest -output smalltest-output 
14/08/28 08:33:53 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively 
packageJobJar: [./mapper.py, /mnt/var/lib/hadoop/tmp/hadoop-unjar6225144044327095484/] [] /tmp/streamjob6947060448653690043.jar tmpDir=null 
14/08/28 08:33:56 INFO mapred.JobClient: Default number of map tasks: null 
14/08/28 08:33:56 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 160 
14/08/28 08:33:56 INFO mapred.JobClient: Default number of reduce tasks: 0 
14/08/28 08:33:56 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache 
14/08/28 08:33:56 INFO mapred.JobClient: Setting group to hadoop 
14/08/28 08:33:56 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library 
14/08/28 08:33:56 WARN lzo.LzoCodec: Could not find build properties file with revision hash 
14/08/28 08:33:56 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev UNKNOWN] 
14/08/28 08:33:56 WARN snappy.LoadSnappy: Snappy native library is available 
14/08/28 08:33:56 INFO snappy.LoadSnappy: Snappy native library loaded 
14/08/28 08:33:58 INFO mapred.FileInputFormat: Total input paths to process : 1 
14/08/28 08:33:58 INFO streaming.StreamJob: getLocalDirs(): [/mnt/var/lib/hadoop/mapred] 
14/08/28 08:33:58 INFO streaming.StreamJob: Running job: job_201408260907_0053 
14/08/28 08:33:58 INFO streaming.StreamJob: To kill this job, run: 
14/08/28 08:33:58 INFO streaming.StreamJob: /home/hadoop/bin/hadoop job -Dmapred.job.tracker=10.165.13.124:9001 -kill job_201408260907_0053 
14/08/28 08:33:58 INFO streaming.StreamJob: Tracking URL: http://ip-10-165-13-124.ec2.internal:9100/jobdetails.jsp?jobid=job_201408260907_0053 
14/08/28 08:33:59 INFO streaming.StreamJob: map 0% reduce 0% 
14/08/28 08:34:23 INFO streaming.StreamJob: map 1% reduce 0% 
14/08/28 08:34:26 INFO streaming.StreamJob: map 2% reduce 0% 
14/08/28 08:34:29 INFO streaming.StreamJob: map 9% reduce 0% 
14/08/28 08:34:32 INFO streaming.StreamJob: map 45% reduce 0% 
14/08/28 08:34:35 INFO streaming.StreamJob: map 56% reduce 0% 
14/08/28 08:34:36 INFO streaming.StreamJob: map 57% reduce 0% 
14/08/28 08:34:38 INFO streaming.StreamJob: map 84% reduce 0% 
14/08/28 08:34:39 INFO streaming.StreamJob: map 85% reduce 0% 
14/08/28 08:34:41 INFO streaming.StreamJob: map 99% reduce 0% 
14/08/28 08:34:44 INFO streaming.StreamJob: map 100% reduce 0% 
14/08/28 08:34:50 INFO streaming.StreamJob: map 100% reduce 100% 
14/08/28 08:34:50 INFO streaming.StreamJob: Job complete: job_201408260907_0053 
14/08/28 08:34:50 INFO streaming.StreamJob: Output: smalltest-output 

In smalltest-output, I get several small files, each containing a part of the processed JSON.
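If you want those part files back as one local file, the output directory can be merged client-side. A minimal example, assuming the output path from the command above:

$ hadoop fs -ls smalltest-output 
$ hadoop fs -getmerge smalltest-output ./smalltest-merged.json 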

Gzipped JSON

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -Dmapred.reduce.tasks=0 -file ./mapper.py -mapper ./mapper.py -input s3://weblablatency/gztest -output smalltest-output 
14/08/28 08:39:45 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively 
packageJobJar: [./mapper.py, /mnt/var/lib/hadoop/tmp/hadoop-unjar2539293594337011579/] [] /tmp/streamjob301144784484156113.jar tmpDir=null 
14/08/28 08:39:48 INFO mapred.JobClient: Default number of map tasks: null 
14/08/28 08:39:48 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 160 
14/08/28 08:39:48 INFO mapred.JobClient: Default number of reduce tasks: 0 
14/08/28 08:39:48 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache 
14/08/28 08:39:48 INFO mapred.JobClient: Setting group to hadoop 
14/08/28 08:39:48 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library 
14/08/28 08:39:48 WARN lzo.LzoCodec: Could not find build properties file with revision hash 
14/08/28 08:39:48 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev UNKNOWN] 
14/08/28 08:39:48 WARN snappy.LoadSnappy: Snappy native library is available 
14/08/28 08:39:48 INFO snappy.LoadSnappy: Snappy native library loaded 
14/08/28 08:39:50 INFO mapred.FileInputFormat: Total input paths to process : 1 
14/08/28 08:39:51 INFO streaming.StreamJob: getLocalDirs(): [/mnt/var/lib/hadoop/mapred] 
14/08/28 08:39:51 INFO streaming.StreamJob: Running job: job_201408260907_0055 
14/08/28 08:39:51 INFO streaming.StreamJob: To kill this job, run: 
14/08/28 08:39:51 INFO streaming.StreamJob: /home/hadoop/bin/hadoop job -Dmapred.job.tracker=10.165.13.124:9001 -kill job_201408260907_0055 
14/08/28 08:39:51 INFO streaming.StreamJob: Tracking URL: http://ip-10-165-13-124.ec2.internal:9100/jobdetails.jsp?jobid=job_201408260907_0055 
14/08/28 08:39:52 INFO streaming.StreamJob: map 0% reduce 0% 
14/08/28 08:40:20 INFO streaming.StreamJob: map 100% reduce 0% 
14/08/28 08:40:26 INFO streaming.StreamJob: map 100% reduce 100% 
14/08/28 08:40:26 INFO streaming.StreamJob: Job complete: job_201408260907_0055 

In smalltest-output I get correctly parsed output, but as a single file.

Why the difference, and what is going on? Is my job not being distributed properly in the gz case?

In my actual use case I need to process ~2000 gz files, totaling around 4 GB uncompressed, once every 4 hours, so I can't afford any performance problems caused by the compression.


Yes, and as Clément points out, this has been discussed bazillions of times. I'd add that, in general, compression doesn't slow a job down; it can actually speed it up. That's because modern CPUs and libraries can decompress data faster than it can be read from disk, and jobs are usually bottlenecked on disk I/O, not on CPU. – samthebest 2014-08-29 10:10:41

Answer


Gzip is not splittable. You'll find a million articles and questions about this, so I won't go into the details.

Your options are:

  • Don't use gzip: store the data uncompressed, or use a different, splittable compression format (bzip2, for example).
  • Use a hack that makes gzip splittable, such as https://github.com/nielsbasjes/splittablegzip. The trade-off is that every mapper still has to read the file from the beginning; read its documentation for the details. A rough sketch of the streaming invocation follows this list.
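For the second option, the job submission would look roughly like this. This is an untested sketch based on that project's README: the jar path is a placeholder, and the codec class name and split-size property should be checked against the repo and your Hadoop version.

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar \ 
    -libjars /path/to/splittablegzip.jar \ 
    -D io.compression.codecs=nl.basjes.hadoop.io.compress.SplittableGzipCodec \ 
    -D mapred.max.split.size=134217728 \ 
    -D mapred.reduce.tasks=0 \ 
    -file ./mapper.py -mapper ./mapper.py \ 
    -input s3://weblablatency/gztest -output splittable-output 

With a splittable codec registered, each ~128 MB split gets its own mapper, instead of one mapper per .gz file.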

It depends on what you're doing, but for most kinds of processing 4 GB of data is nothing. I'd make sure I really needed an elephant like Hadoop for this use case: it scales, but it's complex, painful to work with, and usually slower on small data sets.
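To make that concrete, here is a hypothetical single-machine version of the use case above: ~2000 .gz files piped through the same mapper with ordinary shell tools. The directory layout, worker count, and output naming are all assumptions.

# Decompress and map each file, 8 files at a time, no cluster involved. 
$ mkdir -p out 
$ ls *.gz | xargs -P 8 -I{} sh -c 'zcat "{}" | ./mapper.py > out/"{}".txt' 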


So, given that I'm feeding Hadoop 6000 files in the input directory, I can count on it splitting the load across whole files, if not file blocks, right? – user1265125 2014-08-28 10:13:47


Basically, yes. – 2014-08-28 10:15:12