
I keep getting this error when I try to query the data using Hue on Cloudera Hive. The query fails with return code 2, and the Syslog tab shows: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

The error log reports return code 2 from the MapReduce job launched by org.apache.hadoop.hive.ql.exec.mr.MapRedTask.

The full log from the Hue Job Browser is too large to post here: http://pastebin.com/h8tgYuzR

The error from the terminal:

hive> SELECT count(*) FROM tweets; 
Query ID = cloudera_20161128145151_137efb02-413b-4457-b21d-084101b77091 
Total jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks determined at compile time: 1 
In order to change the average load for a reducer (in bytes): 
    set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
    set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
    set mapreduce.job.reduces=<number> 
Starting Job = job_1480364897609_0003, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1480364897609_0003/ 
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1480364897609_0003 
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1 
2016-11-28 14:52:09,804 Stage-1 map = 0%, reduce = 0% 
2016-11-28 14:53:10,955 Stage-1 map = 0%, reduce = 0% 
2016-11-28 14:53:13,213 Stage-1 map = 100%, reduce = 100% 
Ended Job = job_1480364897609_0003 with errors 
Error during job, obtaining debugging information... 
Job Tracking URL: http://quickstart.cloudera:8088/proxy/application_1480364897609_0003/ 
Examining task ID: task_1480364897609_0003_m_000000 (and more) from job job_1480364897609_0003 

Task with the most failures(4): 
----- 
Task ID: 
    task_1480364897609_0003_m_000000 

URL: 
    http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1480364897609_0003&tipid=task_1480364897609_0003_m_000000 
----- 
Diagnostic Messages for this Task: 
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable Objavro.schema� 
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179) 
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) 
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) 
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable Objavro.schema� 
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:505) 
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170) 
    ... 8 more 
Caused by: org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected character ('O' (code 79)): expected a valid value (number, String, array, object, 'true', 'false' or 'null') 
at [Source: [email protected]; line: 1, column: 2] 
    at com.cloudera.hive.serde.JSONSerDe.deserialize(JSONSerDe.java:128) 
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.readRow(MapOperator.java:136) 
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.access$200(MapOperator.java:100) 
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:496) 
    ... 9 more 
Caused by: org.codehaus.jackson.JsonParseException: Unexpected character ('O' (code 79)): expected a valid value (number, String, array, object, 'true', 'false' or 'null') 
at [Source: [email protected]; line: 1, column: 2] 
    at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1291) 
    at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:385) 
    at org.codehaus.jackson.impl.JsonParserMinimalBase._reportUnexpectedChar(JsonParserMinimalBase.java:306) 
    at org.codehaus.jackson.impl.ReaderBasedParser._handleUnexpectedValue(ReaderBasedParser.java:630) 
    at org.codehaus.jackson.impl.ReaderBasedParser.nextToken(ReaderBasedParser.java:364) 
    at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2439) 
    at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2396) 
    at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602) 
    at com.cloudera.hive.serde.JSONSerDe.deserialize(JSONSerDe.java:126) 
    ... 12 more 


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask 
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL 
Total MapReduce CPU Time Spent: 0 msec 

Here is the table definition:

CREATE EXTERNAL TABLE tweets (
    id BIGINT, 
    created_at STRING, 
    source STRING, 
    favorited BOOLEAN, 
    retweet_count INT, 
    retweeted_status STRUCT< 
     text:STRING, 
     user:STRUCT<screen_name:STRING,name:STRING>>, 
    entities STRUCT< 
     urls:ARRAY<STRUCT<expanded_url:STRING>>, 
     user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>, 
     hashtags:ARRAY<STRUCT<text:STRING>>>, 
    text STRING, 
    user STRUCT< 
     screen_name:STRING, 
     name:STRING, 
     friends_count:INT, 
     followers_count:INT, 
     statuses_count:INT, 
     verified:BOOLEAN, 
     utc_offset:INT, 
     time_zone:STRING>, 
    in_reply_to_screen_name STRING 
) 
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe' 
LOCATION '/user/cloudera/flume/tweets'; 

Here is the data from the file I am trying to load: http://pastebin.com/g7eg1BaP

Answers


I have a feeling this is a format mismatch: a table defined for one data format but with files of a different format loaded into its location (in my case it was a table defined as Avro with non-Avro files in it). Here the Objavro.schema prefix in the stack trace suggests your files are Avro container files, while the table is declared with a JSON SerDe.
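A quick way to confirm what is actually sitting in the table's location is to look at the first bytes of one of the files: an Avro container file starts with the literal bytes Obj (exactly the Objavro.schema prefix in the error), while JSON events written by the Flume Twitter source should start with {. A minimal sketch, assuming the HDFS path from the table definition; the FlumeData file name is only illustrative:

hdfs dfs -ls /user/cloudera/flume/tweets
# Dump the first bytes of one file; substitute a real file name from the listing above
hdfs dfs -cat /user/cloudera/flume/tweets/FlumeData.1480364897609 | head -c 64 | od -c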

Keep in mind that Hive is "schema on read", not "schema on load". It only checks the schema when a job actually runs, not when data is loaded or when the table is defined.
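As a sketch of what schema-on-read means for this table: the DDL below succeeds regardless of what is in the LOCATION, and a mismatch only surfaces when the SELECT makes the SerDe deserialize rows. The jar path is an assumption (point it at wherever your JSONSerDe jar lives) and tweets_check is a throwaway table name:

hive -e "
ADD JAR /usr/lib/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;   -- assumed path to the SerDe jar
-- Succeeds: Hive only records the schema, no data is read here
CREATE EXTERNAL TABLE tweets_check (id BIGINT, text STRING)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/cloudera/flume/tweets';
-- If the files do not match the SerDe, the failure appears only here, at read time
SELECT count(*) FROM tweets_check;
"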

Please post the CREATE TABLE command you used and a few records from the file you are trying to load.

Hope this helps.

I have added the create table command, and there is also a pastebin link to some of the files I am trying to load – noname

I actually think my data is corrupted, if you look at all the strange characters: http://tinypic.com/r/2efs21j/9 – noname
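Those "strange characters" look like Avro container framing rather than random corruption. If the check above shows the files really are Avro, they can be inspected with avro-tools; this is only a sketch, and /path/to/avro-tools.jar and the FlumeData file name are placeholders for your environment:

hdfs dfs -get /user/cloudera/flume/tweets/FlumeData.1480364897609 /tmp/sample.avro
java -jar /path/to/avro-tools.jar getschema /tmp/sample.avro        # prints the embedded Avro schema
java -jar /path/to/avro-tools.jar tojson /tmp/sample.avro | head    # dumps the records as JSON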

Did you register the JSONSerDe jar before running "CREATE TABLE ..."? – AkashNegi
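For reference, registering the SerDe jar in a session looks roughly like the sketch below; the jar path is an assumption (in many Cloudera setups the JSONSerDe ships in the hive-serdes example jar), so adjust it to your environment:

hive -e "
ADD JAR /usr/lib/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;   -- assumed path to the SerDe jar
DESCRIBE FORMATTED tweets;                                -- confirms which SerDe the table uses
SELECT * FROM tweets LIMIT 1;                             -- forces a read through that SerDe
"

To make the jar available to every session (including jobs launched from Hue), it can also be added to Hive's aux jars path, for example HIVE_AUX_JARS_PATH in hive-env.sh or hive.aux.jars.path in hive-site.xml.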
