2014-03-19

I am trying to import some data from a Hive cluster into another HDFS cluster, using multiple mappers. I am using the command below, and Sqoop fails with the error: extraneous input 't1' expecting EOF near '&lt;EOF&gt;'.

/opt/isv/app/pkgs/sqoop-1.4.4.bin__hadoop-1.0.0/bin/sqoop import \
    --connect jdbc:hive://XXXXXX.com:10000/strrecommender \
    --driver org.apache.hadoop.hive.jdbc.HiveDriver \
    -e 'select upc_cd, sltrn_dt, sltrn_id, loc_id, pos_rgstr_id, hh_id from strrecommender.sltrn_dtl_full where TO_DATE(part_dt) >= "2011-03-04" AND TO_DATE(part_dt) < "2011-03-11" AND $CONDITIONS' \
    --target-dir /user/rxg3437/QADataThroughSqoopWeekly/ramesh \
    -m 2 --split-by sltrn_dt

Internally, this command generates another query to fetch the minimum and maximum split values:

SELECT MIN(sltrn_dt), MAX(sltrn_dt) FROM (select upc_cd, sltrn_dt, sltrn_id, loc_id, pos_rgstr_id, hh_id from strrecommender.sltrn_dtl_full where TO_DATE(part_dt) >= "2011-03-04" AND TO_DATE(part_dt) < "2011-03-11" AND (1 = 1)) AS t1
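How Sqoop arrives at that statement can be sketched as plain text substitution (a reconstruction for illustration, not Sqoop's actual code): the `$CONDITIONS` placeholder is replaced with `(1 = 1)` and the user query is wrapped in a subselect aliased `t1`.

```shell
#!/usr/bin/env bash
# Sketch of how Sqoop builds the split-bounds query from the -e/--query
# text. QUERY is a shortened stand-in for the query in the question.
QUERY='select upc_cd, sltrn_dt from strrecommender.sltrn_dtl_full where $CONDITIONS'

# Substitute $CONDITIONS with (1 = 1) and wrap in a MIN/MAX subselect.
BOUNDS="SELECT MIN(sltrn_dt), MAX(sltrn_dt) FROM (${QUERY//\$CONDITIONS/(1 = 1)}) AS t1"

echo "$BOUNDS"
```

The trailing `AS t1` alias on the subselect is exactly the token that the Hive parser rejects here.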

This query fails with the following error:

14/03/19 11:43:12 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.sql.SQLException: Query returned non-zero code: 40000, cause: FAILED: ParseException line 1:195 extraneous input 't1' expecting EOF near '&lt;EOF&gt;'

at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:170) 
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054) 
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071) 
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179) 
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983) 
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) 
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936) 
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550) 
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580) 
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186) 
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159) 
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239) 
    at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645) 
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415) 
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502) 
    at org.apache.sqoop.Sqoop.run(Sqoop.java:145) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181) 
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220) 
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229) 
    at org.apache.sqoop.Sqoop.main(Sqoop.java:238) 

Caused by: java.sql.SQLException: Query returned non-zero code: 40000, cause: FAILED: ParseException line 1:195 extraneous input 't1' expecting EOF near '&lt;EOF&gt;'

at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:194) 
    at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:145) 
    ... 23 more 
Could anyone please help?

Answer


You should not use -e for the query; use --query instead. Here is an example from the official Sqoop documentation:

17.3. Example Invocations 
    Select ten records from the employees table: 
    $ sqoop eval --connect jdbc:mysql://db.example.com/corp \ 
    --query "SELECT * FROM employees LIMIT 10" 

    Insert a row into the foo table: 
    $ sqoop eval --connect jdbc:mysql://db.example.com/corp \ 
    -e "INSERT INTO foo VALUES(42, 'bar')" 
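Note that those examples are for sqoop eval. For the import in the question, a free-form query with an explicit --boundary-query would stop Sqoop from issuing the generated MIN/MAX subselect (the statement that Hive's parser rejects at "AS t1") altogether. This is a sketch under that assumption, reusing the connect string, paths, and columns from the question; it is not tested against the asker's cluster:

```shell
# Hypothetical rework of the failing import: --query instead of -e, plus
# --boundary-query so Sqoop runs this flat query for the split bounds
# rather than wrapping the import query in a "(...) AS t1" subselect.
/opt/isv/app/pkgs/sqoop-1.4.4.bin__hadoop-1.0.0/bin/sqoop import \
    --connect jdbc:hive://XXXXXX.com:10000/strrecommender \
    --driver org.apache.hadoop.hive.jdbc.HiveDriver \
    --query 'select upc_cd, sltrn_dt, sltrn_id, loc_id, pos_rgstr_id, hh_id from strrecommender.sltrn_dtl_full where TO_DATE(part_dt) >= "2011-03-04" AND TO_DATE(part_dt) < "2011-03-11" AND $CONDITIONS' \
    --boundary-query 'SELECT MIN(sltrn_dt), MAX(sltrn_dt) FROM strrecommender.sltrn_dtl_full WHERE TO_DATE(part_dt) >= "2011-03-04" AND TO_DATE(part_dt) < "2011-03-11"' \
    --split-by sltrn_dt -m 2 \
    --target-dir /user/rxg3437/QADataThroughSqoopWeekly/ramesh
```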