Reading an Impala table with SparkSQL

2017-08-28 683 views
-1

I am trying to execute a query that uses LEAD ... OVER ... PARTITION BY and UNION. The query works fine when I run it on Impala, but it fails on Hive.

I need to write a Spark job that executes this query. It fails in SparkSQL as well; my assumption is that this is because Spark 1.6 internally uses HiveQL for the task above.
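
For reference, a minimal sketch of how such a job might submit the query from PySpark, assuming Spark 1.6. In Spark 1.x, window functions such as LEAD ... OVER require a HiveContext; the plain SQLContext parser rejects them, which matches the parse error further down. The application name is illustrative, and the query string is an abbreviated stand-in for the full statement below:

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="impala-query-job")  # hypothetical app name
sqlContext = HiveContext(sc)  # HiveContext, not SQLContext: window functions need Hive support in 1.x

# Abbreviated form of the statement shown below; tab1 comes from the question
query = "SELECT issue_id, LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created) FROM tab1"
df = sqlContext.sql(query)
df.show()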

Is there a different way to read Impala tables from SparkSQL? Basic queries work in Hive, and both work fine with SparkSQL.

FYR, the query I am trying to run:

SELECT issue_id,
    CASE WHEN COALESCE(LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC), '') = ''
         THEN 'to'
         ELSE LEAD('from', 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC)
    END Status,
    created StartDate,
    LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC) EndDate
FROM (
    SELECT issue_id, created, field, 'from', 'to', field_sequence FROM tab1 WHERE COALESCE(LOWER(field), '') = 'status'
    UNION
    SELECT issue_id, updated_date created, '' field, '' 'from', '' 'to', 0 field_sequence FROM tab2
) hc WHERE hc.issue_id = '123'

And the error message:

Traceback (most recent call last): 
     File "<stdin>", line 1, in <module> 
     File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/pyspark/sql/context.py", line 580, in sql 
     return DataFrame(self._ssql_ctx.sql(sqlQuery), self) 
     File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__ 
     File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/pyspark/sql/utils.py", line 45, in deco 
     return f(*a, **kw) 
     File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
    py4j.protocol.Py4JJavaError: An error occurred while calling o83.sql. 
    : java.lang.RuntimeException: [1.55] failure: ``)'' expected but identifier OVER found 

    at scala.sys.package$.error(package.scala:27) 
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36) 
    at org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67) 
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211) 
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211) 
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114) 
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113) 
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136) 
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242) 
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222) 
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891) 
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891) 
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) 
    at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890) 
    at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110) 
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34) 
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208) 
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208) 
    at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43) 
    at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231) 
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 
+0

What error are you getting? – MaFF

+0

In PySpark: failure: ``)'' expected but identifier OVER found –

+0

When I execute it in Hive, it says I need to use the keyword ALL. So when I do that, it gives me an error about: Error while compiling statement: FAILED: RuntimeException java.lang.ClassNotFoundException: org.apache.kudu.mapreduce.KuduTableInputFormat –

Answer

0

You are missing an AS when you define Status, and you are missing a few commas in the last select statement. Also, the CASE WHEN is unnecessary; you can use IF ELSE, since there is only one case.
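
A sketch of one possible reading of that advice, applied to the query from the question: alias Status with AS, replace the single-branch CASE WHEN with IF, quote the reserved column names from/to with backticks instead of string literals, and use UNION ALL (which the Hive error message in the comments asks for). This assumes a HiveContext as in the question's setup:

# One reading of the answer's advice; table and column names come from the question
query = """
SELECT issue_id,
       IF(COALESCE(LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC), '') = '',
          'to',
          LEAD(`from`, 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC)) AS Status,
       created AS StartDate,
       LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC) AS EndDate
FROM (
    SELECT issue_id, created, field, `from`, `to`, field_sequence
    FROM tab1
    WHERE COALESCE(LOWER(field), '') = 'status'
    UNION ALL
    SELECT issue_id, updated_date AS created, '' AS field,
           '' AS `from`, '' AS `to`, 0 AS field_sequence
    FROM tab2
) hc
WHERE hc.issue_id = '123'
"""
df = sqlContext.sql(query)  # assumes a HiveContext, as in the sketch above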

You should break up your computation so that you don't have nested select clauses; they are inefficient.
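
A sketch of the same logic broken up with the DataFrame API instead of one nested query, again assuming Spark 1.6 with a HiveContext; table and column names are taken from the question:

from pyspark import SparkContext
from pyspark.sql import HiveContext, Window
from pyspark.sql import functions as F

sc = SparkContext(appName="impala-query-job")  # hypothetical app name
sqlContext = HiveContext(sc)  # window functions need Hive support in Spark 1.x

# Status-change rows from tab1
status = (sqlContext.table("tab1")
          .where(F.coalesce(F.lower(F.col("field")), F.lit("")) == "status")
          .select("issue_id", "created", "field", "from", "to", "field_sequence"))

# Synthetic terminal rows from tab2, matching the column layout of `status`
updates = sqlContext.table("tab2").select(
    "issue_id",
    F.col("updated_date").alias("created"),
    F.lit("").alias("field"),
    F.lit("").alias("from"),
    F.lit("").alias("to"),
    F.lit(0).alias("field_sequence"))

w = Window.partitionBy("issue_id").orderBy("created", "field_sequence")

result = (status.unionAll(updates)
          .where(F.col("issue_id") == "123")
          .withColumn("EndDate", F.lead("created", 1).over(w))
          # a null EndDate means there is no following row, i.e. the final state
          .withColumn("Status", F.when(F.col("EndDate").isNull(), "to")
                                 .otherwise(F.lead("from", 1).over(w)))
          .withColumnRenamed("created", "StartDate")
          .select("issue_id", "Status", "StartDate", "EndDate"))
result.show()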

+0

I don't get the first comment about AS. Could you add your suggested query to the response? That would help.. –

+0

Can you give me a reproducible example (just print out the first rows of 'tab1')? – MaFF