2016-08-02

I am running Spark version 1.6 and experimenting with remote data processing in Spark. After fetching data from a remote database over JDBC, I created Spark DataFrames and saved them as temporary tables with the registerTempTable('') method. Up to that point it works. But when I run a query in the Spark context I get this error: Spark SQL execution failed. Getting java.lang.RuntimeException: [1.227] failure: ``union'' expected but `.' found

Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/home/ubuntu/spark-1.6.2-bin-hadoop2.6/python/pyspark/sql/context.py", line 580, in sql 
    return DataFrame(self._ssql_ctx.sql(sqlQuery), self) 
    File "/home/ubuntu/spark-1.6.2-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__ 
    File "/home/ubuntu/spark-1.6.2-bin-hadoop2.6/python/pyspark/sql/utils.py", line 45, in deco 
    return f(*a, **kw) 
    File "/home/ubuntu/spark-1.6.2-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o21.sql. 
: java.lang.RuntimeException: [1.227] failure: ``union'' expected but `.' found 

I am using pyspark at the command prompt; here is my code:

from pyspark.sql import SQLContext 
sqlContext = SQLContext(sc) 

df = sqlContext.read.format('jdbc').options( 
    url='jdbc:sqlserver://<ipaddress>;user=xyz;password=pw', 
    dbtable='JOURNAL' 
).load() 
df.registerTempTable('JOURNAL') 

df = sqlContext.read.format('jdbc').options(
    url='jdbc:sqlserver://<ipaddress>;user=xyz;password=pw', 
    dbtable='GHIS' 
).load() 
df.registerTempTable('GHIS') 

df = sqlContext.read.format('jdbc').options(
    url='jdbc:sqlserver://<ipaddress>;user=xyz;password=pw', 
    dbtable='LEAS' 
).load() 
df.registerTempTable('LEAS') 
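The three near-identical read/register blocks above can be driven from a single helper so the connection URL is written only once; a minimal sketch (jdbc_opts is a hypothetical name, not a Spark API, and the Spark calls are shown commented because they need a live SparkContext):

```python
# Hypothetical helper: build the JDBC options dict for one table, so the
# connection string (note the required '//' after 'jdbc:sqlserver:') lives
# in exactly one place.
def jdbc_opts(ip, user, password, table):
    return {
        'url': 'jdbc:sqlserver://{0};user={1};password={2}'.format(ip, user, password),
        'dbtable': table,
    }

# In the pyspark shell the three blocks would collapse to:
# for table in ('JOURNAL', 'GHIS', 'LEAS'):
#     sqlContext.read.format('jdbc') \
#         .options(**jdbc_opts('<ipaddress>', 'xyz', 'pw', table)) \
#         .load().registerTempTable(table)
opts = jdbc_opts('<ipaddress>', 'xyz', 'pw', 'JOURNAL')
```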

Up to this point everything works and the data loads.

Now, here is where I have the problem:

doubtaccount = sqlContext.sql("SELECT ENTITYID as EntityID,SUBSTRING(DESCRPN,1,CHARINDEX('-',DESCRPN,1)-1) as BldgID,SUBSTRING(DESCRPN,CHARINDEX('-',DESCRPN,1)+1,20) as LeaseID,PERIOD*100+15 as TxnDateInt,PERIOD as Period,0-AMT as BDAmt FROM BI_Staging.dbo.JOURNAL where SOURCE = 'DA' and ACCTNUM = 'RE078201000' and STATUS = 'P' ") 

When I run this query I get the error above. I searched Stack Overflow for similar errors but did not find any. Is there something wrong with my query? It actually works in the database.

Answer


EDIT

When you do df.registerTempTable('JOURNAL'), you make the DataFrame available to the sqlContext under the name JOURNAL, so your query must access it by that name:

doubtaccount = sqlContext.sql("SELECT ... FROM JOURNAL where ... ") 

Note, however, that this query will be parsed by Spark, not by the database, so the syntax has to be Hive-compatible.
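Concretely, the three-part name BI_Staging.dbo.JOURNAL is what triggers the ``union'' expected but `.' found parse error, and T-SQL's CHARINDEX is not a HiveQL function. A sketch of the failing query rewritten against the registered temp table, assuming Hive's instr(str, substr) is available (it returns the 1-based position, like CHARINDEX with the arguments swapped):

```python
# Sketch: the failing query rewritten in Hive-compatible syntax.
# CHARINDEX('-', DESCRPN, 1) becomes instr(DESCRPN, '-'), and the FROM clause
# names the registered temp table JOURNAL, not the three-part SQL Server name.
spark_query = (
    "SELECT ENTITYID AS EntityID, "
    "SUBSTRING(DESCRPN, 1, instr(DESCRPN, '-') - 1) AS BldgID, "
    "SUBSTRING(DESCRPN, instr(DESCRPN, '-') + 1, 20) AS LeaseID, "
    "PERIOD * 100 + 15 AS TxnDateInt, "
    "PERIOD AS Period, "
    "0 - AMT AS BDAmt "
    "FROM JOURNAL "
    "WHERE SOURCE = 'DA' AND ACCTNUM = 'RE078201000' AND STATUS = 'P'"
)
# doubtaccount = sqlContext.sql(spark_query)  # run inside an active pyspark shell
```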

If instead you want the database to execute your query before the data is loaded into Spark, you can do that inside the dbtable option:

query = "(SELECT ... FROM BI_Staging.dbo.JOURNAL where ...) AS JOURNAL" 

df = sqlContext.read.format('jdbc').options(
    url = 'jdbc:sqlserver://<ipaddress>;user=xyz;password=pw', 
    dbtable = query 
).load() 

df.registerTempTable('JOURNAL')
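The wrapping pattern above can be factored into a tiny helper; a minimal sketch (as_subquery is a hypothetical name, not a Spark API):

```python
# Hypothetical helper: wrap a raw SQL query so it can be passed as the JDBC
# 'dbtable' option. The database executes the inner query; Spark only sees
# the alias, as if it were a table.
def as_subquery(query, alias):
    return "({0}) AS {1}".format(query.strip(), alias)

q = as_subquery("SELECT 1 AS x", "T")
# q == "(SELECT 1 AS x) AS T"
```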