I have a table named PHAR in Hive. When I try to create a view/table from it through spark-submit (Spark version 1.6), I get the error below. The query runs fine in the Hive/Beeline shell.

User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve 'ph.pharmacy_id' given input columns: []; line 1 pos
The query that causes the error:
create view if not exists V_TMP as
select ph.pharmacy_id
, ph.pharmacy_nm
, ph.pharmacy_addr_line_1_txt
, ph.pharmacy_addr_line_2_txt
, ph.pharmacy_addr_line_3_txt
, ph.pharmacy_addr_line_4_txt --optional
from PHAR ph ;
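
For context, the statement is issued from a small Spark driver along these lines (a minimal sketch, not the actual job; the object and app names are placeholders, and it assumes a HiveContext, which Spark 1.6 needs in order to see Hive metastore tables):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object CreateViewJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CreateViewJob"))
    // HiveContext (not a plain SQLContext) is what exposes
    // Hive metastore tables such as PHAR to Spark SQL in 1.6
    val hc = new HiveContext(sc)
    hc.sql(
      """create view if not exists V_TMP as
        |select ph.pharmacy_id
        |     , ph.pharmacy_nm
        |     , ph.pharmacy_addr_line_1_txt
        |     , ph.pharmacy_addr_line_2_txt
        |     , ph.pharmacy_addr_line_3_txt
        |     , ph.pharmacy_addr_line_4_txt --optional
        |from PHAR ph""".stripMargin)
  }
}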
Table description:
hive> desc PHAR ;
+---------------------------+---------------+----------+--+
| col_name | data_type | comment |
+---------------------------+---------------+----------+--+
| pharmacy_id | varchar(510) | |
| pharmacy_nm | varchar(205) | |
| pharmacy_addr_line_1_txt | varchar(200) | |
| pharmacy_addr_line_2_txt | varchar(50) | |
| pharmacy_addr_line_3_txt | varchar(50) | |
| pharmacy_addr_line_4_txt | varchar(50) | |
+---------------------------+---------------+----------+--+
Interestingly, it cannot resolve any column at all: given input columns: [] is an empty list. I searched Google/Stack Overflow for similar issues, but in those cases at least some columns were listed.
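
To illustrate that check (a sketch using the same assumed HiveContext as above, not code from the original job), asking Spark directly what it resolves for PHAR would show whether it sees any columns at all:

// Show the schema Spark itself resolves for PHAR from the running context;
// an empty result here would match the empty "input columns: []" in the error.
hc.sql("describe PHAR").show()
hc.table("PHAR").printSchema()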