2017-07-15

java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.OrcStruct cannot be cast to org.apache.hadoop.io.Text (JSON SerDe error)

I am new to working with JSON data in Hive. I am developing a Spark application that fetches JSON data and stores it into a Hive table. I have a JSON like this:

Json of Jsons

It looks like this when expanded:

hierarchy

I am able to read the JSON into a DataFrame and save it to an HDFS location. But reading the data back out is proving very difficult.

After searching for examples online, I tried the following:

Use a STRUCT for every JSON field, then access the elements with column.element.

For example:

web_app_security would be a column in the table (of type STRUCT), and the nested JSON objects inside it, such as config_web_cms_authentication and web_threat_intel_alert_external, would also be STRUCTs (with rating and rating_numeric as fields).

I tried creating the table with the JSON SerDe. Here is my table definition:

CREATE EXTERNAL TABLE jsons (
web_app_security struct<config_web_cms_authentication: struct<rating: string, rating_numeric: float>, web_threat_intel_alert_external: struct<rating: string, rating_numeric: float>, web_http_security_headers: struct<rating: string, rating_numeric: float>, rating: string, rating_numeric: float>, 
dns_security struct<domain_hijacking_protection: struct<rating: string, rating_numeric: float>, rating: string, rating_numeric: float, dns_hosting_providers: struct<rating:string, rating_numeric: float>>, 
email_security struct<rating: string, email_encryption_enabled: struct<rating: string, rating_numeric: float>, rating_numeric: float, email_hosting_providers: struct<rating: string, rating_numeric: float>, email_authentication: struct<rating: string, rating_numeric: float>>, 
threat_intell struct<rating: string, threat_intel_alert_internal_3: struct<rating: string, rating_numeric: float>, threat_intel_alert_internal_1: struct<rating: string, rating_numeric: float>, rating_numeric: float, threat_intel_alert_internal_12: struct<rating: string, rating_numeric: float>, threat_intel_alert_internal_6: struct<rating: string, rating_numeric: float>>, 
data_loss struct<data_loss_6: struct<rating: string, rating_numeric: float>, rating: string, data_loss_36plus: struct<rating: string, rating_numeric: float>, rating_numeric: float, data_loss_36: struct<rating: string, rating_numeric: float>, data_loss_12: struct<rating: string, rating_numeric: float>, data_loss_24: struct<rating: string, rating_numeric: float>>, 
system_hosting struct<host_hosting_providers: struct<rating: string, rating_numeric: float>, hosting_countries: struct<rating: string, rating_numeric: float>, rating: string, rating_numeric: float>, 
defensibility struct<attack_surface_web_ip: struct<rating: string, rating_numeric: float>, shared_hosting: struct<rating: string, rating_numeric: float>, defensibility_hosting_providers: struct<rating: string, rating_numeric: float>, rating: string, rating_numeric: float, attack_surface_web_hostname: struct<rating: string, rating_numeric: float>>, 
software_patching struct<patching_web_cms: struct<rating: string, rating_numeric: float>, rating: string, patching_web_server: struct<rating: string, rating_numeric: float>, patching_vuln_open_ssl: struct<rating: string, rating_numeric: float>, patching_app_server: struct<rating: string, rating_numeric: float>, rating_numeric: float>, 
governance struct<governance_customer_base: struct<rating: string, rating_numeric: float>, governance_security_certifications: struct<rating: string, rating_numeric: float>, governance_regulatory_requirements: struct<rating: string, rating_numeric: float>, rating: string, rating_numeric: float> 
)ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' 
STORED AS orc 
LOCATION 'hdfs://nameservice1/data/gis/final/rr_current_analysis' 

I tried parsing the rows with the JSON SerDe. After saving some data into the table, I get the following error when I try to query it:

Error: java.io.IOException: java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.OrcStruct cannot be cast to org.apache.hadoop.io.Text (state=,code=0) 

I am not sure whether I am doing this the right way.

I am open to any other way of storing the data into the table. Any help would be appreciated. Thank you.

Answers


That is because you are mixing ORC as the storage format (STORED AS orc) with JSON as the SerDe (ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'). The SerDe clause overrides ORC's default SerDe (OrcSerde), but not the input format (OrcInputFormat) and output format (OrcOutputFormat). Hive therefore reads OrcStruct records from the ORC files and hands them to a SerDe that expects Text, hence the ClassCastException.

You either need to use ORC storage without overriding the default SerDe. In that case, make sure the Spark application writes ORC into the table location, not JSON.
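As a sketch of the first option (illustrative only, and abbreviated to a single struct column; the remaining columns from the original definition follow the same pattern), the ROW FORMAT SERDE clause is simply dropped so that ORC's default OrcSerde stays in effect:

```sql
-- Illustrative sketch: ORC storage with ORC's default SerDe (no ROW FORMAT SERDE override).
-- Only the first struct column from the original definition is shown.
CREATE EXTERNAL TABLE jsons (
  web_app_security struct<
    config_web_cms_authentication: struct<rating: string, rating_numeric: float>,
    web_threat_intel_alert_external: struct<rating: string, rating_numeric: float>,
    web_http_security_headers: struct<rating: string, rating_numeric: float>,
    rating: string,
    rating_numeric: float>
)
STORED AS orc
LOCATION 'hdfs://nameservice1/data/gis/final/rr_current_analysis';
```

The files under LOCATION must then actually be ORC, i.e. written from Spark as ORC rather than JSON.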

Or, if you want the data stored as JSON, use the JsonSerDe with plain text files as the storage format (STORED AS TEXTFILE).
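A sketch of the second option, again abbreviated to one column for brevity: the JsonSerDe is kept and the storage is switched to plain text, so each file under LOCATION must contain one JSON document per line (which is what Spark's JSON writer produces):

```sql
-- Illustrative sketch: JSON text files read through the OpenX JsonSerDe.
CREATE EXTERNAL TABLE jsons (
  web_app_security struct<
    config_web_cms_authentication: struct<rating: string, rating_numeric: float>,
    rating: string,
    rating_numeric: float>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS TEXTFILE
LOCATION 'hdfs://nameservice1/data/gis/final/rr_current_analysis';
```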


The Hive Developer Guide explains how SerDes and storage formats work together: https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-HiveSerDe


Thanks for the answer. I tried saving my DataFrame as a text file with 'df.rdd.saveAsTextFile("path")', but I get the error 'org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory already exists'. I do not know why it tries to create a new directory for every DataFrame instead of creating a new file in the given path. Is there a better way to save a DataFrame as a text file? Or could I save the DataFrame as CSV and write a proper table definition that reads the CSV file with a SerDe? @Sergey Khudyakov –


@HemanthAnnavarapu Take a look at 'df.write', in particular 'df.write.mode(SaveMode)'. I do not know why you are bringing up CSV files now, but I strongly recommend that you first read the Hive Developer Guide (linked in the answer) and the Spark DataFrame API documentation. The "better way" really depends on what you are trying to achieve, what kind of table you want in Hive, and so on. –
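A minimal sketch of what this comment suggests, in Scala (the path is taken from the table definition above; the Spark 1.5+ DataFrameWriter API is assumed):

```scala
import org.apache.spark.sql.SaveMode

// Overwrite mode replaces an existing output directory instead of
// failing with FileAlreadyExistsException.
df.write
  .mode(SaveMode.Overwrite)
  .json("hdfs://nameservice1/data/gis/final/rr_current_analysis")  // for the JsonSerDe + TEXTFILE table

// Or, for the all-ORC option, write ORC so the table's default OrcSerde can read it:
df.write
  .mode(SaveMode.Overwrite)
  .orc("hdfs://nameservice1/data/gis/final/rr_current_analysis")
```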


I am trying CSV because when I searched for saving as text, most of the examples refer to 'df.write.format("com.databricks.spark.csv").save("path")'. Since 'df.saveAsTextFile' did not work, I tried CSV. Is there any way to use a 'serde' on a 'csv' file? Or can I use the 'orc' format and override the 'input format'? @Sergey Khudyakov –
