I have a flat file that I read from S3A and then write back out to an S3A bucket in Parquet format, using a schema with multiple Int8 and String columns.
When I try to read this Parquet file back with `sqlContext.read.option("mergeSchema", "false").parquet("s3a://....")`,
I get the exception below (a duplicate-column error from Spark).
I have also tried reading the file with parquet-tools (the `schema` and `meta` options), but I get an unknown-command error.
Exception in thread "main" org.apache.spark.sql.AnalysisException: Duplicate column(s) : "Int8", "String" found, cannot save to parquet format;
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.checkConstraints(ParquetRelation.scala:190)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.dataSchema(ParquetRelation.scala:199)
at org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:561)
at org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:560)
at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
at org.apache.spark.sql.SQLContext.baseRelationToDataFrame(SQLContext.scala:395)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:267)
at org.apache.spark.sql.SQLContext.parquetFile(SQLContext.scala:1052)
:
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
How can I make sure the Parquet file is written correctly? Does anyone know how to fix this duplicate-column error?
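The error indicates the schema used when writing already contained duplicate column names; note that Spark's duplicate-column check is case-insensitive by default (controlled by `spark.sql.caseSensitive`), so `Int8` and `int8` would also collide. A minimal sketch of deduplicating column names before writing, in plain Python with hypothetical column names (no Spark required):

```python
from collections import Counter

def dedupe_columns(names):
    """Rename duplicate column names (compared case-insensitively,
    as Spark's check is) by appending a numeric suffix."""
    counts = Counter()
    out = []
    for name in names:
        key = name.lower()
        counts[key] += 1
        # First occurrence keeps its name; later ones get _1, _2, ...
        out.append(name if counts[key] == 1 else f"{name}_{counts[key] - 1}")
    return out

# Columns with duplicates, including a case-insensitive clash:
print(dedupe_columns(["Int8", "Int8", "int8", "String", "String"]))
# → ['Int8', 'Int8_1', 'int8_2', 'String', 'String_1']
```

In Spark itself the equivalent fix is to rename the clashing columns (e.g. with `withColumnRenamed`, or by passing a fresh list of names to `toDF`) before calling `df.write.parquet(...)`.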
Thanks for the quick reply. Unfortunately that did not help. –
Paste the complete code if possible –
I can't paste the whole code, but if it helps: I read from Parquet and write to S3A successfully, and I am trying to read back from S3A with `val engf = sqlContext.read.option("mergeSchema", "true").parquet("s3a://<path to parquet file>")` –