Spark partition pruning not working on 1.6.0
2017-03-06

I created partitioned parquet files on HDFS and built a Hive external table over them. When I query the table with a filter on the partition column, Spark scans all of the partition files instead of just the matching partition. We are on Spark 1.6.0.

Dataframe:

df = hivecontext.createDataFrame([ 
    ("class1", "Economics", "name1", None), 
    ("class2","Economics", "name2", 92), 
    ("class2","CS", "name2", 92), 
    ("class1","CS", "name1", 92) 
], ["class","subject", "name", "marks"]) 

Creating the parquet partitions:

hivecontext.setConf("spark.sql.parquet.compression.codec", "snappy") 
hivecontext.setConf("spark.sql.hive.convertMetastoreParquet", "false") 
df.write.parquet("/transient/testing/students", mode="overwrite", partitionBy="subject") 
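The external table DDL itself is not shown in the question; below is a minimal sketch of how such a table could be declared over the parquet output (column names are taken from the dataframe above, everything else is an assumption):

# Hypothetical DDL; the asker's actual CREATE TABLE statement is not shown.
hivecontext.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS vatmatching_stage.students (
        class STRING,
        name STRING,
        marks BIGINT
    )
    PARTITIONED BY (subject STRING)
    STORED AS PARQUET
    LOCATION '/transient/testing/students'
""")

# Make the metastore aware of the partition directories written above.
hivecontext.sql("MSCK REPAIR TABLE vatmatching_stage.students")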

Query:

df = hivecontext.sql('select * from vatmatching_stage.students where subject = "Economics"') 
df.show() 

+------+-----+-----+---------+ 
| class| name|marks| subject| 
+------+-----+-----+---------+ 
|class1|name1| 0|Economics| 
|class2|name2| 92|Economics| 
+------+-----+-----+---------+ 

df.explain(True) 

    == Parsed Logical Plan == 
    'Project [unresolvedalias(*)] 
    +- 'Filter ('subject = Economics) 
     +- 'UnresolvedRelation `vatmatching_stage`.`students`, None 

    == Analyzed Logical Plan == 
    class: string, name: string, marks: bigint, subject: string 
    Project [class#90,name#91,marks#92L,subject#89] 
    +- Filter (subject#89 = Economics) 
     +- Subquery students 
      +- Relation[class#90,name#91,marks#92L,subject#89] ParquetRelation: vatmatching_stage.students 

    == Optimized Logical Plan == 
    Project [class#90,name#91,marks#92L,subject#89] 
    +- Filter (subject#89 = Economics) 
     +- Relation[class#90,name#91,marks#92L,subject#89] ParquetRelation: vatmatching_stage.students 

    == Physical Plan == 
    Scan ParquetRelation: vatmatching_stage.students[class#90,name#91,marks#92L,subject#89] InputPaths: hdfs://dev4/transient/testing/students/subject=Art, hdfs://dev4/transient/testing/students/subject=Civil, hdfs://dev4/transient/testing/students/subject=CS, hdfs://dev4/transient/testing/students/subject=Economics, hdfs://dev4/transient/testing/students/subject=Music 

However, if I run the same query from the Hive browser, I can see that Hive does do partition pruning:

location hdfs://testing/students/subject=Economics 
name vatmatching_stage.students 
numFiles 1 
numRows -1 
partition_columns subject 
partition_columns.types string 
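For reference, this kind of partition-level metadata can also be pulled via HiveQL; a small sketch, assuming the same hivecontext session (whether these statements run through Spark 1.6's HiveContext or need the Hive CLI may depend on the build):

# List every partition the metastore knows about for the table.
hivecontext.sql("SHOW PARTITIONS vatmatching_stage.students").show(truncate=False)

# Show storage details (including the HDFS location) of one partition.
hivecontext.sql(
    "DESCRIBE FORMATTED vatmatching_stage.students PARTITION (subject='Economics')"
).show(50, truncate=False)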

Is this a Spark 1.6.0 limitation, or am I missing something here?

Answer


Found the root cause of this issue: the HiveContext used to query the table did not have spark.sql.hive.convertMetastoreParquet set to "false"; it was left at "true", the default value.

When I set it to "false", I can see that it uses partition pruning.
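A minimal sketch of the fix on the querying side, using the same session and table as above (the conf has to be set before the table is first resolved):

# Keep Spark from converting the Hive parquet table to its own
# ParquetRelation, so the metastore's partition info is honoured.
hivecontext.setConf("spark.sql.hive.convertMetastoreParquet", "false")

df = hivecontext.sql(
    'select * from vatmatching_stage.students where subject = "Economics"'
)

# The physical plan should now list only the subject=Economics input path.
df.explain(True)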


Is it a Spark bug? Via EXPLAIN I can see that, with it set to false, the query scans only the correct partition.