val temp = sqlContext.sql(s"SELECT A, B, C, (CASE WHEN (D) in (1,2,3) THEN ((E)+0.000)/60 ELSE 0 END) AS Z from TEST.TEST_TABLE") 
val temp1 = temp.map({ temp => ((temp.getShort(0), temp.getString(1)), (temp.getDouble(2), temp.getDouble(3))) })
    .reduceByKey((x, y) => ((x._1 + y._1), (x._2 + y._2)))

Instead of having the above code do the computation (the CASE evaluation) at the Hive layer, I would like the transformation to be done in Scala. How would I do that? (Converting a Hive SQL query to Spark.)

Can the same thing be done while filling the data in the map?


The 'withColumn' method is another approach, besides the 'map'-based approach proposed by sarvesh below.

Answers

import org.apache.spark.sql.Row // needed to construct the result rows

val temp = sqlContext.sql(s"SELECT A, B, C, D, E from TEST.TEST_TABLE")

val tempTransform = temp.map(row => {
    // CASE WHEN D IN (1,2,3) THEN E/60 ELSE 0, expressed in plain Scala
    val z = if (List[Double](1, 2, 3).contains(row.getDouble(3))) row.getDouble(4) / 60 else 0.0
    Row(row.getShort(0), row.getString(1), row.getDouble(2), z)
})
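Note that with a Spark 1.x SQLContext, 'DataFrame.map' returns an RDD[Row], which is why the pair-RDD operation 'reduceByKey' can be applied to the result in the next step.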

val temp1 = tempTransform.map({ temp => ((temp.getShort(0), temp.getString(1)), (temp.getDouble(2), temp.getDouble(3))) })
    .reduceByKey((x, y) => ((x._1 + y._1), (x._2 + y._2)))
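As an aside: if the computed column is kept in a DataFrame (for example via 'withColumn', as the next answer shows) rather than mapped to an RDD, the same pairwise sums can be written as a DataFrame aggregation. A minimal sketch, assuming a hypothetical DataFrame dfWithZ with columns A, B, C and the computed Z:

import org.apache.spark.sql.functions.sum

// DataFrame-native counterpart of the map/reduceByKey step above (sketch)
val aggregated = dfWithZ.groupBy("A", "B").agg(sum("C"), sum("Z"))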

You can use this syntax as well:

new_df = old_df.withColumn('target_column', udf(df.name)) 

as in this example:

val sqlContext = new SQLContext(sc) 
import sqlContext.implicits._ // for `toDF` and $"" 
import org.apache.spark.sql.functions._ // for `when` 

val df = sc.parallelize(Seq((4, "blah", 2), (2, "", 3), (56, "foo", 3), (100, null, 5))) 
    .toDF("A", "B", "C") 

val newDf = df.withColumn("D", when($"B".isNull or $"B" === "", 0).otherwise(1)) 

In your case, as described above, first execute the SQL, which gives you a DataFrame, like below:

val temp = sqlContext.sql(s"SELECT A, B, C, D, E from TEST.TEST_TABLE")

Then apply 'withColumn' with 'when'/'otherwise' (the CASE equivalent), or a Spark UDF if needed, calling Scala function logic instead of a Hive UDF.
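Putting this together for the table in the question, a minimal sketch (the column names come from the question's query; the Double types and the names withZ, zUdf, withZViaUdf are assumptions, and 'Column.isin' needs Spark 1.5+):

import org.apache.spark.sql.functions.{col, udf, when}

// Columns straight from the question's query; the CASE stays out of the SQL
val temp = sqlContext.sql(s"SELECT A, B, C, D, E from TEST.TEST_TABLE")

// Option 1: when/otherwise, the DataFrame equivalent of the CASE expression
val withZ = temp.withColumn("Z",
    when(col("D").isin(1, 2, 3), col("E") / 60).otherwise(0.0))

// Option 2: plain Scala logic wrapped in a Spark UDF (hypothetical names)
val zUdf = udf((d: Double, e: Double) =>
    if (Set(1.0, 2.0, 3.0).contains(d)) e / 60 else 0.0)
val withZViaUdf = temp.withColumn("Z", zUdf(col("D"), col("E")))

Either variant keeps the CASE logic in Spark/Scala, so the Hive layer only scans the raw columns.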