2017-06-12 68 views

How can I sum multiple columns in Spark? For example, in SparkR the following code gets the sum of one column, but when I try to get the sum of two columns of df, I get an error.

# Create SparkDataFrame 
df <- createDataFrame(faithful) 

# Use agg to sum total waiting times 
head(agg(df, totalWaiting = sum(df$waiting))) 
##This works 

# Use agg to sum total of waiting and eruptions 
head(agg(df, total = sum(df$waiting, df$eruptions))) 
##This doesn't work 

Code in either SparkR or PySpark would work.

Answers

0 votes
org.apache.spark.sql.functions.sum(Column e) 

Aggregate function: returns the sum of all values in the expression.

As you can see, sum takes only one column as input, so sum(df$waiting, df$eruptions) will not work. Since you want to sum numeric fields, you can do sum(df$waiting + df$eruptions) (in Scala, sum(df("waiting") + df("eruptions"))). If instead you want the sum of each column separately, use two aggregates: agg(df, sum(df$waiting), sum(df$eruptions)).
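Both approaches agree on the grand total, because summation distributes over addition: summing the elementwise sum of two columns equals adding the two per-column sums. A minimal plain-Python sketch of that arithmetic (ordinary lists stand in for the Spark columns; the values are made up):

```python
# Two toy "columns" standing in for df$waiting and df$eruptions.
waiting = [79, 54, 74]
eruptions = [3, 2, 4]

# Approach 1: add the columns element-wise, then sum the result
# (what sum(df$waiting + df$eruptions) computes in SparkR).
total_combined = sum(w + e for w, e in zip(waiting, eruptions))

# Approach 2: sum each column separately
# (what agg(df, sum(df$waiting), sum(df$eruptions)) returns).
total_waiting = sum(waiting)
total_eruptions = sum(eruptions)

print(total_combined)                   # 216
print(total_waiting + total_eruptions)  # 216
```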

2 votes

In PySpark you can do something like the following:

>>> from pyspark.sql import functions as F 
>>> df = spark.createDataFrame([("a",1,10), ("b",2,20), ("c",3,30), ("d",4,40)], ["col1", "col2", "col3"]) 
>>> df.groupBy("col1").agg(F.sum(df.col2+df.col3)).show() 
+----+------------------+
|col1|sum((col2 + col3))|
+----+------------------+
|   d|                44|
|   c|                33|
|   b|                22|
|   a|                11|
+----+------------------+
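For illustration only (plain Python, no Spark needed): the same per-group sum of col2 + col3 can be sketched with a dictionary accumulator, which is essentially what the groupBy/agg above computes.

```python
rows = [("a", 1, 10), ("b", 2, 20), ("c", 3, 30), ("d", 4, 40)]

# Accumulate sum(col2 + col3) per col1 group, mirroring
# df.groupBy("col1").agg(F.sum(df.col2 + df.col3)).
totals = {}
for col1, col2, col3 in rows:
    totals[col1] = totals.get(col1, 0) + (col2 + col3)

print(totals)  # {'a': 11, 'b': 22, 'c': 33, 'd': 44}
```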
1 vote

SparkR code:

library(SparkR) 
df <- createDataFrame(sqlContext, faithful) 
w <- list(agg(df, sum(df$waiting)), agg(df, sum(df$eruptions))) 
head(w[[1]]) 
head(w[[2]]) 
0 votes

For PySpark, if you don't want to explicitly type out the columns:

from operator import add 
from functools import reduce 
from pyspark.sql import functions as F 

# numeric_col_list is a list of the column names to sum, e.g. ["col2", "col3"] 
new_df = df.withColumn('total', reduce(add, [F.col(x) for x in numeric_col_list]))
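The reduce(add, ...) pattern folds a list of per-column expressions into a single sum. A plain-Python sketch of the same fold, using ordinary numbers in place of Spark Column expressions (values are made up):

```python
from functools import reduce
from operator import add

# One row's values for the columns being summed (stand-ins for F.col(x)).
row_values = [1, 10, 100]

# reduce(add, [a, b, c]) computes ((a + b) + c), i.e. the row-wise total.
total = reduce(add, row_values)
print(total)  # 111
```

With Spark Columns, the same fold builds the expression col2 + col3 + ..., which withColumn then evaluates per row.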