
pySpark: how to access the values of the tuple in a (key, tuple) RDD (Python)

I am trying to access the values contained in a PipelineRDD. Here is what I am starting with:

1. RDD = (key, code, value)

data = [(11720, (u'I50800', 0.08229813664596274)), (11720, (u'I50801', 0.03076923076923077))] 

2. I need to group it by the first value and turn it into (key, tuple), where tuple = (code, value) (a sketch of this step follows the expected output below):

testFeatures = lab_FeatureTuples = labEvents.select('ITEMID', 'SUBJECT_ID', 'NORM_ITEM_CNT') \
    .orderBy('SUBJECT_ID', 'ITEMID') \
    .rdd.map(lambda (ITEMID, SUBJECT_ID, NORM_ITEM_CNT): (SUBJECT_ID, (ITEMID, NORM_ITEM_CNT))) \
    .groupByKey()

testFeatures = [(11720, [(u'I50800', 0.08229813664596274), (u'I50801', 0.03076923076923077)])] 
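
For reference, a minimal sketch of that grouping step on the sample pairs above; mapValues(list) is only there to turn the ResultIterable returned by groupByKey into a printable list:

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# sample (SUBJECT_ID, (ITEMID, NORM_ITEM_CNT)) pairs from the question
data = [(11720, (u'I50800', 0.08229813664596274)),
        (11720, (u'I50801', 0.03076923076923077))]

# group by the key and materialise the grouped values as a plain list
grouped = sc.parallelize(data).groupByKey().mapValues(list)
print(grouped.collect())
# [(11720, [(u'I50800', 0.0822981...), (u'I50801', 0.0307692...)])]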

From tuple = (code, value), I would like to get the following:

Create a SparseVector out of it, so I can use it for an SVM model.

result.take(1)
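
One possible sketch of that last step, building a SparseVector per key from the grouped (code, value) pairs; the code_index mapping below is hypothetical and would normally be built from the full set of distinct ITEMIDs:

from pyspark import SparkContext
from pyspark.mllib.linalg import SparseVector

sc = SparkContext.getOrCreate()

data = [(11720, (u'I50800', 0.08229813664596274)),
        (11720, (u'I50801', 0.03076923076923077))]
grouped = sc.parallelize(data).groupByKey().mapValues(list)

# hypothetical mapping from ITEMID codes to vector positions
code_index = {u'I50800': 0, u'I50801': 1}
num_features = len(code_index)

def to_sparse_vector(code_value_pairs):
    # SparseVector expects its indices in ascending order
    pairs = sorted((code_index[code], value) for code, value in code_value_pairs)
    return SparseVector(num_features,
                        [i for i, _ in pairs],
                        [v for _, v in pairs])

features = grouped.mapValues(to_sparse_vector)
print(features.take(1))
# expect one (11720, SparseVector(...)) pair

The resulting (key, SparseVector) pairs could then be wrapped in LabeledPoint objects for pyspark.mllib's SVM, assuming the labels come from elsewhere.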


Please reformat your code –

Answer


Here is one way to do it:

import pyspark 
import pyspark.sql.functions as sf 
import pyspark.sql.types as sparktypes 
sc = pyspark.SparkContext() 
sqlc = pyspark.SQLContext(sc) 

data = [(11720, (u'I50800', 0.08229813664596274)), 
     (11720, (u'I50801', 0.03076923076923077))] 
rdd = sc.parallelize(data) 

df = sqlc.createDataFrame(rdd, ['idx', 'tuple']) 
df.show() 

which gives:

+-----+--------------------+ 
|  idx|               tuple| 
+-----+--------------------+ 
|11720|[I50800,0.0822981...| 
|11720|[I50801,0.0307692...| 
+-----+--------------------+ 

Now define pyspark user-defined functions (UDFs):

extract_tuple_0 = sf.udf(lambda x: x[0], returnType=sparktypes.StringType()) 
extract_tuple_1 = sf.udf(lambda x: x[1], returnType=sparktypes.FloatType()) 
df = df.withColumn('tup0', extract_tuple_0(sf.col('tuple'))) 

df = df.withColumn('tup1', extract_tuple_1(sf.col('tuple'))) 
df.show() 

which gives:

+-----+--------------------+------+----------+ 
|  idx|               tuple|  tup0|      tup1| 
+-----+--------------------+------+----------+ 
|11720|[I50800,0.0822981...|I50800|0.08229814| 
|11720|[I50801,0.0307692...|I50801|0.03076923| 
+-----+--------------------+------+----------+
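
If the inferred struct column keeps Spark's default field names _1 and _2 (an assumption about how createDataFrame names the fields of a nested tuple), the same two columns can also be extracted without UDFs:

df2 = df.select('idx',
                sf.col('tuple._1').alias('tup0'),  # the code
                sf.col('tuple._2').alias('tup1'))  # the value
df2.show()

Keeping the extraction in built-in column expressions avoids serialising every row through a Python UDF.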