
How do I zip two RDDs in PySpark? I have been trying to merge the two RDDs below, averagePoints1 and kpoints2, but it keeps throwing this error:

ValueError: Can not deserialize RDD with different number of items in pair: (2, 1) 

I have tried many things, but I cannot get it to work even though the two RDDs have the same number of elements and the same number of partitions. My next step is to apply a Euclidean distance function to the two lists to measure the difference, so if anyone knows how to resolve this error, or a different approach I could follow, I would really appreciate it (a zipWithIndex-and-join sketch follows the data below).

Thanks in advance.

averagePoints1 = averagePoints.map(lambda x: x[1]) 
averagePoints1.collect() 
Out[15]: 
[[34.48939954847243, -118.17286894440112], 
[41.028994230117945, -120.46279399895184], 
[37.41157578999635, -121.60431843383599], 
[34.42627845075509, -113.87191272382309], 
[39.00897622397381, -122.63680410846844]] 

kpoints2 = sc.parallelize(kpoints, 4)
kpoints2.collect()
Out[17]:
[[34.0830381107, -117.960562808],
 [38.8057258629, -120.990763316],
 [38.0822414157, -121.956922473],
 [33.4516748053, -116.592291648],
 [38.1808762414, -122.246825578]]
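
One way to sidestep zip()'s partition-alignment requirement altogether is to index both RDDs and join on the index. This is a minimal sketch, not the asker's code, assuming sc is an active SparkContext and the two RDDs hold the points shown above:

# Sketch: pair elements by position using zipWithIndex() + join(), which
# does not require the two RDDs to share identical partitioning the way
# zip() does.
indexedAvg = averagePoints1.zipWithIndex().map(lambda pi: (pi[1], pi[0]))
indexedK = kpoints2.zipWithIndex().map(lambda pi: (pi[1], pi[0]))

# Sort by index so the pairs come back in their original order.
pairs = indexedAvg.join(indexedK).sortByKey().values()  # (avg_point, k_point)

# Squared Euclidean distance between each pair of 2-D points.
distances = pairs.map(lambda ab: (ab[0][0] - ab[1][0]) ** 2
                                 + (ab[0][1] - ab[1][1]) ** 2)
print(distances.collect())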

Answers

newSample = newCenters.collect()        # new centers as a list
samples = list(zip(newSample, sample))  # sample => old centers
samples1 = sc.parallelize(samples)
# Python 3 removed tuple-parameter unpacking in lambdas,
# so the pair is indexed explicitly here.
totalDistance = samples1.map(lambda pair: distanceSquared(pair[0][1], pair[1]))

For anyone searching in the future, this is the solution I followed in the end.
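
The snippet above calls a distanceSquared helper that the answer never shows. This is a minimal sketch of one plausible implementation for 2-D points; the name and signature come from the call above, the body is an assumption:

# Hypothetical helper matching the call above: squared Euclidean distance
# between two 2-D points given as [x, y] lists.
def distanceSquared(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2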

a = [[34.48939954847243, -118.17286894440112],
     [41.028994230117945, -120.46279399895184],
     [37.41157578999635, -121.60431843383599],
     [34.42627845075509, -113.87191272382309],
     [39.00897622397381, -122.63680410846844]]
b = [[34.0830381107, -117.960562808],
     [38.8057258629, -120.990763316],
     [38.0822414157, -121.956922473],
     [33.4516748053, -116.592291648],
     [38.1808762414, -122.246825578]]

rdda = sc.parallelize(a) 
rddb = sc.parallelize(b) 
c = rdda.zip(rddb) 
print(c.collect()) 

This works because both RDDs are built by parallelizing lists of the same length with the same default partitioning, so every pair of corresponding partitions holds the same number of items, which is exactly what zip() requires. Also check this answer: Combine two RDDs in pyspark


kpoints2 comes from a sample of the RDD, and averagePoints is the RDD of average points. I will be writing a while loop that runs until convergence, so this solution won't help. Do you have any other ideas? –
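
For the while-loop-until-convergence the comment describes, here is a minimal driver-side sketch in the classic Spark k-means style. Everything in it is an assumption for illustration: data is an RDD of [x, y] points, centers starts as a plain Python list (e.g. kpoints2.collect()), convergeDist is a hypothetical threshold, and distanceSquared is the helper sketched above:

# Hypothetical k-means-style loop: reassign points, recompute centers,
# and stop once the centers move less than `convergeDist` in total.
# `data` is assumed to be an RDD of [x, y] points.
centers = kpoints2.collect()   # initial centers as a plain list
convergeDist = 0.01            # hypothetical convergence threshold
tempDist = float("inf")

while tempDist > convergeDist:
    # Tag each point with the index of its nearest current center.
    closest = data.map(lambda p: (
        min(range(len(centers)), key=lambda i: distanceSquared(p, centers[i])),
        (p, 1)))
    # Sum the points and counts per cluster, then average to get new centers.
    sums = closest.reduceByKey(lambda a, b:
        ([a[0][0] + b[0][0], a[0][1] + b[0][1]], a[1] + b[1]))
    newCenters = sums.mapValues(
        lambda s: [s[0][0] / s[1], s[0][1] / s[1]]).collect()

    # Total squared movement of the centers decides convergence.
    tempDist = sum(distanceSquared(centers[i], c) for (i, c) in newCenters)
    for (i, c) in newCenters:
        centers[i] = c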