
I have a txt file containing a dataset with 4 columns. The first column is a telephone number. I have to find the telephone numbers that are the same. How can I find these duplicate elements in PySpark?

... 
0544147,23,86,40.761650,29.940929 
0544147,23,104,40.768749,29.968599 
0538333,21,184,40.764679,29.929543 
05477900,21,204,40.773071,29.975010 
0561554,23,47,40.764694,29.927397 
0556645,24,6,40.821587,29.920273 
... 

and my code for reading the txt file is:

from pyspark import SparkContext 

sc = SparkContext() 
rdd_data = sc.textFile("dataset.txt") 

data1 = [] 

lines = rdd_data.collect() 
lines = [x.strip() for x in lines] 

for line in lines: 
    data1.append([float(x.strip()) for x in line.split(',')]) 

column0 = [row[0] for row in data1]  # first column extracted as a list 

So I don't know how I can find the duplicate telephone numbers in the first column. I am very new to PySpark and Python. Thanks in advance.

Answer

from pyspark import SparkContext 

sc = SparkContext() 
rdd_data = sc.textFile("dataset.txt") 

rdd_telephone_numbers = rdd_data.map(lambda line:line.split(",")).map(lambda line: int(line[0])) 
print (rdd_telephone_numbers.collect()) # [544147, 544147, 538333, 5477900, 561554, 556645] 

If you want a step-by-step explanation of the data transformation:

from pyspark import SparkContext 

sc = SparkContext() 
rdd_data = sc.textFile("dataset.txt") 

rdd_data_1 = rdd_data.map(lambda line: line.split(",")) 
# this will transform every row of your dataset 
# you had these data in your dataset: 
# 0544147,23,86,40.761650,29.940929 
# 0544147,23,104,40.768749,29.968599 
# ........... 
# now you have a single RDD like this: 
# [[u'0544147', u'23', u'86', u'40.761650', u'29.940929'], [u'0544147', u'23', u'104', u'40.768749', u'29.968599'],....] 

rdd_telephone_numbers = rdd_data_1.map(lambda line: int(line[0])) 
# this will take only the first element of every line of the rdd, so now you have: 
# [544147, 544147, 538333, 5477900, 561554, 556645] 
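
The steps above extract the telephone-number column; the original question also asks how to find the numbers that are the same. A minimal sketch of that, assuming the same dataset.txt and counting occurrences with reduceByKey (the variable names below are my own, not from the answer):

from pyspark import SparkContext 

sc = SparkContext() 
rdd_data = sc.textFile("dataset.txt") 

# pair every telephone number with a count of 1, then sum the counts per number 
rdd_counts = (rdd_data 
              .map(lambda line: (int(line.split(",")[0]), 1)) 
              .reduceByKey(lambda a, b: a + b)) 

# keep only the numbers that appear more than once 
rdd_duplicates = rdd_counts.filter(lambda pair: pair[1] > 1) 
print(rdd_duplicates.collect())  # for the sample rows above: [(544147, 2)] 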

Thank you so much, that is exactly what I wanted :)) – donnie


Also, I want to use these columns for each row. I mean, if I want the fifth telephone number, I need to see the fifth number in the list. Can I turn the column into a list? I want to see the 5th number when I use rdd_telephone_numbers[5]. I hope I explained it well :) – donnie


RDDs do not support indexing, so you cannot retrieve an element with rdd_telephone_numbers[5]. You can do this: list_telephone_numbers = rdd_telephone_numbers.collect() and then take the fifth element as list_telephone_numbers[5]. But remember to use collect() only when the resulting array is small, because all of the data is loaded into the driver's memory. [See the documentation here](http://spark.apache.org/docs/2.1.0/api/python/pyspark.html) – titiro89
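
A short sketch of the collect() approach from the comment above, plus an alternative using zipWithIndex for larger data (the indexed variable name is just illustrative, not from the discussion):

# small result sets only: everything is pulled into the driver's memory 
list_telephone_numbers = rdd_telephone_numbers.collect() 
print(list_telephone_numbers[5])  # element at index 5 (indexing starts at 0) 

# for larger data, an RDD can be indexed without collecting it all 
indexed = rdd_telephone_numbers.zipWithIndex().map(lambda pair: (pair[1], pair[0])) 
print(indexed.lookup(5))  # returns a one-element list with the value at index 5 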
