2016-09-22

Pandas: comparing a column against all other columns of a dataframe

I have a scenario where new subjects are tested against a series of characteristics whose results are all categorical string values. Once testing is complete, I need to compare the new dataset against a master dataset of all subjects and find the similarities (matches) that hold above a given threshold (e.g. 90%).

So I need to be able to do a column-wise (subject-wise) comparison of every new subject column in the new dataset against every column in the master dataset, plus against every other column in the new dataset. Getting the best possible performance matters, because the production dataset has roughly 500,000 columns (and growing) and 10,000 rows.

Here is some sample code:

import pandas as pd

master = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                       'S1': ['AA', 'BB', 'AB'],
                       'S2': ['AB', '-', 'BB'],
                       'S3': ['AA', 'AB', '--']})
new = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                    'S4': ['AA', 'BB', 'AA'],
                    'S5': ['AB', '-', 'BB']})
new_master = pd.merge(master, new, on='Characteristic', how='inner')

def doComparison(comparison_df, new_columns, master_columns):
    summary_dict = {}
    row_cnt = comparison_df.shape[0]

    for new_col in new_columns:
        # don't compare the Characteristic column
        if new_col == 'Characteristic':
            continue
        print('Evaluating subject ' + new_col + ' for matches')
        summary_dict[new_col] = []
        new_data = comparison_df.loc[:, new_col]
        for master_col in master_columns:
            # don't compare same subject or Characteristic column
            if new_col != master_col and master_col != 'Characteristic':
                master_data = comparison_df.loc[:, master_col]
                is_same = (new_data == master_data) & (new_data != '--') & (master_data != '--')
                pct_same = is_same.sum() * 100 / row_cnt
                if pct_same > 90:
                    print('  Found potential match ' + master_col + ' ' + str(pct_same) + ' pct')
                    summary_dict[new_col].append({'match': master_col, 'pct': pct_same})
    return summary_dict

result = doComparison(new_master, new.columns, master.columns) 

This works, but I would like to make it more efficient and performant, and I am not sure exactly how.

Answers


Consider the following adjustment, which runs a list comprehension to build all combinations of the two dataframes' column names and then iterates over them looking for matches above the 90% threshold.

# LIST COMPREHENSION (TUPLE PAIRS) LEAVES OUT CHARACTERISTIC (FIRST COL) AND SAME NAMED COLS
columnpairs = [(i, j) for i in new.columns[1:] for j in master.columns[1:] if i != j]

# DICTIONARY COMPREHENSION TO INITIALIZE DICT OBJ
summary_dict = {col: [] for col in new.columns[1:]}

for i, j in columnpairs:
    is_same = (new['Characteristic'] == master['Characteristic']) & \
              (new[i] == master[j]) & (new[i] != '--') & (master[j] != '--')
    pct_same = sum(is_same) * 100 / len(master)

    if pct_same > 90:
        summary_dict[i].append({'match': j, 'pct': pct_same})

print(summary_dict)
# {'S4': [], 'S5': [{'match': 'S2', 'pct': 100.0}]}

Another option:

import numpy as np 
import pandas as pd 
from sklearn.utils.extmath import cartesian 

Leverage sklearn's cartesian function (note: this helper may no longer be available in newer scikit-learn releases; itertools.product yields the same pairs):

col_combos = cartesian([ new.columns[1:], master.columns[1:]]) 
print (col_combos) 

[['S4' 'S1'] 
['S4' 'S2'] 
['S4' 'S3'] 
['S5' 'S1'] 
['S5' 'S2'] 
['S5' 'S3']] 

Create a dictionary with a key for every column in new except Characteristic. Note that this looks like a waste of space; maybe only keep the ones with matches?

summary_dict = {c:[] for c in new.columns[1:]} #copied from @Parfait's answer 
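The "maybe only keep the ones with matches" idea above can be handled with one more dict comprehension once the loop finishes. A minimal sketch (the summary_dict literal below is just the example result from the first answer):

```python
# Example summary as produced by the loop; an empty list marks a subject
# that never crossed the threshold.
summary_dict = {'S4': [], 'S5': [{'match': 'S2', 'pct': 100.0}]}

# Keep only the subjects that found at least one match.
matches_only = {col: hits for col, hits in summary_dict.items() if hits}
print(matches_only)
# {'S5': [{'match': 'S2', 'pct': 100.0}]}
```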

Pandas/NumPy makes it easy to compare two Series.
For example:

print (new_master['S4'] == new_master['S1']) 

0  True 
1  True 
2 False 
dtype: bool 

Now we iterate through the column combos and count the Trues with the help of numpy's count_nonzero(). The rest is similar to what you already have.

for combo in col_combos:
    match_count = np.count_nonzero(new_master[combo[0]] == new_master[combo[1]])
    pct_same = match_count * 100 / len(new_master)
    if pct_same > 90:
        summary_dict[combo[0]].append({'match': combo[1], 'pct': match_count / len(new_master)})

print (summary_dict) 

{'S4': [], 'S5': [{'pct': 1.0, 'match': 'S2'}]} 

I would be curious to know how it performs. Good luck!
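Given the scale mentioned in the question (hundreds of thousands of columns), the Python-level loop over column pairs can itself become the bottleneck. As a further idea (my own sketch, not part of either answer above), NumPy broadcasting can compare every new column against every master column in a single vectorized pass; the sample frames mirror the ones from the question, and same-named columns would still need excluding if the two frames shared any:

```python
import numpy as np
import pandas as pd

master = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                       'S1': ['AA', 'BB', 'AB'],
                       'S2': ['AB', '-', 'BB'],
                       'S3': ['AA', 'AB', '--']})
new = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                    'S4': ['AA', 'BB', 'AA'],
                    'S5': ['AB', '-', 'BB']})

new_vals = new.iloc[:, 1:].to_numpy()        # shape (rows, n_new)
master_vals = master.iloc[:, 1:].to_numpy()  # shape (rows, n_master)

# Broadcast: element [r, i, j] is True when new col i equals master col j
# in row r and neither value is the missing marker '--'.
valid = (new_vals[:, :, None] != '--') & (master_vals[:, None, :] != '--')
same = (new_vals[:, :, None] == master_vals[:, None, :]) & valid
pct = same.sum(axis=0) * 100 / len(new)      # shape (n_new, n_master)

summary = {nc: [{'match': mc, 'pct': float(pct[i, j])}
                for j, mc in enumerate(master.columns[1:])
                if pct[i, j] > 90]
           for i, nc in enumerate(new.columns[1:])}
print(summary)
# {'S4': [], 'S5': [{'match': 'S2', 'pct': 100.0}]}
```

The trade-off is memory: the intermediate boolean array is rows x n_new x n_master, so at production scale the new columns would need to be processed in batches.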
