2017-09-17 131 views

Rank correlation in Python

My data is a set of n observed pairs along with their frequencies, i.e., each pair (x_i, y_i) comes with a number k_i of times (x_i, y_i) was observed. Ideally, I would like to compute both Kendall's tau and Spearman's rho for the set of all copies of these pairs, which consists of k_1 + k_2 + ... + k_n pairs. The problem is that k_1 + k_2 + ... + k_n, the total number of observations, is huge, so such a data structure would not fit in memory.

Naturally, I thought of assigning the relative frequency of my i-th pair, k_i/(k_1 + k_2 + ... + k_n), as its weight and computing the rank correlations of the weighted set, but I could not find any tools for that. In the weighted rank correlation variants I have come across (e.g., scipy.stats.weightedtau), the weights represent the importance of ranks rather than of pairs, which is irrelevant to my cause. Pearson's r seems to have the weight option I need, but it does not serve my purpose, since x and y are nowhere near linearly related. I wonder whether I have missed some concept of a generalized correlation for weighted data points.

The only idea I have got so far is to scale k_1, k_2, ..., k_n down by some common factor c, so that the scaled number of copies of the i-th pair is [k_i/c] (here [] is the rounding operator, since the number of copies of each pair has to be an integer). By choosing c such that [k_1/c] + [k_2/c] + ... + [k_n/c] pairs fit into memory, we could then compute tau and rho for the resulting set. However, the k_i may differ by many orders of magnitude, so c can be significantly larger than some of the k_i, and rounding k_i/c may therefore cause a loss of information.
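A minimal sketch of this scaling idea (the frequencies and the factor c below are made up for illustration; note how the smallest frequency rounds to zero, so its pair is lost entirely):

```python
import numpy as np
from scipy import stats

x = np.array([1, 0, 1, 0, 1])
y = np.array([.67, .25, .75, .2, .6])
k = np.array([2_000_000, 4_000_000, 1, 3_000_000, 2_000_000])  # frequencies

c = 1000  # hypothetical common factor chosen so the expanded set fits in memory
copies = np.rint(k / c).astype(int)  # [k_i/c]; k_3 = 1 rounds to 0 and is lost
tau, p = stats.kendalltau(np.repeat(x, copies), np.repeat(y, copies))
```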

UPD: One can compute Spearman's rho, along with p-values, for a dataset with specified frequencies (weights) as follows:

import numpy as np
from scipy import stats


def frequency_pearsonr(data, frequencies):
    """
    Calculates Pearson's r between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
     2-D array with pairwise p-values
    """
    df = frequencies.sum() - 2
    Sigma = np.cov(data.T, fweights=frequencies)
    sigma_diag = Sigma.diagonal()
    Sigma_diag_pairwise_products = np.multiply.outer(sigma_diag, sigma_diag)
    # Calculate matrix with pairwise correlations.
    R = Sigma/np.sqrt(Sigma_diag_pairwise_products)
    # Calculate matrix with pairwise t-statistics. Main diagonal should
    # get 1/0 = inf.
    with np.errstate(divide='ignore'):
        T = R/np.sqrt((1 - R * R)/df)
    # Calculate matrix with pairwise p-values.
    P = 2 * stats.t.sf(np.abs(T), df)

    return R, P


def frequency_rank(data, frequencies):
    """
    Ranks 1-D data array, given the frequency of each value. Same
    values get same "averaged" ranks. Array with ranks is shaped to
    match the input data array.

    :param data: 1-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 1-D array with ranks
    """
    s = 0
    ranks = np.empty_like(data)
    # Compute rank for each unique value.
    for value in sorted(set(data)):
        index_grid = np.ix_(data == value)
        # Find total frequency of the value.
        frequency = frequencies[index_grid].sum()
        ranks[index_grid] = s + 0.5 * (frequency + 1)
        s += frequency

    return ranks


def frequency_spearmanrho(data, frequencies):
    """
    Calculates Spearman's rho between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
     2-D array with pairwise p-values
    """
    # Rank the columns.
    ranks = np.empty_like(data)
    for i, data_column in enumerate(data.T):
        ranks[:, i] = frequency_rank(data_column, frequencies)
    # Compute Pearson's r correlation and p-values on the ranks.
    return frequency_pearsonr(ranks, frequencies)


# Columns are variables and rows are observations, whose frequencies 
# are specified. 
data_col1 = np.array([1, 0, 1, 0, 1]) 
data_col2 = np.array([.67, .25, .75, .2, .6]) 
data_col3 = np.array([.1, .3, .8, .3, .2]) 
data = np.array([data_col1, data_col2, data_col3]).T 
frequencies = np.array([2, 4, 1, 3, 2]) 

# Same data, but with observations (rows) actually repeated instead of 
# their frequencies being specified. 
expanded_data_col1 = np.array([1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1]) 
expanded_data_col2 = np.array([.67, .67, .25, .25, .25, .25, .75, .2, .2, .2, .6, .6]) 
expanded_data_col3 = np.array([.1, .1, .3, .3, .3, .3, .8, .3, .3, .3, .2, .2]) 
expanded_data = np.array([expanded_data_col1, expanded_data_col2, expanded_data_col3]).T 

# Compute Spearman's rho for data in both formats, and compare. 
frequency_Rho, frequency_P = frequency_spearmanrho(data, frequencies) 
Rho, P = stats.spearmanr(expanded_data) 
print(frequency_Rho - Rho) 
print(frequency_P - P) 

The particular example above demonstrates that the two approaches produce identical correlations and identical p-values:

[[ 0.00000000e+00 0.00000000e+00 0.00000000e+00] 
[ 1.11022302e-16 0.00000000e+00 -5.55111512e-17] 
[ 0.00000000e+00 -5.55111512e-17 0.00000000e+00]] 
[[ 0.00000000e+00 -1.35525272e-19 4.16333634e-17] 
[ -9.21571847e-19 0.00000000e+00 -5.55111512e-17] 
[ 4.16333634e-17 -5.55111512e-17 0.00000000e+00]] 

To compute a weighted Spearman rank correlation coefficient, you could simply pre-rank the x and y values, then push those into 'pearsonr' (along with your weights) to get a weighted Spearman's rho out. – Paul


Not sure about the statistical validity of the following approach, but from a technical point of view you could simply wrap a (pre-computed) dictionary mapping ranks to normalized frequencies into a function and pass it as the 'weigher' to 'weightedtau'. – Paul


Let me get your question straight: k_1 + k_2 + ... + k_n pairs of observations is too large to fit in RAM. Could you compute the rank correlation of a random sample, increase the sample size, and repeat the process until the change in the estimated rank correlation falls below some threshold? –
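A sketch of that sampling scheme (the starting sample size, tolerance, and toy data are illustrative; observations are drawn with probabilities proportional to the frequencies, so the huge expanded set is never materialized):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.array([1, 0, 1, 0, 1])
y = np.array([.67, .25, .75, .2, .6])
k = np.array([2, 4, 1, 3, 2])
p = k / k.sum()  # sampling probabilities proportional to the frequencies

prev_rho, sample_size = None, 1000  # illustrative starting size
for _ in range(10):  # cap the doubling to bound memory use
    idx = rng.choice(len(x), size=sample_size, p=p)  # sample observations
    rho, _ = stats.spearmanr(x[idx], y[idx])
    if prev_rho is not None and abs(rho - prev_rho) < 0.01:  # tolerance
        break
    prev_rho, sample_size = rho, 2 * sample_size
```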

Answer


Computing Kendall's tau with the approach suggested by Paul works. You do not have to pass the indices of the sorted array as ranks; the indices of the unsorted array work just as well (as demonstrated in the weighted tau example below). The weights do not need to be normalized either.

Regular (unweighted) Kendall's tau (on the "expanded" dataset):

stats.kendalltau([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1], 
       [.25, .25, .25, .25, .2, .2, .2, .667, .667, .75, .6, .6]) 
KendalltauResult(correlation=0.7977240352174656, pvalue=0.0034446936330652677) 

Weighted Kendall's tau (with the numbers of occurrences as weights):

stats.weightedtau([1, 0, 1, 0, 1], 
        [.667, .25, .75, .2, .6], 
        rank=False, 
        weigher=lambda r: [2, 4, 1, 3, 2][r], 
        additive=False) 
WeightedTauResult(correlation=0.7977240352174656, pvalue=nan) 

For now, due to peculiarities of the weightedtau implementation, the p-value is never computed there. We could approximate the p-value with the scaling-down trick proposed originally, but I would highly appreciate other approaches. Making the algorithm's behavior depend on the amount of available memory looks painful to me.
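For what it's worth, a sketch of that p-value approximation via the scaling trick, on the toy data from the answer (c = 1 here, since the toy frequencies already fit in memory; for real data c would be chosen so that the rounded copies do):

```python
import numpy as np
from scipy import stats

x = np.array([1, 0, 1, 0, 1])
y = np.array([.667, .25, .75, .2, .6])
k = np.array([2, 4, 1, 3, 2])

c = 1  # assumed scaling factor; with huge k_i, pick c so sum([k_i/c]) fits in memory
copies = np.rint(k / c).astype(int)
tau, p = stats.kendalltau(np.repeat(x, copies), np.repeat(y, copies))
# tau matches the weightedtau result above; p approximates the missing p-value
```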