Matrix multiplication CUDA

I have been reading through several websites and even used NVIDIA's code as a guide, but I am still getting the wrong answer. The main function asks the user for the size, displays A and B, and then displays the resulting matrix C. But say I run a 2x2 matrix for both A and B; this is my sample output:

Matrix A 
0.000000 8.000000 
2.000000 2.000000 


Matrix B 
3.000000 1.000000 
5.000000 7.000000 


Matrix C (Results) 
0.000000 9.000000 
7.000000 4.000000 

However, this is not correct. It should be:

40.000 56.000 
16.000 16.000 
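
For reference, these expected values follow directly from the usual row-by-column products of A and B:

C[0][0] = 0*3 + 8*5 = 40    C[0][1] = 0*1 + 8*7 = 56 
C[1][0] = 2*3 + 2*5 = 16    C[1][1] = 2*1 + 2*7 = 16 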

I changed the values from decimals to whole numbers so it would be easier to check, and that is how I found it is incorrect. I do not understand why it is wrong, especially since I took the kernel straight from the code sample.

#ifndef _MATRIXMUL_KERNEL_H_ 
#define _MATRIXMUL_KERNEL_H_ 

#include <stdio.h> 

// Thread block size 
#define BLOCK_SIZE 16 
#define TILE_SIZE 16 



// CUDA Kernel 
__global__ void matrixMul(float* C, float* A, float* B, int wA, int wB) 
{ 
    // Block index 
    int bx = blockIdx.x; 
    int by = blockIdx.y; 

    // Thread index
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // Index of the first sub-matrix of A processed
    // by the block
    int aBegin = wA * BLOCK_SIZE * by;

    // Index of the last sub-matrix of A processed
    // by the block
    int aEnd = aBegin + wA - 1;

    // Step size used to iterate through the
    // sub-matrices of A
    int aStep = BLOCK_SIZE;

    // Index of the first sub-matrix of B processed
    // by the block
    int bBegin = BLOCK_SIZE * bx;

    // Step size used to iterate through the
    // sub-matrices of B
    int bStep = BLOCK_SIZE * wB;

    float Csub = 0;

    // Loop over all the sub-matrices of A and B
    // required to compute the block sub-matrix
    for (int a = aBegin, b = bBegin; a <= aEnd; a += aStep, b += bStep)
    {
        // Declaration of the shared memory array As
        // used to store the sub-matrix of A
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];

        // Declaration of the shared memory array Bs
        // used to store the sub-matrix of B
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load the matrices from global memory
        // to shared memory; each thread loads
        // one element of each matrix
        As[ty][tx] = A[a + wA * ty + tx];
        Bs[ty][tx] = B[b + wB * ty + tx];

        // Synchronize to make sure the matrices
        // are loaded
        __syncthreads();

        // Multiply the two matrices together;
        // each thread computes one element
        // of the block sub-matrix
        for (int k = 0; k < BLOCK_SIZE; ++k)
            Csub += As[ty][k] * Bs[k][tx];

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write the block sub-matrix to device memory;
    // each thread writes one element
    int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx;
    C[c + wB * ty + tx] = Csub;
} 

#endif // #ifndef _MATRIXMUL_KERNEL_H_ 

Host code:

    // perform the calculation
    // setup execution parameters
    dim3 threads(BLOCK_SIZE, BLOCK_SIZE); 
    dim3 grid(c.colSize/threads.x, c.rowSize/threads.y); 

    // execute the kernel 
    matrixMul<<< grid, threads >>>(deviceMatrixC, deviceMatrixA, deviceMatrixB, a.colSize, b.colSize); 

Thanks for your help,
Dan

The code you are using implicitly requires that the size of the matrices is a round multiple of the block size (16x16 in this case). A 2x2 matrix will not work. Try running with a 16x16 input and confirm the result. – talonmies 2012-01-11 06:53:35

Thank you, that solved my problem. Is only 16x16 allowed because of the block and tile size? – Dan 2012-01-12 18:07:26

Yes. The inner product computation processes a full tile width at a time without checking whether the memory accesses fall out of bounds. That is where the error comes from. – talonmies 2012-01-12 18:50:38

Answer

The code you are using implicitly requires that the size of the matrices is a round multiple of the block size (16x16 in this case). The inner product computation processes a full tile width at a time without checking whether the memory accesses fall out of bounds. For this reason, a 2x2 matrix will not work.

If you try running the kernel with a 16x16 input (for example, by padding your 2x2 case out to 16x16), you should be able to confirm the result.
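
As an illustration of the point above (this sketch is not part of the original answer), the same tiled kernel can be written so that it also handles sizes that are not multiples of BLOCK_SIZE, by zero-filling the shared-memory tiles at the matrix edges and guarding the final store. The kernel name matrixMulBounded and the extra parameter hA (the number of rows of A) are assumptions introduced for this sketch; they are not in the original code:

// Sketch only: a bounds-checked variant of the tiled kernel above.
// hA (rows of A) is an added parameter not present in the original signature.
__global__ void matrixMulBounded(float* C, float* A, float* B,
                                 int hA, int wA, int wB)
{
    __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
    __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int row = blockIdx.y * BLOCK_SIZE + ty;   // row of C this thread computes
    int col = blockIdx.x * BLOCK_SIZE + tx;   // column of C this thread computes

    float Csub = 0.0f;

    // Walk across the tiles of A (and down the tiles of B).
    for (int t = 0; t < (wA + BLOCK_SIZE - 1) / BLOCK_SIZE; ++t)
    {
        int aCol = t * BLOCK_SIZE + tx;
        int bRow = t * BLOCK_SIZE + ty;

        // Load one tile of each matrix, substituting 0 outside the bounds.
        As[ty][tx] = (row < hA && aCol < wA) ? A[row * wA + aCol] : 0.0f;
        Bs[ty][tx] = (bRow < wA && col < wB) ? B[bRow * wB + col] : 0.0f;
        __syncthreads();

        for (int k = 0; k < BLOCK_SIZE; ++k)
            Csub += As[ty][k] * Bs[k][tx];
        __syncthreads();
    }

    // Only threads that map to a real element of C write a result.
    if (row < hA && col < wB)
        C[row * wB + col] = Csub;
}

With a variant like this, the launch configuration would also need to round the grid size up rather than truncate it, for example:

    dim3 threads(BLOCK_SIZE, BLOCK_SIZE);
    dim3 grid((wB + BLOCK_SIZE - 1) / threads.x, (hA + BLOCK_SIZE - 1) / threads.y);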