2012-02-10 122 views
2

Waiting in C++0x multithreading

I've been playing with the new C++ standard. I wrote a test to observe the behavior of the scheduling algorithm and see what happens to the threads. Taking context-switch time into account, I expected the real wait time of a particular thread to be a bit longer than the value specified by the std::this_thread::sleep_for() function. But surprisingly, it is sometimes even shorter than the sleep time! I can't figure out why this happens, or what I'm doing wrong...

#include <iostream> 
#include <thread> 
#include <random> 
#include <vector> 
#include <functional> 
#include <math.h> 
#include <unistd.h> 
#include <sys/time.h> 

void heavy_job() 
{ 
    // here we're doing some kind of time-consuming job.. 
    int j=0; 
    while(j<1000) 
    { 
     int* a=new int[100]; 
     for(int i=0; i<100; ++i) 
      a[i] = i; 
     delete[] a; 
     for(double x=0;x<10000;x+=0.1) 
      sqrt(x); 
     ++j; 
    } 
    std::cout << "heavy job finished" << std::endl; 
} 

void light_job(const std::vector<int>& wait) 
{ 
    struct timeval start, end; 
    long utime, seconds, useconds; 
    std::cout << std::showpos; 
    for(std::vector<int>::const_iterator i = wait.begin(); 
     i!=wait.end();++i) 
    { 
     gettimeofday(&start, NULL); 
     std::this_thread::sleep_for(std::chrono::microseconds(*i)); 
     gettimeofday(&end, NULL); 
     seconds = end.tv_sec - start.tv_sec; 
     useconds = end.tv_usec - start.tv_usec; 
     utime = ((seconds) * 1000 + useconds/1000.0); 
     double delay = *i - utime*1000; 
     std::cout << "delay: " << delay/1000.0 << std::endl; 
    } 
} 

int main() 
{ 
    std::vector<int> wait_times; 
    std::uniform_int_distribution<unsigned int> unif; 
    std::random_device rd; 
    std::mt19937 engine(rd()); 
    std::function<unsigned int()> rnd = std::bind(unif, engine); 
    for(int i=0;i<1000;++i) 
     wait_times.push_back(rnd()%100000+1); // random sleep time between 1 and 100000 µs 
    std::thread heavy(heavy_job); 
    std::thread light(light_job,wait_times); 
    light.join(); 
    heavy.join(); 
    return 0; 
} 

Output on my Intel Core i5 machine:

..... 
delay: +0.713 
delay: +0.509 
delay: -0.008 // ! 
delay: -0.043 // !! 
delay: +0.409 
delay: +0.202 
delay: +0.077 
delay: -0.027 // ? 
delay: +0.108 
delay: +0.71 
delay: +0.498 
delay: +0.239 
delay: +0.838 
delay: -0.017 // also ! 
delay: +0.157 
+3

Do you think your timing code might be wrong? – 2012-02-10 16:10:40

Answer

3

Your timing code is causing truncation all over the place.

utime = ((seconds) * 1000 + useconds/1000.0); 
double delay = *i - utime*1000; 

Suppose your wait time was 888888 microseconds, and that you slept for exactly that long. seconds will be 0 and useconds will be 888888. Dividing by 1000.0 gives 888.888. Then you add 0*1000, still yielding 888.888. That then gets assigned to a long, giving you 888, and an apparent delay of 888.888 - 888 = 0.888.

You should update utime to actually store microseconds, so that you don't get the truncation, and also because, as the name suggests, the unit is microseconds, just like useconds. Something like:

long utime = seconds * 1000000 + useconds; 

You also have the delay computation backwards. Ignoring the effects of the truncation, it should be:

double delay = utime*1000 - *i; 
std::cout << "delay: " << delay/1000.0 << std::endl; 

The way you have it now, the positive delays you are printing are actually the result of the truncation, and the negative ones represent actual delays.

+6

Mmm... and once you've mastered using the error-prone `timeval`, use `std::chrono` clocks to measure elapsed time! They're easy, and when you subtract them you get `chrono::duration`s, which are nearly impossible to get wrong. If you're converting time units by hand, you're doing it wrong. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm#Clocks – 2012-02-10 16:45:07

+0

+1:I lol'd ... – 2012-02-10 16:45:32

+0

Oops... thanks! – 2012-02-10 17:29:16