TensorFlow multiplication with a constant performs slower than with tf.random

2017-12-27 357 views

I'm using TensorFlow for some non-DL computation, and I've run into a behaviour I don't understand. I'm benchmarking the multiplication of a square matrix by itself, tf.matmul(a, a), where the matrix is

  1. created with tf.constant, or
  2. randomly initialised on every run with tf.random_uniform

My expectation was that the first case would pay some overhead to transfer the initial data, 100 MB (a 5000x5000 matrix of float32), whereas the second case should run slightly slower because the random initialisation happens on every run.
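
That 100 MB figure is just the element count times the four bytes of a float32:

SIZE = 5000
print(SIZE * SIZE * 4 / 1e6)  # 5000 * 5000 values * 4 bytes each = 100.0 MB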

However, what I actually see is that the multiplication with the constant is considerably slower, even on consecutive runs within the same session.

Code

import tensorflow as tf 
import numpy as np 
from timeit import timeit 
import os 

os.environ["TF_CPP_MIN_LOG_LEVEL"]="2" # nospam 
SIZE = 5000 
NUM_RUNS = 10 

a = np.random.random((SIZE, SIZE)) 
_const_a = tf.constant(a, dtype=tf.float32, name="Const_A") 
_mul_const_a = tf.matmul(_const_a, _const_a, name="Mul_Const") 

_random_a = tf.random_uniform((SIZE, SIZE), dtype=tf.float32, name="Random_A") 
_mul_random_a = tf.matmul(_random_a, _random_a, name="Mul_Random") 

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as s: 
    # Run once to make sure everything is initialised 
    s.run((_const_a, _mul_const_a, _random_a, _mul_random_a)) 

    # timeit: fetch the .op rather than the tensor, so the 100 MB result
    # stays on the device instead of being copied back to the host
    print("TF with const\t", timeit(lambda: s.run(_mul_const_a.op), number=NUM_RUNS))
    print("TF with random\t", timeit(lambda: s.run(_mul_random_a.op), number=NUM_RUNS))

Output

Device mapping: 
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1 
Random_A/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/RandomUniform: (RandomUniform): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A: (Add): /job:localhost/replica:0/task:0/device:GPU:0 
Mul_Random: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0 
Mul_Const: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/max: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/min: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/shape: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
Const_A: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
TF with const 2.9953213009994215 
TF with random 0.513827863998813 

Answers


YMMV; I get the opposite result on my modest K1100M.

Device mapping: 
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Quadro K1100M, pci bus id: 0000:01:00.0, compute capability: 3.0 
Random_A/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/RandomUniform: (RandomUniform): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A: (Add): /job:localhost/replica:0/task:0/device:GPU:0 
Mul_Random: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0 
Mul_Const: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/max: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/min: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
Random_A/shape: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
Const_A: (Const): /job:localhost/replica:0/task:0/device:GPU:0 
TF with const 4.3167382130868175 
TF with random 9.889055849542306 

The first call to session.run() in TensorFlow is disproportionately expensive. If you want to benchmark, remember to call it repeatedly, as in the sketch below.
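
A minimal sketch of that pattern, reusing the session s, the ops and NUM_RUNS from the question's code: pay the first-run cost once outside the timer, then average over repeated calls.

# Untimed warm-up: the first run pays for graph pruning, optimisation
# and kernel initialisation, so keep it out of the measurement.
s.run(_mul_random_a.op)

# Steady-state timing: average over many repeated calls.
per_run = timeit(lambda: s.run(_mul_random_a.op), number=NUM_RUNS) / NUM_RUNS
print("seconds per run:", per_run)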

That said, in your case, unless you disable constant folding you will probably see almost no time spent in the constant case, because your graph will simply fetch the folded constant.
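
For completeness, one way to disable constant folding through the TF 1.x session config; a sketch, assuming a TensorFlow version recent enough (roughly 1.4+) that the RewriterConfig proto exposes the constant_folding toggle:

import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2

# opt_level=L0 switches off the classic GraphDef optimisations (constant
# folding, common-subexpression elimination); rewrite_options additionally
# switches off Grappler's constant-folding pass.
config = tf.ConfigProto(graph_options=tf.GraphOptions(
    optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0),
    rewrite_options=rewriter_config_pb2.RewriterConfig(
        constant_folding=rewriter_config_pb2.RewriterConfig.OFF)))

with tf.Session(config=config) as s:
    s.run(_mul_const_a.op)  # the constant MatMul now actually executes each run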