2017-09-25

Using mpirun -np X with TensorFlow: is X limited by the number of GPUs?

I want to use MPI with TensorFlow. For an example of such code, see this OpenAI baselines PPO code, which tells us to run the following command:

$ mpirun -np 8 python -m baselines.ppo1.run_atari 

I have a machine with one GPU (12 GB of memory) and TensorFlow 1.3.0 installed, running Python 3.5.3. When I run this command, I get the following error:

2017-09-24 17:29:12.975967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: TITAN X (Pascal) 
major: 6 minor: 1 memoryClockRate (GHz) 1.531 
pciBusID 0000:01:00.0 
Total memory: 11.90GiB 
Free memory: 11.17GiB 
2017-09-24 17:29:12.975990: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-09-24 17:29:12.975996: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y 
2017-09-24 17:29:12.976011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0) 
2017-09-24 17:29:12.987133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: TITAN X (Pascal) 
major: 6 minor: 1 memoryClockRate (GHz) 1.531 
pciBusID 0000:01:00.0 
Total memory: 11.90GiB 
Free memory: 11.17GiB 
2017-09-24 17:29:12.987159: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-09-24 17:29:12.987165: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y 
2017-09-24 17:29:12.987172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0) 
[2017-09-24 17:29:12,994] Making new env: PongNoFrameskip-v4 
2017-09-24 17:29:13.017845: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 
2017-09-24 17:29:13.022347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: TITAN X (Pascal) 
major: 6 minor: 1 memoryClockRate (GHz) 1.531 
pciBusID 0000:01:00.0 
Total memory: 11.90GiB 
Free memory: 104.81MiB 
2017-09-24 17:29:13.022394: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-09-24 17:29:13.022415: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y 
2017-09-24 17:29:13.022933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0) 
2017-09-24 17:29:13.026338: E tensorflow/stream_executor/cuda/cuda_driver.cc:924] failed to allocate 104.81M (109903872 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 

(This is only the first part of the error message, which is very long, but I think this opening section shows what matters most.)
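The numbers in the log are internally consistent: the first two ranks initialize the GPU while 11.17 GiB is still free, the third rank then sees only 104.81 MiB free, and the allocation that fails is exactly that remaining amount. A quick check of the arithmetic (the byte count is copied from the log above):

```python
# Failed allocation size reported in the CUDA_ERROR_OUT_OF_MEMORY line, in bytes.
failed_alloc_bytes = 109903872

# Convert to MiB (1 MiB = 2**20 bytes) and compare with the
# "Free memory: 104.81MiB" line printed just before the failure.
failed_alloc_mib = failed_alloc_bytes / 2**20
print(round(failed_alloc_mib, 2))  # → 104.81
```

So the earlier ranks have already claimed essentially all of the 11.90 GiB, and the later rank fails even when asking for the little that is left.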

However, the command runs fine if I use mpirun -np 1.

Searching online, I found a repository from Uber which says that "to run on a machine with 4 GPUs" you need to use:

$ mpirun -np 4 python train.py 

I just want to confirm that, for a TensorFlow program, the X in mpirun -np X is limited by the number of GPUs on the machine.

Answer


After reading more about MPI, I can confirm that the number of processes is indeed limited by the number of GPUs. Reasoning:

  • The mpirun -np X command runs X "copies" of the code, each with its own rank. See the documentation here
  • Each copy of the code needs its own GPU
  • TensorFlow only lets one program use a given GPU at a time. In other words, you cannot simultaneously run python tf_program1.py and python tf_program2.py if both use TensorFlow and each needs its own GPU on your machine.
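On a machine that does have several GPUs, the usual way to satisfy the last two points is for each rank to pin itself to a distinct device before TensorFlow initializes, typically by setting CUDA_VISIBLE_DEVICES from the MPI rank. A minimal sketch of that mapping (the helper names are mine, not from the baselines or Uber code):

```python
import os

def gpu_for_rank(rank, num_gpus):
    """Map an MPI rank to a GPU index, wrapping around when there
    are more ranks than GPUs (which, per the points above, leaves
    several ranks competing for the same device)."""
    return rank % num_gpus

def pin_gpu(rank, num_gpus):
    # Must be set before TensorFlow initializes, so that this process
    # only ever sees its assigned device (as /gpu:0).
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_for_rank(rank, num_gpus))

# With 4 GPUs and 4 ranks, each rank gets its own device:
print([gpu_for_rank(r, 4) for r in range(4)])  # → [0, 1, 2, 3]

# With 1 GPU and 8 ranks, every rank maps to device 0 and they
# fight over its memory, matching the error in the question:
print([gpu_for_rank(r, 1) for r in range(8)])  # → [0, 0, 0, 0, 0, 0, 0, 0]
```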

So it looks like I am forced to use a single process.