2017-10-10 · 102 views

I am trying to get the mnist_replica.py example working. Following the advice in this question (Distributed Tensorflow: CreateSession still waiting when the nodes differ), I am specifying device filters.

My code works when the ps and worker tasks are on the same node. When I put the ps task on node 1 and the worker task on node 2, I get "CreateSession still waiting".

For example:

Pseudo-distributed version (works!)

Terminal dump from node 1 (terminal 1, ps task):

node1 $ python mnist_replica.py --worker_hosts=node1:2223 --job_name=ps --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = ps 
task index = 0 
2017-10-10 11:09:16.637006: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222} 
2017-10-10 11:09:16.637075: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> node1:2223} 
2017-10-10 11:09:16.640114: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2222 
... 

Terminal dump from node 1 (terminal 2, worker task):

node1 $ python mnist_replica.py --worker_hosts=node1:2223 --job_name=worker --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = worker 
task index = 0 
2017-10-10 11:11:12.784982: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222} 
2017-10-10 11:11:12.785046: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2223} 
2017-10-10 11:11:12.787685: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2223 
Worker 0: Initializing session... 
2017-10-10 11:11:12.991784: I tensorflow/core/distributed_runtime/master_session.cc:998] Start master session 418af3aa5ce103a3 with config: device_filters: "/job:ps" device_filters: "/job:worker/task:0" allow_soft_placement: true 
Worker 0: Session initialization complete. 
Training begins @ 1507648273.272837 
1507648273.443305: Worker 0: training step 1 done (global step: 0) 
1507648273.454537: Worker 0: training step 2 done (global step: 1) 
... 

Two-node distributed version (does not work)

Terminal dump from node 2 (worker task):

node2 $ python mnist_replica.py --ps_hosts=node1:2222 --worker_hosts=node2:2222 --job_name=worker --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = worker 
task index = 0 
2017-10-10 10:51:13.303021: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> node1:2222} 
2017-10-10 10:51:13.303081: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2222} 
2017-10-10 10:51:13.308288: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2222 
Worker 0: Initializing session... 
2017-10-10 10:51:23.508040: I tensorflow/core/distributed_runtime/master.cc:209] CreateSession still waiting for response from worker: /job:ps/replica:0/task:0 
2017-10-10 10:51:33.508247: I tensorflow/core/distributed_runtime/master.cc:209] CreateSession still waiting for response from worker: /job:ps/replica:0/task:0 
... 

Both nodes run CentOS 7, TensorFlow r1.3, and Python 2.7.

Terminal dump from node 1 (ps task):

node1 $ python mnist_replica.py --worker_hosts=node2:2222 --job_name=ps --task_index=0 
Extracting /tmp/mnist-data/train-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/train-labels-idx1-ubyte.gz 
Extracting /tmp/mnist-data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/mnist-data/t10k-labels-idx1-ubyte.gz 
job name = ps 
task index = 0 
2017-10-10 10:54:27.419949: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222} 
2017-10-10 10:54:27.420064: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> node2:2222} 
2017-10-10 10:54:27.426168: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:2222 
... 

The nodes can reach each other over ssh, the hostnames are correct, and the firewall is disabled. What am I missing?

Are there any additional steps I need to take to make sure the nodes can talk to each other over gRPC? Thanks.

Answers


I think you had better check the ClusterSpec and Server parts. For example, check node1's and node2's IP addresses, the ports, the task indexes, and so on. I would like to give more specific advice, but it is hard without seeing the code. Thanks.
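To illustrate what to check, here is a minimal sketch (my assumption of what a mnist_replica.py-style script does with its flags, not the asker's exact code): the `--ps_hosts`/`--worker_hosts` values become a cluster map, and the device filters name only the ps job plus this worker's own task.

```python
# Sketch: how the command-line flags typically become a cluster map and
# device filters. Every task must be started with the SAME cluster map;
# a mismatched hostname, port, or task_index is a common cause of
# "CreateSession still waiting for response".
ps_hosts = "node1:2222".split(",")
worker_hosts = "node2:2222".split(",")
task_index = 0

cluster = {"ps": ps_hosts, "worker": worker_hosts}
device_filters = ["/job:ps", "/job:worker/task:%d" % task_index]

# In TensorFlow r1.3 these would feed:
#   server = tf.train.Server(tf.train.ClusterSpec(cluster),
#                            job_name="worker", task_index=task_index)
#   config = tf.ConfigProto(device_filters=device_filters,
#                           allow_soft_placement=True)
# which matches the "device_filters: ... allow_soft_placement: true"
# line visible in the session log above.
```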


Thank you very much for the reply. I have just edited my original question with the terminal dumps and system info. I would appreciate any help. – Sid


Hmm... if your implementation follows the code at the url you gave, it is hard to find a bug in the code itself. How about checking the network connection between node1 and node2? For example, check the ports, or ping node1... – jwl1993


I tested the mnist_replica.py code above on a server with two nodes, and it works fine. I don't think this is a bug in the code; you had better check other things. – jwl1993


The problem was a firewall blocking the ports. I disabled the firewall on all of the nodes in question, and the problem resolved itself!
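For anyone hitting the same symptom, a quick reachability check (my own sketch, not from the thread): from the worker node, verify that the ps task's gRPC port accepts TCP connections before suspecting the TensorFlow code.

```python
# Check whether a remote TCP port is reachable; "CreateSession still
# waiting" with an unreachable ps port points at the network/firewall,
# not the script.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run on node2: port_open("node1", 2222) should be True once the ps
# server has logged "Started server with target: grpc://localhost:2222".
```

On CentOS 7 the default firewalld zone drops unfamiliar inbound ports, so opening the gRPC port on each node (e.g. `firewall-cmd --add-port=2222/tcp`), or stopping firewalld as in the fix above, is the usual remedy.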
