
I want to write a basic neural network from scratch that learns the XOR function. The full code is here (in Python 3).

The error I currently get:

ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients 

I'm new to TensorFlow and I don't understand why this happens. Could anyone help me fix my code? Thanks in advance.

P.S. If the question needs more detail, please let me know before downvoting. Thanks again!

Edit: the relevant parts of the code:

import tensorflow as tf
from tensorflow.python.framework import ops  # for ops.reset_default_graph()

def initialize_parameters():
    # Create weights and biases for the hidden layer and the output layer
    W1 = tf.get_variable("W1", [2, 2], initializer=tf.contrib.layers.xavier_initializer())
    b1 = tf.get_variable("b1", [2, 1], initializer=tf.zeros_initializer())
    W2 = tf.get_variable("W2", [1, 2], initializer=tf.contrib.layers.xavier_initializer())
    b2 = tf.get_variable("b2", [1, 1], initializer=tf.zeros_initializer())
    parameters = {
        "W1": W1,
        "b1": b1,
        "W2": W2,
        "b2": b2
    }
    return parameters

def forward_propagation(X, parameters):

    threshold = tf.constant(0.5, name="threshold")  # currently unused
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]

    Z1 = tf.add(tf.matmul(W1, X), b1)
    A1 = tf.nn.relu(Z1)
    tf.squeeze(A1)  # no effect: tf.squeeze returns a new tensor, which is discarded here
    Z2 = tf.add(tf.matmul(W2, A1), b2)
    A2 = tf.round(tf.sigmoid(Z2))  # tf.round is the source of the error (see the answer)
    print(A2.shape)
    tf.squeeze(A2)  # no effect, as above
    A2 = tf.reshape(A2, [1, 1])
    print(A2.shape)
    return A2

def compute_cost(A, Y):

    logits = tf.transpose(A)
    labels = tf.transpose(Y)
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)
    return cost

def model(X_train, Y_train, X_test, Y_test, learning_rate=0.0001, num_epochs=1500):

    ops.reset_default_graph()
    (n_x, m) = X_train.shape
    n_y = Y_train.shape[0]
    costs = []
    X, Y = create_placeholders(n_x, n_y)  # defined elsewhere in the linked code
    parameters = initialize_parameters()
    A2 = forward_propagation(X, parameters)
    cost = compute_cost(A2, Y)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    init = tf.global_variables_initializer()

    with tf.Session() as session:
        session.run(init)
        for epoch in range(num_epochs):
            epoch_cost = 0
            _, epoch_cost = session.run([optimizer, cost], feed_dict={X: X_train, Y: Y_train})
        parameters = session.run(parameters)
        # Note: argmax along axis 0 of these single-row tensors is always 0
        correct_prediction = tf.equal(tf.argmax(A2), tf.argmax(Y))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Training Accuracy is {0} %...".format(accuracy.eval({X: X_train, Y: Y_train})))
        print("Test Accuracy is {0} %...".format(accuracy.eval({X: X_test, Y: Y_test})))
    return parameters

Please add the relevant parts of the code to the question itself rather than linking the whole code externally. –

Answer


The error is caused by the use of tf.round when you define A2 (a known issue, by the way).

For this particular task, the solution is simply not to use tf.round at all. Remember that tf.sigmoid outputs a value between 0 and 1, which can be interpreted as the probability of the result being 1. The cross-entropy loss function measures the distance to the target, 0 or 1, and computes the needed weight updates from that distance. Calling tf.round before the cross-entropy squashes the probability to either 0 or 1, which makes the cross-entropy meaningless.
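In code, a minimal sketch of the fixed forward pass (reusing the question's names; returning the raw Z2 and feeding it to the loss as logits is my assumption about the fix, not taken from the linked code):

def forward_propagation(X, parameters):
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]

    Z1 = tf.add(tf.matmul(W1, X), b1)
    A1 = tf.nn.relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)
    # Return raw logits; apply sigmoid/rounding only at prediction time,
    # outside the graph the optimizer differentiates.
    return Z2

Hard 0/1 predictions can still be recovered for evaluation with tf.round(tf.sigmoid(Z2)); they just must not sit between the variables and the loss.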

By the way, tf.losses.softmax_cross_entropy should work better here, since you already apply the sigmoid yourself in the second layer.
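Since the network has a single output unit, the binary counterpart of that suggestion would be tf.losses.sigmoid_cross_entropy, which applies the sigmoid internally and reduces to a scalar loss; a sketch under that assumption (the switch from the softmax to the sigmoid variant is mine):

def compute_cost(Z2, Y):
    # Z2 holds the raw logits from forward_propagation; the sigmoid is
    # applied inside the loss, so every variable receives a gradient.
    return tf.losses.sigmoid_cross_entropy(multi_class_labels=Y, logits=Z2)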


Thanks. It's working now, but it gives an accuracy of 1.0 % :/ Could you help me improve that? I'll mark the answer as correct. Thanks! –
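The reported 1.0 is likely an artifact of the accuracy check rather than real performance: tf.argmax along axis 0 of the single-row tensors A2 and Y is always 0, so the comparison is always true. A sketch of a binary accuracy check, assuming Z2 holds raw logits as in the sketches above:

predictions = tf.round(tf.sigmoid(Z2))  # threshold the probabilities at 0.5
correct = tf.equal(predictions, Y)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))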


Could you update the code in the question so that it can be reproduced? – Maxim