2016-11-17

I want to implement a simple feed-forward network, but I can't figure out how to feed data loaded from MATLAB into a placeholder. Here is my example:

import tensorflow as tf 
import numpy as np 
import scipy.io as scio 
import math 

# # create data 
train_input=scio.loadmat('/Users/liutianyuan/Desktop/image_restore/data/input_for_tensor.mat') 
train_output=scio.loadmat('/Users/liutianyuan/Desktop/image_restore/data/output_for_tensor.mat') 
x_data=np.float32(train_input['input_for_tensor']) 
y_data=np.float32(train_output['output_for_tensor']) 

print x_data.shape 
print y_data.shape 
## create tensorflow structure start ### 
def add_layer(inputs, in_size, out_size, activation_function=None): 
    Weights = tf.Variable(tf.random_uniform([in_size,out_size], -4.0*math.sqrt(6.0/(in_size+out_size)), 4.0*math.sqrt(6.0/(in_size+out_size)))) 
    biases = tf.Variable(tf.zeros([1, out_size])) 
    Wx_plus_b = tf.matmul(inputs, Weights) + biases 
    if activation_function is None: 
     outputs = Wx_plus_b 
    else: 
     outputs = activation_function(Wx_plus_b) 
    return outputs 

xs = tf.placeholder(tf.float32, [None, 256]) 
ys = tf.placeholder(tf.float32, [None, 1024]) 
y= add_layer(xs, 256, 1024, activation_function=None) 

loss = tf.reduce_mean(tf.square(y - ys)) 
optimizer = tf.train.GradientDescentOptimizer(0.1) 
train = optimizer.minimize(loss) 

init = tf.initialize_all_variables() 
### create tensorflow structure end ### 

sess = tf.Session() 
sess.run(init) 

for step in range(201): 
    sess.run(train) 
    if step % 20 == 0: 
     print(step, sess.run(loss,feed_dict={xs: x_data, ys: y_data})) 

which gives me the following error:

/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/liutianyuan/PycharmProjects/untitled1/easycode.py 

(1, 256) 

(1, 1024) 

Traceback (most recent call last): 
    File "/Users/liutianyuan/PycharmProjects/untitled1/easycode.py", line 46, in <module> 
    sess.run(train) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 340, in run 
    run_metadata_ptr) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 564, in _run 
    feed_dict_string, options, run_metadata) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 637, in _do_run 
    target_list, options, run_metadata) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 659, in _do_call 
    e.code) 

tensorflow.python.framework.errors.InvalidArgumentError: **You must feed a value for placeholder tensor 'Placeholder' with dtype float** 
    [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

Caused by op u'Placeholder', defined at: 

    File "/Users/liutianyuan/PycharmProjects/untitled1/easycode.py", line 30, in <module> 
    xs = tf.placeholder(tf.float32, [None, 256]) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/ops/array_ops.py", line 762, in placeholder 
    name=name) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 976, in _placeholder 
    name=name) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op 
    op_def=op_def) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in create_op 
    original_op=self._default_original_op, op_def=op_def) 

    File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 1154, in __init__ 
    self._traceback = _extract_stack() 

I have already checked the types and shapes of x_data and y_data, and they seem correct, so I have no idea where it goes wrong.
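For what it's worth, a quick sanity check on what `scipy.io.loadmat` hands back can rule out dtype problems before feeding. This is a minimal sketch with a simulated `loadmat` result (the dict key mirrors the one in the question; the actual `.mat` files aren't available here):

```python
import numpy as np

# Simulated scipy.io.loadmat result: loadmat returns a dict that maps
# MATLAB variable names to numpy arrays (alongside metadata keys such
# as '__header__'). The key below mirrors the one in the question.
train_input = {'input_for_tensor': np.ones((1, 256), dtype=np.float64)}

# Same conversion as in the question's code.
x_data = np.float32(train_input['input_for_tensor'])

# An array fed to a tf.float32 placeholder of shape [None, 256]
# should be float32 with a matching second dimension.
print(x_data.dtype)   # float32
print(x_data.shape)   # (1, 256)
```

If the dtype or shape printed here did not match the placeholder, feeding would fail, but as the question notes, that is not the problem in this case.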


It looks fine. Could you try: `sess.run(loss, feed_dict={xs: tf.cast(x_data, tf.float32), ys: tf.cast(y_data, tf.float32)})`? – sygi


Thanks, but it still doesn't work. –

Answers


Your `train` op depends on the placeholders `xs` and `ys`, so you must feed values for those placeholders when you call `sess.run(train)`.

A common way to do this is to split your input data into mini-batches:

BATCH_SIZE = ... 
for step in range(201): 
    # N.B. You'll need extra code to handle the cases where start_index and/or end_index 
    # wrap around the end of x_data and y_data. 
    start_index = step * BATCH_SIZE 
    end_index = (step + 1) * BATCH_SIZE 
    sess.run(train, {xs: x_data[start_index:end_index,:], 
            ys: y_data[start_index:end_index,:]}) 

The code in this example is just to get you started. For a more flexible way of generating fed data, see the MNIST dataset example in the tf.learn codebase.
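The wrap-around that the `N.B.` comment warns about can be handled with modular indexing. A minimal NumPy sketch (the helper name `next_batch` and the toy data are hypothetical, not from the answer above):

```python
import numpy as np

def next_batch(x_data, y_data, step, batch_size):
    """Return the mini-batch for `step`, wrapping around the dataset."""
    n = x_data.shape[0]
    # Indices wrap modulo the dataset size, so every step yields a
    # full batch even when step * batch_size exceeds the data length.
    idx = np.arange(step * batch_size, (step + 1) * batch_size) % n
    return x_data[idx], y_data[idx]

# Toy data: 5 examples with 2 features and 1 target each.
x = np.arange(10).reshape(5, 2).astype(np.float32)
y = np.arange(5).reshape(5, 1).astype(np.float32)

# step=2, batch_size=3 asks for rows 6..8, which wrap to rows 1..3.
bx, by = next_batch(x, y, step=2, batch_size=3)
```

The batches returned this way can then be passed straight into the `feed_dict` in the training loop.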


Thanks, but I wrote this simple program just to learn how to use TensorFlow, so there is only one input example. –


Thank you very much! At first I didn't understand what you meant, but after chewing on your answer I found it really works! –


Thanks to mrry, the problem has been solved. It turns out I was missing the `feed_dict` part of `sess.run(train)`. The correct program is:

for step in range(201): 
    sess.run(train,feed_dict={xs: x_data, ys: y_data}) 
    if step % 20 == 0: 
     print(step, sess.run(loss,feed_dict={xs: x_data, ys: y_data})) 

Thank you! –