2016-09-18

Implementing an autoencoder with TensorFlow. I want to implement my own autoencoder by modifying the code here: enter link description here. I want to organize the code into a class. My implementation is:

import tensorflow as tf

class AutoEncoder:

    def __init__(self, input, hidden, learning_rate=0.01, training_epochs=50,
                 batch_size=100, display_step=10):
        self.X = input
        self.hidden = hidden
        self.weights = []
        self.biases = []
        self.inputfeature = input.shape[1]
        self.learning_rate = learning_rate
        self.training_epochs = training_epochs
        self.batch_size = batch_size
        self.display_step = display_step

    def initialPara(self):
        weights = {
            'encoder_h1': tf.Variable(tf.random_normal([self.inputfeature, self.hidden])),
            'decoder_h1': tf.Variable(tf.random_normal([self.hidden, self.inputfeature]))
        }
        biases = {
            'encoder_b1': tf.Variable(tf.random_normal([self.hidden])),
            'decoder_b1': tf.Variable(tf.random_normal([self.inputfeature]))
        }
        self.weights = weights
        self.biases = biases

    def encoder(self, X):
        layer = tf.nn.sigmoid(
            tf.add(tf.matmul(X, self.weights['encoder_h1']), self.biases['encoder_b1'])
        )
        return layer

    def decoder(self, X):
        layer = tf.nn.sigmoid(
            tf.add(tf.matmul(X, self.weights['decoder_h1']), self.biases['decoder_b1'])
        )
        return layer

    def train(self):
        X = self.X
        batch_size = self.batch_size

        self.initialPara()

        encoder_op = self.encoder(X)
        decoder_op = self.decoder(encoder_op)

        y_pred = decoder_op
        y_true = X

        # define loss and optimizer, minimize the squared error
        cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
        optimizer = tf.train.RMSPropOptimizer(self.learning_rate).minimize(cost)

        init = tf.initialize_all_variables()

        # launch the graph
        with tf.Session() as sess:
            sess.run(init)
            total_batch = int(X.shape[0] / batch_size)
            # training cycle
            for epoch in range(self.training_epochs):
                # loop over all batches
                for i in range(total_batch):
                    batch_xs = X[i*batch_size:(i+1)*batch_size]
                    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
                # display logs per epoch step
                if epoch % self.display_step == 0:
                    print("Epoch:", '%04d' % (epoch+1),
                          "cost=", "{:.9f}".format(c))

            print("optimization finished!!")

        self.encoderOp = encoder_op
        self.decoderOp = decoder_op

And the class is called from the main script:

from AutoEncoder import * 

import tensorflow as tf 
import tflearn.datasets.mnist as mnist 

from tensorflow.examples.tutorials.mnist import input_data 

X,Y,testX,testY = mnist.load_data(one_hot=True) 

autoencoder1 = AutoEncoder(X,10,learning_rate=0.01) 

autoencoder1.train() 

And the following error occurs:

Traceback (most recent call last):
  File "/home/zhq/Desktop/AutoEncoder/main.py", line 13, in <module>
    autoencoder1.train()
  File "/home/zhq/Desktop/AutoEncoder/AutoEncoder.py", line 74, in train
    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
TypeError: unhashable type: 'numpy.ndarray'

What is wrong with my code? Thanks in advance!

ZHQ

Answer

The problem is that you need to use a placeholder if you want to feed data into the graph during a session. For example:

self.X = tf.placeholder(tf.float32, [None, input_dim]) 

A placeholder is the part of the graph whose value is supplied by the feed dictionary during the session.
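The `TypeError` itself has nothing to do with TensorFlow internals: in the question, `X` is a NumPy array, and `feed_dict={X: batch_xs}` tries to use that array as a dictionary key, which Python forbids because arrays are unhashable. A minimal reproduction without TensorFlow (array shapes here are just illustrative):

```python
import numpy as np

X = np.zeros((100, 784))  # plays the same role as self.X in the question

try:
    feed_dict = {X: X[0:10]}  # ndarray used as a dict key
except TypeError as err:
    message = str(err)

print(message)  # unhashable type: 'numpy.ndarray'
```

With a placeholder, the dictionary key is a graph tensor (which is hashable), and the NumPy batch appears only as the value.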

You can read more about them here


I have a further question: how can I stack two autoencoders for regression prediction? For example, encoder1 = autoencoder1.encoderOp; encoder2 = autoencoder2.encoderOp; what is the next step for using the two autoencoders? Could you give me a demo? –


Could you describe the two-encoder architecture more precisely? –


In my code (the AutoEncoder class) I train a single autoencoder layer. After training I can obtain the trained layer, where each layer is: layer = tf.nn.sigmoid(...). Suppose I have two layers, layer1 and layer2. How do I stack these two layers to form a deep network? As far as I understand, I could stack them like this: deeplayer = tf.nn.sigmoid(tf.add(tf.matmul(layer1, layer2.Weight), layer2.biase)). But I don't know how to get a layer's weights and biases. Thanks! –
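One way to see the stacking step is with plain NumPy: once each autoencoder's encoder weights and biases have been exported (in the question's code you would fetch them with something like `sess.run(self.weights['encoder_h1'])` after training), the second encoder is simply applied to the output of the first. The parameters below are random stand-ins, not trained values, and the names `W1`, `b1`, `W2`, `b2` and the layer sizes are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    # elementwise logistic function, matching tf.nn.sigmoid
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# stand-ins for the trained parameters of two autoencoders'
# encoder halves (784 -> 256 -> 64)
W1, b1 = rng.normal(size=(784, 256)), rng.normal(size=256)
W2, b2 = rng.normal(size=(256, 64)), rng.normal(size=64)

def stacked_encoder(x):
    # feed the first encoder's output into the second encoder
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return h2

codes = stacked_encoder(rng.normal(size=(5, 784)))
print(codes.shape)  # (5, 64)
```

The same composition can be written in the graph itself: build layer2's encoder with layer1's encoder output as its input tensor, initializing layer2's variables from the separately pretrained values.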