2017-10-05

Using a Bi-LSTM-CTC TensorFlow Model in Android

TL;DR: I want to know how to use a bi-LSTM-CTC TensorFlow model in an Android application.

I have successfully trained my bi-LSTM-CTC TensorFlow model, and now I want to use it in my handwriting-recognition Android app. Here is the part of the code that defines the graph I use:

self.inputs = tf.placeholder(tf.float32, [None, None, network_config.num_features], name="input") 
self.labels = tf.sparse_placeholder(tf.int32, name="label") 
self.seq_len = tf.placeholder(tf.int32, [None], name="seq_len_input") 

logits = self._bidirectional_lstm_layers(
    network_config.num_hidden_units, 
    network_config.num_layers, 
    network_config.num_classes 
) 

self.global_step = tf.Variable(0, trainable=False) 
self.loss = tf.nn.ctc_loss(labels=self.labels, inputs=logits, sequence_length=self.seq_len) 
self.cost = tf.reduce_mean(self.loss) 

self.optimizer = tf.train.AdamOptimizer(network_config.learning_rate).minimize(self.cost) 
self.decoded, self.log_prob = tf.nn.ctc_beam_search_decoder(inputs=logits, sequence_length=self.seq_len, merge_repeated=False) 
self.dense_decoded = tf.sparse_tensor_to_dense(self.decoded[0], default_value=-1, name="output") 
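For context, `tf.sparse_placeholder` is fed in COO form: a triple of (indices, values, dense_shape). A minimal pure-Python sketch of how a batch of label sequences would be converted into that triple (the helper name and example values are illustrative, not from the original code):

```python
def to_sparse(sequences):
    """Convert a batch of label sequences into the (indices, values, dense_shape)
    triple that a tf.sparse_placeholder expects as its feed value."""
    indices, values = [], []
    for batch_idx, seq in enumerate(sequences):
        for time_idx, label in enumerate(seq):
            indices.append((batch_idx, time_idx))  # position in the dense batch
            values.append(label)                   # class id at that position
    dense_shape = (len(sequences), max(len(s) for s in sequences))
    return indices, values, dense_shape

# Example: two label sequences of different lengths
indices, values, dense_shape = to_sparse([[5, 2, 9], [7, 1]])
```

The `dense_shape` is the batch size by the longest sequence length; shorter sequences simply contribute fewer entries.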

I also successfully froze and optimized the graph, following the freeze-graph and optimize-graph code in this tutorial. Here is the part of the code that should run the model:

bitmap = Bitmap.createScaledBitmap(bitmap, 1024, 128, true); 
int[] intValues = new int[bitmap.getWidth() * bitmap.getHeight()]; 
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight()); 
float[] floatValues = new float[bitmap.getWidth() * bitmap.getHeight()]; 
for (int i = 0; i < intValues.length; ++i) { 
    final int val = intValues[i]; 
    floatValues[i] = (((val >> 16) & 0xFF)); 
} 
float[] result = new float[80]; 
long[] INPUT_SIZE = new long[]{1, bitmap.getHeight(), bitmap.getWidth()}; 
inferenceInterface.feed(config.getInputName(), floatValues, INPUT_SIZE); 
inferenceInterface.feed("seq_len_input", new int[]{bitmap.getWidth()}, 1); 
inferenceInterface.run(config.getOutputNames()); 
inferenceInterface.fetch(config.getOutputNames()[0], result); 

return result.toString(); 
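As a side note, the pixel loop above keeps only the red channel of each packed ARGB pixel. The same bit arithmetic can be sketched in plain Python to make explicit what the Java loop computes (the sample pixel values are made up):

```python
def red_channel(argb_pixels):
    """Extract the red channel (bits 16-23) from packed ARGB int pixels,
    mirroring the Java expression (val >> 16) & 0xFF."""
    return [float((val >> 16) & 0xFF) for val in argb_pixels]

# 0xFFAABBCC packs alpha=0xFF, red=0xAA, green=0xBB, blue=0xCC
print(red_channel([0xFFAABBCC, 0xFF000000]))
```

This only works as a grayscale stand-in if the bitmap is already grayscale (R = G = B); otherwise the green and blue channels are silently discarded.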

However, I run into problems depending on which version of the model I use. If I use the frozen graph, I get this error:

Caused by: java.lang.IllegalArgumentException: No OpKernel was registered to support 
Op 'SparseToDense' with these attrs. Registered devices: [CPU], Registered kernels: 
device='CPU'; T in [DT_STRING]; Tindices in [DT_INT64] 
device='CPU'; T in [DT_STRING]; Tindices in [DT_INT32] 
device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT64] 
device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT32] 
device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT64] 
device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT32] 
device='CPU'; T in [DT_INT32]; Tindices in [DT_INT64] 
device='CPU'; T in [DT_INT32]; Tindices in [DT_INT32] 

[[Node: output = SparseToDense[T=DT_INT64, Tindices=DT_INT64, validate_indices=true](CTCBeamSearchDecoder, CTCBeamSearchDecoder:2, CTCBeamSearchDecoder:1, output/default_value)]] 

If I use the optimized frozen graph, I get this error:

java.io.IOException: Not a valid TensorFlow Graph serialization: NodeDef expected inputs '' do not match 1 inputs 
specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; 
NodeDef: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/bw/while/add/y = Const[dtype=DT_INT32, 
value=Tensor<type: int32 shape: [] values: 1>](stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/bw/while/Switch:1) 

Apart from ways to work around these errors, I have other questions/clarifications:

How do I fix these errors?

Answer


I got it working. The solution can also be found in this github issue.

Apparently, the problem was with the types used. The op only accepts int32, and I was passing int64.

self.dense_decoded = tf.sparse_tensor_to_dense(self.decoded[0], default_value=-1, name="output") 

To solve this, I cast the sparse tensor elements to int32:

self.dense_decoded = tf.sparse_to_dense(tf.to_int32(self.decoded[0].indices),
                                        tf.to_int32(self.decoded[0].dense_shape),
                                        tf.to_int32(self.decoded[0].values),
                                        name="output")
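What `sparse_to_dense` does here can be sketched in plain Python: scatter the decoded values into a dense array pre-filled with the default value. The shapes and values below are illustrative, not the model's real output:

```python
def sparse_to_dense(indices, dense_shape, values, default_value=-1):
    """Scatter (indices, values) into a dense 2-D list filled with default_value,
    mirroring what the sparse_to_dense op produces for the decoded output."""
    rows, cols = dense_shape
    dense = [[default_value] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        dense[r][c] = v
    return dense

# One decoded sequence of length 3, padded out to width 5 with -1
print(sparse_to_dense([(0, 0), (0, 1), (0, 2)], (1, 5), [4, 8, 15]))
```

The -1 default value is why the fetched output later needs the padding filtered out before mapping indices to characters.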

Running the app after that gave me this error:

java.lang.IllegalArgumentException: Matrix size-incompatible: In[0]: [1,1056], In[1]: [160,128] 
[[Node: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/bw/while/bw/basic_lstm_cell/basic_lstm_cell/MatMul = 
MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"] 
(stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/bw/while/bw/basic_lstm_cell/basic_lstm_cell/concat, 
stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/bw/while/bw/basic_lstm_cell/basic_lstm_cell/MatMul/Enter)]] 

For some strange reason, changing the image width from 1024 to 128 in the Java code fixed that error. Running the app again gave me this error:

java.lang.IllegalArgumentException: cannot use java.nio.FloatArrayBuffer with Tensor of type INT32 

There was a problem fetching the output. From that, I knew the model ran successfully, but the app could not fetch the results.

inferenceInterface.run(outputs); 
inferenceInterface.fetch(outputs[0], result); //where the error happens 

Silly me, I had forgotten that the output is an integer array, not a float array. So I changed the type of the result array to an int array:

//float[] result = new float[80]; 
int[] result = new int[80]; 
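With the output now an int array, turning it into text is just a lookup into the character set, skipping the -1 padding that the dense conversion fills in. A hedged sketch (the charset here is made up; the real index-to-character mapping depends on how the model was trained):

```python
def decode_output(result, charset):
    """Map decoded class indices to characters, dropping -1 padding entries."""
    return "".join(charset[i] for i in result if i != -1)

# Hypothetical charset: index 0 -> 'a', 1 -> 'b', ...
charset = "abcdefghijklmnopqrstuvwxyz"
print(decode_output([7, 4, 11, 11, 14, -1, -1], charset))
```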

That made the app work. The model's accuracy is poor because it was not trained properly; I was just trying to get it working in the app. Time for some serious training!