2016-05-12

AttributeError when defining a spatial transformer network in Lasagne

I can't find many examples of spatial transformer networks in Lasagne, so it is possible that I have made a mistake in defining the network. Please look at my network definition and let me know whether the fault lies in my network configuration or whether there is some other problem.

```
net = NeuralNet(

layers=[('loc_input', InputLayer), 
     ('loc_conv2d1', Conv2DLayer), 
     ('loc_maxpool1', MaxPool2DLayer), 
     ('loc_conv2d2', Conv2DLayer), 
     ('loc_maxpool2', MaxPool2DLayer), 
     ('loc_dense', DenseLayer), 
     ('loc_output', DenseLayer), 

     ('STN1', TransformerLayer), 

     ('conv2d1', Conv2DLayer), 
     ('maxpool1', MaxPool2DLayer), 
     ('conv2d2', Conv2DLayer), 
     ('maxpool2', MaxPool2DLayer), 
     ('dense', DenseLayer), 
     ('dropout1', DropoutLayer), 
     ('dense', DenseLayer), 
     ('output', DenseLayer), 
     ], 

loc_input_shape=(None, 1, X_train.shape[2],X_train.shape[3]), 
# layer conv2d1 
loc_conv2d1_num_filters=32, 
loc_conv2d1_filter_size=(5, 5), 
loc_conv2d1_stride=2, 
loc_conv2d1_W=lasagne.init.HeUniform(), 
# layer maxpool1 
loc_maxpool1_pool_size=(2, 2),  
# layer conv2d2 
loc_conv2d2_num_filters=64, 
loc_conv2d2_filter_size=(5, 5), 
loc_conv2d2_stride=2, 
loc_conv2d2_W=lasagne.init.HeUniform(), 
# layer maxpool2 
loc_maxpool2_pool_size=(2, 2), 
loc_dense_num_units=64,  
# dense 
loc_output_num_units=6, 

#Spatial Transformer Network 
STN1_incoming = 'loc_input', 
STN1_localization_network = 'loc_output', 
STN1_downsample_factor = 1, 

# layer conv2d1 
conv2d1_incoming = 'STN1', 
conv2d1_num_filters=32, 
conv2d1_filter_size=(3, 3), 
conv2d1_stride=2, 
conv2d1_nonlinearity=lasagne.nonlinearities.rectify, 
conv2d1_W=lasagne.init.GlorotUniform(), 
# layer maxpool1 
maxpool1_pool_size=(2, 2),  
# layer conv2d2 
conv2d2_num_filters=64, 
conv2d2_filter_size=(3, 3), 
conv2d2_stride=2, 
conv2d2_nonlinearity=lasagne.nonlinearities.rectify, 
# layer maxpool2 
maxpool2_pool_size=(2, 2), 
# dropout1 
dropout1_p=0.5,  
# dense 
dense_num_units=256, 
dense_nonlinearity=lasagne.nonlinearities.rectify,  
# output 
output_nonlinearity= softmax, 
output_num_units=numClasses, 

# optimization method params 
update=nesterov_momentum, 
update_learning_rate=0.01, 
update_momentum=0.9, 
max_epochs=20, 
verbose=1, 
) 

```

When I initialize the network, I get the following error:

``` 
AttributeError       Traceback (most recent call last) 
<ipython-input-84-29eabf8b9697> in <module>() 
----> 1 net.initialize() 

D:\Python Directory\winPython 2.7\python-2.7.10.amd64\lib\site-packages\nolearn\lasagne\base.pyc in initialize(self) 
    360   out = getattr(self, '_output_layer', None) 
    361   if out is None: 
--> 362    out = self._output_layer = self.initialize_layers() 
    363   self._check_for_unused_kwargs() 
    364 

D:\Python Directory\winPython 2.7\python-2.7.10.amd64\lib\site-packages\nolearn\lasagne\base.pyc in initialize_layers(self, layers) 
    452    try: 
    453     layer_wrapper = layer_kw.pop('layer_wrapper', None) 
--> 454     layer = layer_factory(**layer_kw) 
    455    except TypeError as e: 
    456     msg = ("Failed to instantiate {} with args {}.\n" 

D:\Python Directory\winPython 2.7\python-2.7.10.amd64\lib\site-packages\lasagne\layers\special.pyc in __init__(self, incoming, localization_network, downsample_factor, **kwargs) 
    408     **kwargs): 
    409   super(TransformerLayer, self).__init__(
--> 410    [incoming, localization_network], **kwargs) 
    411   self.downsample_factor = as_tuple(downsample_factor, 2) 
    412 

D:\Python Directory\winPython 2.7\python-2.7.10.amd64\lib\site-packages\lasagne\layers\base.pyc in __init__(self, incomings, name) 
    246   self.input_shapes = [incoming if isinstance(incoming, tuple) 
    247        else incoming.output_shape 
--> 248        for incoming in incomings] 
    249   self.input_layers = [None if isinstance(incoming, tuple) 
    250        else incoming 

AttributeError: 'str' object has no attribute 'output_shape' 

```
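The traceback shows where the string names go wrong: Lasagne's `MergeLayer` (which `TransformerLayer` inherits from) calls `.output_shape` on each incoming, and a plain string has no such attribute. A pure-Python sketch of that check, with illustrative stand-in names rather than Lasagne's actual classes:

```python
# Mimics the list comprehension in lasagne.layers.base.MergeLayer.__init__:
# tuples are taken as literal shapes; everything else must be a layer object
# exposing .output_shape. FakeLayer is a hypothetical stand-in.
class FakeLayer:
    output_shape = (None, 1, 28, 28)

def resolve_shapes(incomings):
    return [inc if isinstance(inc, tuple) else inc.output_shape
            for inc in incomings]

print(resolve_shapes([FakeLayer(), (None, 6)]))  # both forms resolve fine

try:
    resolve_shapes(['loc_input'])  # a string name slips through unresolved
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'output_shape'
```

nolearn resolves string layer names into layer objects for its own single-`incoming` wiring, but here the strings reach `TransformerLayer` unresolved, hence the error.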

Answer

The solution is to define the layers in native Lasagne and pass the final layer to nolearn, because nolearn's NeuralNet implementation may not recognize the `incoming` attributes. The following modification of the above network works for me.

```
l1 = InputLayer(shape=(None, 1, X_train.shape[2], X_train.shape[3]))
l2 = Conv2DLayer(l1, num_filters=32, filter_size=(3, 3), stride=2, W=lasagne.init.HeUniform())
l3 = MaxPool2DLayer(l2, pool_size=(2, 2))
l4 = Conv2DLayer(l3, num_filters=64, filter_size=(3, 3), stride=2, W=lasagne.init.HeUniform())
l5 = MaxPool2DLayer(l4, pool_size=(2, 2))
l6 = DenseLayer(l5, num_units=64)
l7 = DenseLayer(l6, num_units=6)
l8 = TransformerLayer(l1, l7, downsample_factor=1.0)
l9 = Conv2DLayer(l8, num_filters=32, filter_size=(3, 3), stride=2, W=lasagne.init.GlorotUniform(),
                 nonlinearity=lasagne.nonlinearities.rectify)
l10 = MaxPool2DLayer(l9, pool_size=(2, 2))
l11 = Conv2DLayer(l10, num_filters=64, filter_size=(3, 3), stride=2, W=lasagne.init.GlorotUniform(),
                  nonlinearity=lasagne.nonlinearities.rectify)
l12 = MaxPool2DLayer(l11, pool_size=(2, 2))
l13 = DropoutLayer(l12, p=0.5)
l14 = DenseLayer(l13, num_units=256, nonlinearity=lasagne.nonlinearities.rectify)
finalLayer = DenseLayer(l14, num_units=numClasses, nonlinearity=softmax)

net = NeuralNet(
    finalLayer,
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    max_epochs=100,
    verbose=1,
)
```
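For context on the localization branch: it ends in a 6-unit DenseLayer because the transformer's sampling grid is parameterized by a 2×3 affine matrix. A small numpy sketch (illustrative only, not Lasagne's internal code) of how those six numbers map normalized grid coordinates:

```python
import numpy as np

# The 6 localization outputs form a 2x3 affine matrix [A | t] that maps
# normalized target-grid coordinates (x, y) in [-1, 1] to source coordinates.
theta = np.array([1.0, 0.0, 0.5,    # row 1: identity scale plus an x-shift of 0.5
                  0.0, 1.0, 0.0])   # row 2: identity, no y-shift
A = theta.reshape(2, 3)

# A tiny 2x2 target grid in homogeneous form (x, y, 1).
xs, ys = np.meshgrid([-1.0, 1.0], [-1.0, 1.0])
grid = np.stack([xs.ravel(), ys.ravel(), np.ones(4)])  # shape (3, 4)

source_coords = A @ grid                               # shape (2, 4)
print(source_coords[0])  # x-coords shifted by 0.5: [-0.5  1.5 -0.5  1.5]
print(source_coords[1])  # y-coords unchanged:      [-1. -1.  1.  1.]
```

This is why `loc_output` must have exactly `num_units=6`; the transformer then bilinearly samples the input at the mapped coordinates.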