I'm trying to build a multi-input, multi-output model with the Keras functional API, following their example code, but I get this error:
ValueError: Input 0 is incompatible with layer lstm_54: expected ndim=3, found ndim=4
The error is raised when creating the lstm_out layer. Here is the code:
def build_model(self):
    main_input = Input(shape=(self.seq_len, 1), name='main_input')
    # seq_len = 50, vocab_len = 1000
    x = Embedding(output_dim=512, input_dim=self.vocab_len() + 1,
                  input_length=self.seq_len)(main_input)
    # An LSTM will transform the vector sequence into a single vector,
    # containing information about the entire sequence
    lstm_out = LSTM(50)(x)
    self.auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
    auxiliary_input = Input(shape=(self.seq_len, 1), name='aux_input')
    x = concatenate([lstm_out, auxiliary_input])
    # We stack a deep densely-connected network on top
    x = Dense(64, activation='relu')(x)
    x = Dense(64, activation='relu')(x)
    x = Dense(64, activation='relu')(x)
    # And finally we add the main logistic regression layer
    main_output = Dense(1, activation='sigmoid', name='main_output')(x)
    self.model = Model(inputs=[main_input, auxiliary_input],
                       outputs=[main_output, self.auxiliary_output])
    print(self.model.summary())
    self.model.compile(optimizer='rmsprop', loss='binary_crossentropy',
                       loss_weights=[1., 0.2])
I thought the problem was the input_dim of the Embedding layer, but the Keras Embedding documentation says input_dim should equal the vocabulary size + 1, which is what I'm passing.
I don't understand why I'm getting this error, what exactly is wrong, or how to fix it.
The input to Embedding should be a 2D tensor of shape (batch_size, sequence_length). In your snippet, main_input is a 3D tensor (the batch dimension is added implicitly on top of the (seq_len, 1) shape you declared). To fix this, change the following lines:
main_input = Input(shape=(self.seq_len, 1), name='main_input')
<...>
auxiliary_input = Input(shape=(self.seq_len,1), name='aux_input')
to:
main_input = Input(shape=(self.seq_len, ), name='main_input')
<...>
auxiliary_input = Input(shape=(self.seq_len, ), name='aux_input')
That should resolve the dimension mismatch.
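To see why the trailing 1 in the input shape produces ndim=4, here is a minimal numpy sketch of what an embedding lookup does (a toy gather into a random table, not the actual Keras layer): Embedding adds one dimension (the embedding vector) to whatever integer tensor it receives, so a 3D index tensor yields the 4D output the LSTM complains about.

```python
import numpy as np

# Toy embedding table: (vocab_len + 1) tokens, 512-dim vectors
vocab_len, embed_dim, seq_len, batch = 1000, 512, 50, 4
table = np.random.rand(vocab_len + 1, embed_dim)

# Correct: 2D integer input (batch, seq_len)
# -> 3D output (batch, seq_len, embed_dim), the ndim=3 that LSTM expects
ids_2d = np.random.randint(0, vocab_len + 1, size=(batch, seq_len))
out_3d = table[ids_2d]
print(out_3d.shape)  # (4, 50, 512)

# Wrong: 3D input (batch, seq_len, 1), i.e. Input(shape=(seq_len, 1))
# -> 4D output, which triggers "expected ndim=3, found ndim=4"
ids_3d = ids_2d[..., None]
out_4d = table[ids_3d]
print(out_4d.shape)  # (4, 50, 1, 512)
```

The same reasoning explains the aux_input change: after the fix, auxiliary_input is 2D (batch, seq_len), so it can be concatenated with the 2D lstm_out along the feature axis.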