
CNN-LSTM Timeseries Input for TimeDistributed Layer

I created a CNN-LSTM for survival prediction of web sessions. My training data looks as follows: print(x_train.shape) gives (288, 3, 393), i.e. (samples, timesteps, features), and my mod

Solution 1:

Your data are already in the 3D format (samples, timesteps, features), which is exactly what a Conv1D or an LSTM expects. If your target is 2D, remember to set return_sequences=False in your last LSTM cell.

Using a Flatten before an LSTM is a mistake, because it destroys the 3D structure (the time axis) that the LSTM needs.
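To see concretely what Flatten would do here, this NumPy sketch (using the question's shapes, not any code from the post) shows the time axis being merged into the feature axis:

```python
import numpy as np

# the question's training shape: (samples, timesteps, features)
x_train = np.zeros((288, 3, 393))

# what a Flatten layer would do per sample: merge the last two axes
flattened = x_train.reshape(x_train.shape[0], -1)

print(x_train.shape)    # (288, 3, 393) -> Conv1D/LSTM can use the time axis
print(flattened.shape)  # (288, 1179)   -> the time axis is gone
```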

Also pay attention to the pooling operation, so that you do not end up with a non-positive time dimension to reduce (I use 'same' padding in the convolution below in order to avoid this).
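The time-dimension arithmetic behind that warning can be checked by hand. This sketch uses the standard output-length formulas, assuming stride 1 for the convolution and pool_size 2 with its default stride for the pooling:

```python
def conv1d_out_len(length, kernel, padding):
    # stride-1 Conv1D output length for 'same' vs 'valid' padding
    return length if padding == 'same' else length - kernel + 1

def maxpool1d_out_len(length, pool_size=2):
    # MaxPooling1D with its default stride (equal to pool_size)
    return length // pool_size

time_step = 3  # from the question's (288, 3, 393) data
print(conv1d_out_len(time_step, 5, 'valid'))  # -1: invalid, the model would not build
print(conv1d_out_len(time_step, 5, 'same'))   # 3: time axis preserved
print(maxpool1d_out_len(3))                   # 1: one timestep left for the LSTMs
```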

Below is an example for a binary classification task:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

# synthetic data: (samples, timesteps, features)
n_sample, time_step, n_features = 288, 3, 393
X = np.random.uniform(0, 1, (n_sample, time_step, n_features))
y = np.random.randint(0, 2, n_sample)

model = Sequential()
model.add(Conv1D(128, 5, padding='same', activation='relu', 
                 input_shape=(time_step, n_features)))
model.add(MaxPooling1D())
model.add(LSTM(64, return_sequences=True))
model.add(LSTM(16, return_sequences=False))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=3)
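Once trained, the model's sigmoid output can be turned into class labels by thresholding at 0.5. A minimal NumPy sketch (the probability values below are made up for illustration; model.predict(X) would return an array of shape (n_sample, 1) like this):

```python
import numpy as np

# hypothetical sigmoid outputs, shaped like model.predict(X)
probs = np.array([[0.10], [0.70], [0.50], [0.93]])

# threshold at 0.5 to get binary class labels
labels = (probs > 0.5).astype(int).ravel()
print(labels)  # [0 1 0 1]
```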
