Neural Network Does Not Learn (Loss Stays the Same)
My project partner and I are currently facing a problem in our latest university project. Our task is to implement a neural network that plays the game Pong. We are giving the b
Solution 1:
That's the infamous "dying ReLU" problem showing its power. ReLU has a zero region with no gradient: when all of a layer's pre-activations become negative, ReLU maps every output to zero and kills backpropagation.
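Here is a minimal sketch of the effect (the tensor values are made up for illustration): with all-negative inputs, ReLU blocks both the forward signal and the gradient.

import tensorflow as tf

# All-negative pre-activations: ReLU zeroes the output...
x = tf.constant([[-2.0, -0.5, -3.1]])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.nn.relu(x)

# ...and the gradient through it is zero as well, so the weights
# feeding this layer never receive an update.
print(y.numpy())                                   # [[0. 0. 0.]]
print(tape.gradient(tf.reduce_sum(y), x).numpy())  # [[0. 0. 0.]]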
The easiest way to use ReLUs safely is to add BatchNormalization layers before them:
from tensorflow import keras
from tensorflow.keras.layers import Dense, BatchNormalization, Activation

model = keras.models.Sequential()
# Pattern for each hidden layer: linear Dense first, then
# BatchNormalization, then the ReLU activation last.
model.add(Dense(16, input_dim=8, kernel_initializer='glorot_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(32, kernel_initializer='glorot_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
# number_of_actions: one softmax output per possible Pong action
model.add(Dense(number_of_actions, activation='softmax'))
Because BatchNormalization centers the pre-activations around zero, roughly half of each layer's outputs will be zero and the other half will stay trainable.
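A quick numerical sketch of why this works (the shift and scale below are arbitrary, chosen only to simulate a layer whose pre-activations have drifted negative):

import numpy as np

# Pre-activations that drifted strongly negative: most units are dead.
z = np.random.randn(1000) * 3 - 5
print((z > 0).mean())        # ~0.05 -> almost every unit outputs zero

# Normalizing to zero mean / unit variance (what BatchNormalization
# does per mini-batch) revives roughly half of the units.
z_bn = (z - z.mean()) / z.std()
print((z_bn > 0).mean())     # ~0.5 -> half the units stay trainable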
Other solutions involve tuning your learning rate and optimizer very carefully, which can be quite a headache for beginners.
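If you prefer that route, a minimal sketch is to shrink the step size so a single update is less likely to push every pre-activation negative. The 1e-4 value and the loss function below are illustrative assumptions, not recommendations from the original answer:

# Hypothetical: a smaller Adam learning rate (the default is 1e-3)
# makes it less likely that one large update kills all the ReLUs.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy')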