The loss stops decreasing after the 6th epoch. I have run the original model as proposed.
Epoch 1/50
37831/37831 [==============================] - 646s 17ms/step - loss: 1.2934 - acc: 0.4069 - val_loss: 1.1452 - val_acc: 0.4933
WARNING:tensorflow:From C:\Users\hmtkv\miniconda3\envs\voice\lib\site-packages\keras\callbacks\tensorboard_v1.py:343: The name tf.Summary is deprecated. Please use tf.compat.v1.Summary instead.
Epoch 2/50
37831/37831 [==============================] - 562s 15ms/step - loss: 1.0629 - acc: 0.5535 - val_loss: 1.0023 - val_acc: 0.5854
Epoch 3/50
37831/37831 [==============================] - 575s 15ms/step - loss: 0.9804 - acc: 0.5960 - val_loss: 0.9605 - val_acc: 0.6059
Epoch 4/50
37831/37831 [==============================] - 645s 17ms/step - loss: 0.9549 - acc: 0.6062 - val_loss: 0.9530 - val_acc: 0.6019
Epoch 5/50
37831/37831 [==============================] - 626s 17ms/step - loss: 0.9366 - acc: 0.6144 - val_loss: 0.9421 - val_acc: 0.6145
Epoch 6/50
37831/37831 [==============================] - 629s 17ms/step - loss: 0.9300 - acc: 0.6156 - val_loss: 0.9327 - val_acc: 0.6120
Epoch 7/50
37831/37831 [==============================] - 544s 14ms/step - loss: 0.9212 - acc: 0.6218 - val_loss: 0.9239 - val_acc: 0.6161
Epoch 8/50
37831/37831 [==============================] - 610s 16ms/step - loss: 0.9136 - acc: 0.6247 - val_loss: 0.9398 - val_acc: 0.6001
Epoch 9/50
37831/37831 [==============================] - 584s 15ms/step - loss: 0.9081 - acc: 0.6259 - val_loss: 0.9309 - val_acc: 0.6196
Epoch 10/50
37831/37831 [==============================] - 596s 16ms/step - loss: 0.9053 - acc: 0.6294 - val_loss: 0.9182 - val_acc: 0.6219
Epoch 11/50
37831/37831 [==============================] - 569s 15ms/step - loss: 0.9021 - acc: 0.6310 - val_loss: 0.9335 - val_acc: 0.6093
Epoch 12/50
37831/37831 [==============================] - 636s 17ms/step - loss: 0.8967 - acc: 0.6321 - val_loss: 0.9365 - val_acc: 0.6095
Epoch 13/50
37831/37831 [==============================] - 596s 16ms/step - loss: 0.8950 - acc: 0.6323 - val_loss: 0.9366 - val_acc: 0.6045
Epoch 14/50
37831/37831 [==============================] - 567s 15ms/step - loss: 0.8917 - acc: 0.6336 - val_loss: 0.9196 - val_acc: 0.6196
Epoch 15/50
37831/37831 [==============================] - 511s 13ms/step - loss: 0.8883 - acc: 0.6364 - val_loss: 0.9240 - val_acc: 0.6159
Epoch 16/50
37831/37831 [==============================] - 517s 14ms/step - loss: 0.8846 - acc: 0.6380 - val_loss: 0.9149 - val_acc: 0.6217
Epoch 17/50
37831/37831 [==============================] - 569s 15ms/step - loss: 0.8834 - acc: 0.6382 - val_loss: 0.9154 - val_acc: 0.6224
Epoch 18/50
37831/37831 [==============================] - 552s 15ms/step - loss: 0.8813 - acc: 0.6394 - val_loss: 0.9192 - val_acc: 0.6173
Epoch 19/50
37831/37831 [==============================] - 547s 14ms/step - loss: 0.8770 - acc: 0.6395 - val_loss: 0.9123 - val_acc: 0.6243
Epoch 20/50
37831/37831 [==============================] - 541s 14ms/step - loss: 0.8782 - acc: 0.6387 - val_loss: 0.9128 - val_acc: 0.6243
Epoch 21/50
37831/37831 [==============================] - 499s 13ms/step - loss: 0.8747 - acc: 0.6405 - val_loss: 0.9135 - val_acc: 0.6214
Epoch 22/50
37831/37831 [==============================] - 495s 13ms/step - loss: 0.8718 - acc: 0.6422 - val_loss: 0.9158 - val_acc: 0.6254
Epoch 23/50
37831/37831 [==============================] - 502s 13ms/step - loss: 0.8687 - acc: 0.6428 - val_loss: 0.9141 - val_acc: 0.6229
Epoch 24/50
37831/37831 [==============================] - 502s 13ms/step - loss: 0.8685 - acc: 0.6441 - val_loss: 0.9259 - val_acc: 0.6181
Epoch 25/50
37831/37831 [==============================] - 496s 13ms/step - loss: 0.8670 - acc: 0.6445 - val_loss: 0.9172 - val_acc: 0.6224
Epoch 26/50
37831/37831 [==============================] - 500s 13ms/step - loss: 0.8630 - acc: 0.6449 - val_loss: 0.9139 - val_acc: 0.6253
Epoch 27/50
37831/37831 [==============================] - 498s 13ms/step - loss: 0.8664 - acc: 0.6452 - val_loss: 0.9140 - val_acc: 0.6256
Epoch 28/50
37831/37831 [==============================] - 499s 13ms/step - loss: 0.8607 - acc: 0.6485 - val_loss: 0.9216 - val_acc: 0.6204
Epoch 29/50
37831/37831 [==============================] - 497s 13ms/step - loss: 0.8587 - acc: 0.6470 - val_loss: 0.9201 - val_acc: 0.6235
Epoch 30/50
37831/37831 [==============================] - 519s 14ms/step - loss: 0.8573 - acc: 0.6503 - val_loss: 0.9126 - val_acc: 0.6276
Epoch 31/50
37831/37831 [==============================] - 539s 14ms/step - loss: 0.8547 - acc: 0.6507 - val_loss: 0.9187 - val_acc: 0.6156
Epoch 32/50
37831/37831 [==============================] - 534s 14ms/step - loss: 0.8540 - acc: 0.6514 - val_loss: 0.9169 - val_acc: 0.6205
Epoch 33/50
37831/37831 [==============================] - 531s 14ms/step - loss: 0.8525 - acc: 0.6509 - val_loss: 0.9129 - val_acc: 0.6229
Epoch 34/50
37831/37831 [==============================] - 522s 14ms/step - loss: 0.8490 - acc: 0.6532 - val_loss: 0.9189 - val_acc: 0.6230
Epoch 35/50
37831/37831 [==============================] - 510s 13ms/step - loss: 0.8508 - acc: 0.6509 - val_loss: 0.9140 - val_acc: 0.6247
Epoch 36/50
37831/37831 [==============================] - 520s 14ms/step - loss: 0.8471 - acc: 0.6526 - val_loss: 0.9153 - val_acc: 0.6223
Epoch 37/50
37831/37831 [==============================] - 551s 15ms/step - loss: 0.8435 - acc: 0.6544 - val_loss: 0.9203 - val_acc: 0.6221
Epoch 38/50
37831/37831 [==============================] - 556s 15ms/step - loss: 0.8431 - acc: 0.6535 - val_loss: 0.9227 - val_acc: 0.6165
Epoch 39/50
37831/37831 [==============================] - 539s 14ms/step - loss: 0.8416 - acc: 0.6549 - val_loss: 0.9120 - val_acc: 0.6279
Epoch 40/50
37831/37831 [==============================] - 526s 14ms/step - loss: 0.8421 - acc: 0.6576 - val_loss: 0.9158 - val_acc: 0.6229
Epoch 41/50
37831/37831 [==============================] - 519s 14ms/step - loss: 0.8367 - acc: 0.6571 - val_loss: 0.9210 - val_acc: 0.6176
Epoch 42/50
37831/37831 [==============================] - 534s 14ms/step - loss: 0.8358 - acc: 0.6585 - val_loss: 0.9153 - val_acc: 0.6269
Epoch 43/50
37831/37831 [==============================] - 520s 14ms/step - loss: 0.8371 - acc: 0.6589 - val_loss: 0.9183 - val_acc: 0.6229
Epoch 44/50
37831/37831 [==============================] - 522s 14ms/step - loss: 0.8355 - acc: 0.6588 - val_loss: 0.9215 - val_acc: 0.6228
Epoch 45/50
37831/37831 [==============================] - 502s 13ms/step - loss: 0.8325 - acc: 0.6585 - val_loss: 0.9206 - val_acc: 0.6231
Epoch 46/50
37831/37831 [==============================] - 498s 13ms/step - loss: 0.8321 - acc: 0.6606 - val_loss: 0.9210 - val_acc: 0.6186
Epoch 47/50
37831/37831 [==============================] - 499s 13ms/step - loss: 0.8282 - acc: 0.6611 - val_loss: 0.9249 - val_acc: 0.6227
Epoch 48/50
37831/37831 [==============================] - 507s 13ms/step - loss: 0.8279 - acc: 0.6616 - val_loss: 0.9199 - val_acc: 0.6219
Epoch 49/50
37831/37831 [==============================] - 499s 13ms/step - loss: 0.8262 - acc: 0.6620 - val_loss: 0.9245 - val_acc: 0.6217
Epoch 50/50
37831/37831 [==============================] - 498s 13ms/step - loss: 0.8252 - acc: 0.6624 - val_loss: 0.9290 - val_acc: 0.6195
It seems that such a complex network didn't work very well for the Twitter data.
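For reference, since the validation loss hovers around 0.91 from roughly epoch 6 onward, here is a minimal sketch of how the run could be capped with standard Keras callbacks instead of training all 50 epochs. The model, data variables, and patience values below are assumptions for illustration, not part of the original training script:

```python
from keras.callbacks import EarlyStopping, ReduceLROnPlateau

# Stop training once val_loss has not improved for several epochs,
# restoring the weights from the best epoch seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True, verbose=1)

# Halve the learning rate when val_loss plateaus, which can sometimes
# squeeze out a little more improvement before stopping.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=3, min_lr=1e-5, verbose=1)

# `model`, `x_train`, `y_train`, `x_val`, `y_val` are placeholders for
# whatever the original script builds and loads.
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=50,
          callbacks=[early_stop, reduce_lr])
```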