DL之LSTM: Single-character prediction on the "wonderland" (Alice's Adventures in Wonderland) novel dataset with LSTM (deeper stacked layers, Keras-based)

Design Approach

Dataset download: https://download.csdn.net/download/qq_41185868/13767751
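The preprocessing implied by the log in the next section: read the novel as lowercase text, build character-to-integer lookup tables, slide a 100-character window over the text so each window predicts the following character, then reshape, normalize, and one-hot encode. Below is a minimal sketch reconstructed from the logged names and shapes; the file name wonderland.txt and variable names are assumptions, not the original script.

import numpy as np
from keras.utils import np_utils

# Read the novel as lowercase text (file name is an assumption)
raw_text = open('wonderland.txt', encoding='utf-8').read().lower()

# Character <-> integer lookup tables (Total Vocab: 45)
chars = sorted(list(set(raw_text)))
CharMapInt_dict = {c: i for i, c in enumerate(chars)}
IntMapChar_dict = {i: c for i, c in enumerate(chars)}
n_chars, n_vocab = len(raw_text), len(chars)

# Slide a 100-character window over the text; each window predicts the
# next character: 144413 characters - 100 = 144313 patterns
seq_length = 100
dataX, dataY = [], []
for i in range(n_chars - seq_length):
    dataX.append([CharMapInt_dict[c] for c in raw_text[i:i + seq_length]])
    dataY.append(CharMapInt_dict[raw_text[i + seq_length]])

# Reshape to (samples, time steps, features), scale to [0, 1], and
# one-hot encode targets: X_train (144313, 100, 1), Y_train (144313, 45)
X_train = np.reshape(dataX, (len(dataX), seq_length, 1)) / float(n_vocab)
Y_train = np_utils.to_categorical(dataY)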

Output

Using TensorFlow backend.
[nltk_data] Error loading punkt: <urlopen error [Errno 11004]
[nltk_data]     getaddrinfo failed>
raw_text[:10] : alice's ad
Total Characters: 144413
chars ['\n', ' ', '!', '"', "'", '(', ')', '*', ',', '-', '.', '0', '3', ':', ';', '?', '[', ']', '_', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
Total Vocab: 45
sentences 1625 ["alice's adventures in wonderland\n\nlewis carroll\n\nthe millennium fulcrum edition 3.0\n\nchapter i. down the rabbit-hole\n\nalice was beginning to get very tired of sitting by her sister on the\nbank, and of having nothing to do: once or twice she had peeped into the\nbook her sister was reading, but it had no pictures or conversations in\nit, 'and what is the use of a book,' thought alice 'without pictures or\nconversations?'", 'so she was considering in her own mind (as well as she could, for the\nhot day made her feel very sleepy and stupid), whether the pleasure\nof making a daisy-chain would be worth the trouble of getting up and\npicking the daisies, when suddenly a white rabbit with pink eyes ran\nclose by her.', "there was nothing so very remarkable in that; nor did alice think it so\nvery much out of the way to hear the rabbit say to itself, 'oh dear!", 'oh dear!', "i shall be late!'"]
lengths (1625,) [420 289 140 ... 636 553   7]
CharMapInt_dict 45 {'\n': 0, ' ': 1, '!': 2, '"': 3, "'": 4, '(': 5, ')': 6, '*': 7, ',': 8, '-': 9, '.': 10, '0': 11, '3': 12, ':': 13, ';': 14, '?': 15, '[': 16, ']': 17, '_': 18, 'a': 19, 'b': 20, 'c': 21, 'd': 22, 'e': 23, 'f': 24, 'g': 25, 'h': 26, 'i': 27, 'j': 28, 'k': 29, 'l': 30, 'm': 31, 'n': 32, 'o': 33, 'p': 34, 'q': 35, 'r': 36, 's': 37, 't': 38, 'u': 39, 'v': 40, 'w': 41, 'x': 42, 'y': 43, 'z': 44}
IntMapChar_dict 45 {0: '\n', 1: ' ', 2: '!', 3: '"', 4: "'", 5: '(', 6: ')', 7: '*', 8: ',', 9: '-', 10: '.', 11: '0', 12: '3', 13: ':', 14: ';', 15: '?', 16: '[', 17: ']', 18: '_', 19: 'a', 20: 'b', 21: 'c', 22: 'd', 23: 'e', 24: 'f', 25: 'g', 26: 'h', 27: 'i', 28: 'j', 29: 'k', 30: 'l', 31: 'm', 32: 'n', 33: 'o', 34: 'p', 35: 'q', 36: 'r', 37: 's', 38: 't', 39: 'u', 40: 'v', 41: 'w', 42: 'x', 43: 'y', 44: 'z'}
dataX: 144313 100 [[19, 30, 27, 21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32], [30, 27, 21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1], [27, 21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1, 38], [21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1, 38, 26], [23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1, 38, 26, 23]]
dataY: 144313 [1, 38, 26, 23, 1]
Total patterns: 144313
X_train.shape (144313, 100, 1)
Y_train.shape (144313, 45)
Init data, after read_out, chars:
 144313 alice's adventures in wonderland

lewis carroll

the millennium fulcrum edition 3.0

chapter i. down
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 256)               264192
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 45)                11565
=================================================================
Total params: 275,757
Trainable params: 275,757
Non-trainable params: 0
_________________________________________________________________
LSTM_Model
 None
F:\File_Jupyter\实用代码\NeuralNetwork(神经网络)\CharacterLanguageLSTM.py:135: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
  LSTM_Model.fit(X_train[:train_index], Y_train[:train_index], nb_epoch=10, batch_size=64, callbacks=callbacks_list)
Epoch 1/10
2020-12-23 23:42:07.919094: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
1000/1000 [==============================] - 7s 7ms/step - loss: 3.3925
Epoch 00001: loss improved from inf to 3.39249, saving model to hdf5/weights-improvement-01-3.3925.hdf5
Epoch 2/10
1000/1000 [==============================] - 5s 5ms/step - loss: 3.0371
Epoch 00002: loss improved from 3.39249 to 3.03705, saving model to hdf5/weights-improvement-02-3.0371.hdf5
Epoch 3/10
1000/1000 [==============================] - 6s 6ms/step - loss: 3.0225
Epoch 00003: loss improved from 3.03705 to 3.02249, saving model to hdf5/weights-improvement-03-3.0225.hdf5
Epoch 4/10
1000/1000 [==============================] - 6s 6ms/step - loss: 3.0352
Epoch 00004: loss did not improve from 3.02249
Epoch 5/10
1000/1000 [==============================] - 5s 5ms/step - loss: 3.0120
Epoch 00005: loss improved from 3.02249 to 3.01205, saving model to hdf5/weights-improvement-05-3.0120.hdf5
Epoch 6/10
1000/1000 [==============================] - 5s 5ms/step - loss: 3.0070
Epoch 00006: loss improved from 3.01205 to 3.00701, saving model to hdf5/weights-improvement-06-3.0070.hdf5
Epoch 7/10
1000/1000 [==============================] - 6s 6ms/step - loss: 2.9903
Epoch 00007: loss improved from 3.00701 to 2.99027, saving model to hdf5/weights-improvement-07-2.9903.hdf5
Epoch 8/10
1000/1000 [==============================] - 6s 6ms/step - loss: 3.0064
Epoch 00008: loss did not improve from 2.99027
Epoch 9/10
1000/1000 [==============================] - 5s 5ms/step - loss: 2.9944
Epoch 00009: loss did not improve from 2.99027
Epoch 10/10
1000/1000 [==============================] - 5s 5ms/step - loss: 2.9963
Epoch 00010: loss did not improve from 2.99027
LSTM_Pre_word.shape:
 (3, 45)
after LSTM read_out, chars:
 3 ["\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", "\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", '\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n "\n\n!!\n\n \n !\n\n! \' \n\n\n\n\n']
LSTM_Model,Seed:
" ent down its head to hide a smile: some of the other birds
tittered audibly.

'what i was going to s "
199 100

 Generated Sequence:

 Done.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_2 (LSTM)                (None, 100, 256)          264192
_________________________________________________________________
dropout_2 (Dropout)          (None, 100, 256)          0
_________________________________________________________________
lstm_3 (LSTM)                (None, 64)                82176
_________________________________________________________________
dropout_3 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 45)                2925
=================================================================
Total params: 349,293
Trainable params: 349,293
Non-trainable params: 0
_________________________________________________________________
DeepLSTM_Model
 None
F:\File_Jupyter\实用代码\NeuralNetwork(神经网络)\CharacterLanguageLSTM.py:246: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
  DeepLSTM_Model.fit(X_train[:train_index], Y_train[:train_index], nb_epoch=2, batch_size=256, callbacks=callbacks_list)
Epoch 1/2
1000/1000 [==============================] - 10s 10ms/step - loss: 3.7883
Epoch 00001: loss improved from inf to 3.78827, saving model to hdf5/weights-improvement-01-3.7883.hdf5
Epoch 2/2
1000/1000 [==============================] - 8s 8ms/step - loss: 3.6151
Epoch 00002: loss improved from 3.78827 to 3.61512, saving model to hdf5/weights-improvement-02-3.6151.hdf5
DeepLSTM_Pre_word.shape:
 (3, 45)
after DeepLSTM read_out, chars:
 3 ["\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", "\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", '\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n "\n\n!!\n\n \n !\n\n! \' \n\n\n\n\n']
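Note on the all-whitespace generations above: both models were fit on only 1,000 of the 144,313 patterns for a handful of epochs, so the final losses (about 3.0 and 3.6) sit near the uniform-guess baseline of ln(45) ≈ 3.81. At that point the network simply falls back to the most frequent characters, newline and space.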

Core Code

from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Embedding

# Single-layer LSTM: 256 units over (100, 1) windows, softmax over the
# 45-character vocabulary. LSTM parameter count:
# 4 * ((1 + 256) * 256 + 256) = 264,192, matching the summary above.
LSTM_Model = Sequential()
LSTM_Model.add(LSTM(256, input_shape=(X_train.shape[1], X_train.shape[2])))
LSTM_Model.add(Dropout(0.2))
LSTM_Model.add(Dense(Y_train.shape[1], activation='softmax'))
LSTM_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print('LSTM_Model')
LSTM_Model.summary()  # summary() prints the table itself and returns None
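
Training with checkpointing, reconstructed from the log: a ModelCheckpoint callback saves the weights each time the epoch loss improves (the filepath pattern reproduces the logged names such as hdf5/weights-improvement-01-3.3925.hdf5), and fit takes epochs rather than the deprecated nb_epoch flagged by the UserWarning. The 1,000-sample value of train_index is an assumption matching the 1000/1000 progress bars.

from keras.callbacks import ModelCheckpoint

# Save weights whenever the epoch loss improves; the pattern reproduces
# the logged names, e.g. hdf5/weights-improvement-01-3.3925.hdf5
filepath = 'hdf5/weights-improvement-{epoch:02d}-{loss:.4f}.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1,
                             save_best_only=True, mode='min')
callbacks_list = [checkpoint]

# train_index = 1000 is an assumption matching the 1000/1000 progress
# bars; `epochs` replaces the deprecated `nb_epoch` argument
train_index = 1000
LSTM_Model.fit(X_train[:train_index], Y_train[:train_index],
               epochs=10, batch_size=64, callbacks=callbacks_list)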

# Variant with a learned 32-dimensional character embedding; chars_len is
# the vocabulary size (45). Note: an Embedding layer expects integer-coded
# inputs of shape (samples, seq_length), not the normalized
# (samples, 100, 1) floats fed to the other two models.
embedding_vector_length = 32
LSTMWithE_Model = Sequential()
LSTMWithE_Model.add(Embedding(chars_len, embedding_vector_length, input_length=seq_length))
LSTMWithE_Model.add(LSTM(256))
LSTMWithE_Model.add(Dropout(0.2))
LSTMWithE_Model.add(Dense(Y_train.shape[1], activation='softmax'))
LSTMWithE_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print('LSTMWithE_Model')
LSTMWithE_Model.summary()

# Deeper stacked LSTM: the first layer returns the full sequence
# (return_sequences=True) so the second LSTM receives (100, 256) inputs.
DeepLSTM_Model = Sequential()
DeepLSTM_Model.add(LSTM(256, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
DeepLSTM_Model.add(Dropout(0.2))
DeepLSTM_Model.add(LSTM(64))
DeepLSTM_Model.add(Dropout(0.2))
DeepLSTM_Model.add(Dense(Y_train.shape[1], activation='softmax'))
DeepLSTM_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print('DeepLSTM_Model')
DeepLSTM_Model.summary()
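
Generation, sketched from the "Seed:" and "Generated Sequence:" lines in the log: pick a random 100-character window as the seed, predict the next character, take the argmax, emit it, and slide the window forward. The loop below is a standard reconstruction under those assumptions, not the original script.

import random
import sys
import numpy as np

# Random 100-character seed window from the training patterns
start = random.randint(0, len(dataX) - 1)
pattern = list(dataX[start])
print('Seed:\n"', ''.join(IntMapChar_dict[v] for v in pattern), '"')

# Greedy generation: predict, take the argmax, emit, slide the window
for _ in range(1000):
    x = np.reshape(pattern, (1, len(pattern), 1)) / float(n_vocab)
    prediction = LSTM_Model.predict(x, verbose=0)
    index = int(np.argmax(prediction))
    sys.stdout.write(IntMapChar_dict[index])
    pattern = pattern[1:] + [index]
print('\nDone.')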
