Advanced Next-Word Prediction: Leveraging Text Generation with an LSTM Model
Keywords:
Next Word Prediction, LSTM, RNN, NLP, TensorFlow, Keras, Deep Learning
Abstract
Natural Language Processing (NLP) increasingly relies on machine learning to predict sequential text. This work focuses on Long Short-Term Memory (LSTM) networks, a variant of Recurrent Neural Networks (RNNs) specialized for modeling long-term dependencies; traditional RNNs struggle with sequences that contain repeated patterns or long-range contextual dependencies. The research uses “The Adventures of Sherlock Holmes” as the training dataset and implements the model with the TensorFlow and Keras frameworks. The major preprocessing steps include word tokenization, n-gram creation, and one-hot encoding to prepare the dataset for modeling. The LSTM model was trained for 100 epochs to optimize its predictive capability and achieved an accuracy of 87.6%. These results show that LSTMs are effective for next-word prediction and can improve the performance and practicality of language models in real-world applications.
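The sketch below illustrates the pipeline the abstract describes: word tokenization, n-gram sequence creation, one-hot encoding of targets, and an LSTM trained with Keras. It is a minimal, hedged reconstruction, not the paper's code: the one-sentence corpus stands in for the full Sherlock Holmes text, and the embedding and LSTM sizes (64 and 128) are illustrative assumptions rather than the reported configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Toy stand-in for the training corpus; the paper uses the full
# text of "The Adventures of Sherlock Holmes".
text = "it is a capital mistake to theorize before one has data"

# Word-level tokenization.
tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
vocab_size = len(tokenizer.word_index) + 1

# N-gram creation: every prefix of the sentence becomes a training
# sequence whose last token is the next word to predict.
token_list = tokenizer.texts_to_sequences([text])[0]
sequences = [token_list[: i + 1] for i in range(1, len(token_list))]

# Pad all sequences to a common length, then split inputs/targets.
max_len = max(len(s) for s in sequences)
sequences = np.array(pad_sequences(sequences, maxlen=max_len, padding="pre"))
X, y = sequences[:, :-1], sequences[:, -1]

# One-hot encode the target words for categorical cross-entropy.
y = tf.keras.utils.to_categorical(y, num_classes=vocab_size)

# Small LSTM next-word model; layer sizes are assumptions.
model = Sequential([
    Embedding(vocab_size, 64),
    LSTM(128),
    Dense(vocab_size, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(X, y, epochs=100, verbose=0)
```

At inference time, a prompt is tokenized and padded the same way, and the predicted next word is the argmax of the softmax output, mapped back to text via tokenizer.index_word.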
This is an open access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under the CC BY 4.0 International License.