This repository contains the deployment of a text-generation model using Streamlit. The model is built on an Encoder-Decoder Transformer architecture and was trained on The Project Gutenberg eBook of Grimms’ Fairy Tales by Jacob Grimm and Wilhelm Grimm. Text is generated auto-regressively, using a greedy sampling technique.
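The sketch below illustrates what greedy auto-regressive decoding looks like: the encoder processes the prompt once, and the decoder then extends the sequence one token at a time, always picking the highest-probability token. The `model.encode` / `model.decode` methods and the `tokenizer` interface here are hypothetical placeholders, not the exact API of this repository.

```python
import torch

def greedy_generate(model, tokenizer, prompt, max_new_tokens=100, eos_id=None):
    """Extend `prompt` auto-regressively, always taking the argmax token (greedy)."""
    model.eval()
    src = torch.tensor([tokenizer.encode(prompt)])      # (1, src_len)
    generated = torch.tensor([[tokenizer.bos_id]])      # decoder input starts from BOS
    with torch.no_grad():
        memory = model.encode(src)                      # run the encoder once
        for _ in range(max_new_tokens):
            logits = model.decode(generated, memory)    # (1, tgt_len, vocab_size)
            next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
            generated = torch.cat([generated, next_id], dim=1)
            if eos_id is not None and next_id.item() == eos_id:
                break
    return tokenizer.decode(generated[0].tolist())
```

Because greedy sampling always takes the single most likely token, the output is deterministic for a given prompt, which partly explains the repetition and low coherence noted above for longer generations.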
While the model offers some promising results, it has several limitations. For instance, it may ignore unknown words, generate text unrelated to the input, and lose coherence as the generated text gets longer. We acknowledge these flaws and are working to improve the model's performance in the future.
The model utilizes an Encoder-Decoder Transformer, a state-of-the-art architecture for various Natural Language Processing tasks, including text generation. By capturing long-range dependencies and handling variable-length sequences, the Transformer offers significant improvements in both performance and efficiency.
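As a rough illustration of the architecture, the following is a minimal encoder-decoder Transformer built on PyTorch's `nn.Transformer`. The hyperparameters and layer sizes are illustrative assumptions, not the values used to train this model.

```python
import torch
import torch.nn as nn

class Seq2SeqTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=4, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt):
        # Add learned positional information to the token embeddings.
        src_pos = torch.arange(src.size(1), device=src.device)
        tgt_pos = torch.arange(tgt.size(1), device=tgt.device)
        src_emb = self.tok_emb(src) + self.pos_emb(src_pos)
        tgt_emb = self.tok_emb(tgt) + self.pos_emb(tgt_pos)
        # Causal mask keeps the decoder from attending to future tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        out = self.transformer(src_emb, tgt_emb, tgt_mask=tgt_mask)
        return self.lm_head(out)          # (batch, tgt_len, vocab_size) logits
```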
The web application for this project can be accessed here:
https://kongfha-transformer-text-generation-streamlit-app-u8wpe1.streamlit.app/
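For reference, a Streamlit front end for such a model can be as small as the sketch below. The `load_model` and `greedy_generate` helpers stand in for this repository's own loading and decoding code; they are placeholders, not its actual API.

```python
# app.py -- minimal Streamlit front end (illustrative; actual file and function names differ)
import streamlit as st

st.title("Grimms' Fairy Tales Text Generator")

prompt = st.text_area("Enter a starting sentence:", "Once upon a time")
max_tokens = st.slider("Number of tokens to generate", 10, 200, 50)

if st.button("Generate"):
    # Hypothetical helpers: load the trained model/tokenizer and run greedy decoding.
    model, tokenizer = load_model()
    with st.spinner("Generating..."):
        text = greedy_generate(model, tokenizer, prompt, max_new_tokens=max_tokens)
    st.write(text)
```

Running `streamlit run app.py` starts the local web interface.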
By deploying this Text-Generation Model, we hope to offer a valuable tool for generating realistic and engaging text. However, we also want to remind users that the model has its limitations and should be used with caution. The generated text should always be carefully reviewed before publishing or sharing.