PFed-TG: A Personalized Federated Learning Framework for Text Generation
Keywords:
Federated Learning, Personalized Federated Learning, Text Generation, Privacy Preservation, Natural Language Processing, Python
Abstract
In recent years, advances in deep learning and machine learning have spurred the development of various text generation models, particularly those built with Python. This paper introduces PFed-TG, a novel personalized federated learning (PFL) framework for text generation that integrates personalized model training with federated learning principles, leveraging Python's Natural Language Processing (NLP) tooling, including the Hugging Face Transformers library. The framework's efficacy is evaluated on the Shakespeare dataset, where it consistently produces contextually relevant text. Performance is assessed with metrics such as ASL, ROUGE-L, BLEU, METEOR, and Perplexity, focusing on readability, coherence, and alignment. The results indicate that PFed-TG improves efficiency and offers insights into optimizing PFL models for practical applications in diverse domains such as healthcare, finance, and education. This research comprehensively evaluates PFed-TG's methodology, highlighting its potential to advance the field of NLP through innovative federated learning approaches.
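The core idea the abstract describes, combining a globally shared model with client-specific personalization under federated averaging, can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function names (`local_update`, `federated_round`), the split into `shared` and `personal` parameter vectors, and the toy least-squares "training" step are all assumptions made for illustration only.

```python
# Minimal sketch of one personalized federated averaging round.
# Assumption: each client's model splits into a shared part (sent to the
# server and averaged) and a personal part (kept locally, never shared),
# which is one common way to realize personalized FL.
import numpy as np

def local_update(shared, personal, data, lr=0.1):
    # Toy "local training": one gradient step of least squares on (X, y).
    X, y = data
    w = np.concatenate([shared, personal])
    grad = X.T @ (X @ w - y) / len(y)
    w = w - lr * grad
    k = len(shared)
    return w[:k], w[k:]  # updated shared part, updated personal part

def federated_round(shared, personals, client_data):
    updates, new_personals = [], []
    for personal, data in zip(personals, client_data):
        s, p = local_update(shared, personal, data)
        updates.append(s)
        new_personals.append(p)  # personal parameters stay on the client
    # The server averages only the shared parameters (FedAvg-style).
    return np.mean(updates, axis=0), new_personals
```

In a real text-generation setting, `shared` would correspond to the bulk of a Transformer's weights and `personal` to client-specific layers (e.g. an output head), but the aggregation pattern is the same: only shared parameters leave the device, which is what preserves privacy.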
This is an open access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under the CC BY 4.0 International License.