A Hybrid Transformer-LSTM Model with Federated Learning for Privacy-Preserving and Explainable Text Classification


Rupesh Malla
Pankaj Sonawane

Abstract

The growing demand for context-aware, privacy-preserving recommendation systems in dynamic, decentralized environments motivates this work. Today's centralized models face challenges such as data privacy risks, communication overhead, and slow adaptation to rapidly changing user behavior. By extending current systems with federated learning, the proposed Hybrid Transformer-LSTM model ensures that user data remains local, enhancing privacy and compliance with data regulations. Merging BERT and LSTM architectures combines the transformer's strength in capturing semantic relationships with the LSTM's ability to model sequential dependencies. An attention mechanism added to this pipeline improves explainability by highlighting the input features that drive each prediction, which is crucial for transparency in decision-making systems. The framework is designed to adapt robustly to evolving data distributions, making it suitable for real-world applications such as personalized recommendations, healthcare diagnostics, and adaptive learning platforms in decentralized settings.
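The federated setup described in the abstract typically rests on federated averaging: clients train on private data and share only model weights, which the server aggregates. The following is a minimal sketch of that idea under simplifying assumptions (a toy scalar "training" step stands in for real gradient descent); all function and variable names are illustrative, not taken from the paper.

```python
# Sketch of federated averaging (FedAvg), the common aggregation scheme in
# federated learning. Raw client data never leaves the client; only the
# locally updated weights are sent to the server.

def local_update(weights, local_data, lr=0.1):
    """Stand-in for one round of local training: nudge each weight
    toward the mean of this client's private data."""
    target = sum(local_data) / len(local_data)
    return [w + lr * (target - w) for w in weights]

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models, weighting each
    client by the size of its local dataset."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with private datasets of different sizes.
global_model = [0.0, 0.0]
private_data = {"client_a": [1.0, 2.0, 3.0], "client_b": [5.0]}
updates = [local_update(global_model, d) for d in private_data.values()]
sizes = [len(d) for d in private_data.values()]
global_model = fed_avg(updates, sizes)
```

In a real deployment the local update would be several epochs of training on the hybrid Transformer-LSTM model, but the privacy property is the same: the server sees only aggregated parameters, never user text.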


Article Details

How to Cite
Rupesh Malla, & Pankaj Sonawane. (2024). A Hybrid Transformer-LSTM Model with Federated Learning for Privacy-Preserving and Explainable Text Classification. Educational Administration: Theory and Practice, 30(11), 2332–2341. Retrieved from https://kuey.net/index.php/kuey/article/view/10467
Section
Articles
Author Biographies

Rupesh Malla

Department of Computer Engineering, D.J. Sanghvi College of Engineering Mumbai, India

Pankaj Sonawane

Department of Computer Engineering, D.J. Sanghvi College of Engineering Mumbai, India
