A Hybrid Transformer-LSTM Model with Federated Learning for Privacy-Preserving and Explainable Text Classification
Abstract
The growing demand for context-aware and privacy-preserving recommendation systems in dynamic, decentralized environments motivates this work. Today's centralized models face challenges such as data privacy risks, communication overhead, and slow adaptation to rapidly changing user behavior. By leveraging federated learning, the proposed Hybrid Transformer-LSTM model keeps user data local, enhancing privacy and compliance with data regulations. Combining BERT and LSTM architectures unites the transformer's strength in capturing semantic relationships with the LSTM's ability to model sequential dependencies. An attention mechanism further enhances explainability by highlighting the input features that drive each prediction, which is crucial for transparency in decision-making systems. The framework is designed to adapt robustly to evolving data distributions, making it suitable for real-world applications such as personalized recommendation, healthcare diagnostics, and adaptive learning platforms in decentralized settings.
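As a rough illustration of the architecture the abstract describes, the sketch below composes a BERT encoder, a bidirectional LSTM, and an attention-pooling classification head. This is a minimal sketch assuming PyTorch and Hugging Face transformers; the layer sizes, names, and pooling scheme are illustrative assumptions, not the authors' published design.

```python
# Minimal sketch of a hybrid BERT + LSTM + attention classifier.
# Assumptions: PyTorch and Hugging Face transformers; all hyperparameters
# and the pooling scheme are illustrative, not the paper's exact design.
import torch
import torch.nn as nn
from transformers import AutoModel

class HybridTransformerLSTM(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", hidden=256, n_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)   # semantic token embeddings
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)               # per-token attention score
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        seq, _ = self.lstm(tokens)                          # sequential dependencies
        scores = self.attn(seq).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)             # inspectable token weights
        pooled = (weights.unsqueeze(-1) * seq).sum(dim=1)   # attention-weighted pooling
        return self.head(pooled), weights                   # logits + attention map
```

Returning the attention weights alongside the logits is what supports the explainability claim: the weights indicate which tokens the model relied on for a given prediction.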
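The privacy property rests on federated training, in which raw text never leaves the client and only model parameters are aggregated. The sketch below shows one standard FedAvg-style round under that assumption; the abstract does not specify the paper's exact aggregation protocol, and `client_loaders` and all hyperparameters here are hypothetical.

```python
# Minimal FedAvg-style round (assumption: standard federated averaging,
# not necessarily the authors' exact aggregation scheme).
import copy
import torch

def federated_round(global_model, client_loaders, local_epochs=1, lr=1e-4):
    """Each client trains locally; only model weights leave the device."""
    client_states, client_sizes = [], []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.Adam(local.parameters(), lr=lr)
        local.train()
        for _ in range(local_epochs):
            for input_ids, attention_mask, labels in loader:  # data stays local
                opt.zero_grad()
                logits, _ = local(input_ids, attention_mask)
                loss = torch.nn.functional.cross_entropy(logits, labels)
                loss.backward()
                opt.step()
        client_states.append(local.state_dict())
        client_sizes.append(len(loader.dataset))

    # Weighted average of client parameters, proportional to local data size.
    total = sum(client_sizes)
    avg = {k: sum(s[k].float() * (n / total)
                  for s, n in zip(client_states, client_sizes))
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```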