Federated Learning for Secure AI Models: Enhancing Privacy and Robustness in Decentralized Environments
Abstract
Federated learning (FL) has emerged as a promising approach to training AI models while addressing key concerns such as data privacy, security, and decentralization. Unlike traditional centralized machine learning, where data is aggregated on a central server, FL trains models in a decentralized fashion while the data remains on local devices. This paradigm is particularly relevant for industries where data privacy is critical, such as healthcare, finance, and personal data applications. Despite its advantages, federated learning faces significant challenges in ensuring the security and robustness of AI models against threats including adversarial attacks, model poisoning, and data leakage. This paper explores the potential of federated learning to secure AI models by leveraging its decentralized nature and highlights the security challenges it faces, including threats at the edge, risks to model integrity, and privacy concerns. The paper also reviews state-of-the-art techniques for enhancing federated learning security, including secure aggregation, differential privacy, and federated adversarial training. By examining current research and practical applications, it provides insights into the future of federated learning for secure AI model development.
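To make the paradigm concrete, the sketch below simulates one possible federated averaging (FedAvg) round in NumPy. The setup is illustrative rather than drawn from this paper: the three simulated clients, the least-squares local objective, and the sample-count weighting are assumptions chosen only to show that clients train locally and share nothing but model weights with the server.

```python
# Minimal federated averaging (FedAvg) sketch.
# Illustrative assumptions: a least-squares local objective and
# weight averaging proportional to each client's sample count.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain gradient descent on a
    least-squares loss. The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, client_data):
    """One communication round: every client trains locally, then the
    server averages the returned weights, weighted by sample counts."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three simulated clients, each holding its own private dataset.
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))
    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, clients)
    print("estimated weights:", w)  # approaches [2.0, -1.0]
```

In this sketch the server observes only weight vectors, which illustrates the privacy motivation discussed above; the security techniques reviewed later (secure aggregation, differential privacy, federated adversarial training) would be layered on top of exactly this exchange.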