Federated Learning: A Potentially Effective Method for Improving the Efficiency and Privacy of Machine Learning
Keywords:
Federated learning, privacy, security, blockchain, adversarial attack, decentralized federated learning, verifiable federated learning

Abstract
A new AI paradigm called federated learning (FL) decentralizes data and enhances privacy by bringing training directly to the user's device. However, additional privacy concerns arise during the exchange of model parameters between the server and clients. Integrating FL privacy solutions at the edge can increase computational and communication costs, which can degrade learning performance metrics and data utility. To promote the best trade-offs between FL privacy and other performance-related application requirements, including accuracy, privacy, convergence, cost, computational security, and communication, this study offers a thorough research overview of key techniques and metrics. Balancing privacy against these other criteria in real-world federated learning deployments is the focus of this paper, which also explores quantitative methodologies for evaluating privacy in FL. To mitigate server-related risks, decentralized federated learning removes the server from the network and uses blockchain technology to compensate for its absence. However, this benefit comes at the expense of exposing the system to additional privacy risks, so a thorough security analysis is required in this new paradigm. This survey examines various security mechanisms and addresses potential adversaries and threats in decentralized federated learning. The verifiability and trustworthiness of decentralized federated learning are also considered.
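To make the client/server parameter-exchange step concrete, the sketch below shows one FedAvg-style training loop in plain NumPy. The toy linear model, the synthetic client data, and the helper names (local_train, federated_average) are illustrative assumptions for this abstract, not the specific methods surveyed in the paper.

```python
# Minimal sketch of the parameter exchange described above: each client
# trains locally on data that never leaves its device, and the server
# aggregates only the returned parameters (FedAvg-style weighted mean).
import numpy as np

def local_train(global_weights, client_data, lr=0.1, epochs=5):
    """Hypothetical client update: a few gradient steps on local data
    for a linear model with squared loss."""
    w = global_weights.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average client parameters weighted by
    the amount of local data each client holds."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Synthetic data kept on three separate "devices"; only parameters travel.
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + 0.05 * rng.normal(size=50)
        clients.append((X, y))

    global_w = np.zeros(2)
    for _round in range(20):  # communication rounds between server and clients
        updates = [local_train(global_w, c) for c in clients]
        global_w = federated_average(updates, [len(c[1]) for c in clients])
    print("estimated weights:", global_w)
```

The privacy tension discussed in the abstract sits in the line where `updates` are sent to the aggregator: even though raw data stays local, the shared parameters can leak information, which is what edge-level privacy mechanisms (and, in the decentralized variant, blockchain-based aggregation) aim to address.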
License
Copyright (c) 2024 Nishant Jakhar, Sajjan Singh

This work is licensed under a Creative Commons Attribution 4.0 International License.