FedMcon: an adaptive aggregation method for federated learning via meta controller

Federated learning has emerged as a machine learning paradigm that enables collaborative training of deep models across decentralized clients while preserving data privacy. However, the vanilla federated averaging algorithm (FedAvg) struggles with heterogeneous and unknown client data distributions. Its aggregation rule, a fixed weighted linear combination of client updates, often fails to adapt to the varied dynamics of different scenarios, settings, and data distributions in federated learning, leading to slow convergence and degraded generalization performance.
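For context, the weighted linear combination that FedAvg uses for aggregation can be sketched as follows. This is a minimal illustration in plain Python, not the paper's method; the client parameter vectors and sample counts are hypothetical placeholders.

```python
def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client's contribution is fixed by its data volume,
            # regardless of how useful its update actually is.
            global_weights[i] += (n / total) * w[i]
    return global_weights

# Example: three clients with unequal data volumes (illustrative values)
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
print(fedavg_aggregate(clients, sizes))
```

Because the combination coefficients depend only on dataset sizes, this rule cannot react to distribution shift or client heterogeneity, which is the limitation the abstract highlights.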
