Congruent Learning for Self-Regulated Federated Learning in 6G
Blog Article
Future 6G networks are expected to be AI-native, with distributed machine learning functionalities responsible for improving and automating a variety of network- and service-management tasks. To enable a privacy-preserving approach to distributed learning, federated learning (FL) has become prevalent in the communication-and-networking domain. However, for efficient management of the networks, FL needs to be automated, requiring minimal hyperparameter tuning.
An outstanding challenge in automating FL is handling overfitting. Existing techniques tackle overfitting via regularization heuristics that rely on hyperparameter tuning and therefore presume the availability of representative validation data. In dynamic and heterogeneous network environments, however, this assumption is limiting.
Even when validation data can be assumed to exist, hyperparameter tuning adds communication and compute overhead that grows prohibitively as the federation scales. Here, we propose congruent federated learning (CFL), a self-regulated learning method that is robust to overfitting and achieves this robustness without relying on hyperparameter tuning. CFL employs a self-taught regularization mechanism that keeps local models from overfitting to their local data.
This is enabled by introducing congruent activation functions, a class of similarity-promoting activation functions that discourage local models from drifting excessively far from the global (federated) model. Across four networking use cases spanning several tasks, reflecting different profiles of data heterogeneity and limited data availability, we show that CFL greatly reduces overfitting and improves performance in nearly all cases, with a relative gain of about 21% averaged across all use cases.
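The exact form of the congruent activations is not spelled out here, but the general idea can be sketched in a few lines of PyTorch. In the illustrative snippet below, each layer's local pre-activation is gated by its cosine similarity to the pre-activation produced by a frozen copy of the global model; the class name CongruentReLU and this specific gating form are assumptions made for illustration, not the method's actual definition.

```python
import torch
import torch.nn.functional as F


class CongruentReLU(torch.nn.Module):
    """Hypothetical similarity-promoting activation (illustrative only).

    Gates the local pre-activation by its cosine similarity to the
    frozen global model's pre-activation, attenuating local features
    that diverge from the global (federated) model.
    """

    def forward(self, z_local: torch.Tensor, z_global: torch.Tensor) -> torch.Tensor:
        # Per-sample agreement between local and global pre-activations,
        # mapped from [-1, 1] to a gate in [0, 1].
        sim = F.cosine_similarity(z_local, z_global.detach(), dim=-1)
        gate = 0.5 * (1.0 + sim).unsqueeze(-1)
        # Divergent local features are suppressed, discouraging the local
        # model from drifting far from the global one during training.
        return gate * F.relu(z_local)


# Usage on a client during a local round: forward the same input through
# the trainable local layer and a frozen snapshot of the global layer.
local_layer = torch.nn.Linear(32, 16)
global_layer = torch.nn.Linear(32, 16)  # frozen copy of the global weights
for p in global_layer.parameters():
    p.requires_grad_(False)

x = torch.randn(8, 32)
h = CongruentReLU()(local_layer(x), global_layer(x))
```

Because the gate shrinks toward zero as local and global representations disagree, gradient descent on the ordinary task loss implicitly penalizes divergence, with no tuned regularization coefficient, which matches the self-regulated behavior described above.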