Robustness and Reliability of Federated Learning (Open Access, 2022-12-21)
Eslami Abyane, Amin; Hemmati, Hadi; Abou-Zeid, Hatem; Wu, Huaqing

Federated Learning (FL) is a recently introduced distributed learning scheme designed with users' privacy in mind: clients' data is never collected during training. An FL round starts with the server sending a model to the clients; each client trains that model on its local data and sends the updated model back to the server (only the trained parameter values of the model, not the actual feature values from the client's local dataset). The server then aggregates all the received values and updates the global model. This process repeats until the model converges. Since clients may become unavailable during FL training (e.g., due to their movement), and server-client communication is extremely costly, only a fraction of clients is selected for training at each round using a client selection technique. Although FL is excellent at preserving privacy, it still faces many challenges, of which we focus on two of the most important: robustness and reliability. In FL, attacks or faults may occur on any client, and it is crucial that the system is robust to these problems. Furthermore, clients may be unreliable and become unavailable at any point, so FL needs to withstand these availability changes while remaining effective and efficient. To address the robustness challenges of FL, we perform a large-scale empirical study from multiple angles of attacks, simulated faults (via mutation operators), and aggregation (defense) methods, evaluated on multiple datasets for a total of 496 configurations. Our results show that most faults (mutators) have a negligible effect on the final trained model when leveraging existing aggregators, but this is not the case for all attacks. However, the most robust FL aggregator depends on the attack type and the dataset.
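The round structure described above (server broadcasts, clients train locally, server aggregates parameters) can be sketched as a weighted parameter average in the style of the well-known FedAvg aggregator. This is a minimal illustration, not the thesis's specific implementation; the function name and toy values are ours.

```python
import numpy as np

def fedavg_round(client_updates, client_sizes):
    """One aggregation step: weighted average of client parameter arrays.

    client_updates: parameter arrays returned by the selected clients.
    client_sizes: number of local training samples per client (the weights).
    """
    weights = np.array(client_sizes, dtype=float)
    weights /= weights.sum()
    # Weighted sum of the clients' trained parameters becomes the new global model.
    return sum(w * p for w, p in zip(weights, client_updates))

# Toy usage: three clients, a one-parameter "model".
new_global = fedavg_round(
    client_updates=[np.array([1.0]), np.array([2.0]), np.array([4.0])],
    client_sizes=[10, 10, 20],
)
# Clients holding more data pull the global model further toward their update.
```

Only the parameter arrays cross the network here, matching the privacy property the abstract emphasizes: the raw feature values stay on each client.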
Therefore, we propose a simple ensemble of aggregators and show that it yields a more robust solution than any single aggregator, being the best choice in 75% of the cases. To analyze the reliability challenges of FL, we consider multiple client selection techniques and propose the first availability-aware selection strategy, called MDA. The results show that our approach makes learning up to 6.5% faster than vanilla FL. Finally, we show that resource heterogeneity-aware selection techniques are effective but become even better when combined with our approach, making the final solution up to 16% faster than state-of-the-art selectors.
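One way an ensemble of aggregators could work is to run several standard robust aggregators (mean, coordinate-wise median, trimmed mean) and combine their outputs, e.g., coordinate-wise. This is a speculative sketch under that assumption; the thesis's actual combination rule may differ, and all function names here are ours.

```python
import numpy as np

def mean_agg(updates):
    return np.mean(updates, axis=0)

def median_agg(updates):
    # Coordinate-wise median: a classic robust aggregator.
    return np.median(updates, axis=0)

def trimmed_mean_agg(updates, trim=1):
    # Drop the `trim` smallest and largest values per coordinate, then average.
    s = np.sort(updates, axis=0)
    return np.mean(s[trim:len(updates) - trim], axis=0)

def ensemble_agg(updates, aggregators):
    # Combine the candidate aggregators' outputs coordinate-wise (median here).
    outputs = np.stack([agg(updates) for agg in aggregators])
    return np.median(outputs, axis=0)

# Four clients, two model parameters; the last client sends a poisoned update.
updates = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1], [50.0, -40.0]])
robust = ensemble_agg(updates, [mean_agg, median_agg, trimmed_mean_agg])
```

In this toy run the plain mean is dragged far off by the attacker, while the ensemble output stays near the honest clients' values, illustrating why combining aggregators can hedge against the attack-dependent weaknesses of any single one.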
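MDA's exact selection criterion is not given in this abstract; as a generic illustration of availability-aware selection, one could bias the per-round sample toward clients with higher estimated availability. Everything below (the function, the availability estimates) is a hypothetical sketch, not MDA itself.

```python
import random

def availability_aware_select(clients, availability, k, seed=0):
    """Pick k clients for a round, weighted by estimated availability.

    availability: client id -> estimated probability the client stays
    reachable through the round (a hypothetical score; MDA's actual
    criterion may differ).
    """
    rng = random.Random(seed)
    pool = list(clients)
    chosen = []
    for _ in range(min(k, len(pool))):
        weights = [availability[c] for c in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]  # sample w/o replacement
        chosen.append(pick)
        pool.remove(pick)
    return chosen

# A client estimated to be unavailable (weight 0.0) is never selected.
chosen = availability_aware_select(["a", "b", "c"], {"a": 1.0, "b": 1.0, "c": 0.0}, k=2)
```

Skipping likely dropouts avoids wasted rounds, which is one plausible source of the wall-clock speedups reported for MDA over vanilla FL's uniform selection.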