Ibrahim Musa1, Shaoxi Teng1, Xutong Mu1,*
1School of Computer Science and Technology, Xidian University, Xi’an, 710071, China
*Corresponding author
Federated learning (FL) is susceptible to Byzantine attacks, in which malicious clients corrupt local data or upload adversarial updates to undermine model training. Many existing defense methods rely on data homogeneity assumptions or external prior datasets. To overcome these issues, we present FedCWA, a Byzantine-robust federated learning framework based on credibility-weighted aggregation. FedCWA introduces ProfDiff, a technique that generates a fair proxy dataset (PDFD) on the server from client class prototypes; this dataset represents the global data distribution and eliminates the dependency on external prior datasets. By analyzing the similarity of client prediction behaviors on PDFD, we construct a logits similarity matrix based on cosine similarity, enabling fine-grained assessment of client credibility. Based on the assessment results, FedCWA applies a dynamic weight optimization mechanism that adaptively adjusts aggregation weights to suppress the influence of malicious clients. Comprehensive experiments on multiple benchmark datasets under a variety of Byzantine attack scenarios demonstrate that FedCWA consistently outperforms existing state-of-the-art defense methods, achieving higher accuracy, more stable convergence, and greater resilience in heterogeneous federated learning settings. Theoretical analysis further substantiates the robustness guarantees of our method, establishing FedCWA as an effective strategy for protecting federated learning.
Federated learning, Byzantine attacks, Credibility-weighted aggregation, Malicious clients
Ibrahim Musa, Shaoxi Teng, and Xutong Mu (2025). FedCWA: Credibility-Weighted Aggregation for Byzantine-Robust Federated Learning. Journal of Networking and Network Applications, Volume 5, Issue 2, pp. 77–84. https://doi.org/10.33969/J-NaNA.2025.050203.
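To make the aggregation pipeline summarized in the abstract concrete, the following is a minimal sketch of credibility-weighted aggregation, not the paper's implementation. It assumes each client's logits on a shared proxy dataset are already available as a NumPy array; all function names (cosine_similarity_matrix, credibility_weights, aggregate) and the synthetic data are illustrative assumptions, and ProfDiff's prototype-based generation of the proxy dataset is omitted.

```python
# Sketch of credibility-weighted aggregation (illustrative only):
# client logits on a shared proxy dataset are compared via cosine
# similarity, credibility scores come from each client's average
# similarity to its peers, and aggregation weights follow.

import numpy as np

def cosine_similarity_matrix(logits):
    """logits: (n_clients, n_proxy_samples, n_classes) -> (n_clients, n_clients)."""
    flat = logits.reshape(logits.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def credibility_weights(logits):
    """Average similarity to the other clients, clipped at zero, normalized."""
    sim = cosine_similarity_matrix(logits)
    n = sim.shape[0]
    # Exclude self-similarity (the diagonal) from each client's score.
    cred = (sim.sum(axis=1) - np.diag(sim)) / (n - 1)
    cred = np.clip(cred, 0.0, None)  # dissimilar clients get zero weight
    total = cred.sum()
    return cred / total if total > 0 else np.full(n, 1.0 / n)

def aggregate(updates, weights):
    """Weighted average of flattened client model updates."""
    return np.tensordot(weights, updates, axes=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(0, 1, (16, 10))                     # shared "true" behavior
    honest = base + 0.1 * rng.normal(0, 1, (8, 16, 10))   # 8 honest clients
    byzantine = rng.normal(0, 1, (2, 16, 10))             # 2 Byzantine clients
    logits = np.concatenate([honest, byzantine])
    w = credibility_weights(logits)
    print("weights:", np.round(w, 3))                     # Byzantine weights shrink
    updates = rng.normal(0, 1, (10, 100))
    global_update = aggregate(updates, w)
```

Clipping negative credibilities at zero reflects the intuition that a client whose predictions run counter to the peer consensus should receive no aggregation weight; FedCWA's dynamic weight optimization mechanism refines these weights further than this simple normalization.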