Decentralized Byzantine robust algorithm based on model aggregation

Zhen LU, Jianye LI, Yunquan DONG*

School of Electronics and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
References

1  MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data [C]// Artificial Intelligence and Statistics. Fort Lauderdale: PMLR, 2017: 1273-1282.

2  LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks [J]. Proceedings of Machine Learning and Systems, 2020, 2: 429-450.

3  GHOLAMI A, TORKZABAN N, BARAS J S, et al. Joint mobility-aware UAV placement and routing in multi-hop UAV relaying systems [C]// Ad Hoc Networks: 12th EAI International Conference. Paris: Springer International Publishing, 2021: 55-69.

4  GAO H, HUANG H. Periodic stochastic gradient descent with momentum for decentralized training [EB/OL]. (2020-08-24). https://arxiv.org/abs/2008.10435.

5  LI X, YANG W, WANG S, et al. Communication-efficient local decentralized SGD methods [EB/OL]. (2021-04-05). https://doi.org/10.48550/arXiv.1910.09126.

6  LU S, ZHANG Y, WANG Y. Decentralized federated learning for electronic health records [C]// 2020 54th Annual Conference on Information Sciences and Systems. Princeton: IEEE, 2020: 1-5.

7  YU H, JIN R, YANG S. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization [C]// International Conference on Machine Learning. Long Beach: PMLR, 2019: 7184-7193.

8  LAMPORT L, SHOSTAK R, PEASE M. The Byzantine generals problem [M]// Concurrency: the Works of Leslie Lamport. New York: Association for Computing Machinery, 2019: 203-226.

9  DAMASKINOS G, GUERRAOUI R, PATRA R, et al. Asynchronous Byzantine machine learning (the case of SGD) [C]// International Conference on Machine Learning. Stockholm: PMLR, 2018: 1145-1154.

10  CHEN Y, SU L, XU J. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent [J]. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 2017, 1(2): 1-25.

11  YIN D, CHEN Y, KANNAN R, et al. Byzantine-robust distributed learning: towards optimal statistical rates [C]// International Conference on Machine Learning. Stockholm: PMLR, 2018: 5650-5659.

12  XIE C, KOYEJO O, GUPTA I. Phocas: dimensional Byzantine-resilient stochastic gradient descent [EB/OL]. (2018-05-23). https://doi.org/10.48550/arXiv.1805.09682.

13  XIE C, KOYEJO O, GUPTA I. Generalized Byzantine-tolerant SGD [EB/OL]. (2018-05-23). https://doi.org/10.48550/arXiv.1802.10116.

14  BLANCHARD P, EL MHAMDI E M, GUERRAOUI R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: Curran Associates Inc., 2017: 118-128.

16  XIE C, KOYEJO S, GUPTA I. Zeno++: robust fully asynchronous SGD [C]// International Conference on Machine Learning. Vienna: PMLR, 2020: 10495-10503.

17  SO J, GÜLER B, AVESTIMEHR A S. Byzantine-resilient secure federated learning [J]. IEEE Journal on Selected Areas in Communications, 2020, 39(7): 2168-2181.

18  WANG H, MUÑOZ-GONZÁLEZ L, EKLUND D, et al. Non-IID data re-balancing at IoT edge with peer-to-peer federated learning for anomaly detection [C]// Proceedings of the 14th ACM Conference on Security and Privacy in Wireless and Mobile Networks. New York: Association for Computing Machinery, 2021: 153-163.

19  KANG J, XIONG Z, NIYATO D, et al. Reliable federated learning for mobile networks [J]. IEEE Wireless Communications, 2020, 27(2): 72-80. doi: 10.1109/MWC.001.1900119.

20  GHOLAMI A, TORKZABAN N, BARAS J S. Trusted decentralized federated learning [C]// 2022 IEEE 19th Annual Consumer Communications and Networking Conference (CCNC). Las Vegas: IEEE, 2022: 1-6.

21  ZHAO Y, ZHAO J, JIANG L, et al. Privacy-preserving blockchain-based federated learning for IoT devices [J]. IEEE Internet of Things Journal, 2020, 8(3): 1817-1829.

22  LU Y, HUANG X, ZHANG K, et al. Blockchain empowered asynchronous federated learning for secure data sharing in internet of vehicles [J]. IEEE Transactions on Vehicular Technology, 2020, 69(4): 4298-4311. doi: 10.1109/TVT.2020.2973651.

23  XIAO Dan. Research on resisting Sybil and Byzantine attacks in decentralized federated learning [D]. Xi'an: Xidian University, 2022. (in Chinese)

24  LI Liping. Research on distributed Byzantine robust optimization algorithms based on model aggregation [D]. Hefei: University of Science and Technology of China, 2020. (in Chinese)

25  POLYAK B T. Gradient methods for the minimisation of functionals [J]. USSR Computational Mathematics and Mathematical Physics, 1963, 3(4): 864-878. doi: 10.1016/0041-5553(63)90382-3.

26  LIU Tieyan, CHEN Wei, WANG Taifeng, et al. Distributed machine learning: algorithms, theory and practice [M]. Beijing: China Machine Press, 2018. (in Chinese)

27  KRIZHEVSKY A. Learning multiple layers of features from tiny images [D]. Toronto: University of Toronto, 2009.

28  DENG L. The MNIST database of handwritten digit images for machine learning research [best of the web] [J]. IEEE Signal Processing Magazine, 2012, 29(6): 141-142. doi: 10.1109/MSP.2012.2211477.

29  CHEN T, LI M, LI Y, et al. MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems [EB/OL]. (2015-12-03). https://doi.org/10.48550/arXiv.1512.01274.