Doctor of Philosophy (Ph.D.)
Training a large-scale deep neural network on a single machine becomes increasingly difficult as the network model grows in complexity. Distributed training provides an efficient solution, but it exposes the participating workers to Byzantine attacks: some workers may cheat when uploading gradients or weights to the central server, so the information the server receives is not always the true result computed by the workers. To address this problem, we investigate Byzantine problems in distributed machine learning and defend against such attacks in three scenarios: i) classic distributed machine learning; ii) federated learning; and iii) quantum federated learning.

To defend against Byzantine attacks in distributed machine learning, we propose two algorithms designed for both effectiveness and efficiency: FABA (Fast Aggregation against Byzantine Attacks) and VBOR (Variance-Based Outlier Removal). Both are based on the idea of removing outliers from the uploaded gradients so that the aggregated gradient stays close to the true gradient. FABA is efficient and effective against Byzantine attacks, while VBOR targets large-scale distributed machine learning. We show the convergence of these algorithms, and our experiments demonstrate that they achieve performance similar to the non-Byzantine case with higher efficiency than previous algorithms.

To defend against Byzantine attacks in federated learning, we first identify two differences from the classic setting: each worker keeps its own non-i.i.d. private dataset, and malicious workers may form the majority in some iterations. We then propose ToFi, a novel reference dataset-based two-filter algorithm, to defend against Byzantine attacks in federated learning. Our experiments highlight the effectiveness of ToFi compared with previous algorithms in various environments.
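The shared core idea of FABA and VBOR, removing outlier gradients before averaging, can be illustrated with a minimal sketch. This is not the exact FABA or VBOR procedure (their selection criteria are defined in the dissertation itself); it simply discards, for an assumed bound on the number of Byzantine workers, the gradient farthest from the current mean:

```python
import numpy as np

def robust_aggregate(gradients, num_byzantine):
    """Aggregate worker gradients by iteratively discarding outliers.

    Illustrative sketch of the outlier-removal idea only, not the exact
    FABA/VBOR criteria. `gradients` is a list of 1-D numpy arrays, one
    per worker; `num_byzantine` is an assumed upper bound on the number
    of faulty workers.
    """
    grads = [np.asarray(g, dtype=float) for g in gradients]
    for _ in range(num_byzantine):
        center = np.mean(grads, axis=0)
        # Drop the gradient farthest from the current mean.
        dists = [np.linalg.norm(g - center) for g in grads]
        grads.pop(int(np.argmax(dists)))
    return np.mean(grads, axis=0)

# Honest workers report gradients near [1, 1]; one Byzantine worker lies.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = [np.array([100.0, -100.0])]
agg = robust_aggregate(honest + byzantine, num_byzantine=1)
```

After the Byzantine gradient is removed, the aggregate is simply the mean of the honest gradients, which is close to the true gradient.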
For quantum federated learning, we borrow the core idea of federated learning to propose QuantumFed, a quantum federated learning framework in which multiple quantum nodes with local quantum data collaborate. Simulated experiments show the feasibility and robustness of the framework. We then extend the Byzantine problem to the QuantumFed framework and examine how our previously proposed algorithms, FABA and ToFi, perform in this setting alongside other existing algorithms.
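The "core idea of federated learning" borrowed here is weighted averaging of locally trained parameters across nodes. A minimal classical sketch follows (an assumption for illustration: QuantumFed itself aggregates the parameters of local quantum circuits, not classical weight vectors):

```python
import numpy as np

def federated_average(local_weights, num_samples):
    """Weighted average of locally trained parameters (FedAvg-style).

    Minimal classical sketch of the aggregation idea that federated
    learning frameworks build on; the quantum setting would average
    parameters of local quantum models instead.
    `local_weights`: list of 1-D numpy arrays, one per node.
    `num_samples`: local dataset sizes, used as averaging weights.
    """
    stacked = np.stack([np.asarray(w, dtype=float) for w in local_weights])
    coeffs = np.array(num_samples, dtype=float) / float(sum(num_samples))
    # Each node's contribution is proportional to its data size.
    return coeffs @ stacked

# Two nodes with different data sizes: the larger node dominates.
w_global = federated_average(
    [np.array([0.0, 2.0]), np.array([2.0, 0.0])],
    num_samples=[1, 3],
)
```

With weights 1/4 and 3/4, the global parameters land at [1.5, 0.5], three quarters of the way toward the larger node's model.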
© The Author
Xia, Qi, "Distributed Byzantine Tolerant Machine Learning" (2021). Dissertations, Theses, and Masters Projects. William & Mary. Paper 1638386746.