Netinfo Security ›› 2023, Vol. 23 ›› Issue (12): 10-20.doi: 10.3969/j.issn.1671-1122.2023.12.002


A Hierarchical Federated Learning Framework Based on Shared Dataset and Gradient Compensation

LIU Jiqiang, WANG Xuewei, LIANG Mengqing, WANG Jian

  1. Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing 100044, China
  • Received: 2023-10-07  Online: 2023-12-10  Published: 2023-12-13

Abstract:

Federated learning (FL) enables vehicles to retain data locally for model training, enhancing privacy. However, owing to variations in conditions such as onboard sensors and driving routes, vehicles participating in FL may exhibit different data distributions, which reduces model generalization and makes convergence harder. To ensure real-time performance, asynchronous stochastic gradient descent (SGD) techniques are widely employed in the Internet of Vehicles; however, gradient delay can make model training inaccurate. To address these challenges, this paper proposes a hierarchical FL framework based on shared datasets and gradient compensation. The framework uses shared datasets and an aggregation method weighted by ReLU values to reduce model bias, and compensates asynchronous SGD by approximating the original loss function with a Taylor expansion of the gradient function. Experimental results on the MNIST and CIFAR-10 datasets show that, compared with FedAVG, MOON, and HierFAVG, the proposed method improves average accuracy by 13.8%, 2.2%, and 3.5%, respectively, while its time cost is only half that of either synchronous or asynchronous SGD.
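As a rough illustration of the gradient-compensation idea described above: in asynchronous SGD a worker's gradient is computed at stale weights, and a first-order Taylor expansion of the gradient function can correct it toward the current weights. The sketch below follows the common delay-compensated ASGD approximation, in which the Hessian is replaced by the cheap elementwise surrogate g ⊙ g; the function name, `lam`, and all variable names are illustrative and not taken from the paper.

```python
import numpy as np

def delay_compensated_gradient(stale_grad, stale_weights, current_weights, lam=0.5):
    """Approximate the gradient at the current weights from a stale one.

    First-order Taylor expansion of the gradient function:
        g(w_cur) ~= g(w_stale) + H(w_stale) @ (w_cur - w_stale)
    with the Hessian H approximated elementwise by g(w_stale) * g(w_stale),
    a common low-cost surrogate; lam scales the compensation term.
    """
    return stale_grad + lam * stale_grad * stale_grad * (current_weights - stale_weights)

# Example: a gradient computed at stale weights, compensated toward the
# weights the server holds now.
g_stale = np.array([1.0, 2.0])
w_stale = np.zeros(2)
w_current = np.array([0.1, 0.1])
g_comp = delay_compensated_gradient(g_stale, w_stale, w_current)
```

When `lam = 0` this degenerates to plain asynchronous SGD (the stale gradient is used as-is); larger values trade variance for a stronger correction of the delay.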

Key words: Internet of Vehicles, federated learning, gradient compensation, heterogeneous data, asynchronous communication
