With the successive promulgation of data privacy protection laws and regulations, the exposure of private data in the traditional centralized learning model has become an important factor restricting the development of artificial intelligence. Federated learning was proposed to solve this problem; however, existing federated learning schemes still suffer from issues such as model parameters leaking sensitive information and reliance on a trusted third-party server. This paper proposes a new parameter-masking federated learning privacy-preserving scheme that can resist attacks by the server, attacks by users, and attacks in which the server colludes with fewer than t users. The scheme consists of three protocols: key exchange, parameter masking, and disconnection handling. Each user trains the model locally and uploads the masked model parameters; after aggregating the model parameters, the server obtains only the aggregated result of the masked parameters. Experiments show that for 16-byte input values, our protocol incurs a 1.44× communication expansion over sending data in the clear for 2^7 users and a 2^20-dimensional vector, and has lower communication cost than existing schemes.
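The pairwise cancellation idea behind parameter masking can be illustrated with a short sketch. This is an illustrative Python toy under assumed names and simplifications, not the paper's exact protocol: the seed derivation stands in for the key-exchange protocol, and the disconnection-handling and threshold-t recovery steps are omitted. Each pair of users derives a shared seed and adds a pseudorandom mask with opposite signs for the two users, so the masks cancel when the server sums all uploads.

```python
# Sketch of pairwise parameter masking (illustrative only).
import numpy as np

DIM = 8          # toy model dimension
NUM_USERS = 4    # number of participating users


def pairwise_mask(seed: int, dim: int) -> np.ndarray:
    """Expand a shared seed into a pseudorandom mask vector."""
    return np.random.default_rng(seed).integers(0, 2**16, size=dim).astype(np.int64)


def mask_update(uid: int, update: np.ndarray, shared_seeds: dict) -> np.ndarray:
    """Add +mask for peers with a larger id, -mask for peers with a smaller id."""
    masked = update.copy()
    for peer, seed in shared_seeds[uid].items():
        mask = pairwise_mask(seed, update.size)
        masked += mask if uid < peer else -mask
    return masked


# Toy shared seeds: in the real scheme these come from the key-exchange protocol.
seeds = {u: {} for u in range(NUM_USERS)}
for u in range(NUM_USERS):
    for v in range(u + 1, NUM_USERS):
        s = hash((u, v)) & 0xFFFFFFFF
        seeds[u][v] = s
        seeds[v][u] = s

updates = [np.arange(DIM, dtype=np.int64) + u for u in range(NUM_USERS)]
masked = [mask_update(u, updates[u], seeds) for u in range(NUM_USERS)]

# The server sees only masked uploads; their sum equals the true aggregate
# because every pairwise mask appears once with + and once with -.
assert np.array_equal(sum(masked), sum(updates))
print("aggregate:", sum(masked))
```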