With the rapid development and widespread application of machine learning, data privacy has become a significant concern. Membership inference attacks, which determine whether a specific data sample was used to train a model, pose a serious privacy threat, particularly in sensitive domains such as healthcare and finance. Existing membership inference attacks exhibit limited attack performance, and various defense mechanisms, including differential privacy and knowledge distillation, have been employed to mitigate their threat to individual privacy. This paper conducts an in-depth analysis of various black-box membership inference attacks against classification models and proposes an ensemble-learning-based membership inference attack that achieves stronger attack performance and is harder to defend against. First, experiments analyze the relationships among the target model's generalization gap, the attack success rate, and the differences between attacks. Second, representative membership inference attacks are selected based on an analysis of these differences. Finally, ensemble techniques are used to combine the selected attacks into a single, stronger attack. Experiments show that, compared with existing membership inference attacks, the proposed ensemble-based attack achieves stronger and more stable attack performance across a wide range of models and datasets. An in-depth analysis of the attack methodology, including factors such as datasets, model architecture, and generalization gap, also provides valuable insights for defending against membership inference attacks.
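To make the ensemble idea concrete, the sketch below illustrates one common way such attacks can be combined: several black-box attack signals (confidence-, entropy-, and loss-based scores are used here purely as illustrative stand-ins, not the paper's actual selected attacks) each produce a per-sample membership score, and a soft-voting ensemble averages them before thresholding. The function names, score definitions, and threshold are assumptions for illustration only.

```python
import numpy as np

# Illustrative per-sample membership scores in [0, 1] from three hypothetical
# black-box attacks; these are common baselines, not the paper's components.
def confidence_attack(posteriors):
    # Higher top-class confidence -> more likely a training member.
    return posteriors.max(axis=1)

def entropy_attack(posteriors):
    # Lower normalized prediction entropy -> more likely a training member.
    eps = 1e-12
    entropy = -(posteriors * np.log(posteriors + eps)).sum(axis=1)
    return 1.0 - entropy / np.log(posteriors.shape[1])

def loss_attack(posteriors, labels):
    # Lower cross-entropy loss on the true label -> more likely a member.
    eps = 1e-12
    loss = -np.log(posteriors[np.arange(len(labels)), labels] + eps)
    return np.exp(-loss)

def ensemble_attack(posteriors, labels, threshold=0.5):
    """Soft-voting ensemble: average per-attack membership scores and
    predict 'member' when the averaged score exceeds the threshold."""
    scores = np.stack([
        confidence_attack(posteriors),
        entropy_attack(posteriors),
        loss_attack(posteriors, labels),
    ])
    return scores.mean(axis=0) >= threshold

# Example: posterior vectors returned by the target model for two queries.
posteriors = np.array([[0.94, 0.03, 0.03],   # confident -> likely member
                       [0.40, 0.35, 0.25]])  # uncertain -> likely non-member
true_labels = np.array([0, 0])
print(ensemble_attack(posteriors, true_labels))  # e.g., [ True False]
```

A stacking variant would instead train a meta-classifier on the per-attack scores; either way, the ensemble is only as diverse as the attacks it combines, which is why the paper's attack-difference analysis precedes the selection step.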