EXPLORING THE POTENTIAL OF ENSEMBLE LEARNING TECHNIQUES TO ENHANCE ACCURACY AND ROBUSTNESS IN COMPLEX REAL-WORLD MACHINE LEARNING APPLICATIONS

Authors

  • Sanat Juneja, Research Scholars Program, Harvard Student Agencies, in collaboration with Learn with Leaders

Keywords:

Machine Learning Models, Boosting, Random Forests, Bagging, Algorithm

Abstract

This research paper investigates the extent to which ensemble learning techniques can improve the accuracy and robustness of machine learning models in complex real-world applications. The paper examines ensemble methods, including boosting, random forests, and bagging, alongside non-ensemble baselines such as SVMs, and evaluates their performance on a range of metrics. The findings highlight the effectiveness of ensemble learning in enhancing model performance, with calibrated boosted trees emerging as the top-performing algorithm across multiple metrics.
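To make the comparison described in the abstract concrete, the following is a minimal sketch (not the paper's actual experimental code) of how bagging, random forests, boosted trees with probability calibration, and an SVM baseline could be evaluated on a range of metrics using scikit-learn. The synthetic dataset, hyperparameters, and chosen metrics are illustrative assumptions only.

from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    BaggingClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a complex real-world dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)

models = {
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                                 random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "boosted_trees": GradientBoostingClassifier(random_state=0),
    # Sigmoid (Platt-style) calibration wrapped around boosted trees, the
    # top performer reported in the abstract (cf. Caruana, 2006).
    "calibrated_boosted_trees": CalibratedClassifierCV(
        GradientBoostingClassifier(random_state=0), method="sigmoid", cv=3),
    # Non-ensemble baseline for comparison.
    "svm_baseline": SVC(probability=True, random_state=0),
}

# Evaluate each model on several metrics via 5-fold cross-validation.
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5,
                            scoring=["accuracy", "roc_auc", "f1"])
    print(f"{name:>26}: "
          f"acc={scores['test_accuracy'].mean():.3f}  "
          f"auc={scores['test_roc_auc'].mean():.3f}  "
          f"f1={scores['test_f1'].mean():.3f}")

Averaging each metric over the folds, as above, is one straightforward way to compare the ensemble methods against the baseline across multiple metrics at once.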

References

I. Caruana, R., & Niculescu-Mizil, A. (2006). An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd International Conference on Machine Learning (ICML). https://www.cs.cornell.edu/~caruana/ctp/ct.papers/caruana.icml06.pdf

II. Mooney, R. J. (2012). Ensemble learning [lecture slides]. SlideServe. https://www.slideserve.com/ata/ensemble-learning

III. Liu, J., & Dietterich, T. G. (2009). Boosting with multi-instance examples. Machine Learning, 78(2-3), 329-351. https://doi.org/10.1007/s10994-009-5152-4

IV. Dietterich, T. G., Lathrop, R. H., & Lozano-Pérez, T. (1997). Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2), 31-71. https://ieeexplore.ieee.org/document/1647630

Published

01-11-2023

How to Cite

Sanat Juneja. (2023). EXPLORING THE POTENTIAL OF ENSEMBLE LEARNING TECHNIQUES TO ENHANCE ACCURACY AND ROBUSTNESS IN COMPLEX REAL-WORLD MACHINE LEARNING APPLICATIONS. International Educational Journal of Science and Engineering, 6(6). Retrieved from https://iejse.com/journals/index.php/iejse/article/view/58