EXPLORING THE POTENTIAL OF ENSEMBLE LEARNING TECHNIQUES TO ENHANCE ACCURACY AND ROBUSTNESS IN COMPLEX REAL-WORLD MACHINE LEARNING APPLICATIONS
Keywords:
Machine Learning Models, Boosting, Random Forests, Bagging, Algorithm
Abstract
This research paper investigates the extent to which ensemble learning techniques can improve the accuracy and robustness of machine learning models in complex real-world applications. It examines several ensemble methods, including boosting, bagging, and random forests, and compares them against non-ensemble baselines such as support vector machines (SVMs) on a range of performance metrics. The findings highlight the effectiveness of ensemble learning in enhancing model performance, with calibrated boosted trees emerging as the top-performing algorithm across multiple metrics.
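The comparison described in the abstract can be sketched in a few lines with scikit-learn. This is an illustrative example only, not the paper's actual experimental setup: the synthetic dataset, the choice of 50 estimators, five-fold cross-validation, and accuracy as the metric are all assumptions.

```python
# Minimal sketch: comparing the ensemble methods named in the abstract
# (bagging, random forests, boosting) against a single decision tree.
# Dataset and hyperparameters are illustrative assumptions, not the
# paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem (assumption: 1000 samples,
# 20 features) standing in for a "complex real-world" dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "boosting": GradientBoostingClassifier(n_estimators=50, random_state=0),
}

# Score each model with 5-fold cross-validated accuracy.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

On most runs of a setup like this, the three ensembles outperform the single decision tree, which is the pattern the abstract attributes to ensemble learning.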
References
I. Caruana, R., & Niculescu-Mizil, A. (2006). An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd International Conference on Machine Learning (ICML). https://www.cs.cornell.edu/~caruana/ctp/ct.papers/caruana.icml06.pdf
II. Mooney, R. J. (2012). Ensemble learning [lecture slides]. SlideServe. https://www.slideserve.com/ata/ensemble-learning
III. Liu, J., & Dietterich, T. G. (2009). Boosting with multi-instance examples. Machine Learning, 78(2-3), 329-351. https://doi.org/10.1007/s10994-009-5152-4
IV. Dietterich, T. G., Lathrop, R. H., & Lozano-Pérez, T. (1997). Solving the multiple instance problem with axis-parallel rectangles. In Proceedings of the Fourteenth International Conference on Machine Learning (ICML) (pp. 179-186). https://ieeexplore.ieee.org/document/1647630
License
Copyright (c) 2023 International Educational Journal of Science and Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.