Ensemble Learning: A Multi-Model Strategy to Reduce Errors in ML

Machine learning models use statistical methods to find patterns in massive amounts of data and learn from the data itself. But the data may contain errors, and those errors can propagate into the machine learning models trained on it.

Machine learning models may be erroneous due to bias, noise, or variance in the data. In such scenarios, applying different models to the same data can produce different results with varying errors, depending on the parameters under consideration. This is where ensemble methods can be applied to find optimal results.

Ensemble learning is a machine learning technique that combines the results of several models to improve overall performance and find more optimal results. It comes in two main types: bagging and boosting.
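
As a minimal sketch of the idea, here is a simple voting ensemble that lets three different models vote on each prediction. This uses scikit-learn and synthetic data, both of which are my assumptions for illustration; the article does not prescribe a library:

```python
# A minimal hard-voting ensemble (scikit-learn is an assumed library choice).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic data so the example runs on its own.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three different models; the ensemble takes a majority vote over their predictions.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=42)),
        ("nb", GaussianNB()),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("Ensemble accuracy:", ensemble.score(X_test, y_test))
```

With hard voting, the ensemble predicts the majority class among the three models, so an error made by one model can be outvoted by the other two.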

Bagging stands for Bootstrap Aggregating. In bagging, each model is trained on a different bootstrap sample of the data. Because every model sees a different subset, bagging is well suited to reducing variance error while avoiding overfitting. Boosting, by contrast, is an iterative technique that re-weights the training examples so that each new model focuses on the cases the previous models got wrong. Boosting is designed to decrease bias error, but it is more prone to overfitting. A sketch of both approaches follows below.
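
To make the contrast concrete, here is a hedged sketch of both techniques, again assuming scikit-learn. BaggingClassifier trains decision trees on bootstrap samples, and AdaBoostClassifier is one concrete boosting algorithm that re-weights examples iteratively; the article itself does not name specific algorithms:

```python
# Bagging vs. boosting side by side (scikit-learn and AdaBoost are assumptions).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bagging: each tree (the default base estimator) is trained on a different
# bootstrap sample of the data, which reduces variance.
bagging = BaggingClassifier(n_estimators=50, random_state=42)
bagging.fit(X_train, y_train)
print("Bagging accuracy:", bagging.score(X_test, y_test))

# Boosting: models are built sequentially, with misclassified examples
# re-weighted so later models focus on them, which reduces bias.
boosting = AdaBoostClassifier(n_estimators=50, random_state=42)
boosting.fit(X_train, y_train)
print("Boosting accuracy:", boosting.score(X_test, y_test))
```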

I found interesting articles on similar topics at:
https://towardsdatascience.com/ensemble-methods-in-machine-learning-what-are-they-and-why-use-them-68ec3f9fef5f
https://towardsdatascience.com/simple-guide-for-ensemble-learning-methods-d87cc68705a2
