
When can overfitting be useful?

Sometimes, overfitting of base models can be desirable when stacking or otherwise ensembling. This assumes the base models overfit in different ways: when you aggregate their (relatively) uncorrelated predictions, you reduce the variance of these very flexible, low-bias models. Note that the base models should only overfit slightly, not simply memorize the training data; you just let them overfit more than you would if you were using a single model on its own. This approach has been used in some of the most competitive Kaggle competitions.
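Here is a minimal sketch of the idea, using deliberately overfit (unpruned) decision trees decorrelated via bootstrap resampling and then averaged; this is essentially bagging rather than full stacking, and the dataset, model choice, and hyperparameters are illustrative assumptions, not from the original answer:

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Synthetic regression problem (an assumption for illustration).
X, y = make_friedman1(n_samples=500, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single unpruned tree: low bias, high variance (it fits the noise).
single = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# Many overfit trees, each trained on a different bootstrap sample so
# their errors are (relatively) uncorrelated; averaging their
# predictions reduces the ensemble's variance.
rng = np.random.default_rng(0)
preds = []
for _ in range(50):
    idx = rng.integers(0, len(X_train), len(X_train))
    tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
    preds.append(tree.predict(X_test))
ensemble_pred = np.mean(preds, axis=0)

print("single tree MSE:", mean_squared_error(y_test, single.predict(X_test)))
print("ensemble MSE:   ", mean_squared_error(y_test, ensemble_pred))
```

On a run like this you would typically see the averaged ensemble beat the single overfit tree, even though every base model individually overfits; the gain comes from the aggregation step, not from any base model being well regularized.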