In today’s class, I learned about cross-validation (in particular, k-fold cross-validation) and bootstrapping.
K-Fold Cross-Validation:
K-fold cross-validation is a core technique in machine learning for estimating model performance. The original dataset is divided into k subsets, or folds, of roughly equal size. The model is then trained and evaluated k times, with each fold taking a turn as the validation set while the remaining k − 1 folds serve as the training data. This way, every data point participates in both training and validation, and averaging the results across the k iterations gives a more reliable estimate of how well the model generalizes to unseen data.
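Here is a minimal sketch of the idea, assuming scikit-learn and its built-in iris dataset; the model and the choice of k = 5 are arbitrary for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)

# Split the data into k = 5 folds; each fold serves once as the validation set.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

# cross_val_score trains and evaluates the model once per fold.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kfold)

# Averaging the per-fold scores gives the cross-validated estimate.
print(f"Per-fold accuracy: {scores}")
print(f"Mean accuracy: {scores.mean():.3f}")
```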
Bootstrapping: Bootstrapping is a resampling technique used for statistical inference. It involves repeatedly drawing random samples, with replacement, from a given dataset; the statistic of interest is computed on each resample, and the spread of those values estimates its sampling variability.
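A minimal sketch with NumPy, assuming we want the bootstrap standard error of a sample mean (the data here is synthetic, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=10, size=200)  # illustrative sample

n_resamples = 1000
boot_means = np.empty(n_resamples)
for i in range(n_resamples):
    # Draw a resample of the same size as the data, with replacement.
    resample = rng.choice(data, size=data.size, replace=True)
    boot_means[i] = resample.mean()

# The spread of the bootstrap means estimates the standard error of the mean.
print(f"Sample mean: {data.mean():.2f}")
print(f"Bootstrap standard error: {boot_means.std(ddof=1):.2f}")
```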
I also watched the videos on the right and wrong ways to do cross-validation and picked up a few things from them. In addition, I worked on my project.