Does k-fold cross validation suffer from ‘peeking’ at the validation dataset?

I’m new to the concept of k-fold cross validation. In the textbook I’ve been referring to, I learned that a model can suffer from peeking if we tune its parameters based on results from the validation set, making the model unable to generalize well.

But isn’t that exactly what we are doing in k-fold cross validation? Aren’t we just training the model on k-1 folds and evaluating it on the held-out fold, so that every fold is used for validation exactly once?

Or is it that because we do this to the whole dataset, the effect of peeking is negated to some extent?
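For concreteness, the procedure I have in mind looks something like this. It’s a minimal pure-Python sketch: `train` and `evaluate` are hypothetical stand-ins for whatever model pipeline you actually use.

```python
# Minimal sketch of k-fold cross validation using only the standard library.
# `train` and `evaluate` are hypothetical stand-ins for a real model pipeline.

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal contiguous folds."""
    fold_size, remainder = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        end = start + fold_size + (1 if i < remainder else 0)
        folds.append(list(range(start, end)))
        start = end
    return folds

def cross_validate(data, k, train, evaluate):
    """Train on k-1 folds and evaluate on the remaining fold, k times over."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i in range(k):
        val_idx = folds[i]  # each fold is the validation set exactly once
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        model = train([data[j] for j in train_idx])
        scores.append(evaluate(model, [data[j] for j in val_idx]))
    return sum(scores) / k  # average score over the k held-out folds
```

As a toy example, with `train` returning the mean of the training data and `evaluate` returning the absolute error between that mean and the validation mean, `cross_validate(list(range(6)), 3, ...)` averages three held-out errors. The point of my question is the loop above: every data point ends up in a validation fold at some stage.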

I’m a bit confused here. I would highly appreciate your input on this. Thanks in advance.

submitted by /u/Ill-Quantity-4933
