---
title: "Overview"
---
# Motivation

1. Estimate predictions for small datasets
2. Evaluate variable or feature importance across many subpopulations of the data
3. Quantify prediction uncertainty and variance
4. Develop classifiers that are evaluated on unseen data
# Main advantages

- Predictions are generated across many different training/validation data splits
- Predictor or feature importances with respect to the dependent variable are generalized over many subpopulations of the data
- No data leakage: predictions are made only on observations held out during training
# Procedural overview

- Monte Carlo simulation splits the data into training and validation sets
- K-fold cross-validation (10 folds by default) on the *training set* is used to select "good" model parameters
- The model with the "good" parameters is refit on the entire training set
- The refitted model predicts the yet-to-be-seen validation set
- Performance metrics are generated from bootstrap resamples (sampling with replacement) of the held-out observation probabilities, as sketched below
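
A minimal Python sketch of this procedure using scikit-learn. The dataset, model, parameter grid, and numbers of repetitions are illustrative assumptions, not this package's actual implementation or defaults:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy data standing in for a real (possibly small) dataset
X, y = make_classification(n_samples=200, random_state=0)

rng = np.random.default_rng(0)
aucs = []
for seed in range(50):  # Monte Carlo repetitions
    # 1. Monte Carlo split into training and validation sets
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.25, random_state=seed
    )
    # 2. Inner 10-fold CV on the training set selects "good" parameters
    search = GridSearchCV(
        LogisticRegression(max_iter=1000),
        param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # illustrative grid
        cv=10,
    )
    # 3. GridSearchCV refits the best model on the entire training set
    search.fit(X_tr, y_tr)
    # 4. Predict the yet-to-be-seen validation set
    proba = search.predict_proba(X_va)[:, 1]
    # 5. Bootstrap (resample with replacement) the held-out predictions
    #    to build a distribution of performance metrics
    for _ in range(100):
        idx = rng.integers(0, len(y_va), len(y_va))
        if len(np.unique(y_va[idx])) == 2:  # AUC needs both classes present
            aucs.append(roc_auc_score(y_va[idx], proba[idx]))

print(f"AUC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```

Because the metric is computed on bootstrap resamples of held-out predictions, the result is a distribution of scores rather than a single point estimate.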
# Related Methods

- Monte Carlo simulation (similar to [ShuffleSplit in sklearn](https://scikit-learn.org/stable/modules/cross_validation.html#shufflesplit))
- Cross-validation ([tidymodels in R](https://www.tmwr.org/resampling.html) and [sklearn in Python](https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation))
- [*mc_cv* from tidymodels in R](https://rsample.tidymodels.org/reference/mc_cv.html), which, however, does not include the additional inner cross-validation described above
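
For comparison, a toy sketch of how ShuffleSplit produces the same style of repeated random splits as the Monte Carlo step above (the data here is purely illustrative):

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(20).reshape(10, 2)  # toy features, stands in for real data

# Each ShuffleSplit iteration yields one Monte Carlo training/validation split
ss = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
for train_idx, val_idx in ss.split(X):
    print(train_idx, val_idx)
```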
# References
- [K-fold and Montecarlo cross-validation vs Bootstrap: a primer](https://nirpyresearch.com/kfold-montecarlo-cross-validation-bootstrap-primer/)
- [Cross-Validation: K Fold vs Monte Carlo](https://towardsdatascience.com/cross-validation-k-fold-vs-monte-carlo-e54df2fc179b)