ROC Curve Explorer

KNN · Bias–Variance Tradeoff · Decision Boundary

Binary Classification

Controls

K Neighbors: 5 (range K=1, overfit, to K=51, underfit)
Training Set Size: 150 (range 30 to 500)
Stats: Current AUC · Test Accuracy · AUC Mean · AUC Std Dev

Feature Space & Decision Boundary

K = 5
Legend: Class 0 train · Class 1 train · Class 0 test · Class 1 test · Decision boundary

ROC Curve

FPR vs TPR
Legend: Current model · Trial curves · Random chance
Bias–Variance: Low K produces a jagged decision boundary and high variance (the trial ROC curves spread widely); high K produces a smooth boundary and high bias. Watch the boundary morph as you drag K.
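How the boundary depends on K can be sketched with a minimal nearest-neighbour classifier in NumPy. The two-class Gaussian dataset and the query points below are illustrative assumptions, not the app's actual data:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    # Pairwise squared distances, shape (n_query, n_train)
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    # Indices of each query point's k nearest neighbours
    nn = np.argsort(d2, axis=1)[:, :k]
    # Binary vote: predict 1 when more than half the neighbours are positive
    return (y_train[nn].mean(axis=1) > 0.5).astype(int)

# Illustrative data: label 1 iff x0 + x1 > 0
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X.sum(axis=1) > 0).astype(int)

queries = np.array([[2.0, 2.0], [-2.0, -2.0]])
print(knn_predict(X, y, queries, k=1))   # K=1: each query follows its single nearest neighbour
print(knn_predict(X, y, queries, k=51))  # K=51: vote pooled over half the training set
```

Dragging the K slider in the app is equivalent to changing `k` here: at K=1 the prediction flips with every nearby training point (variance), while at K=51 it averages over so many neighbours that fine structure is smoothed away (bias).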
Reading the ROC: Each point on the curve corresponds to one classification threshold. AUC summarises discriminability: 1.0 is perfect separation, 0.5 is random chance (the dashed diagonal).
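The threshold-sweep view of the curve, and AUC as its summary, can be sketched as follows. `roc_points` traces one (FPR, TPR) point per threshold; `roc_auc` uses the rank (Mann–Whitney) formulation, which equals the area under that curve. The toy labels and scores are illustrative:

```python
import numpy as np

def roc_points(y_true, scores):
    """One (FPR, TPR) point per distinct score threshold."""
    fpr, tpr = [], []
    for t in np.unique(scores)[::-1]:          # sweep thresholds high to low
        pred = (scores >= t).astype(int)
        tp = ((pred == 1) & (y_true == 1)).sum()
        fp = ((pred == 1) & (y_true == 0)).sum()
        tpr.append(tp / (y_true == 1).sum())
        fpr.append(fp / (y_true == 0).sum())
    return fpr, tpr

def roc_auc(y_true, scores):
    """AUC as P(random positive scored above random negative), ties count 0.5."""
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(roc_points(y, s))
print(roc_auc(y, s))  # 0.75: three of the four positive/negative pairs are ordered correctly
```

A model that ranks every positive above every negative gets AUC 1.0 regardless of the actual score values, which is why AUC measures discriminability rather than calibration.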
100 Trials: Resampling the training data while holding the test set fixed shows how much the ROC curve varies from trial to trial, a direct measure of model variance for the chosen K.
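The trial procedure can be sketched end to end: draw a fresh training sample each trial, score the fixed test set with kNN, and inspect the spread of the resulting AUCs. The Gaussian data generator and the K values compared are illustrative assumptions:

```python
import numpy as np

def make_data(rng, n):
    # Illustrative generator: label depends on x0 + x1 plus label noise
    X = rng.normal(size=(n, 2))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

def knn_scores(X_train, y_train, X_query, k):
    # Score = fraction of positive labels among the k nearest neighbours
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    return y_train[nn].mean(axis=1)

def roc_auc(y_true, scores):
    # Rank formulation: P(random positive scored above random negative)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

rng = np.random.default_rng(1)
X_test, y_test = make_data(rng, 300)           # test set held fixed across trials

results = {}
for k in (1, 25):
    aucs = [roc_auc(y_test, knn_scores(*make_data(rng, 150), X_test, k))
            for _ in range(100)]               # fresh training sample per trial
    results[k] = (np.mean(aucs), np.std(aucs))
    print(f"K={k}: AUC mean={results[k][0]:.3f}, std={results[k][1]:.3f}")
```

The per-trial AUC standard deviation here plays the same role as the spread of trial curves in the app's ROC panel: a wide band at low K is high variance made visible.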