|  | Predicted Positive | Predicted Negative |
| --- | --- | --- |
| **Actual Positive** | True Positive (TP) | False Negative (FN) |
| **Actual Negative** | False Positive (FP) | True Negative (TN) |
Accuracy: overall fraction of correct predictions, (TP + TN) / (TP + TN + FP + FN)
Precision: fraction of positive predictions that are actually positive, TP / (TP + FP)
Recall (Sensitivity): coverage of actual positive samples, TP / (TP + FN)
Specificity: coverage of actual negative samples, TN / (TN + FP)
F-1 Score: harmonic mean of precision and recall, 2TP / (2TP + FP + FN)
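The formulas above can be sketched directly from the four confusion-matrix counts; the counts below are hypothetical example values, not from any real model:

```python
# Metrics from confusion-matrix counts (hypothetical example values).
tp, fn, fp, tn = 40, 10, 5, 45

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall correct fraction
precision = tp / (tp + fp)                   # correctness of positive predictions
recall = tp / (tp + fn)                      # coverage of actual positives (sensitivity)
specificity = tn / (tn + fp)                 # coverage of actual negatives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(accuracy, precision, recall, specificity, f1)
```

Note that precision and recall trade off against each other: lowering the decision threshold raises recall but usually lowers precision, which is why the F-1 score combines both.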
AUC and ROC
The ROC curve plots the true positive rate against the false positive rate as the classification threshold varies; AUC is the area under that curve (1.0 is a perfect classifier, 0.5 is random guessing).
- True Positive Rate (TPR): recall/sensitivity, TP / (TP + FN), plotted on the y-axis
- False Positive Rate (FPR): 1 - specificity, FP / (FP + TN), plotted on the x-axis
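A minimal sketch of the threshold sweep behind an ROC curve, using hypothetical scores and labels (in practice a library routine such as scikit-learn's `roc_curve` would be used):

```python
# Manual ROC/AUC sketch: sweep thresholds over predicted scores,
# recording (FPR, TPR) at each threshold, then integrate with trapezoids.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]  # hypothetical
labels = [1,   1,   0,   1,   1,    0,   0,   1,   0,   0]    # hypothetical

def roc_points(scores, labels):
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # threshold above max score: nothing predicted positive
    for t in sorted(set(scores), reverse=True):
        preds = [1 if s >= t else 0 for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    return points

def auc(points):
    # Trapezoidal integration of TPR over FPR.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

pts = roc_points(scores, labels)
print(auc(pts))  # prints 0.8 for these example scores
```

Each threshold yields one (FPR, TPR) point; a model that ranks all positives above all negatives reaches AUC = 1.0.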
Reference: https://stanford.edu/~shervine/teaching/cs-229/cheatsheet-machine-learning-tips-and-tricks