Model Measurements

rememberme
Jan 14, 2022

--

TP = True Positive; FP = False Positive; TN = True Negative; FN = False Negative

Accuracy (how well the model predicts the TP and TN cases overall)

accuracy = (TP+TN)/(TP+TN+FP+FN)
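
As a quick sketch, here is the formula applied in Python to a hypothetical set of confusion-matrix counts (the numbers are made up purely for illustration).

```python
# Hypothetical confusion-matrix counts, for illustration only.
TP, TN, FP, FN = 40, 45, 5, 10

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.85
```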

Precision (the fraction of true positives among all predicted positive cases)

precision = (TP)/(TP+FP)
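
Using the same hypothetical counts, precision looks only at what the model predicted as positive:

```python
# Hypothetical counts: 40 true positives, 5 false positives.
TP, FP = 40, 5

precision = TP / (TP + FP)
print(round(precision, 3))  # 0.889
```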

True positive rate, sensitivity, or recall (the fraction of true positives among actual positives)

True positive rate = sensitivity = recall = (TP)/(TP+FN)
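
Recall instead looks at the actual positives (same hypothetical counts):

```python
# Hypothetical counts: 40 true positives, 10 false negatives.
TP, FN = 40, 10

recall = TP / (TP + FN)
print(recall)  # 0.8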

Specificity = true negative rate (the fraction of true negatives among actual negatives)

specificity = (TN)/(TN+FP)

The false positive rate used on the ROC curve is its complement: FPR = (FP)/(FP+TN) = 1 - specificity
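
A small sketch with the same hypothetical counts shows that the two rates sum to 1:

```python
# Hypothetical counts: 45 true negatives, 5 false positives.
TN, FP = 45, 5

specificity = TN / (TN + FP)  # true negative rate
fpr = FP / (FP + TN)          # false positive rate = 1 - specificity
print(specificity, fpr)       # 0.9 0.1
```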

F1 Score

Typically used to measure performance in binary classification; it is the harmonic mean of precision and recall

F1 = 2*(Recall * Precision) / (Recall + Precision)
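
Plugging in the hypothetical precision and recall from the sketches above, the harmonic mean works out as follows:

```python
# Hypothetical values carried over from the earlier sketches.
precision, recall = 0.889, 0.8

f1 = 2 * (recall * precision) / (recall + precision)
print(round(f1, 3))  # 0.842
```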

ROC curve

True positive rate plotted against false positive rate as the classification threshold varies

AUC (area under the ROC curve) should be greater than 0.5, which is the score of a random classifier (AUC = 1 means the classifier ranks every positive example above every negative one, i.e. perfect separation of the data set)
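
As a minimal sketch (the labels and scores below are toy data, not from any real model), scikit-learn can compute both the ROC points and the AUC:

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Toy ground-truth labels and predicted probabilities, for illustration only.
y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3]

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points of the ROC curve
print(roc_auc_score(y_true, y_score))              # 0.9375: almost every positive is ranked above every negative
```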
