Positive Predictive Value

Overview

The Positive Predictive Value (PPV), also known as Precision, evaluates the performance of a classification model by the proportion of its positive predictions that are correct. In other words, it measures how reliable the model's positive predictions are.

Formula

PPV = P(actual = + | prediction = +) = TP/(TP + FP)

Where:

  • TP (True Positives): The number of positive instances correctly predicted by the model
  • FP (False Positives): The number of negative instances incorrectly predicted as positive by the model
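
As a quick illustration of the formula, the sketch below counts true and false positives from a pair of label lists and computes the PPV directly. The variable names (y_true, y_pred) and the example labels are assumptions made for illustration only; they are not part of the library's API.

# Minimal sketch: PPV = TP / (TP + FP) computed with plain Python
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual labels (1 = positive)
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]   # predicted labels

tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # false positives

ppv = tp / (tp + fp)
print("PPV:", ppv)  # 3 / (3 + 2) = 0.6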

Usage

Manually

ppv_priv, ppv_unpriv = positive_predicted_value(df, protected_attribute, privileged_group, labels, positive_label, y_true)

print("Positive Predicted Value for privileged group:", ppv_priv)
print("Positive Predicted Value for unprivileged group:", ppv_unpriv)

Using Fairness Object

ppv_priv, ppv_unpriv = fo.compute(positive_predicted_value)

Results

Positive Predicted Value for privileged group: 0.399999999992 
Positive Predicted Value for unprivileged group: 0.399999999992

These results are obtained using the input data given on the Create Example Data page under Getting Started.

Interpretation

A higher PPV indicates that a larger proportion of the model's positive predictions are correct, which is desirable. A PPV of 1.0 means that every positive prediction made by the model is correct.
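
As a small illustration (counts chosen arbitrarily, not taken from the example data above):

tp, fp = 8, 2
print(tp / (tp + fp))   # 0.8 -> 80% of the positive predictions are correct

tp, fp = 10, 0
print(tp / (tp + fp))   # 1.0 -> every positive prediction is correct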