Conditional Use Accuracy Difference
Overview
The difference between the combined positive predictive values (PPV) and the combined negative predictive values (NPV) of the privileged and unprivileged groups.
Calculation
(PPV for privileged group + PPV for unprivileged group) - (NPV for privileged group + NPV for unprivileged group)
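As a minimal sketch of the calculation above (the function and argument names here are illustrative, not the library's actual API), the metric can be computed from raw labels and predictions:

```python
import numpy as np

def ppv_npv(y_true, y_pred):
    """Return (PPV, NPV) for binary labels, treating 1 as the positive class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    pred_pos = y_pred == 1
    pred_neg = y_pred == 0
    # PPV: fraction of positive predictions that are truly positive
    ppv = (y_true[pred_pos] == 1).mean() if pred_pos.any() else np.nan
    # NPV: fraction of negative predictions that are truly negative
    npv = (y_true[pred_neg] == 0).mean() if pred_neg.any() else np.nan
    return ppv, npv

def cond_use_accuracy_difference(y_true, y_pred, privileged_mask):
    """(PPV_priv + PPV_unpriv) - (NPV_priv + NPV_unpriv), per the formula above."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    privileged_mask = np.asarray(privileged_mask)
    ppv_p, npv_p = ppv_npv(y_true[privileged_mask], y_pred[privileged_mask])
    ppv_u, npv_u = ppv_npv(y_true[~privileged_mask], y_pred[~privileged_mask])
    return (ppv_p + ppv_u) - (npv_p + npv_u)
```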
Usage
Manually
# Calculate conditional use accuracy difference
difference = conditional_use_accuracy_difference(df, protected_attribute, privileged_group, labels, positive_label, y_true)
print("Conditional Use Accuracy Difference:", difference)
Using Fairness Object
difference = fo.compute(conditional_use_accuracy_difference)
Results
Conditional Use Accuracy Difference: 0.6333333333234445
These results are obtained using the input data given on the Create Example Data page under Getting Started.
Interpretation
The Conditional Use Accuracy Difference metric compares the model's positive predictive values against its negative predictive values, summed across the privileged and unprivileged groups. A positive value indicates that the model's positive predictions are more reliable (higher PPV) than its negative predictions (NPV), while a negative value indicates the opposite.
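To make the sign interpretation concrete, here is a small arithmetic illustration with made-up group-level values (not taken from the example dataset):

```python
# Illustrative PPV/NPV values for each group (hypothetical, not from the example data)
ppv_priv, ppv_unpriv = 0.9, 0.9   # positive predictive values
npv_priv, npv_unpriv = 0.7, 0.5   # negative predictive values

# Positive result: the PPVs exceed the NPVs, so positive
# predictions are more reliable than negative predictions here.
difference = (ppv_priv + ppv_unpriv) - (npv_priv + npv_unpriv)
print(difference)
```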