Average Odds Difference
Overview
The Average Odds Difference is a fairness metric that assesses the difference in predictive performance between two groups or populations in terms of both false positive rates and true positive rates. It measures how evenly prediction outcomes are balanced across groups.
Formula
Average Odds Difference = (difference between groups' false positive rates + difference between groups' true positive rates) / 2
Where:
- True Positive Rate = True Positives / (True Positives + False Negatives)
- False Positive Rate = False Positives / (False Positives + True Negatives)
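The formula can be sketched directly from confusion-matrix counts. The helper below is a minimal standalone illustration, not the library's `average_odds_difference` function shown later; the group encoding (0 and 1) and function names are assumptions for this sketch.

```python
import numpy as np

def rates(y_true, y_pred):
    """Return (TPR, FPR) for binary arrays where 1 is the positive label."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    tpr = tp / (tp + fn)  # True Positive Rate
    fpr = fp / (fp + tn)  # False Positive Rate
    return tpr, fpr

def avg_odds_difference(y_true, y_pred, group):
    """Average of the FPR and TPR gaps between group 0 and group 1."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    tpr0, fpr0 = rates(y_true[group == 0], y_pred[group == 0])
    tpr1, fpr1 = rates(y_true[group == 1], y_pred[group == 1])
    # Average the two per-rate differences, per the formula above.
    return ((fpr0 - fpr1) + (tpr0 - tpr1)) / 2
```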
Usage
Manually
# Calculate average odds difference
result = average_odds_difference(df, protected_attribute, privileged_group, labels, positive_label, y_true)
print("Average Odds Difference:", result)
Using Fairness Object
result = fo.compute(average_odds_difference)
Results
Average Odds Difference: -0.020833333334635412
These results are obtained using the input data given on the Create Example Data page under Getting Started.
Interpretation
The Average Odds Difference metric quantifies the average of the differences in false positive rates and true positive rates between two groups or populations. A positive value indicates that the model produces higher false positive and true positive rates for the first group than for the second, suggesting a disparity in predictive performance; a value of zero indicates that both rates are balanced across the groups.
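As a numerical illustration of the sign convention described above (hypothetical per-group rates, not taken from the example data):

```python
# Hypothetical rates, chosen only to illustrate the sign of the metric.
tpr_a, fpr_a = 0.9, 0.3  # first group
tpr_b, fpr_b = 0.7, 0.1  # second group

aod = ((fpr_a - fpr_b) + (tpr_a - tpr_b)) / 2
print(aod)  # positive: the first group receives higher TPR and FPR
```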