Equal Odds Difference

Overview

Equalized Odds is a fairness metric used to assess whether a classification model provides equal predictive performance across different groups or populations. It measures how closely the true positive rate (sensitivity/recall) and the true negative rate (specificity) match between the groups.

Calculation

Equal Odds Difference = (Privileged Group True Positive Rate - Unprivileged Group True Positive Rate) + (Privileged Group True Negative Rate - Unprivileged Group True Negative Rate)

Where:

  • True Positive Rate = True Positives / (True Positives + False Negatives)
  • True Negative Rate = True Negatives / (True Negatives + False Positives)
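
The formula can be reproduced directly from per-group confusion-matrix counts. The sketch below is only an illustration of the calculation above, not the library's implementation; it assumes binary 0/1 labels and NumPy arrays, and the helper names (group_rates, equal_odds_difference_sketch) are made up for the example.

import numpy as np

def group_rates(y_true, y_pred):
    # True positive rate and true negative rate for a single group.
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tpr = np.mean(y_pred[y_true])    # TP / (TP + FN)
    tnr = np.mean(~y_pred[~y_true])  # TN / (TN + FP)
    return tpr, tnr

def equal_odds_difference_sketch(y_true, y_pred, group, privileged):
    # (TPR_priv - TPR_unpriv) + (TNR_priv - TNR_unpriv)
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    priv = group == privileged
    tpr_p, tnr_p = group_rates(y_true[priv], y_pred[priv])
    tpr_u, tnr_u = group_rates(y_true[~priv], y_pred[~priv])
    return (tpr_p - tpr_u) + (tnr_p - tnr_u)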

Usage

Manually

# Calculate equal odds difference
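# Argument meanings below are inferred from the parameter names and are
# assumptions, not confirmed documentation:
#   df                  - DataFrame containing the evaluation data
#   protected_attribute - column identifying group membership
#   privileged_group    - value of that column that marks the privileged group
#   labels              - predicted labels produced by the model
#   positive_label      - label value treated as the positive outcome
#   y_true              - ground-truth labels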
result = equal_odds_difference(df, protected_attribute, privileged_group, labels, positive_label, y_true)

print("Equal Odds Difference:", result)

Using Fairness Object

result = fo.compute(equal_odds_difference)

Results

Equal Odds Difference: 0.33333333331666665

This result is obtained using the example input data from the Create Example Data page under Getting Started.

Interpretation

The Equal Odds Difference quantifies the gap in true positive rates (sensitivity/recall) and true negative rates (specificity) between the privileged and unprivileged groups. A positive value indicates that the model achieves higher true positive and true negative rates for the privileged group than for the unprivileged group, suggesting a potential disparity in predictive performance; a value close to zero indicates comparable performance across the groups.
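
As a worked example with hypothetical rates (the numbers are made up for illustration, not taken from the example data):

# Hypothetical group-level rates
tpr_priv, tpr_unpriv = 0.90, 0.75
tnr_priv, tnr_unpriv = 0.85, 0.70

eod = (tpr_priv - tpr_unpriv) + (tnr_priv - tnr_unpriv)
print(round(eod, 2))  # 0.3 -> positive: higher TPR and TNR for the privileged group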