Accuracy Difference
Overview
The Accuracy Difference is a metric used to assess the disparity in predictive performance between two groups or populations. It measures the difference between the overall accuracy a classification model achieves for the privileged group and the overall accuracy it achieves for the unprivileged group.
Calculation
Accuracy Difference = Privileged Group overall accuracy - Unprivileged Group overall accuracy
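To make the calculation concrete, below is a minimal pandas sketch of the same computation. This is not the library's implementation; the y_true and y_pred arguments are assumed here to be the names of the true-label and predicted-label columns.
import pandas as pd

def accuracy_difference_sketch(df: pd.DataFrame, protected_attribute, privileged_group, y_true, y_pred):
    # Boolean Series: True wherever the prediction matches the ground truth
    correct = df[y_pred] == df[y_true]
    # Overall accuracy within each group (mean of the boolean Series)
    privileged_accuracy = correct[df[protected_attribute] == privileged_group].mean()
    unprivileged_accuracy = correct[df[protected_attribute] != privileged_group].mean()
    return privileged_accuracy - unprivileged_accuracy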
Usage
Manually
# Calculate the accuracy difference directly from the metric function.
# df: the input data; protected_attribute: the column identifying group membership;
# privileged_group: the value denoting the privileged group; labels: the model's
# predicted labels; positive_label: the favorable outcome; y_true: the true labels.
result = accuracy_difference(df, protected_attribute, privileged_group, labels, positive_label, y_true)
print("Accuracy Difference:", result)
Using Fairness Object
# Compute the metric through a previously created Fairness Object (fo)
result = fo.compute(accuracy_difference)
Results
Accuracy Difference: 0.33636363636363636
These results are obtained using the input data from the Create Example Data page under Getting Started.
Interpretation
The Accuracy Difference quantifies the gap in accuracy between the privileged and unprivileged groups. A positive value indicates the model is more accurate for the privileged group, a negative value indicates it is more accurate for the unprivileged group, and a value of 0 indicates both groups are classified with equal accuracy. Values far from 0 in either direction suggest a disparity in predictive performance.
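As a hypothetical illustration (these accuracies are made up, not drawn from the example data):
# Hypothetical group accuracies, for illustration only
privileged_accuracy = 0.90    # model accuracy for the privileged group
unprivileged_accuracy = 0.75  # model accuracy for the unprivileged group
difference = privileged_accuracy - unprivileged_accuracy  # ~0.15: the privileged group is favored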