Grouped F-Beta
Overview
Computes the F-Beta score separately for the privileged and unprivileged groups, returning the two scores as a tuple.
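As a rough illustration of what such a metric computes, the sketch below splits the data on the protected attribute and applies the standard F-Beta formula to each group. The function and parameter names mirror the usage example below, but this is an assumption-laden sketch, not the library's actual implementation; in particular, it assumes `labels` names the predicted-label column and `y_true` the ground-truth column.

```python
import pandas as pd

def _fbeta(y_true, y_pred, positive_label, beta):
    """Plain F-Beta from precision and recall for one group."""
    tp = sum(t == positive_label and p == positive_label for t, p in zip(y_true, y_pred))
    fp = sum(t != positive_label and p == positive_label for t, p in zip(y_true, y_pred))
    fn = sum(t == positive_label and p != positive_label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def grouped_fb_sketch(df, protected_attribute, privileged_group,
                      labels, positive_label, y_true, beta=1.0):
    """Hypothetical re-implementation: F-Beta per group as (priv, unpriv)."""
    priv = df[df[protected_attribute] == privileged_group]
    unpriv = df[df[protected_attribute] != privileged_group]
    fb_priv = _fbeta(priv[y_true], priv[labels], positive_label, beta)
    fb_unpriv = _fbeta(unpriv[y_true], unpriv[labels], positive_label, beta)
    return fb_priv, fb_unpriv
```

With `beta=1` the formula reduces to the ordinary F1 score, which is why the usage example names the results `f1_priv` and `f1_unpriv`.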
Usage
Manually
# Compute the grouped F-Beta scores (with beta=1, F-Beta reduces to F1)
f1_priv, f1_unpriv = grouped_fb(df, protected_attribute, privileged_group, labels, positive_label, y_true, beta=1)
print("f1_priv, f1_unpriv:", f1_priv, f1_unpriv)
Using Fairness Object
result = fo.compute(grouped_fb, beta=1)
Results
f1_priv, f1_unpriv: 0.5 0.3636363636363636
These results are obtained using the example input data from the Create Example Data page under Getting Started.
Interpretation
Because beta=1 in the example above, the F-Beta score reduces to the F1 score. By calculating and comparing the F1 scores for each group, we can gain insight into potential disparities in the model's ability to correctly identify positive samples and to balance false positives against false negatives.
Ideally, we would aim for similar F1 scores across all groups, indicating equitable performance. However, if there are significant differences in F1 scores between groups, it suggests potential disparities. Lower F1 scores for certain groups may indicate that the model is less accurate in predicting positive samples or may have imbalanced error rates for those groups.
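One simple way to make the comparison described above concrete is to look at the difference and ratio of the two group scores. The snippet below uses the scores from the Results section; the threshold shown is purely illustrative, not a value prescribed by the library.

```python
# Group F-Beta scores from the Results section above
f1_priv, f1_unpriv = 0.5, 0.3636363636363636

# Two common ways to summarize the gap between groups
difference = f1_priv - f1_unpriv   # 0 means identical performance
ratio = f1_unpriv / f1_priv        # 1 means identical performance

# Illustrative check: flag a gap larger than an arbitrary 0.1 threshold
if abs(difference) > 0.1:
    print(f"Potential disparity: F1 differs by {difference:.3f} between groups")
```

Here the difference is about 0.136, so under this illustrative threshold the gap between groups would be flagged for further investigation.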