Fairo API

  • Welcome
  • Authentication
    • Create API Access Keys
  • API and System Documentation
  • API Responses

Fairo Metrics

  • Getting Started
    • Installation
    • Create Example Data
    • The Fairness Object
    • Metric Parameters
  • Metrics
    • False Negative Rate
    • False Positive Rate
    • True Negative Rate
    • True Positive Rate
    • Positive Predictive Value
    • False Discovery Rate
    • Negative Predictive Value
    • False Omission Rate
    • Parity Difference
    • Parity Ratio
    • Predictive Parity
    • Conditional Use Accuracy Difference
    • Treatment Equality
    • Equal Odds Difference
    • Average Odds Difference
    • Average Odds Ratio
    • Overall Accuracy
    • Accuracy Difference
    • F-Beta Score
    • Grouped F-Beta
    • F-Beta Difference
    • F-Beta Ratio
    • ROCAUC Score
    • Grouped ROCAUC
    • ROCAUC Difference
    • ROCAUC Ratio
    • Gini Coefficient
    • Theil Index
  • Computing Metric Results Via the API

Fairo Policies

  • Overview

Tutorials

  • Quickstart Guides
    • AI Experiment Tracking
    • AI Testing & Metrics

Interfaces

  • Overview
Getting Started

The Fairo Metrics package is a collection of metrics that are useful when testing AI algorithms for bias and accuracy. These metrics have been curated and compiled to enhance validation and testing when using the Fairo platform.

These metrics can be computed using the Python package or the Metrics API. When using the API, metric results can be saved and linked to other resources managed by Fairo.
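To give a feel for the kinds of metrics listed above, here is a minimal, self-contained sketch of two of them, False Positive Rate and Parity Difference, implemented directly in NumPy. The function names and signatures here are illustrative only; they are not Fairo's actual Python API (see Installation and the metric pages for that).

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FP / (FP + TN): fraction of actual negatives predicted positive."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    negatives = y_true == 0
    fp = np.sum((y_pred == 1) & negatives)
    return fp / np.sum(negatives)

def parity_difference(y_pred, groups):
    """Difference in positive-prediction rates across the groups."""
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    rates = [np.mean(y_pred[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy data: 8 predictions split across two groups "a" and "b"
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1, 1, 1]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(false_positive_rate(y_true, y_pred))  # 0.5
print(parity_difference(y_pred, group))     # 0.25
```

The grouped metrics in the package (Parity Difference, Equal Odds Difference, and so on) follow this general pattern: compute a base rate per protected group, then compare the rates by difference or ratio.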

What’s Next
  • Installation