Update of the abbreviations #206

Open · wants to merge 5 commits into base: development
2 changes: 1 addition & 1 deletion R/fairness_selection.R
```diff
@@ -227,7 +227,7 @@ fairness_selection <- function(q1 = NULL,
       }
     } else if (q2 == 3) {
       name <- "Equalized Odds"
-      measure <- "dp"
+      measure <- "eo"
       q2_name <- "Correct and incorrect classification"
     }
   } else if (q1 == 2) {
```
8 changes: 4 additions & 4 deletions R/model_fairness.R
```diff
@@ -24,7 +24,7 @@
 #' quantify potential fairness or discrimination in the algorithms predictions.
 #' Available parity metrics include predictive rate parity, proportional parity,
 #' accuracy parity, false negative rate parity, false positive rate parity, true
-#' positive rate parity, negative predicted value parity, specificity parity,
+#' positive rate parity, negative predictive value parity, specificity parity,
 #' and demographic parity. The function returns an object of class
 #' \code{jfaFairness} that can be used with associated \code{summary()} and
 #' \code{plot()} methods.
@@ -98,11 +98,11 @@
 #' \item{False positive rate parity (\code{fprp}): calculated as FP / (TN
 #' + FP), quantifies whether the false positive rate is the same across
 #' groups.}
-#' \item{True positive rate parity (\code{tprp}): calculated as TP / (TP +
+#' \item{True positive rate parity (\code{tprp}, also known as Equal opportunity): calculated as TP / (TP +
 #' FN), quantifies whether the true positive rate is the same across
 #' groups.}
-#' \item{Negative predicted value parity (\code{npvp}): calculated as TN /
-#' (TN + FN), quantifies whether the negative predicted value is equal
+#' \item{Negative predictive value parity (\code{npvp}): calculated as TN /
+#' (TN + FN), quantifies whether the negative predictive value is equal
 #' across groups.}
 #' \item{Specificity parity (\code{sp}): calculated as TN / (TN + FP),
 #' quantifies whether the true positive rate is the same across groups.}
```
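The confusion-matrix formulas listed in the documentation above can be checked in a few lines of base R. This is only a sketch with hypothetical counts, not the jfa implementation:

```r
# Hypothetical confusion-matrix counts for a single group
# (illustrative values only; not taken from the package or its data)
tp <- 40; fp <- 10; tn <- 35; fn <- 15

fpr  <- fp / (tn + fp)  # false positive rate, the quantity behind fprp; 10/45
tpr  <- tp / (tp + fn)  # true positive rate (equal opportunity), behind tprp; 40/55
npv  <- tn / (tn + fn)  # negative predictive value, behind npvp; 35/50 = 0.7
spec <- tn / (tn + fp)  # specificity, behind sp; 35/45

round(c(fpr = fpr, tpr = tpr, npv = npv, spec = spec), 3)
```

The corresponding parity metric for each quantity is then the ratio of an unprivileged group's value to the privileged group's value.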
8 changes: 4 additions & 4 deletions man/model_fairness.Rd

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion tests/testthat/test-fairness-selection.R
```diff
@@ -22,7 +22,7 @@ test_that(desc = "Validation of fairness selection", {
   expect_equal(outcome$measure, "pp")
   # Equalized odds
   outcome <- fairness_selection(q1 = 1, q2 = 3)
-  expect_equal(outcome$measure, "dp")
+  expect_equal(outcome$measure, "eo")
   # False negative rate parity
   outcome <- fairness_selection(q1 = 1, q2 = 2, q3 = NULL, q4 = 2)
   expect_equal(outcome$measure, "fnrp")
```
2 changes: 1 addition & 1 deletion tests/testthat/test-model-fairness.R
```diff
@@ -53,7 +53,7 @@ test_that(desc = "Benchmark against fairness package", {
   fairness_tpr <- fairness::equal_odds(compas, outcome = "TwoYrRecidivism", preds = "Predicted", group = "Ethnicity", outcome_base = "no", base = "Caucasian")$Metric
   expect_equal(jfa_tprp[["metric"]][["all"]][["estimate"]], as.numeric(fairness_tpr[1, ]))
   expect_equal(jfa_tprp[["parity"]][["all"]][["estimate"]], as.numeric(fairness_tpr[2, ]))
-  # Negative predicted value parity
+  # Negative predictive value parity
   jfa_npvp <- model_fairness(compas, "Ethnicity", "TwoYrRecidivism", "Predicted", privileged = "Caucasian", positive = "yes", metric = "npvp")
   fairness_npv <- fairness::npv_parity(compas, outcome = "TwoYrRecidivism", preds = "Predicted", group = "Ethnicity", outcome_base = "no", base = "Caucasian")$Metric
   expect_equal(jfa_npvp[["metric"]][["all"]][["estimate"]], as.numeric(fairness_npv[1, ]))
```
2 changes: 1 addition & 1 deletion vignettes/articles/algorithm-auditing.Rmd
```diff
@@ -68,7 +68,7 @@ metrics between an unprivileged protected class and a privileged protected class
 is referred to as parity, which quantifies relative fairness in the algorithm's
 predictions. Available parity metrics include predictive rate parity,
 proportional parity, accuracy parity, false negative rate parity, false positive
-rate parity, true positive rate parity, negative predicted value parity,
+rate parity, true positive rate parity, negative predictive value parity,
 specificity parity, and demographic parity [@friedler_2019; @pessach_2022]. The
 function returns an object that can be used with the associated `summary()` and
 `plot()` methods.
```
30 changes: 15 additions & 15 deletions vignettes/articles/model-fairness.Rmd
````diff
@@ -28,7 +28,7 @@ algorithmic decision-making systems. It computes various model-agnostic metrics
 based on the observed and predicted labels in a dataset. The fairness metrics
 that can be calculated include demographic parity, proportional parity,
 predictive rate parity, accuracy parity, false negative rate parity, false
-positive rate parity, true positive rate parity, negative predicted value
+positive rate parity, true positive rate parity, negative predictive value
 parity, and specificity parity [@calders_2010; @chouldechova_2017;
 @feldman_2015; @zafar_2017; @friedler_2019]. Furthermore, the metrics are tested
 for equality between protected groups in the data.
@@ -346,12 +346,12 @@ model_fairness(
 privileged group (Caucasians). This indicates relatively fair treatment
 in the algorithm's true positive predictions.
 
-8. **Negative predicted value parity**: Compares the negative predicted value
+8. **Negative predictive value parity**: Compares the negative predictive value
 (e.g., for non-reoffenders) of each unprivileged group with that of the
 privileged group.
 
-The formula for the negative predicted value is $NPV = \frac{TN}{TN + FN}$,
-and the negative predicted value parity for unprivileged group $i$ is
+The formula for the negative predictive value is $NPV = \frac{TN}{TN + FN}$,
+and the negative predictive value parity for unprivileged group $i$ is
 given by $NPVP = \frac{NPV_{i}}{NPV_{privileged}}$.
 
 ```{r}
@@ -367,17 +367,17 @@ model_fairness(
 ```
 
 ***Interpretation:***
-- *African American*: The negative predicted value parity for African
+- *African American*: The negative predictive value parity for African
 Americans is 0.98013. A value close to 1 indicates that the negative
-predicted value for African Americans is very similar to the privileged
+predictive value for African Americans is very similar to the privileged
 group (Caucasians). This suggests fair treatment in predicting
 non-reoffenders among African Americans.
-- *Asian*: The negative predicted value parity for Asians is 1.1163,
-indicating that their negative predicted value is slightly higher than for
+- *Asian*: The negative predictive value parity for Asians is 1.1163,
+indicating that their negative predictive value is slightly higher than for
 Caucasians. This could suggest potential favoritism towards Asians in
 predicting non-reoffenders.
-- *Hispanic*: The negative predicted value parity for Hispanics is 1.0326,
-suggesting that their negative predicted value is slightly higher than for
+- *Hispanic*: The negative predictive value parity for Hispanics is 1.0326,
+suggesting that their negative predictive value is slightly higher than for
 Caucasians. This indicates potential favoritism towards Hispanics in
 predicting non-reoffenders.
@@ -472,12 +472,12 @@ The `fairness_selection()` function offers a method to select a fairness measure
 tailored to a specific context and dataset by answering the questions in the
 developed decision-making workflow. The fairness measure that can be selected
 include disparate impact, equalized odds, false positive rate parity, false
-negative rate parity, predictive rate parity, equal opportunity, specificity
-parity, negative predictive value parity, accuracy parity [@castelnovo_2022;
-@feldman_2015; @friedler_2019; @hardt_2016, @verma_2018]. After answering the
-questions in the decision-making workflow and selecting the fairness measure to
-apply, a graphical representation of the followed path can be created based
-on the responses.
+negative rate parity, predictive rate parity, equal opportunity (also known as true positive
+rate parity), specificity parity, negative predictive rate parity, accuracy parity
+[@castelnovo_2022; @feldman_2015; @friedler_2019; @hardt_2016, @verma_2018].
+After answering the questions in the decision-making workflow and selecting the fairness
+measure to apply, a graphical representation of the followed path can be
+created based on the responses.
 
 As mentioned earlier, not all fairness measures are equally suitable for every
 audit situation. For this reason, the `fairness_selection()` function we propose
````
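The vignette's $NPV = \frac{TN}{TN + FN}$ and $NPVP = \frac{NPV_{i}}{NPV_{privileged}}$ formulas can be sketched in a few lines of base R. The counts below are hypothetical, chosen only to illustrate the arithmetic; this is not the jfa implementation:

```r
# Negative predictive value from confusion-matrix counts: NPV = TN / (TN + FN)
npv <- function(tn, fn) tn / (tn + fn)

npv_priv   <- npv(tn = 80, fn = 20)  # privileged group: 80/100 = 0.80
npv_unpriv <- npv(tn = 70, fn = 30)  # unprivileged group: 70/100 = 0.70

# NPV parity: NPVP = NPV_i / NPV_privileged; a value near 1 indicates parity
npvp <- npv_unpriv / npv_priv        # 0.875
```

A ratio below 1, as here, means the unprivileged group's negative predictions are correct less often than the privileged group's.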