Guillaume A. Rousselet 2020-07-10
- Introduction
- Example 1: simulated data with a uniform shift
- Example 2: simulated data without differences
- Example 3: real data
- Get lexical decision dataset
- Compute shift functions for all participants
- Illustrate results
- Stochastic dominance
- Compute shift functions for a subset of participants
- Illustrate results
- T-test
- Reverse order of conditions
- Quartiles instead of deciles
- 99% confidence intervals
- More quantiles?
- Percentile bootstrap hierarchical shift function
- References
library(rogme)   # hsf(), plot_hsf(), shifthd_pbci(), mkt2(), flp dataset
library(tibble)
library(ggplot2)
library(retimes) # rexgauss() for ex-Gaussian samples
In the standard shift function, we compare two vectors of observations at multiple quantiles. Here we deal with a hierarchical setting: we have varying numbers of trials from 2 dependent conditions, sampled from multiple participants. Ideally, such data would be analysed using a multi-level model, including for instance ex-Gaussian fits, random slopes and intercepts for participants, item analyses… This can be done using the lme4 or brms R packages.
However, in some research fields, the common approach is to collapse the variability across trials into a single number per participant and condition to be able to perform a paired t-test: typically, the mean is computed across trials for each condition and participant, then the means are subtracted, and the distribution of mean differences is entered into a one-sample t-test. Obviously, this strategy throws away a huge amount of information!
Depending on how conditions differ, looking at other aspects of the data than the mean can be more informative. For instance, in Rousselet & Wilcox (2019), we consider group comparisons of individual medians. Considering that the median is the second quartile, looking at the other quartiles can be of theoretical interest to investigate effects in early or later parts of distributions. This could be done in several ways, for instance by making inferences on the first quartile (Q1) or the third quartile (Q3). If the goal is to detect differences anywhere in the distributions, a more systematic approach consists in quantifying differences at multiple quantiles. Here we consider the case of the deciles, but other quantiles could be used. First, for each participant and each condition, the sample deciles are computed over trials. Second, for each participant, condition 2 deciles are subtracted from condition 1 deciles - we’re dealing with a within-subject (repeated-measure) design. Third, for each decile, the distribution of differences is subjected to a one-sample test. Fourth, a correction for multiple comparisons is applied across the 9 one-sample tests. We call this procedure a hierarchical shift function. There are many options available to implement this procedure and the example used here is not the definitive answer: the goal is simply to demonstrate that a relatively simple procedure can be much more powerful and informative than standard approaches.
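The four steps above can be sketched with base R alone. This is an illustrative sketch, not the rogme implementation: the function names hsf_sketch and trimmed_t_pval, and the input format (one participants x trials matrix per condition), are assumptions made for the example. The one-sample test on the 20% trimmed mean uses the standard Tukey-McLaughlin approach, with the standard error based on the winsorized variance.

```r
# One-sample Tukey-McLaughlin t-test p-value for a trimmed mean.
trimmed_t_pval <- function(x, tr = 0.2, null.value = 0) {
  n <- length(x)
  g <- floor(tr * n)
  xs <- sort(x)
  # winsorize: replace the g lowest / highest values
  xw <- c(rep(xs[g + 1], g), xs[(g + 1):(n - g)], rep(xs[n - g], g))
  se <- sd(xw) / ((1 - 2 * tr) * sqrt(n)) # SE of the trimmed mean
  tval <- (mean(x, trim = tr) - null.value) / se
  2 * pt(-abs(tval), df = n - 2 * g - 1)
}

# Sketch of the four-step hierarchical shift function.
# data1, data2: participants x trials matrices for the two conditions.
hsf_sketch <- function(data1, data2, qseq = seq(0.1, 0.9, 0.1), tr = 0.2) {
  q1 <- t(apply(data1, 1, quantile, probs = qseq, type = 8)) # step 1: deciles
  q2 <- t(apply(data2, 1, quantile, probs = qseq, type = 8))
  d <- q1 - q2                                  # step 2: one shift function per participant
  pvals <- apply(d, 2, trimmed_t_pval, tr = tr) # step 3: one-sample tests
  list(difference = apply(d, 2, mean, trim = tr), # group 20% trimmed means
       pvalues = pvals,
       adjusted_pvalues = p.adjust(pvals, method = "hochberg")) # step 4
}
```

The rogme function hsf() wraps these steps with more options and a tidy interface; the sketch only shows the logic.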
In creating a hierarchical shift function we need to make three choices: a quantile estimator, a statistical test to assess quantile differences across participants, and a correction for multiple comparisons technique. The deciles are estimated using type 8 from the base R quantile() function (see justification in Rousselet & Wilcox, 2019). The group comparisons are performed using a one-sample t-test for the 20% trimmed mean, which performs well in many situations. The correction for multiple comparisons employs Hochberg's strategy (Hochberg, 1988), which guarantees that the probability of at least one false positive will not exceed the nominal level as long as the nominal level is not exceeded for each quantile. All these defaults can be changed - see help(hsf).
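For illustration, Hochberg's step-up correction is available in base R via p.adjust(). Here is a quick standalone example with five hypothetical p-values, unrelated to any dataset in this tutorial:

```r
# Hochberg adjustment of five hypothetical p-values via base R p.adjust().
# Each adjusted value is min over j >= i of (n - j + 1) * p_(j), capped at 1.
pvals <- c(0.001, 0.010, 0.020, 0.040, 0.200)
p.adjust(pvals, method = "hochberg")
```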
In Rousselet & Wilcox (2019), we consider power curves for the hierarchical shift function (HSF) and contrast them to other approaches: by design, HSF is sensitive to more types of differences than any standard approach using the mean or a single quantile. Another advantage of HSF is that the location of the distribution difference can be interrogated, which is impossible if inferences are limited to a single quantile.
First, we consider a uniform shift between two ex-Gaussian distributions. In condition 1, the ex-Gaussian parameters were mu = 500, sigma = 50 and tau = 200. Parameters in condition 2 were the same, but each sample was shifted by 50.
Illustrate data from one participant, 1000 trials per condition.
nt <- 1000
# ex Gaussian parameters
mu <- 500
sigma <- 50
tau <- 200
ES <- 50
set.seed(21)
g1 <- rexgauss(nt, mu = mu, sigma = sigma, tau = tau)
g2 <- rexgauss(nt, mu = mu, sigma = sigma, tau = tau) + ES
df <- mkt2(g1, g2, group_labels = c("Condition1", "Condition2"))
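As an aside, if the retimes package is unavailable, ex-Gaussian samples can be generated with base R alone, since an ex-Gaussian deviate is the sum of a normal deviate and an independent exponential deviate. The helper name rexgauss_base is hypothetical:

```r
# Base R alternative to retimes::rexgauss (hypothetical helper):
# ex-Gaussian = Normal(mu, sigma) + Exponential(mean = tau).
rexgauss_base <- function(n, mu = 0, sigma = 1, tau = 1) {
  rnorm(n, mean = mu, sd = sigma) + rexp(n, rate = 1 / tau)
}
```

The resulting distribution has mean mu + tau and variance sigma^2 + tau^2, so with mu = 500, sigma = 50 and tau = 200 the expected mean is 700.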
p <- ggplot(df, aes(x = obs)) + theme_classic() +
stat_density(aes(colour = gr), geom="line",position="identity", size=1) +
scale_colour_viridis_d(end = 0.8) +
coord_cartesian(xlim = c(0, 2500)) +
theme(axis.title.x = element_text(size = 18),
axis.text.x = element_text(size = 16, colour="black"),
axis.text.y = element_text(size = 16, colour="black"),
axis.title.y = element_text(size = 18),
legend.key.width = unit(1.5,"cm"),
legend.position = c(0.55,0.75),
legend.direction = "vertical",
legend.text=element_text(size=16),
legend.title=element_text(size=18),
title = element_text(size=20)) +
labs(x = "Reaction times", y = "Density", colour = "Conditions") +
ggtitle("Uniform shift")
p
# p.uni.dist <- p
To better understand how the distributions differ, let's look at the standard shift function, in which the differences between the deciles of the two conditions are plotted as a function of the deciles in condition 1. The decile differences are all negative, showing stochastic dominance of condition 2 over condition 1. The function is not flat because of random sampling and limited sample size.
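A bare-bones version of this standard shift function can be written with the base R quantile() function. This is only a sketch with a hypothetical name (shift_fun); rogme's shifthd_pbci() additionally uses the Harrell-Davis quantile estimator and percentile-bootstrap confidence intervals:

```r
# Decile shift function for two vectors (sketch, no confidence intervals).
shift_fun <- function(x, y, probs = seq(0.1, 0.9, 0.1)) {
  qx <- quantile(x, probs = probs, type = 8)
  qy <- quantile(y, probs = probs, type = 8)
  data.frame(prob = probs, x_decile = qx, difference = qx - qy)
}
```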
out <- shifthd_pbci(df, nboot = 200, adj_ci = FALSE)
p <- plot_sf(out, plot_theme = 1)[[1]] +
theme(axis.text = element_text(size = 16, colour="black"))
p
set.seed(747)
nt <- 100 # trials
np <- 30 # participants
# ex Gaussian parameters
mu <- 500
sigma <- 50
tau <- 200
ES <- 50
# generate data: matrix participants x trials
data1 <- matrix(rexgauss(nt*np, mu = mu, sigma = sigma, tau = tau), nrow = np)
data2 <- matrix(rexgauss(nt*np, mu = mu, sigma = sigma, tau = tau), nrow = np) + ES
# analysis parameters
qseq <- seq(0.1,0.9,0.1) # quantiles
alpha <- 0.05
nboot <- 1000 # bootstrap
tr <- 0.2 # group trimmed mean for each quantile
nq <- length(qseq)
df <- tibble(rt = c(as.vector(data1), as.vector(data2)),
cond = factor(c(rep("cond1", nt*np),rep("cond2", nt*np))),
id = factor(rep(seq(1,np),nt*2)))
out <- hsf(df, rt ~ cond + id)
p <- plot_hsf(out)
p
out$pvalues
## [1] 4.627365e-11 6.080292e-11 3.737277e-11 6.334578e-07 5.635363e-10
## [6] 1.646131e-08 3.683236e-10 1.389007e-06 2.415958e-03
out$adjusted_pvalues
## [1] 3.701892e-10 4.256204e-10 3.363549e-10 1.900373e-06 2.817682e-09
## [6] 6.584525e-08 2.209942e-09 2.778014e-06 2.415958e-03
Does one condition dominate the other at all quantiles? In how many participants? See Speckman et al. (2008) for a great introduction to stochastic dominance.
nq <- length(out$quantiles)
pdmt0 <- apply(out$individual_sf > 0, 2, sum)
print(paste0('In ',sum(pdmt0 == nq),' participants (',round(100 * sum(pdmt0 == nq) / np, digits = 1),'%), all quantile differences are greater than zero'))
## [1] "In 0 participants (0%), all quantile differences are greater than zero"
pdlt0 <- apply(out$individual_sf < 0, 2, sum)
print(paste0('In ',sum(pdlt0 == nq),' participants (',round(100 * sum(pdlt0 == nq) / np, digits = 1),'%), all quantile differences are less than zero'))
## [1] "In 22 participants (73.3%), all quantile differences are less than zero"
Use the percentile bootstrap to compute confidence intervals.
Hierarchical situation: nt trials at level 2, two conditions compared using a shift function (default = deciles) in each of np participants at level 1. For each decile of the shift function, we perform a one-sample test on the 20% trimmed mean. The deciles are dependent, so we resample participants, then trials (hierarchical sampling).
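The two-level resampling can be sketched as follows. This is an illustrative sketch with a hypothetical name (hier_boot_sample), assuming one participants x trials matrix per condition; the same bootstrap participants are used in both conditions, preserving the dependence between them, while trials are resampled independently within each condition:

```r
# One hierarchical bootstrap sample: resample participants (level 1),
# then trials within each resampled participant (level 2).
hier_boot_sample <- function(data1, data2) {
  np <- nrow(data1)
  nt <- ncol(data1)
  bp <- sample(np, np, replace = TRUE) # same participants for both conditions
  b1 <- t(sapply(bp, function(p) sample(data1[p, ], nt, replace = TRUE)))
  b2 <- t(sapply(bp, function(p) sample(data2[p, ], nt, replace = TRUE)))
  list(b1 = b1, b2 = b2)
}
```

Repeating this nboot times and computing the group trimmed mean of the decile differences for each bootstrap sample yields the bootstrap distributions used for the confidence intervals.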
set.seed(8899)
out <- hsf_pb(df, rt ~ cond + id)
plot_hsf_pb(out, interv = "ci")
plot_hsf_pb(out, interv = "hdi")
Distributions of bootstrap estimates can be considered cheap Bayesian posterior distributions. They also contain useful information not captured by simply reporting confidence intervals. Here we plot them using geom_halfeyeh() from tidybayes.
plot_hsf_pb_dist(out)
## Warning: Ignoring unknown parameters: .point_interval
Same plot, with an 80% confidence interval, the median of the bootstrap differences, and different colours:
plot_hsf_pb_dist(out, point_interv = "median_ci", interval_width = .80,
int_colour = "blue", fill_colour = "grey")
## Warning: Ignoring unknown parameters: .point_interval
Second, we consider samples from the same two ex-Gaussian distributions. In both conditions, the ex-Gaussian parameters were mu = 500, sigma = 50 and tau = 200.
Illustrate data from one participant, 1000 trials per condition.
nt <- 1000
# ex Gaussian parameters
mu <- 500
sigma <- 50
tau <- 200
ES <- 0
set.seed(21)
g1 <- rexgauss(nt, mu = mu, sigma = sigma, tau = tau)
g2 <- rexgauss(nt, mu = mu, sigma = sigma, tau = tau) + ES
df <- mkt2(g1, g2, group_labels = c("Condition1", "Condition2"))
p <- ggplot(df, aes(x = obs)) + theme_classic() +
stat_density(aes(colour = gr), geom="line",position="identity", size=1) +
scale_colour_viridis_d(end = 0.8) +
coord_cartesian(xlim = c(0, 2500)) +
theme(axis.title.x = element_text(size = 18),
axis.text.x = element_text(size = 16, colour="black"),
axis.text.y = element_text(size = 16, colour="black"),
axis.title.y = element_text(size = 18),
legend.key.width = unit(1.5,"cm"),
legend.position = c(0.55,0.75),
legend.direction = "vertical",
legend.text=element_text(size=16),
legend.title=element_text(size=18),
title = element_text(size=20)) +
labs(x = "Reaction times", y = "Density", colour = "Conditions") +
ggtitle("No difference")
p
To better understand how the distributions compare, let's look at the standard shift function, in which the differences between the deciles of the two conditions are plotted as a function of the deciles in condition 1. Despite the absence of a true effect, the function is not flat because of random sampling and limited sample size. In fact, even with 1,000 trials, large differences are suggested for middle and upper deciles.
out <- shifthd_pbci(df, nboot = 200, adj_ci = FALSE)
p <- plot_sf(out, plot_theme = 1)[[1]] +
theme(axis.text = element_text(size = 16, colour="black"))
p
set.seed(747)
nt <- 100 # trials
np <- 30 # participants
# ex Gaussian parameters
mu <- 500
sigma <- 50
tau <- 200
ES <- 0
# generate data: matrix participants x trials
data1 <- matrix(rexgauss(nt*np, mu = mu, sigma = sigma, tau = tau), nrow = np)
data2 <- matrix(rexgauss(nt*np, mu = mu, sigma = sigma, tau = tau), nrow = np) + ES
# analysis parameters
qseq <- seq(0.1,0.9,0.1) # quantiles
alpha <- 0.05
nboot <- 1000 # bootstrap
tr <- 0.2 # group trimmed mean for each quantile
nq <- length(qseq)
df <- tibble(rt = c(as.vector(data1), as.vector(data2)),
cond = factor(c(rep("cond1", nt*np),rep("cond2", nt*np))),
id = factor(rep(seq(1,np),nt*2)))
out <- hsf(df, rt ~ cond + id)
p <- plot_hsf(out)
p
nq <- length(out$quantiles)
pdmt0 <- apply(out$individual_sf > 0, 2, sum)
print(paste0('In ',sum(pdmt0 == nq),' participants (',round(100 * sum(pdmt0 == nq) / np, digits = 1),'%), all quantile differences are greater than zero'))
## [1] "In 3 participants (10%), all quantile differences are greater than zero"
pdlt0 <- apply(out$individual_sf < 0, 2, sum)
print(paste0('In ',sum(pdlt0 == nq),' participants (',round(100 * sum(pdlt0 == nq) / np, digits = 1),'%), all quantile differences are less than zero'))
## [1] "In 4 participants (13.3%), all quantile differences are less than zero"
set.seed(8899)
out <- hsf_pb(df, rt ~ cond + id)
plot_hsf_pb(out, interv = "hdi")
plot_hsf_pb_dist(out)
## Warning: Ignoring unknown parameters: .point_interval
Data from the French Lexicon Project. Click on "French Lexicon Project trial-level results with R scripts.zip". The .RData dataset was created by applying the script /data-raw/getflprtdata.Rmd.
#> get data - tibble = `flp`
flp <- rogme::flp # reaction time data - check `help(flp)`
#> columns =
#> - 1 = participant
#> - 2 = rt
#> - 3 = acc = accuracy 0/1
#> - 4 = condition = word/non-word
np <- length(unique(flp$participant)) # number of participants
Because of the large number of participants, the confidence intervals are too narrow to be visible.
out <- hsf(flp, rt ~ condition + participant,
qseq = seq(0.1,0.9,0.1),
tr = 0.2,
alpha = 0.05,
qtype = 8,
todo = c(1,2),
null.value = 0,
adj_method = "hochberg")
# fig.width = 5, fig.height = 3
plot_hsf(out)
nq <- length(out$quantiles)
pdmt0 <- apply(out$individual_sf > 0, 2, sum)
print(paste0('In ',sum(pdmt0 == nq),' participants (',round(100 * sum(pdmt0 == nq) / np, digits = 1),'%), all quantile differences are greater than zero'))
## [1] "In 13 participants (1.4%), all quantile differences are greater than zero"
pdlt0 <- apply(out$individual_sf < 0, 2, sum)
print(paste0('In ',sum(pdlt0 == nq),' participants (',round(100 * sum(pdlt0 == nq) / np, digits = 1),'%), all quantile differences are less than zero'))
## [1] "In 798 participants (83.2%), all quantile differences are less than zero"
set.seed(19)
id <- unique(flp$participant)
df <- subset(flp, flp$participant %in% sample(id, 50, replace = FALSE))
out <- hsf(df, rt ~ condition + participant)
# fig.width = 5, fig.height = 3
plot_hsf(out)
# remove all variability across trials
dfred <- na.omit(tapply(df$rt, list(df$participant, df$condition), mean))
t.test(dfred[,1], dfred[,2], paired = TRUE)
##
## Paired t-test
##
## data: dfred[, 1] and dfred[, 2]
## t = -10.246, df = 49, p-value = 8.923e-14
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -110.30178 -74.12955
## sample estimates:
## mean of the differences
## -92.21567
out <- hsf(df, rt ~ condition + participant,
todo = c(2,1))
plot_hsf(out)
plot_hsf(hsf(df, rt ~ condition + participant, qseq = c(0.25, 0.5, 0.75)))
plot_hsf(hsf(df, rt ~ condition + participant, alpha = 0.01))
With about 1000 trials per condition we can study the distributions in more detail.
p <- plot_hsf(hsf(df, rt ~ condition + participant, qseq = seq(0.05, 0.95, 0.05)))
p + theme(axis.text.x = element_text(size = 10))
set.seed(8899)
out <- hsf_pb(df, rt ~ condition + participant)
plot_hsf_pb(out, interv = "hdi")
plot_hsf_pb_dist(out)
## Warning: Ignoring unknown parameters: .point_interval
Rousselet, G. A., & Wilcox, R. R. (2019). Reaction times and other skewed distributions: problems with the mean and the median. [preprint] [reproducibility package]
Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. Biometrika, 75(4), 800-802.
Ferrand, L., New, B., Brysbaert, M., Keuleers, E., Bonin, P., Méot, A., Augustinova, M., & Pallier, C. (2010). The French Lexicon Project: Lexical decision data for 38,840 French words and 38,840 pseudowords. Behavior Research Methods, 42, 488-496. [article] [dataset]
Speckman, P. L., Rouder, J. N., Morey, R. D. & Pratte, M. S. (2008). Delta plots and coherent distribution ordering. The American Statistician, 62(3), 262–266. [article]