# Fitting Example Using dfddm

#### July 08, 2020

Function dfddm evaluates the density function (or probability density function, PDF) for the Ratcliff diffusion decision model (DDM) using different methods for approximating the full PDF, which contains an infinite sum. An overview of the mathematical details of the different approximations is provided in the Math Vignette. An empirical validation of the implemented methods is provided in the Validity Vignette. Timing benchmarks for the present methods and comparison with existing methods are provided in the Benchmark Vignette.

Our implementation of the DDM has the following parameters: $$a \in (0, \infty)$$ (threshold separation), $$v \in (-\infty, \infty)$$ (drift rate), $$t_0 \in [0, \infty)$$ (non-decision time/response time constant), $$w \in (0, 1)$$ (relative starting point), and $$sv \in (0, \infty)$$ (inter-trial-variability of drift).
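As a quick illustration of where each parameter enters a density evaluation, a single call to dfddm might look like the following; the parameter values here are arbitrary, chosen only to show the arguments.

```r
library("fddm")

# density of one response time of 0.8 s at the upper boundary;
# the parameter values below are purely illustrative
dfddm(rt = 0.8, response = "upper",
      a = 1.5,   # threshold separation
      v = 2.0,   # drift rate
      t0 = 0.3,  # non-decision time
      w = 0.5,   # relative starting point
      sv = 0.5)  # inter-trial variability of drift
```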

# Introduction

This vignette contains two examples of how to use fddm, in particular the dfddm function, in fitting the DDM to real-world data. We will load a dataset that is included in the fddm package and fit the Ratcliff DDM to the response time data contained within the dataset. We will show a simple fitting procedure for estimating the DDM parameter values for only a single individual in the study in addition to a more involved fitting procedure that includes DDM parameter estimation for all of the individuals in the study. After running this more involved optimization, we provide a rudimentary analysis of the fitted parameter estimates that groups the parameter estimates by the expertise of the study’s participants.

# Example Fitting

In this example, we will fit to the med_dec data that comes with fddm. This data contains the accuracy condition reported in Trueblood et al. (2018) investigating medical decision making among medical professionals (pathologists) and novices (i.e., undergraduate students). The task of participants was to judge whether pictures of blood cells show cancerous cells (i.e., blast cells) or non-cancerous cells (i.e., non-blast cells). The data set contains 200 decisions per participant, based on pictures of 100 true cancerous cells and 100 true non-cancerous cells. We load the fddm package, read the data, and remove any invalid responses from the data.

library("fddm")
data(med_dec, package = "fddm")
med_dec <- med_dec[which(med_dec$rt >= 0), ]

## Log-likelihood Function

Our approach will be a straightforward maximum likelihood estimation (MLE). Since we will be using the optimization function nlminb, we must write an objective function for it to optimize. By default nlminb finds the minimum of the objective function instead of the maximum, so we will simply negate our likelihood function. In addition, we will employ the common practice of using the log-likelihood, as this tends to be more numerically stable while maintaining the same minima (negated maxima) as the regular likelihood function.

We are going to be fitting the parameters $$v$$, $$a$$, $$t_0$$, $$w$$, and $$sv$$; however, we want to fit two distinct drift rates, one for the upper boundary ($$v_u$$) and one for the lower boundary ($$v_\ell$$). In order to make this distinction, we require the input of the truthful classification of each decision (i.e., what the correct response is for each entry). Note that our log-likelihood function requires the number of response times, the number of responses, and the number of truthful classifications to all be equal.

As we are using the optimization function nlminb, the first argument to our log-likelihood function needs to be a vector of the initial values of the six parameters that are being optimized: $$v_u$$, $$v_\ell$$, $$a$$, $$t_0$$, $$w$$, and $$sv$$. The rest of the arguments will be the other necessary inputs to dfddm that are not optimized: the vector of response times, the vector of responses, the vector of the truthful classifications, and optionally the allowable error tolerance for the density function. Details on all of these inputs can be found in the dfddm documentation.

Upon being called, the log-likelihood function first separates the input response times and responses by their truthful classification to yield two new response time vectors and two new response vectors.
The response times and responses are then input into separate density functions, each using the appropriate drift rate, $$v_u$$ or $$v_\ell$$. These separate densities are then combined, and the log-likelihood function heavily penalizes any combination of parameters that returns a log-density of $$-\infty$$ (equivalent to a regular density of $$0$$). Lastly, the function returns the negated sum of all of the log-densities.

ll_fun <- function(pars, rt, resp, truth) {
  rtu <- rt[truth == "upper"]
  rtl <- rt[truth == "lower"]
  respu <- resp[truth == "upper"]
  respl <- resp[truth == "lower"]

  # the truth is "upper" so use vu
  densu <- dfddm(rt = rtu, response = respu, a = pars[[3]], v = pars[[1]],
                 t0 = pars[[4]], w = pars[[5]], sv = pars[[6]], log = TRUE)
  # the truth is "lower" so use vl
  densl <- dfddm(rt = rtl, response = respl, a = pars[[3]], v = pars[[2]],
                 t0 = pars[[4]], w = pars[[5]], sv = pars[[6]], log = TRUE)

  densities <- c(densu, densl)
  if (any(!is.finite(densities))) return(1e6)
  return(-sum(densities))
}

## Simple Fitting Routine

As an intermediate step, we will fit the DDM to only one participant from the med_dec data. We select the individual whose data we will use for fitting, then prepare the data by defining the upper and lower responses and the correct response bounds.

onep <- med_dec[med_dec$id == "2" & med_dec$group == "experienced", ]
onep$resp <- ifelse(onep$response == "blast", "upper", "lower")
onep$truth <- ifelse(onep$classification == "blast", "upper", "lower")
str(onep)
#> 'data.frame':    200 obs. of  11 variables:
#>  $ id            : int  2 2 2 2 2 2 2 2 2 2 ...
#>  $ group         : chr  "experienced" "experienced" "experienced" "experienced" ...
#>  $ block         : int  3 3 3 3 3 3 3 3 3 3 ...
#>  $ trial         : int  1 2 3 4 5 6 7 8 9 10 ...
#>  $ classification: chr  "blast" "non-blast" "non-blast" "non-blast" ...
#>  $ difficulty    : chr  "easy" "easy" "hard" "hard" ...
#>  $ response      : chr  "blast" "non-blast" "blast" "non-blast" ...
#>  $ rt            : num  0.853 0.575 1.136 0.875 0.748 ...
#>  $ stimulus      : chr  "blastEasy/BL_10166384.jpg" "nonBlastEasy/16258001115A_069.jpg" "nonBlastHard/BL_11504083.jpg" "nonBlastHard/MY_9455143.jpg" ...
#>  $ resp          : chr  "upper" "lower" "upper" "lower" ...
#>  $ truth         : chr  "upper" "lower" "lower" "lower" ...

We then pass the data and the log-likelihood function, with the necessary additional arguments, to an optimization function. As we are using nlminb for this example, we must input as the first argument the initial values of the DDM parameters that we want optimized. These are input in the order: $$v_u$$, $$v_\ell$$, $$a$$, $$t_0$$, $$w$$, and $$sv$$; we also need to define lower and upper bounds for each parameter. Fitting the DDM to this data is essentially instantaneous using this setup.

fit <- nlminb(c(0, 0, 1, 0, 0.5, 0), objective = ll_fun,
              rt = onep$rt, resp = onep$resp, truth = onep$truth,
              # limits:   vu,   vl,   a,  t0, w,  sv
              lower = c(-Inf, -Inf,   0,   0, 0,   0),
              upper = c( Inf,  Inf, Inf, Inf, 1, Inf))
fit
#> $par
#> [1]  5.6813 -2.1887  2.7909  0.3764  0.4010  2.2813
#>
#> $objective
#> [1] 42.47
#>
#> $convergence
#> [1] 0
#>
#> $iterations
#> [1] 41
#>
#> $evaluations
#> function gradient
#>       60      301
#>
#> $message
#> [1] "relative convergence (4)"

## Fitting the Entire Dataset

Here we will run a more rigorous fitting on the entire med_dec dataset to obtain parameter estimates for each participant in the study. To do this, we define a function to run the data fitting for us; it should output a dataframe containing the parameter estimates for each individual in the data. Its inputs will be the dataset, how the "upper" response is encoded in the dataset, and the indices of the columns in the dataset containing: the identification of the individuals, the response times, the responses, and the truthful classifications.

After some input checking, the fitting function extracts the unique individuals from the dataset and runs the parameter optimization on the responses and response times of each individual. The optimizations themselves are initialized with random initial parameter values to help avoid local minima in favor of the global minimum. Moreover, the optimization is run 5 times for each individual, with 5 different sets of random initial parameter values. The minimized values of the log-likelihood function are compared across all 5 runs, and the smallest such value indicates the best fit. The parameter estimates, convergence code, and minimized value of the log-likelihood function produced by this best fit are saved for that individual.
rt_fit <- function(data, id_idx = NULL, rt_idx = NULL, response_idx = NULL,
                   truth_idx = NULL, response_upper = NULL) {

  # Format data for fitting
  if (all(is.null(id_idx), is.null(rt_idx), is.null(response_idx),
          is.null(truth_idx), is.null(response_upper))) {
    df <- data # assume input data is already formatted
  } else {
    if (any(data[, rt_idx] < 0)) {
      stop("Input data contains negative response times; fit will not be run.")
    }
    if (any(is.na(data[, response_idx]))) {
      stop("Input data contains invalid responses (NA); fit will not be run.")
    }

    nr <- nrow(data)
    df <- data.frame(id = character(nr),
                     rt = double(nr),
                     response = character(nr),
                     truth = character(nr),
                     stringsAsFactors = FALSE)

    if (!is.null(id_idx)) { # relabel identification tags
      for (i in 1:length(id_idx)) {
        idi <- unique(data[, id_idx[i]])
        for (j in 1:length(idi)) {
          df$id[data[, id_idx[i]] == idi[j]] <- paste(
            df$id[data[, id_idx[i]] == idi[j]], idi[j], sep = " ")
        }
      }
      df$id <- trimws(df$id, which = "left")
    }

    df$rt <- as.double(data[, rt_idx])

    df$response <- "lower"
    df$response[data[, response_idx] == response_upper] <- "upper"

    df$truth <- "lower"
    df$truth[data[, truth_idx] == response_upper] <- "upper"
  }

  # Preliminaries
  ids <- unique(df$id)
  nids <- max(length(ids), 1) # if id_idx is NULL, there is only one individual
  ninit_vals <- 5

  # Initialize the output dataframe
  cnames <- c("ID", "Convergence", "Objective", "vu_fit", "vl_fit",
              "a_fit", "t0_fit", "w_fit", "sv_fit")
  out <- data.frame(matrix(ncol = length(cnames), nrow = nids))
  colnames(out) <- cnames
  temp <- data.frame(matrix(ncol = length(cnames) - 1, nrow = ninit_vals))
  colnames(temp) <- cnames[-1]

  # Loop through each individual and starting values
  for (i in 1:nids) {
    out$ID[i] <- ids[i]

    # extract data for id i
    dfi <- df[df$id == ids[i], ]
    rti <- dfi$rt
    respi <- dfi$response
    truthi <- dfi$truth

    # starting value for t0 must be smaller than the smallest rt
    min_rti <- min(rti)

    # create initial values for this individual
    init_vals <- data.frame(vu = rnorm(n = ninit_vals, mean = 4, sd = 2),
                            vl = rnorm(n = ninit_vals, mean = -4, sd = 2),
                            a  = runif(n = ninit_vals, min = 0.5, max = 5),
                            t0 = runif(n = ninit_vals, min = 0, max = min_rti),
                            w  = runif(n = ninit_vals, min = 0, max = 1),
                            sv = runif(n = ninit_vals, min = 0, max = 5))

    # loop through all of the starting values
    for (j in 1:ninit_vals) {
      mres <- nlminb(init_vals[j, ], ll_fun,
                     rt = rti, resp = respi, truth = truthi,
                     # limits:   vu,   vl,   a,  t0, w,  sv
                     lower = c(-Inf, -Inf,   0,   0, 0,   0),
                     upper = c( Inf,  Inf, Inf, Inf, 1, Inf))
      temp$Convergence[j] <- mres$convergence
      temp$Objective[j] <- mres$objective
      temp[j, -c(1, 2)] <- mres$par
    }

    # determine best fit for the individual
    min_idx <- which.min(temp$Objective)
    out[i, -1] <- temp[min_idx, ]
  }

  return(out)
}

We load the dataset, remove any invalid rows, and run the fitting; the dataframe of fitting results is shown below.

data(med_dec, package = "fddm")
med_dec <- med_dec[which(med_dec$rt >= 0), ]

fit <- rt_fit(med_dec, id_idx = c(2,1), rt_idx = 8, response_idx = 7,
              truth_idx = 5, response_upper = "blast")
fit
#>                  ID Convergence Objective  vu_fit    vl_fit a_fit t0_fit  w_fit sv_fit
#> 1     experienced 2           0    42.472  5.6813 -2.188662 2.791 0.3764 0.4010 2.2813
#> 2     experienced 6           0     9.277  3.7361 -4.695748 2.631 0.3509 0.5976 2.4680
#> 3     experienced 7           0   -12.023  2.6125 -2.723905 1.757 0.4455 0.4588 1.5768
#> 4     experienced 9           0    71.578  4.0534 -3.988777 2.829 0.4498 0.4743 2.8875
#> 5    experienced 12           0    21.641  5.1522 -1.642243 2.144 0.3737 0.4425 2.0534
#> 6    experienced 14           0     8.085  3.5183 -1.613796 1.859 0.3993 0.4812 1.2818
#> 7    experienced 16           0   253.696  1.2602 -1.389188 2.756 0.4692 0.4401 1.0434
#> 8    experienced 17           0   161.668  2.4201 -1.914755 2.615 0.4667 0.4075 1.6725
#> 9   inexperienced 3           0    51.613  3.0706 -0.179936 1.653 0.4250 0.5791 1.8457
#> 10  inexperienced 4           0   158.042  0.7468 -1.055648 1.728 0.4092 0.4794 0.8258
#> 11  inexperienced 5           0   113.070  3.9516 -3.793086 2.895 0.4040 0.3932 3.5158
#> 12  inexperienced 8           0   148.755  2.1662 -0.058012 2.323 0.1273 0.4734 0.5912
#> 13 inexperienced 10           0   -23.838  2.8688 -3.177567 1.730 0.4001 0.4701 1.7026
#> 14 inexperienced 11           0   201.106  2.1116 -0.551508 2.732 0.1341 0.4301 1.0057
#> 15 inexperienced 13           0    52.699  5.0587 -1.293657 2.250 0.3708 0.5075 2.5581
#> 16 inexperienced 15           0    40.303  1.9817 -0.086222 1.343 0.3660 0.5046 0.4336
#> 17 inexperienced 18           0    60.919  2.4624 -0.991438 1.616 0.5178 0.4478 1.3322
#> 18 inexperienced 19           0   130.081  2.9277 -3.406009 3.215 0.3365 0.4878 1.9747
#> 19         novice 1           0   152.450  0.4255 -1.148243 1.796 0.3945 0.4832 1.2299
#> 20         novice 2           0   198.129  0.2382 -3.040580 2.491 0.5062 0.4642 2.1391
#> 21         novice 3           0   125.433  1.5841 -1.128407 1.670 0.4838 0.5087 1.6309
#> 22         novice 4           0    78.002 -0.3561 -1.252989 1.415 0.3792 0.5636 1.4141
#> 23         novice 5           0   129.790  0.7649 -1.872245 1.742 0.4956 0.5286 1.1449
#> 24         novice 6           0   110.105  1.7099 -0.509214 1.704 0.4326 0.5377 1.1984
#> 25         novice 7           0   343.795  0.4114 -0.925340 2.893 0.3894 0.4137 0.6613
#> 26         novice 8           0    23.690  2.7386  1.884254 1.712 0.0930 0.3724 1.3573
#> 27         novice 9           0     6.753  1.7057 -1.327128 1.335 0.3758 0.4748 0.4797
#> 28        novice 10           0    84.534  1.1978 -0.023265 1.272 0.4693 0.4086 0.7307
#> 29        novice 11           0    30.496  2.4929 -1.254151 1.571 0.3782 0.5702 1.6935
#> 30        novice 12           0    19.840 -1.0715 -3.268591 1.734 0.3323 0.5510 1.0363
#> 31        novice 13           0     3.215  3.5515 -0.925159 1.698 0.3842 0.5903 1.0028
#> 32        novice 14           0    13.321  3.4875 -2.187928 1.691 0.3717 0.4580 2.6167
#> 33        novice 15           0   103.567  1.3054 -2.613707 1.846 0.4028 0.5180 2.0519
#> 34        novice 16           0   101.081  1.6672 -1.111215 1.812 0.3310 0.5929 0.9928
#> 35        novice 17           0   248.656  0.5313 -0.693281 2.328 0.1901 0.4723 0.9625
#> 36        novice 18           0    72.383  0.8209 -1.454783 1.414 0.4633 0.5227 0.8923
#> 37        novice 19           0   155.092  1.7933 -2.037156 2.301 0.4277 0.5099 1.8230
#> 38        novice 20           0   210.012  1.1097  0.409930 2.051 0.0000 0.4615 0.5932
#> 39        novice 21           0   -50.515  2.2779 -1.403195 1.184 0.4149 0.4805 0.7363
#> 40        novice 22           0    -6.886  2.4617 -1.882826 1.462 0.3269 0.5325 1.4456
#> 41        novice 23           0    99.036  0.4869 -0.996690 1.383 0.4182 0.5759 1.0347
#> 42        novice 24           0    52.208  2.0161 -0.731876 1.669 0.3322 0.5664 0.9166
#> 43        novice 25           0    36.029  2.1551 -2.658371 1.821 0.4141 0.5133 1.9366
#> 44        novice 26           0   163.488  0.4245 -1.529602 1.934 0.4507 0.3925 1.2570
#> 45        novice 27           0    44.627  0.5760 -1.468021 1.290 0.5113 0.4539 1.1904
#> 46        novice 28           0   115.818  0.7869 -0.929356 1.628 0.1311 0.4851 0.4752
#> 47        novice 29           0   -27.070  1.2844 -0.949565 1.031 0.3688 0.4976 1.1021
#> 48        novice 30           0    99.798  1.2689 -0.737151 1.621 0.3308 0.5218 0.7305
#> 49        novice 31           0   227.685  0.3707 -0.304099 1.852 0.4709 0.4474 0.9408
#> 50        novice 32           0    77.311  1.7172 -0.204380 1.414 0.5120 0.3828 1.4498
#> 51        novice 33           0    86.096  1.7151 -0.198330 1.395 0.3317 0.4688 1.1781
#> 52        novice 34           0    85.562  1.2147 -0.070721 1.367 0.4892 0.5227 1.0500
#> 53        novice 35           0   158.302  1.6009 -1.895735 2.206 0.5111 0.4798 3.5130
#> 54        novice 36           0     5.739  1.8265 -2.324060 1.679 0.5149 0.4280 0.8562
#> 55        novice 37           0    80.647  1.3637  0.003775 1.479 0.3295 0.5346 0.8036

### Rudimentary Analysis

To show some basic results of our fitting, we will plot the fitted values of $$v_u$$ and $$v_\ell$$, grouped by the experience level of the participant, to demonstrate how these parameters differ among novices, inexperienced professionals, and experienced professionals.

library("reshape2")
library("ggplot2")

fitp <- data.frame(fit[, c(1, 4, 5)]) # make a copy to manipulate for plotting
colnames(fitp)[-1] <- c("vu", "vl")
for (i in 1:length(unique(fitp$ID))) {
  first <- substr(fitp$ID[i], 1, 1)
  if (first == "n") {
    fitp$ID[i] <- "novice"
  } else if (first == "i") {
    fitp$ID[i] <- "inexperienced"
  } else {
    fitp$ID[i] <- "experienced"
  }
}

fitp <- melt(fitp, id.vars = "ID", measure.vars = c("vu", "vl"),
             variable.name = "vuvl", value.name = "estimate")

ggplot(fitp, aes(x = factor(ID, levels = c("novice", "inexperienced", "experienced")),
                 y = estimate,
                 color = factor(vuvl, levels = c("vu", "vl")))) +
  geom_point(alpha = 0.4, size = 4) +
  labs(title = "Parameter Estimates for vu and vl",
       x = "Experience Level", y = "Parameter Estimate",
       color = "Drift Rate") +
  theme_bw() +
  theme(panel.border = element_blank(),
        plot.title = element_text(size = 23),
        plot.subtitle = element_text(size = 16),
        axis.text.x = element_text(size = 16),
        axis.text.y = element_text(size = 16),
        axis.title.x = element_text(size = 20,
                                    margin = margin(10, 5, 5, 5, "pt")),
        axis.title.y = element_text(size = 20),
        legend.title = element_text(size = 20),
        legend.text = element_text(size = 16))

Before we begin analysis of this plot, note that the drift rate corresponding to the upper threshold should always be positive, and the drift rate corresponding to the lower threshold should always be negative. A few of the novices' fitted values violate this convention, suggesting that those participants consistently responded incorrectly to the stimuli. In contrast, both the inexperienced and experienced participants show a clean division of drift rates around zero.
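One way to check this observation numerically is to count, per experience level, how many fits violate the expected signs. This is a quick sketch, assuming fit is the dataframe returned by rt_fit() above:

```r
# group label is the first word of the ID column (e.g., "novice 12" -> "novice")
grp <- sub(" .*", "", fit$ID)

# TRUE wherever a fit violates the expected signs (vu > 0, vl < 0)
violates <- fit$vu_fit < 0 | fit$vl_fit > 0
table(grp, violates)
```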

In addition, we notice that the more experienced participants tend to have fitted drift rates that are larger in absolute value. A more extreme drift rate indicates that the participant receives and processes information more efficiently than one with a milder drift rate. The overall pattern is that the novices are on average the least efficient at extracting information from the stimuli, the experienced professionals are the most efficient, and the inexperienced professionals fall somewhere in the middle. This pattern indicates that experienced professionals are indeed better at their job than untrained undergraduate students!
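This pattern can also be summarized numerically, for instance as the mean absolute fitted drift rates per experience level (again a sketch, assuming fit from rt_fit() above):

```r
# first word of the ID column gives the experience level
fit$group <- sub(" .*", "", fit$ID)

# mean absolute drift rates by group; larger values suggest
# more efficient information processing
aggregate(cbind(abs(vu_fit), abs(vl_fit)) ~ group, data = fit, FUN = mean)
```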

#### R Session Info

sessionInfo()
#> R version 4.0.2 (2020-06-22)
#> Platform: x86_64-w64-mingw32/x64 (64-bit)
#> Running under: Windows 10 x64 (build 18363)
#>
#> Matrix products: default
#>
#> locale:
#> [1] LC_COLLATE=C                            LC_CTYPE=English_United Kingdom.1252
#> [3] LC_MONETARY=English_United Kingdom.1252 LC_NUMERIC=C
#> [5] LC_TIME=English_United Kingdom.1252
#>
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base
#>
#> other attached packages:
#> [1] ggplot2_3.3.2.9000   reshape2_1.4.4       microbenchmark_1.4-7 RWiener_1.3-3
#> [5] rtdists_0.11-2       fddm_0.1-1
#>
#> loaded via a namespace (and not attached):
#>  [1] Rcpp_1.0.4.6     pillar_1.4.4     compiler_4.0.2   plyr_1.8.6       tools_4.0.2
#>  [6] digest_0.6.25    evd_2.3-3        evaluate_0.14    lifecycle_0.2.0  tibble_3.0.1
#> [11] gtable_0.3.0     lattice_0.20-41  pkgconfig_2.0.3  rlang_0.4.6      Matrix_1.2-18
#> [16] yaml_2.2.1       mvtnorm_1.1-1    expm_0.999-4     xfun_0.15        withr_2.2.0
#> [21] dplyr_1.0.0      stringr_1.4.0    knitr_1.29       generics_0.0.2   vctrs_0.3.1
#> [26] tidyselect_1.1.0 grid_4.0.2       ggnewscale_0.4.1 glue_1.4.1       R6_2.4.1
#> [31] survival_3.2-3   rmarkdown_2.3    farver_2.0.3     purrr_0.3.4      magrittr_1.5
#> [36] scales_1.1.1     htmltools_0.5.0  ellipsis_0.3.1   splines_4.0.2    colorspace_1.4-1
#> [41] labeling_0.3     stringi_1.4.6    gsl_2.1-6        munsell_0.5.0    msm_1.6.8
#> [46] crayon_1.3.4

# References

Trueblood, Jennifer S., William R. Holmes, Adam C. Seegmiller, Jonathan Douds, Margaret Compton, Eszter Szentirmai, Megan Woodruff, Wenrui Huang, Charles Stratton, and Quentin Eichbaum. 2018. “The Impact of Speed and Bias on the Cognitive Processes of Experts and Novices in Medical Image Decision-Making.” Cognitive Research: Principles and Implications 3 (1): 28. https://doi.org/10.1186/s41235-018-0119-2.