Forced-choice (FC) tests are attracting increasing interest from researchers because, when well designed, they are resistant to faking. A well-designed FC test is typically characterized by blocks whose items measure different latent traits and have similar magnitudes of, or high inter-item agreement (IIA) in, social desirability. Other scoring models may also require factor loading differences or item locations within a block to be maximized or minimized.
Either way, deciding which items should be assigned to the same block (item pairing) is a crucial step in building a well-designed FC test, and it is currently carried out manually. However, given that multiple objectives often need to be met simultaneously, manual pairing becomes impractical and even infeasible, especially when the number of latent traits and/or the number of items per trait grows large.
The R package autoFC is developed to address these difficulties and provides a tool for facilitating automatic FC test construction. It offers users the functionality to:
Customize one or more item pairing criteria and calculate a composite pairing index, termed “energy” with user-specified weights for each criterion.
Automatically optimize the energy for the whole test by sequentially or simultaneously optimizing each matching rule, through the exchange of items among blocks or replacement with unused items.
Construct parallel forms of the same test following the same pairing rules.
Users are allowed to create an FC test of any block size (e.g., pairs, triplets, quadruplets).
Below is a brief explanation of all functions provided by autoFC. Details and usage can be found in the next section.
cal_block_energy() and cal_block_energy_with_iia() both calculate the total energy for a single item block, or for a full FC test with multiple blocks, given a data frame of item characteristics. The latter function also incorporates IIA metrics into the energy calculation.
cal_block_energy_with_iia() incorporates four IIA metrics, with which items are paired by maximizing the IIA within each block. Each IIA has a default weight of 1.
make_random_block() takes the number of items and block size as input arguments and produces a test with blocks of randomly paired item numbers. Information about item characteristics is not required.
get_iia() takes in item responses and a single item block (or a full FC test with multiple blocks), then returns IIA metrics for each item block.
sa_pairing_generalized() is the automatic pairing function, which takes in item characteristics (and also individual responses for all items) and an initial FC test, then optimizes the energy of the test based on the Simulated Annealing (SA) algorithm.
SA is a probabilistic technique for approximating the global optimum of a given function, in which the “Temperature” cools down at each iteration until it reaches a certain value. Within each iteration, a new solution (FC test) is produced and compared with the current solution in terms of energy (calculated by calling cal_block_energy() or cal_block_energy_with_iia()). Acceptance or rejection of the new solution is determined as follows: a new solution with higher energy is always accepted, while a worse solution is accepted with a probability that depends on the energy difference and the current temperature.
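This acceptance step can be sketched in base R as follows (an illustration of the generic SA rule only; autoFC's internal implementation may differ in its details):

```r
# Generic simulated-annealing acceptance rule (illustrative sketch only;
# not autoFC's internal code).
accept_solution <- function(energy_old, energy_new, temperature) {
  if (energy_new >= energy_old) {
    TRUE  # a solution with equal or higher energy is always accepted
  } else {
    # a worse solution is accepted with probability exp(delta / temperature),
    # which shrinks as the temperature cools down
    runif(1) < exp((energy_new - energy_old) / temperature)
  }
}

accept_solution(-25.3, -20.1, 1)     # an improvement is always accepted
accept_solution(-25.3, -30.0, 1e-9)  # a much worse solution at near-zero temperature is rejected
```

High temperatures early in the run therefore let the search escape local optima, while the cooling schedule gradually makes it greedy.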
If all items in the item characteristic data frame are used to construct the FC test,
sa_pairing_generalized() will produce new solutions by randomly exchanging items between two blocks; otherwise, it will randomly choose between exchanging items and replacing items with unused ones, based on the proportion of items used to construct the test.
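The proposal step just described can be sketched as below. The function name and the exact proportionality rule are assumptions for illustration (in the actual package, this probability is configurable through prob_newitem):

```r
# Illustrative sketch of the proposal step (not autoFC internals):
# with a surplus item pool, choose between swapping items across blocks
# and pulling in an unused item, based on how much of the pool is in use.
propose_move <- function(n_items_in_test, n_items_in_pool) {
  prob_new_item <- 1 - n_items_in_test / n_items_in_pool  # assumed rule
  if (runif(1) < prob_new_item) "replace_with_unused" else "exchange_between_blocks"
}

propose_move(60, 60)  # all items used: exchanging between blocks is the only option
```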
sa_pairing_generalized() has built-in default values for most arguments; for example, if the
block argument is not provided, an FC test with block size 2 using all of the given items will be constructed by default. See the tutorial below for the meanings and default values of the other arguments.
In this tutorial, we suppose that sixty 5-point Likert items measuring the Big Five traits, each with a certain item location, are used to build an FC scale with block size 3. We also simulate responses from 1,000 participants for the 60 items on their social desirability.
set.seed(2021)
# Simulation of 1,000 respondents on 60 items. A better simulation would
# consist of responses generated from specific IRT parameters.
s1 <- sample(seq(1:5), 500*60, replace = TRUE,
             prob = c(0.10, 0.15, 0.20, 0.25, 0.30))
s2 <- sample(seq(1:5), 500*60, replace = TRUE,
             prob = c(0.50, 0.10, 0.10, 0.15, 0.15))
item_responses <- matrix(c(s1, s2), ncol = 60)
item_dims <- sample(c("Openness", "Conscientiousness", "Neuroticism",
                      "Extraversion", "Agreeableness"), 60, replace = TRUE)
item_mean <- colMeans(item_responses)
item_difficulty <- runif(60, -1, 1)
# Then we build a data frame with item characteristics
item_chars <- data.frame(DIM = item_dims, SD_Mean = item_mean, DIFF = item_difficulty)
char_weights <- c(1, -1, -3)
Next, we build a random FC scale using the 60 items with block size 3. You can see from
initial_FC that all 60 items are now divided into 20 triplets.
initial_FC <- make_random_block(total_items = 60, item_per_block = 3)
knitr::kable(initial_FC)
Also, let’s see how the item characteristics look for each of the 20 triplets.
First, the underlying latent traits. We see some cases where two items measuring the same trait appear in the same block, which is something we want to avoid.
knitr::kable(matrix(item_chars$DIM[t(initial_FC)], ncol = 3, byrow = TRUE))
Then, scores on social desirability. We see many cases where items within a block differ in social desirability by more than 1 point on a 5-point scale. That’s not good.
sd_initial <- matrix(item_chars$SD_Mean[t(initial_FC)], ncol = 3, byrow = TRUE)
knitr::kable(sd_initial)
Lastly, item difficulty. Item difficulties within a block are also inconsistent.
diff_initial <- matrix(item_chars$DIFF[t(initial_FC)], ncol = 3, byrow = TRUE)
knitr::kable(diff_initial)
Next, we calculate the energy for initial_FC, with
FUN left as the default.
weights is set to -1 for social desirability and -3 for item difficulty because we want the within-block discrepancy in these characteristics to be as low as possible.
The weight for item difficulty is larger in magnitude to compensate for its smaller range compared with social desirability. Be mindful of scaling differences among item characteristics and choose weights accordingly.
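To make the weighting concrete, here is a hand-rolled sketch of how one block contributes to the energy under weights of c(1, -1, -3). The facfun_sketch helper is a hypothetical stand-in for the package's facfun, and the characteristic values are made up; cal_block_energy() remains the authoritative implementation:

```r
# Hypothetical triplet characteristics (made-up values for illustration)
block_dims <- c("Openness", "Neuroticism", "Agreeableness")
block_sd   <- c(3.1, 3.3, 2.9)     # social desirability means
block_diff <- c(0.20, 0.15, 0.25)  # item difficulties

# Stand-in for facfun: 1 when all traits in the block are distinct, else 0
facfun_sketch <- function(x) as.numeric(length(unique(x)) == length(x))

# Weighted sum of per-characteristic criteria, mirroring weights = c(1, -1, -3):
# reward distinct traits, penalize within-block variance
block_energy <- 1 * facfun_sketch(block_dims) +
  (-1) * var(block_sd) +
  (-3) * var(block_diff)
block_energy  # higher is better
```

Summing such terms over all blocks gives the test-level energy that the pairing algorithm tries to maximize.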
cal_block_energy(block = initial_FC, item_chars = item_chars,
                 weights = char_weights)
#> -25.28441
If IIAs are included, we get a lower energy value. This is because these randomly generated responses are unlikely to be consistent with one another, yielding very low and even negative IIAs:
cal_block_energy_with_iia(block = initial_FC, item_chars = item_chars,
                          weights = char_weights,
                          rater_chars = item_responses)
#>           [,1]
#> [1,] -33.31589
Notice that if we give zero weights to all IIAs, we get the same energy value as cal_block_energy():
cal_block_energy_with_iia(block = initial_FC, item_chars = item_chars,
                          weights = char_weights,
                          rater_chars = item_responses,
                          iia_weights = c(0, 0, 0, 0))
#>           [,1]
#> [1,] -25.28441
Also, if you want to inspect the inter-item agreement metrics for each block, you can use
get_iia(). The values here are not impressive and are for demonstration purposes only; users are encouraged to use real-world response data to examine the IIA within each block.
knitr::kable(get_iia(block = initial_FC, data = item_responses))
To produce an optimized paired FC scale, we have the following objectives:
* Keep items in the same block from different latent traits;
* Minimize the variance of social desirability within each block;
* Minimize the variance of item difficulty within each block.
For IIAs, we also want to maximize the mean of the four IIAs within each block.
Below is an example run producing an automatically paired FC scale. Arguments that may be of interest to users include:
block: The initial paired FC scale, which can be produced in Step 2. If left empty, an FC scale with block size 2 and items presented sequentially will be produced, with the total number of items equal to the number of rows in item_chars.
total_items: Defaults to the number of unique values in
block. Can be larger than this, which represents cases where only some items in the item pool are used to build the FC scale.
Temperature: The initial temperature value of the automatic pairing method. Higher temperatures are associated with a higher probability of accepting a worse solution. It is recommended to leave this value blank and let it be scaled on the energy of
block by specifying eta_Temperature.
r: Determines the decrease rate of
Temperature. Should be a value between 0 and 1. Larger r values allow more iterations in the optimization process but will slow down the program.
end_criteria: Determines the end criterion for the automatic pairing process, expressed as a proportion scaled on
Temperature. Should be a value between 0 and 1. Smaller values allow more iterations in the optimization process but will slow down the program.
item_chars: A data frame with item characteristics for all items. It is recommended that information irrelevant to pairing be discarded beforehand, but users can also set the corresponding position in
weights to 0 to bypass irrelevant item characteristics (such as item ID).
FUN: A vector of function names for optimizing each item characteristic within each block, for example
FUN = c('mean', 'var', 'sum'). Customized functions are also supported. Defaults to
var for numeric variables and
facfun for factor/character variables.
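As a concrete illustration, a customized criterion can be any function of the values a characteristic takes within a block. The range_spread name below is hypothetical, introduced only for this example:

```r
# A hypothetical custom within-block criterion: the spread (max - min)
# of a numeric characteristic, as an alternative to var
range_spread <- function(x) max(x) - min(x)

# It could then be referenced by name in FUN alongside the defaults, e.g.
# FUN = c("facfun", "range_spread", "var")
range_spread(c(0.20, 0.15, 0.25))
#> [1] 0.1
```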
n_exchange: Determines how many blocks are exchanged to produce a new solution in each iteration. Should be a value less than the number of blocks in the test.
weights: A vector of integer values indicating relative weights for each item characteristic after being calculated by
FUN. Defaults to a vector of all 1s.
prob_newitem: Probability of choosing the strategy of picking a new item, when not all candidate items are used to build the FC scale.
If you wish to use IIAs as pairing criteria, here are some arguments that might be useful. Note that
iia_weights is ignored when use_IIA is FALSE.
use_IIA: Logical. Indicates whether IIA metrics are used as matching criteria.
rater_chars: Item responses for all items by a certain number of participants.
iia_weights: A vector of length 4 indicating the weights given to the 4 IIA metrics, including linearly and quadratically weighted AC (Gwet, 2008; 2014) and the Brennan-Prediger Index (Brennan & Prediger, 1981; Gwet, 2014). Defaults to a vector of all 1s.
# Note that this will take some time to run! (~1-2 minutes with this setting)
# Weights for social desirability score and item difficulty are negative
# because we don't want the variance of these characteristics to be large.
result <- sa_pairing_generalized(block = initial_FC, eta_Temperature = 0.01,
                                 r = 0.995, end_criteria = 10^(-6),
                                 weights = char_weights,
                                 item_chars = item_chars,
                                 use_IIA = TRUE,
                                 rater_chars = item_responses)
Finally, let’s see how this pairing method improves from the initial solution!
Let’s first compare the total energy with the previous one. First is the initial energy, which is identical to what we calculated in Step 4.
# Initial energy with IIA
cal_block_energy_with_iia(block = result$block_initial, item_chars = item_chars,
                          weights = char_weights,
                          rater_chars = item_responses)
#>           [,1]
#> [1,] -33.31589

# Alternative way to retrieve the initial energy
print(result$energy_initial)
#> -33.31589
And the final result:
# Final energy with IIA
cal_block_energy_with_iia(block = result$block_final, item_chars = item_chars,
                          weights = char_weights,
                          rater_chars = item_responses)
#>          [,1]
#> [1,] 21.69018

# Alternative way to retrieve the final energy
print(result$energy_final)
#> 21.69018
Let’s take a look at how items are matched within each block. First, the underlying latent traits.
This time, within each block, the three items come from three distinct latent traits!
(Note: pairing does not guarantee that items will ALWAYS come from different latent traits. But if you want to increase the likelihood of such a result, you can increase the weight corresponding to the item dimension.)
knitr::kable(matrix(item_chars$DIM[t(result$block_final)], ncol = 3, byrow = TRUE))
Next, let’s look at the differences in social desirability within each block. Item social desirability scores are much closer to each other within each block, but there are still large discrepancies in two blocks.
sd_final <- matrix(item_chars$SD_Mean[t(result$block_final)], ncol = 3, byrow = TRUE)
knitr::kable(sd_final)
A more intuitive way to present this: how much has the average within-block variance improved across all blocks? It is much lower now, though there is still room for improvement.
# Initial
print(mean(apply(sd_initial, 1, var)))
#> 0.3317092
# Final
print(mean(apply(sd_final, 1, var)))
#> 0.04195382
Finally, we look at item difficulty. Good improvement is observed here as well: the differences in item difficulty within each block also decrease:
diff_final <- matrix(item_chars$DIFF[t(result$block_final)], ncol = 3, byrow = TRUE)
knitr::kable(diff_final)
How much have we improved on the average variance for all blocks in this case?
# Initial
print(mean(apply(diff_initial, 1, var)))
#> 0.4275037
# Final
print(mean(apply(diff_final, 1, var)))
#> 0.04305659
We also list the IIAs for demonstration purposes; they improve as well.
colMeans(get_iia(result$block_final, data = item_responses))
#>     BPlin    BPquad     AClin    ACquad
#> 0.0195425 -0.0498910 0.1263995 0.1595815
In some cases, users may want to optimize item characteristics sequentially rather than simultaneously. This makes sense because simultaneous optimization may inevitably favor improvement in one characteristic at the cost of a worse fit for another, as we observed in Step 5.
Two approaches can address this problem:
Pay careful attention to the distribution of each item characteristic and try out different weights for characteristics with different scales. Alternatively, try smaller
end_criteria or larger
n_exchange values to allow more iterations to run;
Use a multi-step optimization process, where some item characteristics are optimized first, then others. This involves running
sa_pairing_generalized() several times, each time optimizing more item characteristics. Characteristics already optimized retain their weights in later stages, while those not yet optimized receive weights of 0.
Using the previous example, we show how the second approach works, starting from
initial_FC. First, we optimize on latent traits:
FC_1 <- sa_pairing_generalized(initial_FC, eta_Temperature = 0.01,
                               r = 0.995, end_criteria = 10^(-6),
                               weights = c(1, 0, 0),
                               item_chars = item_chars,
                               use_IIA = TRUE,
                               rater_chars = item_responses)
Then, we optimize based on minimizing variance in social desirability within a block.
FC_2 <- sa_pairing_generalized(FC_1$block_final, eta_Temperature = 0.01,
                               r = 0.995, end_criteria = 10^(-6),
                               weights = c(1, -1, 0),
                               item_chars = item_chars,
                               use_IIA = TRUE,
                               rater_chars = item_responses)
Finally, we optimize based on minimizing the variance in item difficulty.
FC_3 <- sa_pairing_generalized(FC_2$block_final, eta_Temperature = 0.01,
                               r = 0.995, end_criteria = 10^(-6),
                               weights = c(1, -1, -3),
                               item_chars = item_chars,
                               use_IIA = TRUE,
                               rater_chars = item_responses)
First, the underlying latent traits. The result looks as good as what we obtained in Step 5.
knitr::kable(matrix(item_chars$DIM[t(FC_3$block_final)], ncol = 3, byrow = TRUE))
Next, let’s look at the differences in social desirability within each block. This performs better than Step 5: the large within-block discrepancies are gone!
sd_FC3 <- matrix(item_chars$SD_Mean[t(FC_3$block_final)], ncol = 3, byrow = TRUE)
knitr::kable(sd_FC3)
As before, let’s see how much the average within-block variance has improved across all blocks. The improvement in social desirability is confirmed by the decrease in variance.
# Initial solution
print(mean(apply(sd_initial, 1, var)))
#> 0.3317092
# Simultaneous optimization
print(mean(apply(sd_final, 1, var)))
#> 0.04195382
# Sequential optimization
print(mean(apply(sd_FC3, 1, var)))
#> 0.002573133
Finally we look at item difficulty.
diff_fc3 <- matrix(item_chars$DIFF[t(FC_3$block_final)], ncol = 3, byrow = TRUE)
knitr::kable(diff_fc3)
Average variance of item difficulty also decreases!
# Initial solution
print(mean(apply(diff_initial, 1, var)))
#> 0.4275037
# Simultaneous optimization
print(mean(apply(diff_final, 1, var)))
#> 0.04305659
# Sequential optimization
print(mean(apply(diff_fc3, 1, var)))
#> 0.04012038
How about IIAs?
colMeans(get_iia(FC_3$block_final, data = item_responses))
#>     BPlin    BPquad     AClin    ACquad
#> 0.0335635 -0.0304330 0.1459820 0.1892760
In this tutorial, we have shown how the functionality of
autoFC can be used to automatically build forced-choice scales with better matches on various item characteristics.
We note that this automatic tool does not guarantee a scale with the best matches on all characteristics simultaneously; instead, it provides near-optimal solutions within a search space where exhaustive enumeration of every candidate solution is unrealistic.
We also note that, to produce scales with better overall fit, users are encouraged to: 1. customize their own functions for optimizing each item characteristic; 2. try out different
Temperature (eta_Temperature) or
r values; 3. use a sequential process to optimize item characteristics one by one.