Poster No:
2044
Submission Type:
Abstract Submission
Authors:
Ji Hyun Ko1
Institutions:
1University of Manitoba, Winnipeg, Manitoba
Introduction:
In a randomized controlled trial (RCT), demonstrating a significant interaction effect on the outcome measurements is the gold-standard way of proving the effects of a therapy [1]. Post-hoc analyses can then be conducted to discern in which direction the outcome variable moved. The ideal scenario is a significant effect in the intervention group with no significant change detected in the control group (Figure 1A). In reality, however, researchers often encounter unexpected outcomes, e.g., an X-shaped crossing (Figure 1B), or significant changes in a waitlisted group over a wait period during which no treatment was given. These results are often regarded as false positives driven by unintentional bias introduced in the trial procedure, which leads peer reviewers and readers to question the credibility of the findings.
In this study, I demonstrate, through computer simulation, how the current practice of performing post-hoc tests only on the few clusters identified by the interaction analysis can structurally bias the post-hoc results toward more false positives in the control condition.

·Figure 1. Interaction effects in an ideal case (A) and in a realistic voxel-based neuroimaging analysis (B).
Methods:
In this simulation study, a simulated cohort was divided into two groups (active treatment vs. control treatment), and measurements were taken pre- and post-intervention. No intervention was provided to the control group. Tests were performed 10,000 times (representing 10,000 voxels or brain regions), and treatment effects were simulated in 1,000 of these "voxels." All measures were z-scored (normally distributed). The simulation was repeated for varying degrees of treatment effect (0.1-1.0), levels of nonspecific test-retest effects relative to interindividual variability (0.1-1.0), and sample sizes (10-100 per group).
Interaction effects (treatment group × time) were assessed for all tests, and p-values were corrected for the false discovery rate (estimating q-values); q < 0.05 was considered significant. For the tests in which the interaction effect was significant, post-hoc analyses were performed using paired t-tests. The proportion of false positive results in the control condition among the true interaction effects was calculated and fitted against the sensitivity of the results (the proportion of significant interaction effects among the 1,000 simulated treatment effects), as sketched below.
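A minimal Python sketch of one run of this simulation, assuming the nonspecific test-retest effect enters as zero-mean noise added to the post measurement and using Benjamini-Hochberg adjustment as a stand-in for q-value estimation; parameter values and data-generating details are illustrative, not the exact settings of the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

n_per_group = 30   # sample size per group (varied 10-100 in the study)
n_vox = 10_000     # simulated "voxels"/tests
n_true = 1_000     # voxels with a true treatment effect
effect = 0.5       # treatment effect in z-units (varied 0.1-1.0)
sigma_tr = 0.5     # test-retest noise relative to interindividual SD (varied 0.1-1.0)

# Baseline measurements: interindividual variability, z-scored
pre_act = rng.normal(0, 1, size=(n_vox, n_per_group))
pre_ctl = rng.normal(0, 1, size=(n_vox, n_per_group))

# Post measurements: baseline + nonspecific test-retest noise; the treatment
# effect is added only in the active group and only in the first n_true voxels
post_act = pre_act + rng.normal(0, sigma_tr, size=pre_act.shape)
post_ctl = pre_ctl + rng.normal(0, sigma_tr, size=pre_ctl.shape)
post_act[:n_true] += effect

# Group x time interaction, tested as change scores compared between groups
d_act, d_ctl = post_act - pre_act, post_ctl - pre_ctl
_, p_inter = stats.ttest_ind(d_act, d_ctl, axis=1)

# FDR correction across all 10,000 tests
sig = multipletests(p_inter, alpha=0.05, method="fdr_bh")[0]

# Post-hoc paired t-tests, performed only where the interaction was significant
true_hits = np.where(sig[:n_true])[0]        # true effects that were detected
sensitivity = true_hits.size / n_true
if true_hits.size:
    _, p_ctl = stats.ttest_rel(post_ctl[true_hits], pre_ctl[true_hits], axis=1)
    fp_ctl = np.mean(p_ctl < 0.05)           # false positives in the control arm
else:
    fp_ctl = float("nan")
print(f"sensitivity={sensitivity:.2f}, control false-positive ratio={fp_ctl:.2f}")
```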
Results:
In all simulations, the false positive rate of the interaction effects remained below 0.7%. The true positive rate (sensitivity) varied markedly across simulated conditions (0-100%) and was significantly associated with a larger treatment effect, lower test-retest variability, and larger sample size (p<0.001). Among the true interaction effects, varying proportions of false positives were observed in the control condition (0-100%), and this proportion was significantly associated with sensitivity (adj. R2 = 0.828; Figure 2).

·Figure 2. Inverse (reciprocal) relationship between the sensitivity of the interaction effects (x-axis) and the ratio of false positive effects in the control condition (y-axis). Curves are fitted by y = a/(x+b).
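A reciprocal fit of this form could be obtained, for example, with scipy.optimize.curve_fit; the data points below are hypothetical placeholders for the (sensitivity, control false-positive ratio) pairs collected across simulation runs, not the actual values behind Figure 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def reciprocal(x, a, b):
    # Fitted form reported in Figure 2: y = a / (x + b)
    return a / (x + b)

# Hypothetical (sensitivity, control false-positive ratio) pairs from many runs
sens = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
fp_ctl = np.array([0.21, 0.13, 0.09, 0.06, 0.04])

(a, b), _ = curve_fit(reciprocal, sens, fp_ctl, p0=(0.02, 0.05))
print(f"fitted: y = {a:.3f} / (x + {b:.3f})")
```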
Conclusions:
The simulation results show that the sensitivity of the interaction effect analysis is inversely (reciprocally) related to the ratio of false positive control effects (Figure 2). In other words, if the interaction effect tests identify only 20% of the true effects, 9% of these significant tests will show false positives in the control condition; if the sensitivity drops to 5%, 20% of them will. This relationship was not observed for false positive interaction effects (adj. R2 = 0.0009). In conclusion, the present simulation study confirms that the false positive rate in the control condition can increase when the sensitivity of the interaction effect analysis is low (which is almost always the case in neuroimaging studies with a 2x2 design). Therefore, the presence of significant effects in the control condition does not by itself disqualify the results of the active condition; it should be interpreted in the context of the implicit bias introduced by performing post-hoc tests only in the limited number of clusters identified by the voxel-based interaction effect analysis.
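As an illustrative back-of-the-envelope check (these are not the parameters of the original fit), the two example operating points quoted above are consistent with a single reciprocal curve of the reported form: solving a/(0.20+b) = 0.09 and a/(0.05+b) = 0.20 gives b ≈ 0.073 and a ≈ 0.025, so that y = 0.025/(x+0.073) returns roughly 9% at 20% sensitivity and roughly 20% at 5% sensitivity.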
Modeling and Analysis Methods:
Univariate Modeling 1
Other Methods 2
Keywords:
Data analysis
FUNCTIONAL MRI
Positron Emission Tomography (PET)
Statistical Methods
Therapy
Treatment
1|2 Indicates the priority used for review
References:
[1] E. Hariton and J. J. Locascio, "Randomised controlled trials - the gold standard for effectiveness research: Study design: randomised controlled trials," BJOG, vol. 125, no. 13, p. 1716, Dec. 2018, doi: 10.1111/1471-0528.15199.