Generality evaluation of meta-matching models for cognitive function prediction with small datasets

Poster No:

1446 

Submission Type:

Abstract Submission 

Authors:

Minjae Kim1, Juhyuk Han1, Won Hee Lee1

Institutions:

1Kyung Hee University, Yongin, Republic of Korea

First Author:

Minjae Kim  
Kyung Hee University
Yongin, Republic of Korea

Co-Author(s):

Juhyuk Han  
Kyung Hee University
Yongin, Republic of Korea
Won Hee Lee  
Kyung Hee University
Yongin, Republic of Korea

Introduction:

There is growing interest in leveraging resting-state functional connectivity (RSFC) derived from resting-state functional MRI to predict non-imaging phenotypes [1]. The meta-matching framework has extended this paradigm to phenotypes such as cognitive function [2]. Meta-matching operates under the assumption that most phenotypes are interrelated rather than independent, and aims to transfer predictive models from large datasets (e.g., UK Biobank) to smaller ones (e.g., HCP). However, the efficacy of meta-matching models trained on small datasets remains unexplored. To address this gap, we assess the generality of meta-matching models trained on a small HCP dataset with RSFC and 58 non-imaging phenotypes [3] for predicting individual cognitive measures in two independent datasets.

Methods:

The training meta-set comprised 750 HCP subjects, each with a 400 × 400 RSFC matrix based on the 400-region Schaefer parcellation and 58 phenotypes [3, 4]. To assess the generality of meta-matching models for predicting cognitive measures, we used two independent datasets: 418 subjects from the Amsterdam Open MRI Collection (AOMIC) [5] and 116 subjects from the Consortium for Neuropsychiatric Phenomics (CNP) [6]. Each test meta-set included a 400 × 400 RSFC matrix and a cognitive measure for each subject. We used the Raven's sum score for AOMIC and, for CNP, global cognitive function obtained as the first principal component across 24 cognitive measures [7]. Four existing models were tested using the meta-matching approach: (1) kernel ridge regression (KRR); (2) a deep neural network (DNN) comprising four convolutional layers; (3) advanced fine-tuning, in which the weights of the last two layers of the DNN were fine-tuned; and (4) advanced stacking with KRR, which selects the DNN output node with the highest coefficient of determination (COD) for the phenotypes and trains a KRR model on the selected node. We further evaluated three models: (5) advanced stacking with support vector regression (SVR), in which an SVR with a Gaussian kernel was trained on the selected node; (6) a deep graph convolutional network (DGCNN) consisting of four graph convolutional layers and one pooling operator [8]; and (7) a graph convolutional network (GCN) consisting of three graph convolutional layers [9]. Using the meta-matching approach, each model was trained on the HCP RSFC data to predict the 58 phenotypes. The COD between each predicted phenotype and the cognitive measure was computed in a subset of the test meta-set (AOMIC or CNP), and the output node with the highest COD was identified as the most influential for predicting cognitive function in that meta-set. The remaining RSFC data from the test meta-set were then fed into the trained models, and cognitive function was predicted exclusively from the previously identified influential node. Pearson's correlation between the true and predicted scores was calculated to evaluate each model's generalization. This procedure was repeated 50 times, and the correlations with the true cognitive scores were averaged across the 50 repetitions (Figure 1).
Supporting Image: figure1_final.jpg
   ·Overview of the meta-matching framework.
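
The following is a minimal sketch of the node-selection and evaluation procedure described above, not the authors' implementation. It assumes a fitted base model (e.g., a KRR, DNN, or GCN wrapper) exposing a scikit-learn-style predict() that maps vectorized RSFC features to 58 phenotype predictions, and hypothetical arrays X_test (subjects × RSFC features) and y_test (cognitive scores) for one test meta-set; the 50 repetitions follow the text, while the tuning-subset fraction is an illustrative choice.

import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import r2_score
from sklearn.model_selection import ShuffleSplit

def evaluate_meta_matching(base_model, X_test, y_test, n_repeats=50,
                           tune_fraction=0.2, random_state=0):
    """Average Pearson correlation between true and predicted cognitive scores
    over repeated splits of a test meta-set into a tuning subset (used to pick
    the best of the 58 output nodes by COD) and a held-out evaluation set."""
    splitter = ShuffleSplit(n_splits=n_repeats, test_size=1.0 - tune_fraction,
                            random_state=random_state)
    correlations = []
    for tune_idx, eval_idx in splitter.split(X_test):
        # Predict all 58 training phenotypes for every test subject.
        pred_tune = base_model.predict(X_test[tune_idx])   # shape: (n_tune, 58)
        pred_eval = base_model.predict(X_test[eval_idx])   # shape: (n_eval, 58)

        # COD of each output node against the cognitive measure on the tuning subset.
        cods = [r2_score(y_test[tune_idx], pred_tune[:, k])
                for k in range(pred_tune.shape[1])]
        best_node = int(np.argmax(cods))

        # Generalization: correlate held-out predictions from the selected node only.
        r, _ = pearsonr(y_test[eval_idx], pred_eval[:, best_node])
        correlations.append(r)
    return float(np.mean(correlations))

In the actual study, the advanced variants differ in how the selected node is used (e.g., stacking retrains a KRR or SVR on it), but the selection-by-COD followed by held-out Pearson correlation is the common evaluation logic.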
 

Results:

The predictive ability of RSFC for cognitive function was compared across the seven models within the meta-matching framework on the AOMIC and CNP datasets (Figure 2). Overall, the GCN model outperformed the other models on both datasets. Specifically, it improved generalization performance over KRR by an average correlation of 0.11 on the CNP dataset (t = 6.40, p < 0.05).
Supporting Image: figure2_final.jpg
   ·Generalization performance of different meta-matching models trained with the HCP dataset for predicting cognitive measures in the test meta-sets of the AOMIC and CNP datasets.
 

Conclusions:

This study investigated the predictive ability of RSFC for cognitive function using seven different models trained on the HCP dataset within the meta-matching framework. The GCN model demonstrated superior generalizability across healthy individuals (AOMIC) and individuals with psychiatric illness (CNP), suggesting its potential for assessing cognitive function and informing treatment decisions in clinical settings.

Modeling and Analysis Methods:

Classification and Predictive Modeling 1
fMRI Connectivity and Network Modeling 2

Keywords:

Data analysis
FUNCTIONAL MRI
Other - meta-matching

1|2 Indicates the priority used for review

References:

[1] He, T., Kong, R., Holmes, A.J., Nguyen, M., Sabuncu, M.R., Eickhoff, S.B., Bzdok, D., Feng, J., Yeo, B.T. (2020). Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics. NeuroImage, 206, 116276.
[2] He, T., An, L., Chen, P., Chen, J., Feng, J., Bzdok, D., Holmes, A.J., Eickhoff, S.B., Yeo, B.T. (2022). Meta-matching as a simple framework to translate phenotypic predictive models from big to small data. Nature Neuroscience, 25(6), 795-804.
[3] Ooi, L.Q.R., Chen, J., Zhang, S., Kong, R., Tam, A., Li, J., Dhamala, E., Zhou, J.H., Yeo, B.T. (2022). Comparison of individualized behavioral predictions across anatomical, diffusion and functional connectivity MRI. NeuroImage, 263, 119636.
[4] Van Essen, D.C., Smith, S.M., Barch, D.M., Behrens, T.E., Yacoub, E., Ugurbil, K., Wu-Minn HCP Consortium. (2013). The WU-Minn human connectome project: an overview. NeuroImage, 80, 62-79.
[5] Snoek, L., van der Miesen, M.M., Beemsterboer, T., Van Der Leij, A., Eigenhuis, A., Steven Scholte, H. (2021). The Amsterdam Open MRI Collection, a set of multimodal MRI datasets for individual difference analyses. Scientific Data, 8(1), 85.
[6] Poldrack, R.A., Congdon, E., Triplett, W., Gorgolewski, K.J., Karlsgodt, K.H., Mumford, J.A., Sabb, F.W., Freimer, N.B., London, E.D., Cannon, T.D., Bilder, R.M. (2016). A phenome-wide examination of neural and cognitive function. Scientific Data, 3(1), 1-12.
[7] Chopra, S., Dhamala, E., Lawhead, C., Ricard, J., Orchard, E., An, L., Chen, P., Wulan, N., Levi, P., Moses, J., Chen, L., Kumar, P., Rubenstein, A., Aquino, K., Fornito, A., Harpaz-Rotem, I., Germine, L., Baker, J.T., Yeo, B.T., Holmes, A. (2022). Reliable and generalizable brain-based predictions of cognitive functioning across common psychiatric illness. medRxiv, 2022-12.
[8] Zhang, M., Cui, Z., Neumann, M., Chen, Y. (2018). An End-to-End Deep Learning Architecture for Graph Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
[9] Kipf, T.N., Welling, M. (2016). Semi-Supervised Classification with Graph Convolutional Networks. arXiv preprint arXiv:1609.02907.