Metaphor in Mind and Machine: Unraveling Conceptual Metaphors for NLP Applications

Poster No:

1017 

Submission Type:

Abstract Submission 

Authors:

Hyeonseop Yoon1, Shin-ae Yoon2

Institutions:

1Hankuk University of Foreign Studies, Seoul, WI, 2Konkuk University, Seoul, WI

First Author:

Hyeonseop Yoon  
Hankuk University of Foreign Studies
Seoul, WI

Co-Author:

Shin-ae Yoon  
Konkuk University
Seoul, WI

Introduction:

This paper explores natural language understanding by comparing the cognitive processes of the human brain with those of state-of-the-art natural language processing (NLP) models, focusing on the comprehension of metaphorical phrases. A metaphorical phrase, more commonly analyzed as a conceptual metaphor, reflects the understanding of one idea, or conceptual domain, in terms of another. Cognitive linguistics regards language processing as reflecting general aspects of cognition rather than adopting a modular view of the mind [1][2]: meaning emerges from context, not from the literal text itself. We understand language within context by means of conceptual metaphor [3]. Building on this tenet, we seek to determine the extent to which NLP models can replicate human language processing, specifically for metaphor.

Methods:

To achieve this, we took our NLU dataset from OpenNeuro's Pragmatic Language dataset [4]. The protocol is designed to determine whether a given linguistic expression is metaphorical, literal, or absurd. We re-analyzed fMRI data from 28 Spanish-speaking participants (22.78 ± 1.79 years; 13 men, 15 women). The experimental paradigm comprised 2 runs. In each run, 40 experimental events were presented in random order (20 literal phrases, 10 metaphorical phrases, 10 absurd phrases). Participants answered YES or NO with a button box attached to the MRI scanner, indicating whether a given stimulus belonged to a specified category (literal, metaphorical, or absurd). A second experiment involved 43 subjects (26.22 ± 3.14 years; 21 men, 22 women) with stimuli consisting of 40 literal, 20 metaphorical, and 20 absurd phrases; here, participants chose which of the three categories each stimulus belonged to. Hits were evaluated for each trial. Concurrently, we employed a range of NLP models, including BERT, GPT-NeoX, and an LSTM, to assess their performance in metaphor comprehension. Our analysis covered both behavioral performance and internal representations. Representational Similarity Analysis (RSA), a method that compares neural and non-neural patterns by constructing representational dissimilarity matrices (RDMs), was applied to both the fMRI data and each layer's hidden representations in the language models. We constructed a neural RDM (N_v^s) from the fMRI data using Pysearchlight (radius = 6) centered at voxel v of subject s, and a model RDM (M_l^k) from the layer embeddings at layer l of model k. Across every voxel of the whole brain, we computed Spearman's rho and performed t-tests [5].
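The core RSA comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: it assumes condition-by-feature matrices (voxel patterns for the brain, layer embeddings for a model), builds RDMs with correlation distance, and correlates the two RDMs with Spearman's rho; the function names and toy dimensions are our own.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def build_rdm(patterns):
    """Condition x feature matrix -> condensed RDM (1 - Pearson r per pair)."""
    return pdist(patterns, metric="correlation")


def rsa_score(neural_patterns, model_embeddings):
    """Spearman rank correlation between the neural and model RDMs."""
    neural_rdm = build_rdm(neural_patterns)
    model_rdm = build_rdm(model_embeddings)
    rho, _ = spearmanr(neural_rdm, model_rdm)
    return rho


# Toy example: 40 stimuli (as in one run of the paradigm),
# 100 searchlight voxels vs. a 768-dimensional layer embedding.
rng = np.random.default_rng(0)
voxel_patterns = rng.normal(size=(40, 100))
layer_embeddings = rng.normal(size=(40, 768))
print(rsa_score(voxel_patterns, layer_embeddings))
```

In the study this comparison is repeated per searchlight sphere, per subject, and per model layer, yielding a whole-brain map of rho values per layer.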

Results:

Behaviorally, human accuracy for each category (literal, metaphorical, absurd) was 93.62%, 92.75%, and 90.91%, while the average LLM accuracy was 96.42%, 94.42%, and 93.35%, respectively; the models thus rivaled or exceeded human performance in categorizing the language stimuli. The Representational Similarity Analysis (RSA), the crux of our investigation, revealed intriguing patterns of neural similarity between the human brain and the NLP models. Significance of the brain-wide RSA t-scores was assessed with a permutation test (5,000 random permutations). On the resulting brain map, regions such as the early visual cortex (EVC), posterior cingulate cortex (PCC), inferior frontal gyrus (IFG), and middle temporal gyrus showed parallels with previous semantic-pragmatic fMRI research on metaphor processing.
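The permutation logic behind such significance testing can be sketched as follows. This is a simplified, single-comparison illustration under our own assumptions (the study reports t-scores aggregated across subjects): condition labels of the neural RDM are shuffled to build a null distribution of rho, and the p-value is the fraction of null correlations at least as large as the observed one.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr


def permutation_p(neural_rdm, model_rdm, n_perm=5000, seed=0):
    """One-sided permutation p-value for the observed Spearman rho
    between two square RDMs, shuffling condition labels of the neural RDM."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(neural_rdm, k=1)  # upper-triangle entries
    observed, _ = spearmanr(neural_rdm[iu], model_rdm[iu])
    n = neural_rdm.shape[0]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        shuffled = neural_rdm[np.ix_(p, p)]  # permute rows and columns together
        null_rho, _ = spearmanr(shuffled[iu], model_rdm[iu])
        exceed += null_rho >= observed
    # +1 correction so the p-value is never exactly zero
    return (exceed + 1) / (n_perm + 1)


# Toy example: two square RDMs over 20 conditions.
rng = np.random.default_rng(0)
rdm_a = squareform(pdist(rng.normal(size=(20, 50)), metric="correlation"))
rdm_b = squareform(pdist(rng.normal(size=(20, 50)), metric="correlation"))
print(permutation_p(rdm_a, rdm_b, n_perm=500))
```

Shuffling rows and columns jointly preserves the RDM's internal structure while breaking its alignment with the model RDM, which is what makes this a valid null for RSA.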

Conclusions:

In summary, our study offers a compelling exploration of the convergence and divergence between human cognition and artificial intelligence in metaphor comprehension. Future work will need to delve deeper into the hierarchical and mechanistic underpinnings of conceptual metaphors, shedding light on the evolving landscape of NLP in cognitive linguistics. These findings illuminate the notable capability of NLP models to replicate certain aspects of human cognitive processing.

Language:

Language Comprehension and Semantics 1

Modeling and Analysis Methods:

fMRI Connectivity and Network Modeling 2

Keywords:

Cognition
Language
Machine Learning
Open Data

1|2 indicates the priority used for review


[1] Feldman, J., & Narayanan, S. (2004). Embodied meaning in a neural theory of language. Brain and language, 89(2), 385-392.

[2] Du Castel, B. (2015). Pattern activation/recognition theory of mind. Frontiers in computational neuroscience, 9, 90.

[3] Lakoff, G., & Johnson, M. (2008). Metaphors we live by. University of Chicago press.

[4] Rasgado-Toledo, J., Lizcano-Cortés, F., Olalde-Mathieu, V., Zamora-Ursulo, M., Licea-Haquet, G., Carillo-Peña, A., Navarrete, E., Reyes-Aguilar, A., & Giordano, M. (2021). Pragmatic Language [Dataset]. OpenNeuro. doi: 10.18112/openneuro.ds003481.v1.0.3

[5] Lee, J., Jung, M., Lustig, N., & Lee, J. H. (2023). Neural representations of the perception of handwritten digits and visual objects from a convolutional neural network compared to humans. Human Brain Mapping, 44(5), 2018-2038.