Poster No:
988
Submission Type:
Abstract Submission
Authors:
Caroline Ahn1,2, Quan Do1,3, Leah Bakst4,2, Michael Pascale4,2, Joseph McGuire4,1,2, Michael Hasselmo4,1,3, Chantal Stern4,2,1
Institutions:
1Graduate Program for Neuroscience, Boston University, Boston, MA, 2Cognitive Neuroimaging Center, Boston University, Boston, MA, 3Center for Systems Neuroscience, Boston University, Boston, MA, 4Department of Psychological and Brain Sciences, Boston University, Boston, MA
First Author:
Caroline Ahn
Graduate Program for Neuroscience, Boston University; Cognitive Neuroimaging Center, Boston University
Boston, MA
Co-Author(s):
Quan Do
Graduate Program for Neuroscience, Boston University; Center for Systems Neuroscience, Boston University
Boston, MA
Leah Bakst, PhD
Department of Psychological and Brain Sciences, Boston University; Cognitive Neuroimaging Center, Boston University
Boston, MA
Michael Pascale
Department of Psychological and Brain Sciences, Boston University; Cognitive Neuroimaging Center, Boston University
Boston, MA
Joseph McGuire, PhD
Department of Psychological and Brain Sciences, Boston University; Graduate Program for Neuroscience, Boston University; Cognitive Neuroimaging Center, Boston University
Boston, MA
Michael Hasselmo, DPhil
Department of Psychological and Brain Sciences, Boston University; Graduate Program for Neuroscience, Boston University; Center for Systems Neuroscience, Boston University
Boston, MA
Chantal Stern, DPhil
Department of Psychological and Brain Sciences, Boston University; Cognitive Neuroimaging Center, Boston University; Graduate Program for Neuroscience, Boston University
Boston, MA
Introduction:
Humans can extract rules from limited examples and generalize them across contexts, an ability that current AI systems lack. We propose that selecting the right level of abstraction for rule representations is key to fast, flexible learning, and that in humans this selection is guided by inductive biases, our pre-existing assumptions about data structure and rule relations1,2. These biases, however, can also lead to errors in reasoning. It is unknown to what degree inductive biases are shared across individuals, and whether they prioritize certain features over others during rule learning. We present behavioral findings from a novel visuospatial abstract rule learning task, the Cognitive Neuro Abstraction and Reasoning Corpus (CogNARC). Future fMRI work using this task is planned.
CogNARC is an open-response task that tests few-shot learning and requires subjects to generate solutions on an interactive interface. The original task was introduced as a benchmark for abstraction and generalization in AI, but it has also been used to study human cognition3,4,5. CogNARC reasoning problems vary in the types and number of rules that dictate input-output solutions; a single problem can contain multiple rules with complex conditional relations, and learned rules do not carry over across problems. CogNARC is therefore less forgiving of random guessing or brute-force methods than other measures of abstract reasoning, which tend to be multiple-choice and can be trained on extensively6,7,8. With CogNARC, we can identify where reasoning tends to fail by systematically probing different types of human errors.
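For concreteness, the sketch below illustrates one way a CogNARC-style problem and its open-response scoring could be represented; it follows the public ARC task format, and the field names, grids, and scoring helpers are illustrative assumptions rather than the task's actual implementation.

```python
# Minimal sketch (not the authors' implementation) of a CogNARC-style
# problem: a few input-output example grids plus a test input, with
# cells encoded as small integers (e.g., color indices).
from typing import Dict, List

Grid = List[List[int]]  # rows of color indices

problem: Dict = {
    "train": [  # 2-6 demonstration pairs per problem
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": {"input": [[3, 0], [0, 3]]},  # subject draws the output grid
}

def is_correct(drawn: Grid, target: Grid) -> bool:
    """Open-response scoring: the drawn grid must match the target
    cell-for-cell (no multiple-choice options to eliminate)."""
    return drawn == target

def trial_solved(attempts: List[Grid], target: Grid) -> bool:
    """With a limited number of attempts per trial, a trial counts as
    solved if any attempt matches the target exactly."""
    return any(is_correct(a, target) for a in attempts)
```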
Methods:
We collected online behavioral data from 220 subjects (52.27% male) on Amazon Mechanical Turk. Subjects ranged in age from 20 to 35 years (M = 29.6, SD = 4.1). We selected 75 problems to represent a variety of rules and difficulty levels, and subjects were allowed up to 4 hours to complete all of them. Subjects learned input-output transformation rules from 2–6 example pairs, then applied the rules to a test input by drawing their own output on an editable grid. They were allowed up to 3 attempts per trial and were paid $5 for task completion, with a performance bonus of up to $15. To identify common errors, we transformed action sequences into graphical representations and applied hierarchical clustering algorithms to group solutions by shared strategies across subjects. We chose the graph analysis approach for its efficiency in representing abstract concepts and quantifying their relationships9,10.
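As a rough illustration of this pipeline, the sketch below converts logged action sequences into directed graphs, computes pairwise graph edit distances, and applies agglomerative clustering; the action labels, distance metric, and clustering parameters are illustrative assumptions, not the study's actual code.

```python
import networkx as nx
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def sequence_to_graph(actions):
    """Encode an action sequence as a directed graph: nodes are action
    labels, edges encode succession between consecutive actions."""
    g = nx.DiGraph()
    for a in actions:
        g.add_node(a, label=a)
    for a, b in zip(actions, actions[1:]):
        g.add_edge(a, b)
    return g

# Hypothetical action logs from three subjects solving the same problem.
sequences = [
    ["select_color", "fill_region", "submit"],
    ["select_color", "fill_region", "resize_grid", "submit"],
    ["copy_input", "recolor_cell", "submit"],
]
graphs = [sequence_to_graph(s) for s in sequences]

# Pairwise graph edit distance quantifies how dissimilar two strategies are.
n = len(graphs)
dist = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        d = nx.graph_edit_distance(
            graphs[i], graphs[j],
            node_match=lambda x, y: x["label"] == y["label"],
        )
        dist[i][j] = dist[j][i] = d

# Agglomerative (hierarchical) clustering groups shared strategies.
tree = linkage(squareform(dist), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # e.g., [1 1 2]: the first two subjects share a strategy
```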
Results:
Our exploratory analyses aimed to provide a qualitative interpretation of the clusters that emerged from the data-driven graph analysis. Subjects' accuracy on the CogNARC task (M = 78.9%, SD = 19.4%) greatly exceeded that of AI programs (best accuracy 21%10). While some erroneous solutions could be attributed to carelessness, motor error, or random guessing, most were conceptually close to the correct solution but arose from mis-learning of rule relations. These errors were more evident in complex problems that required learning hierarchical rules across multiple feature dimensions. From these errors we were able to infer subjects' inductive biases, such as a tendency toward color-based over size- or pattern-based rules. In cases where multiple inductive biases were present, a hierarchy of these biases emerged.
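The sketch below shows one simple way such a bias hierarchy could be read out, assuming each erroneous solution has been annotated with the feature dimension its mis-learned rule relied on; the labels and counts are placeholders, not the study's data.

```python
from collections import Counter

# Hypothetical annotations: the feature dimension each subject's
# (incorrect) rule defaulted to on error trials. Placeholder counts.
error_feature_labels = (
    ["color"] * 46 + ["size"] * 21 + ["pattern"] * 12 + ["position"] * 9
)

# Rank feature dimensions by how often subjects defaulted to them:
# more frequent defaults suggest a stronger prior for that feature.
for feature, count in Counter(error_feature_labels).most_common():
    print(f"{feature}: {count} erroneous solutions")
# Ordering here: color > size > pattern > position, i.e., a
# color-over-size/pattern bias like the one reported above.
```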
Conclusions:
The design of CogNARC is well-suited for studying the formation and structure of abstract rule representations in humans. In particular, graphical analysis of action sequences allows in-depth investigation of how common error patterns reflect underlying inductive biases that lead humans to assume rule relations for certain features while ignoring others. Future work will study cognitive processes during CogNARC with methods such as eye-tracking, EEG, or fMRI, with the aim of mapping the behavioral results from this study onto the underlying brain activity.
Higher Cognitive Functions:
Decision Making 2
Executive Function, Cognitive Control and Decision Making
Reasoning and Problem Solving 1
Learning and Memory:
Learning and Memory Other
Modeling and Analysis Methods:
Classification and Predictive Modeling
Keywords:
Cognition
Learning
Other - reasoning; problem solving; abstract reasoning; rule learning; AI
1|2 indicates the priority used for review
References:
1. Heit, E. (2000), 'Properties of Inductive Reasoning', Psychonomic Bulletin & Review, vol. 7, pp. 569-592.
2. Griffiths, T.L. (2010), 'Probabilistic Models of Cognition: Exploring Representations and Inductive Biases', Trends in Cognitive Sciences, vol. 14, no. 8, pp. 357-364.
3. Chollet, F. (2019), 'On the Measure of Intelligence', arXiv preprint arXiv:1911.01547.
4. Johnson, A. (2021), 'Fast and Flexible: Human Program Induction in Abstract Reasoning Tasks', arXiv preprint arXiv:2103.05823.
5. Acquaviva, S. (2022), 'Communicating Natural Programs to Humans and Machines', Advances in Neural Information Processing Systems, vol. 35, pp. 3731-3743.
6. Raven, J. (2003), 'Raven Progressive Matrices', In Handbook of Nonverbal Assessment (pp. 223-237). Boston, MA: Springer US.
7. Zerroug, A. (2022), 'A Benchmark for Compositional Visual Reasoning', Advances in Neural Information Processing Systems, vol. 35, pp. 29776-29788.
8. Odouard, V.V. (2022), 'Evaluating Understanding on Conceptual Abstraction Benchmarks', arXiv preprint arXiv:2206.14187.
9. Wille, R. (1997), 'Conceptual Graphs and Formal Concept Analysis', In Proceedings of the Fifth International Conference on Conceptual Structures (ICCS'97), pp. 290-303. Springer Berlin Heidelberg.
10. Zhu, G. (2016), 'Computing Semantic Similarity of Concepts in Knowledge Graphs', IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 1, pp. 72-85.