Mapping Language in the Brain with AI: A Study on Semantic and Syntactic Representation in Neural Models and Human Brain Activity
Understanding how the human brain processes language is a long-standing challenge in neuroscience and cognitive science. Recent advances in artificial intelligence (AI), particularly neural networks, have opened new avenues for investigating how language is represented in the brain. This study examines the relationship between semantic and syntactic representations in neural models and in human brain activity, seeking to bridge the gap between computational models and biological systems by comparing how deep learning models and the brain process linguistic structures. Using functional magnetic resonance imaging (fMRI) data recorded during language-processing tasks, together with AI models trained on large language datasets, the study analyzes the similarities and differences in how neural models and the brain represent syntactic and semantic information. The results show that certain brain regions correspond to the syntactic structures processed by the AI models, while others align more closely with semantic representations. The neural network models exhibited high correspondence with brain activity patterns, particularly in tasks involving sentence structure and meaning comprehension. The study concludes that AI models can deepen our understanding of how language is represented in the brain, offering valuable insights for both neuroscience and artificial intelligence.
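The model-to-brain comparison described in the abstract is commonly operationalized as a voxel-wise encoding model: language-model features for each stimulus are regressed onto fMRI responses, and prediction accuracy on held-out data indicates how well the model's representations align with brain activity. The sketch below is purely illustrative and is not the study's code; all data are synthetic stand-ins for real stimulus embeddings and scans, and the dimensions and the ridge penalty are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: stimuli (train/test), model features, voxels.
n_train, n_test, n_feat, n_vox = 200, 50, 32, 10

# Synthetic "ground-truth" mapping from features to voxel responses.
W_true = rng.normal(size=(n_feat, n_vox))

X_train = rng.normal(size=(n_train, n_feat))   # model features per stimulus
X_test = rng.normal(size=(n_test, n_feat))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_vox))
Y_test = X_test @ W_true + 0.5 * rng.normal(size=(n_test, n_vox))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge solution W = (X'X + alpha*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

W = ridge_fit(X_train, Y_train)
Y_pred = X_test @ W

def voxel_corr(A, B):
    """Pearson correlation between columns (voxels) of A and B."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    return (A * B).sum(axis=0) / (
        np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=0)
    )

r = voxel_corr(Y_pred, Y_test)
print(f"mean prediction correlation across voxels: {r.mean():.2f}")
```

In a real analysis, `X_train` would hold embeddings from syntactic or semantic layers of a language model and `Y_train` the BOLD responses to the same stimuli; comparing per-voxel correlations across feature sets is one way regions are attributed to syntax versus semantics.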
Alers-Valentín, H., Fong, S., & Vega-Riveros, J. F. (2023). Modeling Syntactic Knowledge With Neuro-Symbolic Computation. In Rocha A., Steels L., & van den Herik J. (Eds.), Int. Conf. Agent. Artif. Intell. (Vol. 3, pp. 608–616). Science and Technology Publications, Lda; Scopus. https://doi.org/10.5220/0011718500003393
Alsharman, N., Masadeh, R., Jawarneh, I. A., & Al-Rababa’a, A. (2024). The Stanford Dependency Relations for Commonsense Knowledge Representation of Winograd Schema Challenge (WSC). Journal of Computer Science, 20(9), 1091–1098. Scopus. https://doi.org/10.3844/JCSSP.2024.1091.1098
Auxéméry, Y. (2024). What is psychotherapy today? From psychotherapist to “Psybot:” Towards a new definition. Evolution Psychiatrique, 89(4), 749–792. Scopus. https://doi.org/10.1016/j.evopsy.2024.09.003
Beguš, G., Dabkowski, M., & Rhodes, R. (2025). Large linguistic models: Investigating LLMs’ metalinguistic abilities. IEEE Transactions on Artificial Intelligence. Scopus. https://doi.org/10.1109/TAI.2025.3575745
Chimalakonda, S., Das, D., Mathai, A., Tamilselvam, S., & Kumar, A. (2023). The Landscape of Source Code Representation Learning in AI-Driven Software Engineering Tasks. Proc Int Conf Software Eng, 342–343. Scopus. https://doi.org/10.1109/ICSE-Companion58688.2023.00098
Dodaro, C., Maratea, M., & Vallati, M. (2023). On the Configuration of More and Less Expressive Logic Programs. Theory and Practice of Logic Programming, 23(2), 415–443. Scopus. https://doi.org/10.1017/S1471068422000096
Fraj, M., HajKacem, M. A. B., & Essoussi, N. (2024). Multi-view subspace text clustering. Journal of Intelligent Information Systems, 62(6), 1583–1606. Scopus. https://doi.org/10.1007/s10844-024-00897-2
Funakoshi, K. (2022). Non-Axiomatic Term Logic: A Theory of Cognitive Symbolic Reasoning. Transactions of the Japanese Society for Artificial Intelligence, 37(6). Scopus. https://doi.org/10.1527/tjsai.37-6_C-M11
Graben, P., Huber, M., Meyer, W., Römer, R., & Wolff, M. (2022). Vector Symbolic Architectures for Context-Free Grammars. Cognitive Computation, 14(2), 733–748. Scopus. https://doi.org/10.1007/s12559-021-09974-y
Hofmann, L. (2024). Sentential Negativity and Anaphoric Polarity-Tags: A Hyperintensional Account. In Pavlova A., Pedersen M.Y., & Bernardi R. (Eds.), Lect. Notes Comput. Sci.: Vol. 14354 LNCS (pp. 109–135). Springer Science and Business Media Deutschland GmbH; Scopus. https://doi.org/10.1007/978-3-031-50628-4_7
Jang, W., Horm, D., Kwon, K.-A., Lu, K., Kasak, R., & Park, J. H. (2025). Leveraging natural language processing to deepen understanding of parent–child interaction processes and language development. Family Relations, 74(3), 1146–1173. Scopus. https://doi.org/10.1111/fare.13198
Kaleem, S., Jalil, Z., Nasir, M., & Alazab, M. (2024). Word embedding empowered topic recognition in news articles. PeerJ Computer Science, 10. Scopus. https://doi.org/10.7717/peerj-cs.2300
Katerynchuk, I., Komarnytska, O., & Balendr, A. (2024). The Use of Artificial Intelligence Models in the Automated Knowledge Assessment System. In Luntovskyy A., Klymash M., Beshley M., Melnyk I., & Schill A. (Eds.), Lect. Notes Electr. Eng.: Vol. 1198 LNEE (pp. 274–288). Springer Science and Business Media Deutschland GmbH; Scopus. https://doi.org/10.1007/978-3-031-61221-3_13
Li, G., & Yang, Y. (2024). On the Code Vulnerability Detection Based on Deep Learning: A Comparative Study. IEEE Access, 12, 152377–152391. Scopus. https://doi.org/10.1109/ACCESS.2024.3479237
Lierler, Y. (2023). Unifying Framework for Optimizations in Non-Boolean Formalisms. Theory and Practice of Logic Programming, 23(6), 1248–1280. Scopus. https://doi.org/10.1017/S1471068422000400
Liu, X., Li, J., & Zhao, Y. (2022). The Model for Pneumothorax Knowledge Extraction Based on Dependency Syntactic Analysis. In Lect. Notes Oper. Res.: Vol. Part F3781 (pp. 160–168). Springer Nature; Scopus. https://doi.org/10.1007/978-981-16-8656-6_15
Matsiievskyi, O., Honcharenko, T., Solovei, O., Liashchenko, T., Achkasov, I., & Golenkov, V. (2024). Using Artificial Intelligence to Convert Code to Another Programming Language. SIST - IEEE Int. Conf. Smart Inf. Syst. Technol., Proc., 379–385. Scopus. https://doi.org/10.1109/SIST61555.2024.10629305
Nechesov, A. V. (2023). Semantic Programming and Polynomially Computable Representations. Siberian Advances in Mathematics, 33(1), 66–85. Scopus. https://doi.org/10.1134/S1055134423010066
Orebi, S. M., & Naser, A. M. (2025). Opinion Mining in Text Short by Using Word Embedding and Deep Learning. Journal of Applied Data Sciences, 6(1), 526–535. Scopus. https://doi.org/10.47738/jads.v6i1.438
Römer, R., beim Graben, P., Huber-Liebl, M., & Wolff, M. (2022). Unifying Physical Interaction, Linguistic Communication, and Language Acquisition of Cognitive Agents by Minimalist Grammars. Frontiers in Computer Science, 4. Scopus. https://doi.org/10.3389/fcomp.2022.733596
Semenov, R. (2024). Language Model Architecture Based on the Syntactic Graph of Analyzed Text. In Jordan V., Tarasov I., Shurina E., Filimonov N., & Faerman V.A. (Eds.), Commun. Comput. Info. Sci.: Vol. 1986 CCIS (pp. 182–193). Springer Science and Business Media Deutschland GmbH; Scopus. https://doi.org/10.1007/978-3-031-51057-1_14
Shcherbina, A. V., Kolianov, A. J., & Pashkovsky, E. A. (2022). Semiotic Aspects of Artificial Intelligence Representation in Socio-Political Discourse. In Shaposhnikov S. & Sharakhina L. (Eds.), Proc. Commun. Strateg. Digit. Soc. Semin., ComSDS (pp. 158–161). Institute of Electrical and Electronics Engineers Inc.; Scopus. https://doi.org/10.1109/ComSDS55328.2022.9769147
Tian, Y., Chen, Z., Yang, J., Xu, B., Guo, Z., Zhang, X., Hao, R., Li, Q., & Sun, M. (2023). Medical Extractive Question-Answering Based on Fusion of Hierarchical Features. In Jiang X., Wang H., Alhajj R., Hu X., Engel F., Mahmud M., Pisanti N., Cui X., & Song H. (Eds.), Proc. - IEEE Int. Conf. Bioinform. Biomed., BIBM (pp. 3938–3945). Institute of Electrical and Electronics Engineers Inc.; Scopus. https://doi.org/10.1109/BIBM58861.2023.10385572
Vrublevskyi, V. (2024). Transformer Model Using Dependency Tree for Paraphrase Identification. Bulletin of the Taras Shevchenko National University of Kyiv. Physics and Mathematics, 2024(1), 154–159. Scopus. https://doi.org/10.17721/1812-5409.2024/1.28
Wijesiriwardene, T., Sheth, A., Shalin, V. L., & Das, A. (2023). Why Do We Need Neurosymbolic AI to Model Pragmatic Analogies? IEEE Intelligent Systems, 38(5), 12–16. Scopus. https://doi.org/10.1109/MIS.2023.3305862
Wu, S., Fei, H., Li, F., Zhang, M., Liu, Y., Teng, C., & Ji, D. (2022). Mastering the Explicit Opinion-Role Interaction: Syntax-Aided Neural Transition System for Unified Opinion Role Labeling. Proc. AAAI Conf. Artif. Intell., AAAI, 36, 11513–11521. Scopus. https://doi.org/10.1609/aaai.v36i10.21404
Xian, Z., Huang, R., Towey, D., Fang, C., & Chen, Z. (2024). TransformCode: A Contrastive Learning Framework for Code Embedding via Subtree Transformation. IEEE Transactions on Software Engineering, 50(6), 1600–1619. Scopus. https://doi.org/10.1109/TSE.2024.3393419
Yang, B., Li, H., & Xing, Y. (2023). SenticGAT: Sentiment Knowledge Enhanced Graph Attention Network for Multi-view Feature Representation in Aspect-based Sentiment Analysis. International Journal of Computers, Communications and Control, 18(5). Scopus. https://doi.org/10.15837/ijccc.2023.5.5089
Yang, G., Xu, S., Li, P., & Zhu, Q. (2024). Spatial Relation Extraction on AMR Enhancement and Additional Markers. In Huang D.-S., Si Z., & Zhang C. (Eds.), Lect. Notes Comput. Sci.: Vol. 14878 LNAI (pp. 434–445). Springer Science and Business Media Deutschland GmbH; Scopus. https://doi.org/10.1007/978-981-97-5672-8_37
Zhang, X., Wang, S., Lin, N., Zhang, J., & Zong, C. (2022). Probing Word Syntactic Representations in the Brain by a Feature Elimination Method. Proc. AAAI Conf. Artif. Intell., AAAI, 36, 11721–11729. Scopus. https://doi.org/10.1609/aaai.v36i10.21427
Zhang, Z., Wu, Y., Zhou, J., Duan, S., Zhao, H., & Wang, R. (2022). SG-Net: Syntax Guided Transformer for Language Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6), 3285–3299. Scopus. https://doi.org/10.1109/TPAMI.2020.3046683
Zhao, B., Dang, J., & Li, A. (2024). Unraveling Predictive Mechanism in Speech Perception and Production: Insights from EEG Analyses of Brain Network Dynamics. Journal of Shanghai Jiaotong University (Science). Scopus. https://doi.org/10.1007/s12204-024-2729-9
Zheng, Y., Lin, L., Li, S., Yuan, Y., Lai, Z., Liu, S., Fu, B., Chen, Y., & Shi, X. (2024). Layer-Wise Representation Fusion for Compositional Generalization. In Wooldridge M., Dy J., & Natarajan S. (Eds.), Proc. AAAI Conf. Artif. Intell. (Vol. 38, No. 17, pp. 19706–19714). Association for the Advancement of Artificial Intelligence; Scopus. https://doi.org/10.1609/aaai.v38i17.29944
Copyright (c) 2025 Mubasyiroh, Ethan Tan, Isabel Ng

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.