Neural-symbolic interaction and co-evolving

Frontiers in Artificial Intelligence and Applications

Document Type
Book

Abstract
Symbolic knowledge is vital to various human cognitive functions, including reasoning, skill acquisition, and communication. Integrating it is likewise essential for building AI with human-like capabilities such as robustness, creativity, and interpretability. Nevertheless, current machine learning approaches still predominantly emphasize learning from large datasets and struggle to incorporate symbolic knowledge effectively and scalably. This leads to fundamental limitations, such as brittle behavior when faced with complex or novel concepts and difficulty in understanding or explaining models' decision processes. Past attempts at integrating symbolic information with neural networks have frequently relied on manually created knowledge bases defined in specific configurations, impeding their generalizability to new applications and domains. This chapter introduces interaction and co-evolving mechanisms between neural models and symbolic knowledge bases. It starts by constructing a panoramic learning framework for learning with all types of experience (data, rules, knowledge graphs, etc.). It then delves into a novel inversion problem: extracting symbolic knowledge from black-box neural models. Finally, building on these components, the chapter explores a blueprint for a lifelong neural-symbolic system that accommodates human intervention.

DOI
10.3233/FAIA230139

Publication Date
8-4-2023

Citation Information
B. Tan, S. Hao, E. Xing, and Z. Hu, "Chapter 7. Neural-symbolic interaction and co-evolving," Frontiers in Artificial Intelligence and Applications, 2023. doi:10.3233/FAIA230139