Chain-of-thought prompting has attracted much attention in Artificial Intelligence (AI). Large Language Models (LLMs) can be instructed to imitate human thought processes step by step, and they have demonstrated surprising reasoning capabilities. However, when faced with complex reasoning tasks, LLMs often perform poorly and produce inaccurate results, possibly because insufficient knowledge and a lack of up-to-date information lead to incorrect inference chains. Inspired by knowledge-augmented deep learning and retrieval-augmented generation, a more feasible approach is knowledge-guided chain-of-thought prompt generation, which introduces a large amount of knowledge, including commonsense, logical, and factual information, into the process of generating a reasoning chain. Although a large body of research has been conducted in these areas, the survey literature still lacks coverage of knowledge-guided chain-of-thought prompt generation. In this survey, we introduce the concept of knowledge-guided chain-of-thought generation and discuss the important role knowledge plays in generating and enhancing chains of thought, in terms of both knowledge sources and knowledge use. We then organize evaluation guidelines for chain-of-thought reasoning. Next, benchmark tasks and public datasets for chain-of-thought prompting are presented. Finally, we conduct a comprehensive examination of current opportunities and challenges and formulate a series of recommendations for future research directions. This survey may help researchers understand the latest research developments in these areas.
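To make the idea of knowledge-guided chain-of-thought prompting concrete, the sketch below assembles a prompt that prepends retrieved knowledge snippets to a question before eliciting step-by-step reasoning. This is a minimal illustrative sketch, not a method from any specific paper surveyed here; the function name, template wording, and example snippets are all assumptions.

```python
def build_knowledge_guided_cot_prompt(question, knowledge_snippets):
    """Assemble a chain-of-thought prompt grounded in retrieved knowledge.

    A minimal sketch: the retrieved snippets (commonsense, logical, or
    factual statements) are listed before the question, and the closing
    instruction triggers step-by-step reasoning in the LLM.
    """
    knowledge_block = "\n".join(f"- {snippet}" for snippet in knowledge_snippets)
    return (
        "Relevant knowledge:\n"
        f"{knowledge_block}\n\n"
        f"Question: {question}\n"
        "Let's think step by step, using the knowledge above."
    )


# Hypothetical usage: the question and snippets are illustrative only.
prompt = build_knowledge_guided_cot_prompt(
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?",
    [
        "Average speed = distance / time.",
        "45 minutes is 0.75 hours.",
    ],
)
print(prompt)
```

The resulting prompt string would then be sent to an LLM; grounding the reasoning chain in explicit knowledge statements is intended to reduce the incorrect inference chains described above.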
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).