Open Access
Layered Temporal Spatial Graph Attention Reinforcement Learning for Multiplex Networked Industrial Chains Energy Management
Tsinghua Science and Technology 2025, 30(2): 528-542
Published: 09 December 2024

Demand response has recently become an essential means for businesses to reduce production costs in industrial chains. Meanwhile, industrial chain structures have become increasingly complex, giving rise to multiplex networked industrial chains. Fluctuations in real-time electricity prices under demand response propagate through the coupling and cascading relationships within and among these network layers, adversely affecting the overall energy management cost. However, existing demand response methods based on reinforcement learning typically focus only on individual agents without considering the influence of dynamic factors on intra- and inter-network relationships. To address this issue, this paper proposes a Layered Temporal Spatial Graph Attention (LTSGA) reinforcement learning algorithm for demand response in multiplex networked industrial chains. The algorithm first uses Long Short-Term Memory (LSTM) to learn the dynamic temporal characteristics of electricity prices for decision-making. Then, LTSGA incorporates a layered spatial graph attention model to evaluate the impact of dynamic factors on the complex multiplex networked industrial chain structure. Experiments demonstrate that the proposed LTSGA approach effectively characterizes the influence of dynamic factors on intra- and inter-network relationships within the multiplex industrial chain, improving convergence speed and overall performance compared with existing state-of-the-art algorithms.
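The abstract does not include code; as a rough illustration only, the sketch below shows how an LTSGA-style policy network could combine an LSTM over the electricity-price history with graph attention applied separately to intra-layer and inter-layer edges of a multiplex industrial-chain graph. All class names, dimensions, and the adjacency-mask scheme are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): LSTM price encoder + layered
# (intra-layer / inter-layer) graph attention + per-node action head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttention(nn.Module):
    """Single-head GAT-style attention restricted by an adjacency mask."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):                      # x: (N, in_dim), adj: (N, N) of {0,1}
        h = self.proj(x)                            # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),       # h_i repeated along dim 1
             h.unsqueeze(0).expand(n, n, -1)],      # h_j repeated along dim 0
            dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))    # (N, N) raw attention scores
        e = e.masked_fill(adj == 0, float("-inf"))        # keep only edges of this layer
        alpha = torch.nan_to_num(torch.softmax(e, dim=-1))
        return F.elu(alpha @ h)                            # (N, out_dim)


class LTSGAPolicy(nn.Module):
    """LSTM over electricity prices, then intra- and inter-layer graph attention."""

    def __init__(self, price_dim=1, node_dim=8, hidden=32, action_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(price_dim, hidden, batch_first=True)
        self.intra_gat = GraphAttention(node_dim + hidden, hidden)   # within a network layer
        self.inter_gat = GraphAttention(node_dim + hidden, hidden)   # across network layers
        self.head = nn.Linear(2 * hidden, action_dim)

    def forward(self, prices, node_feats, intra_adj, inter_adj):
        # prices: (1, T, price_dim); node_feats: (N, node_dim)
        _, (h_t, _) = self.lstm(prices)
        temporal = h_t[-1].expand(node_feats.size(0), -1)    # broadcast price context to nodes
        x = torch.cat([node_feats, temporal], dim=-1)
        z = torch.cat([self.intra_gat(x, intra_adj),
                       self.inter_gat(x, inter_adj)], dim=-1)
        return self.head(z)                                   # per-node action logits


if __name__ == "__main__":
    N, T = 6, 24
    policy = LTSGAPolicy()
    prices = torch.randn(1, T, 1)      # 24-step electricity-price history (toy data)
    nodes = torch.randn(N, 8)          # industrial-chain node features (toy data)
    intra = torch.eye(N)               # toy intra-layer adjacency
    inter = torch.ones(N, N)           # toy inter-layer adjacency
    print(policy(prices, nodes, intra, inter).shape)   # torch.Size([6, 4])
```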

Open Access
Prompting Large Language Models with Knowledge-Injection for Knowledge-Based Visual Question Answering
Big Data Mining and Analytics 2024, 7(3): 843-857
Published: 28 August 2024

Previous works employ Large Language Models (LLMs) such as GPT-3 for knowledge-based Visual Question Answering (VQA). We argue that the inferential capacity of an LLM can be enhanced through knowledge injection. Although methods that use knowledge graphs to enhance LLMs have been explored in various tasks, they have limitations, such as failing to retrieve the required knowledge. In this paper, we introduce a novel framework for knowledge-based VQA titled “Prompting Large Language Models with Knowledge-Injection” (PLLMKI). We use a vanilla VQA model to inspire the LLM and further enhance the LLM with knowledge injection. Unlike earlier approaches, we adopt the LLM itself for knowledge enhancement instead of relying on knowledge graphs. Furthermore, we leverage open LLMs, incurring no additional costs. Compared with existing baselines, our approach improves accuracy by over 1.3 and 1.7 on two knowledge-based VQA datasets, namely OK-VQA and A-OKVQA, respectively.
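Again as a rough illustration only, the sketch below outlines a two-step prompting flow in the spirit of PLLMKI: the LLM is first prompted to generate background knowledge relevant to the question (knowledge injection from the LLM itself rather than a knowledge graph), and the same LLM then answers using the image caption, the vanilla VQA model's candidate answers, and the injected knowledge. The `llm` callable, prompt wording, and data fields are hypothetical, not the paper's released code.

```python
# Hypothetical sketch of a PLLMKI-style two-step prompting flow.
from typing import Callable, List


def generate_knowledge(llm: Callable[[str], str], question: str, caption: str) -> str:
    """Step 1: ask the LLM to produce facts relevant to the question and image."""
    prompt = (
        "Image description: {caption}\n"
        "Question: {question}\n"
        "List the background knowledge needed to answer the question."
    ).format(caption=caption, question=question)
    return llm(prompt)


def answer_with_injection(
    llm: Callable[[str], str],
    question: str,
    caption: str,
    vqa_candidates: List[str],
    knowledge: str,
) -> str:
    """Step 2: answer using the caption, the VQA candidates, and the injected knowledge."""
    prompt = (
        "Image description: {caption}\n"
        "Candidate answers from a VQA model: {cands}\n"
        "Relevant knowledge: {knowledge}\n"
        "Question: {question}\n"
        "Answer with a single word or short phrase."
    ).format(
        caption=caption,
        cands=", ".join(vqa_candidates),
        knowledge=knowledge,
        question=question,
    )
    return llm(prompt)


if __name__ == "__main__":
    # Toy stand-in for an open LLM so the sketch runs without external services.
    echo_llm = lambda prompt: "(LLM output for) " + prompt.splitlines()[-1]
    q = "What sport can be played with the object the person is holding?"
    cap = "A person holds a tennis racket on a sunny court."
    cands = ["tennis", "badminton", "squash"]
    k = generate_knowledge(echo_llm, q, cap)
    print(answer_with_injection(echo_llm, q, cap, cands, k))
```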
