Open Access | Just Accepted

GaitFFDA: Feature Fusion and Dual Attention Gait Recognition Model

Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

Abstract

Gait recognition has a wide range of application scenarios in the fields of intelligent security and transportation. It currently faces two challenges: feature extraction methods that cope poorly with environmental interference, and insufficient correlation between local and global information. To address these issues, we propose a gait recognition model based on feature fusion and dual attention. Our model uses the ResNet architecture as the backbone network for extracting fundamental gait features. The features from different network layers are then passed through a feature pyramid for fusion, so that multi-scale local information is merged into the global information, providing a more complete feature representation. The dual attention module enhances the fused features along multiple dimensions, enabling the model to capture information across different semantics and scales. Our model achieves effective and competitive results on CASIA-B (NM: 95.6%, BG: 90.9%, CL: 73.7%) and OU-MVLP (88.1%). Ablation experiments show that the model design is effective and highly competitive.
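Based only on the abstract, the model combines a ResNet backbone, feature-pyramid fusion of multi-scale features, and a dual attention module. The PyTorch sketch below is an illustrative assumption of how those pieces might fit together; the stage layout, channel widths, classifier head, and the split of "dual attention" into channel and spatial branches are placeholders for exposition, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (N, C, H, W) -> global-average-pool, then re-weight channels
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w.unsqueeze(-1).unsqueeze(-1)


class SpatialAttention(nn.Module):
    """Spatial attention over pooled channel statistics (assumed form)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Concatenate channel-wise mean and max maps, predict a spatial mask
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(s))


class GaitFFDASketch(nn.Module):
    """Backbone -> feature-pyramid fusion -> dual attention (illustrative only)."""
    def __init__(self, channels=(64, 128, 256), fused=128, num_ids=100):
        super().__init__()
        # Lightweight stand-in for the ResNet backbone stages
        self.stem = nn.Conv2d(1, channels[0], 3, padding=1)
        self.stage2 = nn.Conv2d(channels[0], channels[1], 3, stride=2, padding=1)
        self.stage3 = nn.Conv2d(channels[1], channels[2], 3, stride=2, padding=1)
        # 1x1 lateral convs project each stage to a common width (FPN-style)
        self.lateral = nn.ModuleList([nn.Conv2d(c, fused, 1) for c in channels])
        self.channel_att = ChannelAttention(fused)
        self.spatial_att = SpatialAttention()
        self.head = nn.Linear(fused, num_ids)

    def forward(self, x):
        # Multi-scale features from the backbone
        f1 = F.relu(self.stem(x))
        f2 = F.relu(self.stage2(f1))
        f3 = F.relu(self.stage3(f2))
        # Top-down fusion: upsample coarser maps and sum with finer ones
        p3 = self.lateral[2](f3)
        p2 = self.lateral[1](f2) + F.interpolate(p3, size=f2.shape[-2:], mode="nearest")
        p1 = self.lateral[0](f1) + F.interpolate(p2, size=f1.shape[-2:], mode="nearest")
        # Dual attention on the fused map, then global pooling and classification
        fused = self.spatial_att(self.channel_att(p1))
        return self.head(fused.mean(dim=(2, 3)))


if __name__ == "__main__":
    model = GaitFFDASketch()
    silhouettes = torch.randn(4, 1, 64, 44)   # batch of gait silhouette frames
    print(model(silhouettes).shape)           # torch.Size([4, 100])
```

A usage note: in this sketch the fused, attention-enhanced map is globally pooled and fed to a classification head; the actual model may instead use part-based pooling and metric-learning losses, which are common in gait recognition but are not specified in the abstract.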

Tsinghua Science and Technology
Cite this article:
Wu Z. GaitFFDA: Feature Fusion and Dual Attention Gait Recognition Model. Tsinghua Science and Technology, 2023, https://doi.org/10.26599/TST.2023.9010089

Views: 623 | Downloads: 246
Citations: 0 (Crossref) | 0 (Web of Science) | 0 (Scopus) | 0 (CSCD)

Received: 04 August 2023
Revised: 22 August 2023
Accepted: 26 August 2023
Available online: 28 August 2023

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
