Regular Paper Issue
FSD-GAN: Generative Adversarial Training for Face Swap Detection via the Latent Noise Fingerprint
Journal of Computer Science and Technology 2025, 40(2): 397-412
Published: 31 March 2025
Abstract

Current studies against DeepFake attacks are mostly passive methods that detect specific defects of DeepFake algorithms and therefore lack generalization ability. Meanwhile, existing active defense methods focus only on defending against face attribute manipulations, and establishing an active and sustainable defense mechanism for face swap detection remains a major challenge. Therefore, we propose a novel training framework called FSD-GAN (Face Swap Detection based on Generative Adversarial Network), which is immune to the evolution of face swap attacks. Specifically, FSD-GAN contains three modules: a data processing module, an attack module that generates fake faces used only in training, and a defense module that consists of a fingerprint generator and a fingerprint discriminator. We embed the latent noise fingerprints generated by the fingerprint generator into face images; these fingerprints are imperceptible to attackers both visually and statistically. Once an attacker uses these protected faces to perform face swap attacks, the fingerprints are transferred from the training data (protected faces) to the generative models (real-world face swap models) and also appear in the generated results (swapped faces). Our discriminator can easily detect latent noise fingerprints embedded in face images, converting the problem of face swap detection into verifying whether fingerprints exist in swapped face images. Moreover, we alternately train the attack and defense modules under an adversarial framework, making the defense module more robust. We illustrate the effectiveness and robustness of FSD-GAN through extensive experiments, demonstrating that it remains effective across various face images, mainstream face swap models, and JPEG compression at different quality levels.
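The abstract describes an alternating adversarial training loop: a fingerprint generator embeds an imperceptible latent-noise fingerprint into face images, an attack produces swapped faces, and a fingerprint discriminator learns to verify whether the fingerprint survives. The following is a minimal PyTorch sketch of that loop; the network sizes, losses, residual scale, and the toy surrogate_swap transform are illustrative assumptions, not the authors' actual FSD-GAN architecture.

```python
# Minimal sketch of the FSD-GAN defense idea: a fingerprint generator embeds an
# imperceptible latent-noise fingerprint into face images, and a fingerprint
# discriminator learns to verify whether a (possibly swapped) face carries it.
# Module sizes, losses, and the surrogate "attack" are illustrative assumptions.
import torch
import torch.nn as nn

class FingerprintGenerator(nn.Module):
    """Maps a latent noise vector to an image-sized fingerprint residual."""
    def __init__(self, latent_dim=128, img_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, img_size * img_size * 3),
            nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z):
        fp = self.net(z).view(-1, 3, self.img_size, self.img_size)
        return 0.01 * fp  # keep the perturbation small so it stays imperceptible

class FingerprintDiscriminator(nn.Module):
    """Predicts whether an image contains the embedded fingerprint."""
    def __init__(self, img_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_size * img_size * 3, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

def surrogate_swap(x):
    """Stand-in for the attack module: any differentiable image transform."""
    return torch.flip(x, dims=[-1]) + 0.02 * torch.randn_like(x)

G, D = FingerprintGenerator(), FingerprintDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

faces = torch.rand(8, 3, 64, 64)             # toy batch of "real" faces
z = torch.randn(8, 128)

for step in range(2):                         # alternate defense-module updates
    protected = (faces + G(z)).clamp(0, 1)    # embed fingerprints into faces
    swapped = surrogate_swap(protected)       # fingerprint survives the "swap"

    # Discriminator: fingerprinted swapped faces -> 1, clean faces -> 0.
    d_loss = bce(D(swapped.detach()), torch.ones(8, 1)) + \
             bce(D(faces), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: keep fingerprints detectable after the attack, while the
    # small residual keeps them visually and statistically unobtrusive.
    g_loss = bce(D(surrogate_swap((faces + G(z)).clamp(0, 1))), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At inference time, detection reduces to thresholding the discriminator's output on a suspect face: a high score indicates the fingerprint is present and the image likely originates from protected data.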

Open Access Issue
Context-Aware Social Media User Sentiment Analysis
Tsinghua Science and Technology 2020, 25(4): 528-541
Published: 13 January 2020
Abstract

User-generated social media messages usually contain considerable multimodal content. Such messages are typically short and lack explicit sentiment words. However, the sentiment associated with such messages can be understood by analyzing their context, which is essential for improving sentiment analysis performance. Unfortunately, the majority of existing studies consider the impact of contextual information based on a single data model. In this study, we propose a novel model for performing context-aware user sentiment analysis. This model involves the semantic correlation of different modalities and the effects of tweet context information. Based on our experimental results obtained using the Twitter dataset, our approach outperforms other existing methods in analyzing user sentiment.
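To illustrate how a context-aware multimodal model of this kind can be wired up, the sketch below fuses a tweet's text and image features with attention-pooled features from surrounding (context) tweets before classification. The feature dimensions, attention pooling, and concatenation-based fusion are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch of context-aware multimodal sentiment classification: fuse a
# tweet's text and image features with features of its surrounding context
# (e.g., neighboring tweets). Dimensions and fusion design are assumptions.
import torch
import torch.nn as nn

class ContextAwareSentiment(nn.Module):
    def __init__(self, text_dim=300, img_dim=512, ctx_dim=300, n_classes=3):
        super().__init__()
        self.ctx_attn = nn.Linear(ctx_dim, 1)          # score each context tweet
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + img_dim + ctx_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, text_feat, img_feat, ctx_feats):
        # ctx_feats: (batch, n_context, ctx_dim), pooled with soft attention
        weights = torch.softmax(self.ctx_attn(ctx_feats), dim=1)
        ctx = (weights * ctx_feats).sum(dim=1)
        fused = torch.cat([text_feat, img_feat, ctx], dim=-1)
        return self.classifier(fused)

model = ContextAwareSentiment()
text = torch.randn(4, 300)        # e.g., averaged word embeddings of the tweet
image = torch.randn(4, 512)       # e.g., CNN features of the attached image
context = torch.randn(4, 5, 300)  # features of 5 surrounding tweets
logits = model(text, image, context)   # (4, 3): negative / neutral / positive
print(logits.shape)
```

The attention pooling lets the model weight informative context tweets more heavily than irrelevant ones, which is one simple way to capture the contextual effects the abstract refers to.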
