Abstract
The strong generalization capacity of deep Convolutional Neural Networks (CNNs) can undermine the reliability of predictions produced by knowledge distillation methods: because CNN models learn rich features during training, decision boundaries become more intricate and the disparity between teacher and student models shrinks. To address this problem, we propose Feature Regulation-based Reverse Distillation (FRRD), which incorporates a Collaborative Disparity Optimization (CDO) module and a Selective Feature (SF) module. During training, the CDO module enforces consistency by minimizing the feature distance for normal pixels while enlarging it for abnormal pixels, yielding a clear separation between pixels of different classes. The SF module captures correlations among features across scales, reducing the influence of anomalous information on subsequent inference. Together, these components improve the model's generalization capacity and reliability. We evaluate FRRD on two public datasets, MVTecAD and BTAD, where it achieves AU-ROC_PL scores of 98.7% and 97.5%, respectively.
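The CDO objective described above — pulling teacher and student features together on normal pixels while pushing them apart on abnormal ones — can be illustrated with a minimal sketch. The function below is an assumption-based illustration, not the paper's exact formulation: the function name, the per-pixel Euclidean distance, and the hinge-style `margin` for abnormal pixels are all our own choices for exposition.

```python
import numpy as np

def cdo_loss(teacher, student, mask, margin=1.0):
    """Illustrative collaborative-disparity-style loss (not the paper's exact form).

    teacher, student: (H, W, C) feature maps from the teacher and student.
    mask: (H, W) array with 1 for abnormal pixels, 0 for normal pixels.
    """
    # Per-pixel Euclidean distance between teacher and student features.
    dist = np.linalg.norm(teacher - student, axis=-1)
    # Normal pixels: pull features together (minimize distance).
    normal_term = (dist * (1 - mask)).sum() / max((1 - mask).sum(), 1)
    # Abnormal pixels: push features apart, up to a margin.
    abnormal_term = (np.maximum(margin - dist, 0.0) * mask).sum() / max(mask.sum(), 1)
    return normal_term + abnormal_term
```

With identical teacher and student features, the loss is zero on all-normal masks and equals the margin on all-abnormal masks, reflecting the intended pull/push behavior.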