Abstract
Federated incremental learning enables decentralized, continuous model updates across multiple clients, making it a promising framework for big data analytics in distributed environments. However, poisoned or malicious data introduces significant challenges, compromising both model performance and system reliability. To address these issues, this paper proposes an efficient, resource-aware machine unlearning method tailored to federated incremental learning. The approach uses a membership inference attack to identify poisoned data from the model's prediction confidence. Once the poisoned data are detected, a targeted forgetting mechanism based on fine-tuning erases their influence while preserving the model's incremental learning capability. By aligning the distribution of the poisoned data with that of a third-party dataset, the method achieves reliable unlearning without excessive computational overhead. Extensive experiments on diverse datasets validate the method's effectiveness, demonstrating a significant reduction in forgetting time (up to a 21.05× speedup over baseline approaches) while maintaining robust model performance on incremental learning tasks. This work offers a scalable and efficient solution to the data forgetting problem, advancing the reliability and practicality of federated incremental learning in distributed, resource-constrained scenarios.