Machine Unlearning - Groundbreaking Papers

MyRank Year Venue Title Authors Citations Main Technique Key Insight With Data Exact Unlearning Link
1 2000 NeurIPS Incremental and Decremental Support Vector Machine Learning Cauwenberghs & Poggio 1902+ Incremental SVM Early work on updating/removing SVM training samples Yes Yes NeurIPS
2 2009 IEEE Multiple Incremental Decremental Learning of Support Vector Machines Karasuyama & Takeuchi 154+ Incremental SVM Extended incremental/decremental SVMs to handle multiple simultaneous updates Yes Yes IEEE
3 2015 IEEE Towards Making Systems Forget with Machine Unlearning Cao & Yang 960+ Summation-form (statistical query) retraining Coined the term "machine unlearning"; recasts training as summations over statistical queries so deletions only update a few sums Yes Yes IEEE
4 2019 NeurIPS Making AI Forget You: Data Deletion in Machine Learning Ginart et al. 610+ Deletion-efficient k-means First scalable algorithms for data deletion in ML pipelines Yes No ArXiv
5 2020 ICML Certified Data Removal from Machine Learning Models Guo et al. 619+ Certified Removal First certified removal guarantees for linear models Yes No ArXiv
6 2020 NeurIPS Variational Bayesian Unlearning Nguyen et al. 150+ Bayesian Proposes a variational Bayesian approach to approximate unlearning Yes No ArXiv
7 2021 IEEE Machine Unlearning (SISA) Bourtoule et al. 1286+ SISA Training Most cited baseline; shard-isolate-slice-aggregate partition-retrain strategy (see the SISA sketch after the table) Yes No ArXiv
8 2021 IEEE Coded Machine Unlearning Aldaghri et al. 56+ Coded learning Uses linear encoders over training shards to achieve perfect unlearning Yes No IEEE
9 2020 CVPR Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks Golatkar et al. 652+ Fisher Information Selective class forgetting in CNNs, benchmark reference work Yes No ArXiv
10 2020 ECCV Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations Golatkar et al. 251+ Black-box Access Forgetting from black-box model access; practical relevance Yes No ArXiv
11 2021 CVPR Mixed-Privacy Forgetting in Deep Networks Golatkar et al. 251+ Mixed-privacy forgetting Splits weights into core and user components; forgetting acts only on the user weights in closed form Yes No ArXiv
12 2021 AAAI Amnesiac Machine Learning Graves et al. 378+ Batch-update removal Undoes stored parameter updates from sensitive batches; compares several retraining-free methods Yes No ArXiv
13 2021 NeurIPS Remember What You Want to Forget: Algorithms for Machine Unlearning Sekhari et al. 395+ Optimization-based Efficient removal while retaining accuracy; introduces deletion-capacity analysis Yes No ArXiv
14 2021 AISTATS Approximate Data Deletion from Machine Learning Models Izzo et al. 368+ Influence/Newton updates Fast approximate unlearning with small accuracy cost (see the Newton-downdate sketch after the table) Yes No PMLR
15 2021 ALT Descent-to-Delete: Gradient-Based Methods for Machine Unlearning Neel et al. 352+ Noisy gradient descent Gradient-based unlearning with deletion guarantees for convex models; important for scalability Yes No ArXiv
16 2020 IEEE Learn to Forget: Machine Unlearning via Neuron Masking Ma et al. 84+ Forsaken (neuron masking) Learns a mask-gradient generator that dampens neurons memorizing the forget data Yes No IEEE
999 2022 CVPR Deep Unlearning via Randomized Conditionally Independent Hessians Mehta et al. 115+ Hessian-based Uses second-order information for efficient unlearning Yes No ArXiv
999 2024 AAAI Fast Machine Unlearning without Retraining through Selective Synaptic Dampening Foster et al. 134+ Synaptic Dampening One of the fastest retraining-free approaches Yes No ArXiv
999 2023 IEEE Zero-Shot Machine Unlearning Chundawat et al. 215+ Zero-shot Unlearning without any access to the original training data; a transition paper toward data-free methods No No ArXiv
999 2023 CVPR Boundary Unlearning: Rapid Forgetting of Deep Networks via Shifting the Decision Boundary Chen et al. 136+ Boundary Shifting Novel approach using decision boundary manipulation Yes No CVF
999 2023 CVPR CUDA: Convolution-based Unlearnable Datasets Sadasivan et al. 38+ Convolution Shortcuts Prevents learning by adding imperceptible shortcuts Yes No CVF
999 2024 ICLR Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models Shen et al. 21+ Supervision-free First supervision-free unlearning approach No No ArXiv
999 2024 NeurIPS Large Language Model Unlearning Yao et al. 305+ LLM unlearning Studies how to unlearn harmful and copyrighted content from LLMs Yes No NeurIPS
999 2024 ICLR SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency Fan et al. 201+ Saliency-based Uses weight importance for selective unlearning Yes No ICLR
999 2024 CVPR Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models Zhang et al. 268+ Diffusion Unlearning Directly relevant to Vision Transformers; bridges Part 1 to Part 2 Yes No ArXiv
999 2025 CVPR Towards Source-Free Machine Unlearning Ahmed et al. 15+ Source-free Enables efficient zero-shot unlearning with theoretical guarantees No No CVF
999 2025 CVPR LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty Li et al. 1+ Uncertainty-based Large-scale unlearning using uncertainty quantification No No CVPR
999 2025 CVPR Decoupled Distillation to Erase: A General Unlearning Method for Any Class-centric Tasks Wang et al. 5+ Distillation-based General framework for class-centric unlearning tasks Yes No CVPR
999 2023 CVPR ERM-KTP: Knowledge-Level Machine Unlearning via Knowledge Transfer Lin et al. 63+ Masking + distillation Transfers retain knowledge while erasing class-specific features Yes No CVF
999 2023 NDSS Machine Unlearning of Features and Labels Warnecke et al. 265+ Feature/label scrubbing Unlearning beyond instances: remove feature- or class-level information Yes No PDF
999 2023 NeurIPS Towards Unbounded Machine Unlearning (SCRUB) Kurmanji et al. 280+ Teacher-student SCRUB Strong deep unlearning via student trained to diverge on forget set, retain elsewhere Yes No ArXiv
999 2024 ICML Towards Certified Unlearning for Deep Neural Networks Zhang et al. 23+ Approx. certificates for DNNs Bridges certified guarantees to non-convex CNNs with practical protocols Yes No ArXiv
999 2024 ICML Verification of Machine Unlearning is Fragile Thudi et al. 15+ Evaluation pitfalls Shows popular verification tests are brittle; proposes stronger evaluations Both No ArXiv
999 2024 NeurIPS Langevin Unlearning: Noisy Gradient Descent for Machine Unlearning Chien et al. 27+ Noisy GD (Langevin) Unifies DP training and approximate certified unlearning for deep nets Yes No ArXiv
999 2025 ICML A Certified Unlearning Approach without Access to Source Data Basaran et al. 0+ Certified source-free First certificates for data-free unlearning via surrogate distribution matching No No ICML
999 2025 ICML Certified Unlearning for Neural Networks Lee et al. 0+ General NN certificates Certification framework extending beyond convexity to deep CNNs Yes No ICML
999 2025 ICLR Hessian-Free Online Certified Unlearning Kim et al. 0+ Online certificates Enables repeated deletions with certification without Hessians Yes No ICLR
999 2025 ICLR Selectively Unlearning via Representation Erasure (Domain Adversarial) Nguyen et al. 0+ Domain-adversarial erasure Erases target concept subspaces while retaining generalization Yes No ICLR
999 2025 ICLR Adversarial Machine Unlearning Lee et al. 0+ Minimax training Game-theoretic unlearning robust to relearning/attacks Yes No ICLR
999 2025 ICLR The Utility and Complexity of In-/Out-of-Distribution Machine Unlearning Chen et al. 0+ OOD vs ID analysis Clarifies when unlearning is easier/harder; guides CNN practice Both No ICLR
999 2025 ICLR Unlearn and Burn: Adversarial Unlearning Requests Destroy Accuracy Ye et al. 0+ Adversarial forget-set Shows adversarially chosen forget requests can devastate utility; proposes mitigations Both No ICLR
999 2025 ICCV Robust Machine Unlearning for Quantized Neural Networks Zhang et al. 0+ Adaptive gradient reweighting Extends unlearning robustness to low-bit CNN deployments Yes No ArXiv
999 2025 ICCV Forgetting Through Transforming: Federated Unlearning via Class-Aware Representation Transform Chen et al. 0+ Rep. transformation (FL) Federated class unlearning with strong utility retention on CNNs Yes No ArXiv
999 2025 ICCV Reminiscence Attack on Residuals: Exploiting Approximate Unlearning Park et al. 0+ Residual-trace attack Shows traces remain after approximate unlearning; reconstructs forgotten info Both No ArXiv
999 2021 IEEE Federated Unlearning (FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models) Liu et al. 231+ Client update subtraction First practical client-level unlearning in federated CNN training Yes No ArXiv
999 2023 NeurIPS Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks Di et al. 57+ Clean-label poisoning Reveals an attack surface where unlearning a camouflage set activates a hidden poison Both No NeurIPS
999 2024 NeurIPS Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable Bertran et al. 17+ Differencing attack Shows exact reconstruction is possible in some settings; warns for CNN pipelines Both No NeurIPS
999 2024 ECCV Learning to Unlearn for Robust Machine Learning Huang et al. 22+ Joint robust forgetting Couples robustness with unlearning to resist relearning/attacks Yes No ECCV
999 2023 ICML Forget Unlearning: Towards True Data-Deletion in Machine Learning Chourasia & Shah 56+ Stronger semantics Clarifies goals/metrics and pushes toward true deletion semantics Yes No ICML
999 2023 CVPR Unlearning with Fisher Masking Liu et al. 6+ Fisher masks Masks parameters by Fisher importance to localize forgetting (see the Fisher-masking sketch after the table) Yes No PDF
999 2025 ICML System-Aware Unlearning Algorithms: Use Lesser, Forget Faster Zhao et al. 0+ System-co-design Co-designs algorithms with compute/memory constraints for faster CNN unlearning Yes No ICML
999 2025 ICML Not All Wrong is Bad: Using Adversarial Examples for Unlearning Gupta et al. 1+ Adversarial forgetting Drives forgetting using targeted adversarial example generation Yes No ICML
999 2025 ICML Flexible, Efficient, and Stable Adversarial Attacks on Machine Unlearning Zhou et al. 0+ Attack suite Comprehensive attacks exposing failure modes of CNN unlearning methods Both No ICML
999 2024 ArXiv Revisiting Machine Unlearning with Dimensional Alignment Wang et al. 3+ Dimensional alignment Stabilizes CNN unlearning and critiques common metrics Yes No ArXiv
999 2025 CVPR NoT: Federated Unlearning via Weight Negation Sun et al. 0+ Weight negation (FL) Simple operator for client/class forgetting in federated CNNs Yes No CVPR
999 2025 CVPR Unlearning through Knowledge Overwriting: Reversible Federated Unlearning via Sparse Adapters Zhong et al. 6+ Selective sparse adapters Reversible, low-cost unlearning by overwriting with adapters without changing base weights Yes No ArXiv
999 2025 ICML Machine Unlearning Fails to Remove Data Poisoning Attacks Pawelczyk et al. 26+ Negative result (poison) Shows several unlearning methods fail under poisoning; urges stronger protocols Both No PDF
999 2023 ICML Fast Federated Machine Unlearning with Nonlinear Functional Theory Li et al. 72+ Nonlinear functional FL Accelerates federated unlearning; applicable to CNN image tasks Yes No ICML
999 2023 ICML From Adaptive Query Release to Machine Unlearning (Extended Analysis) Ullah & Arora 7+ Generalization bounds Provides deletion capacity insights guiding CNN unlearning scales Yes No ICML
999 2025 ICML Targeted Unlearning with Single Layer Unlearning Gradient Cai et al. 1+ Single-layer gradient Efficient layer-wise unlearning for CNNs with targeted edits Yes No ICML
999 2025 ICML SEMU: Singular Value Decomposition for Efficient Machine Unlearning Patel et al. 3+ Low-rank SVD Matrix factorization localizes/reverses forget-set influence efficiently Yes No ICML
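The SISA row above (Bourtoule et al.) describes a partition-retrain strategy: the training set is split into disjoint shards, one constituent model is trained per shard, predictions are aggregated by vote, and a deletion request only retrains the shard that held the deleted point. Below is a minimal sketch of that idea; it omits SISA's slicing/checkpointing, and the class name SISAEnsemble, the scikit-learn constituents, and the toy data are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of SISA-style partition-retrain unlearning (illustrative, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

class SISAEnsemble:
    """Train one constituent model per disjoint shard; unlearning retrains only the affected shard."""

    def __init__(self, n_shards=4, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)
        self.shards = None   # list of index lists, one per shard
        self.models = None   # one fitted model per shard

    def fit(self, X, y):
        self.X, self.y = X, y
        idx = self.rng.permutation(len(X))
        self.shards = [list(s) for s in np.array_split(idx, self.n_shards)]
        self.models = [self._fit_shard(s) for s in self.shards]
        return self

    def _fit_shard(self, shard):
        return LogisticRegression(max_iter=1000).fit(self.X[shard], self.y[shard])

    def unlearn(self, sample_index):
        # Locate the shard holding the point, drop it, and retrain that shard only.
        for k, shard in enumerate(self.shards):
            if sample_index in shard:
                shard.remove(sample_index)
                self.models[k] = self._fit_shard(shard)
                return k
        raise KeyError(f"sample {sample_index} not found in any shard")

    def predict(self, X):
        # Aggregate constituent predictions by majority vote.
        votes = np.stack([m.predict(X) for m in self.models])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    ens = SISAEnsemble(n_shards=4).fit(X, y)
    print("accuracy before:", (ens.predict(X) == y).mean())
    ens.unlearn(17)  # remove one training point; only its shard is retrained
    print("accuracy after :", (ens.predict(X) == y).mean())
```

The design point is that unlearning cost scales with shard size rather than dataset size, traded against some loss in aggregate accuracy.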
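The Izzo et al. row points to influence/Newton-style updates. The sketch below shows the simplest exact member of that family, assuming a ridge-regression model: a Sherman-Morrison rank-one downdate of the normal equations removes one training point without retraining. The paper itself develops approximate updates (e.g., projective residual updates) for broader model classes; the function names and toy data here are illustrative assumptions.

```python
# Sketch of a Newton/influence-style exact downdate for ridge regression (illustrative).
import numpy as np

def fit_ridge(X, y, lam=1.0):
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)     # regularized Hessian of the least-squares objective
    A_inv = np.linalg.inv(A)
    b = X.T @ y
    return A_inv @ b, A_inv, b

def delete_point(A_inv, b, x, y_val):
    """Remove one training point (x, y_val) via the Sherman-Morrison downdate:
    (A - x x^T)^{-1} = A^{-1} + A^{-1} x x^T A^{-1} / (1 - x^T A^{-1} x)."""
    Ax = A_inv @ x
    A_inv_new = A_inv + np.outer(Ax, Ax) / (1.0 - x @ Ax)
    b_new = b - y_val * x
    return A_inv_new @ b_new, A_inv_new, b_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

    w, A_inv, b = fit_ridge(X, y)
    w_del, _, _ = delete_point(A_inv, b, X[0], y[0])

    # Sanity check: the downdated solution matches retraining from scratch without row 0.
    w_retrain, _, _ = fit_ridge(X[1:], y[1:])
    print("max |difference| vs. retraining:", np.max(np.abs(w_del - w_retrain)))
```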
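The Fisher-masking row (Liu et al.) localizes forgetting by parameter importance. Below is a hedged PyTorch sketch, assuming a diagonal Fisher estimate from squared gradients on the forget set and a simple quantile threshold for which weights to zero; the model, loader format, and frac parameter are illustrative assumptions, and the paper itself may select, normalize, or fine-tune differently.

```python
# Hedged sketch of Fisher-importance masking for forgetting (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def diagonal_fisher(model, loader, device="cpu"):
    """Approximate the diagonal Fisher information by accumulating squared gradients on `loader`."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return fisher

def fisher_mask(model, forget_loader, frac=0.05):
    """Zero out the `frac` fraction of weights with the largest Fisher importance on the forget set."""
    fisher = diagonal_fisher(model, forget_loader)
    scores = torch.cat([f.flatten() for f in fisher.values()])
    threshold = torch.quantile(scores, 1.0 - frac)
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.masked_fill_(fisher[n] >= threshold, 0.0)

if __name__ == "__main__":
    # Tiny illustrative setup: a small classifier and one random "forget" batch.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    forget_loader = [(torch.randn(64, 10), torch.randint(0, 3, (64,)))]
    fisher_mask(model, forget_loader, frac=0.05)
```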