Differential privacy (DP) is a mathematically rigorous privacy framework that guarantees the output distribution of a randomized algorithm changes only slightly when the data of a single user is added, removed, or modified, so the two outputs remain statistically indistinguishable. The framework has been studied extensively in both theory and practice, with many applications in analytics and machine learning (e.g., [1, 2, 3, 4, 5, 6, 7]).
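For concreteness, the indistinguishability guarantee can be stated using the standard $(\varepsilon, \delta)$-DP definition; the notation below is the conventional one and is not drawn from the cited works. A randomized algorithm $M$ satisfies $(\varepsilon, \delta)$-differential privacy if, for every pair of datasets $D, D'$ differing in the data of a single user and every measurable set of outcomes $S$,
\[
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta .
\]
Smaller values of $\varepsilon$ and $\delta$ correspond to stronger privacy, since the output distributions on neighboring datasets are then required to be closer.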