Scientists at UC Riverside have introduced a groundbreaking method to erase private and copyrighted data from AI models without requiring access to the original training datasets. Their “source-free certified unlearning” approach substitutes a surrogate dataset for the original data and injects calibrated random noise into the model parameters, guaranteeing that the targeted information cannot be recovered. The technique preserves model performance while avoiding the costly, energy-intensive alternative of full retraining. It addresses growing legal and ethical concerns, including compliance with privacy regulations and protection of copyrighted material used to train AI systems like GPT. The researchers demonstrated the method’s effectiveness on both synthetic and real-world datasets and showed that it provides strong privacy assurances.
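
The core idea described above can be illustrated in miniature. The sketch below is a hypothetical toy, not the paper's actual algorithm: it uses a simple linear model, a randomly drawn surrogate dataset (the paper's surrogate construction is presumably more principled), and a hand-picked noise scale `sigma` standing in for the calibrated noise that certified unlearning would derive from formal privacy guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model trained on "original" data we later pretend is unavailable.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
w = np.linalg.lstsq(X, y, rcond=None)[0]  # trained parameters

# Surrogate dataset standing in for the records to be forgotten
# (assumption: drawn from a similar distribution, not the real data).
X_sur = rng.normal(size=(50, 5))
y_sur = X_sur @ w_true + 0.1 * rng.normal(size=50)

def source_free_unlearn(w, X_sur, y_sur, step=0.05, sigma=0.01, rng=rng):
    """Sketch of the two-step recipe: (1) adjust parameters using only the
    surrogate data's gradient, then (2) inject calibrated Gaussian noise so
    traces of the removed data become irrecoverable."""
    grad = X_sur.T @ (X_sur @ w - y_sur) / len(y_sur)    # surrogate gradient
    w_new = w - step * grad                              # surrogate update
    return w_new + rng.normal(scale=sigma, size=w.shape) # noise injection

w_unlearned = source_free_unlearn(w, X_sur, y_sur)

# Utility check: predictions on fresh inputs barely move after unlearning.
X_test = rng.normal(size=(50, 5))
drift = np.max(np.abs(X_test @ w - X_test @ w_unlearned))
print(f"max prediction drift after unlearning: {drift:.4f}")
```

The noise injection is what makes the erasure "certified" in spirit: even an adversary with the final parameters cannot distinguish whether the forgotten records influenced them, at a privacy level governed by the noise scale.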