The proliferation of AI models across consumer platforms has ushered in a new era of convenience, but it has also accelerated the erosion of personal privacy. Large language models (LLMs) are trained on staggering volumes of data, including publicly available content and, in some cases, personally identifiable information (PII). That means sensitive data, everything from search history and location trails to voice recordings and biometric markers, can be folded into systems that behave...