Discover why a responsible AI framework like Safety by Design must be the standard for innovation in generative AI technology.| Purpose-Built Trust and Safety Solutions | Safer by Thorn
New research reveals the state of sextortion in 2025, with specific insights relevant to platform trust and safety teams.
Expert-backed CSAM detection and CSE solutions, including classifiers for image, video, and text-based harms against children. See the Safer difference.
Addressing the risks of nonconsensual image abuse

AI-generated deepfake nudes are accelerating the spread of nonconsensual image abuse, making it easier for bad actors to manipulate and weaponize imagery of children and adults alike. Our latest research at Thorn found that 31% of teens are already familiar with deepfake nudes,