HONOR’s groundbreaking AI Deepfake Detection technology will be available globally from April 2025. This cutting-edge feature empowers users to combat the growing threat of deepfakes: using AI-powered real-time analysis of video and image content, it issues immediate warnings about potential deepfake content.
While the rise of AI has brought incredible advancements, it also poses new challenges, such as the proliferation of sophisticated deepfakes. According to the Entrust Cybersecurity Institute, a deepfake attack occurred every five minutes in 2024. These manipulated images, audio recordings, and videos are becoming increasingly difficult to detect, with Deloitte’s 2024 Connected Consumer Study revealing that 59% of respondents struggle to tell the difference between human-created and AI-generated content.

While 84% of those familiar with generative AI believe such content should be labeled, HONOR recognizes that proactive detection measures and industry collaboration are crucial for robust protection. Industry leaders share this view, with organizations like the Coalition for Content Provenance and Authenticity (C2PA) working to develop technical standards for certifying the source and history of digital content, including AI-created assets.

At the forefront of human-centric innovation, HONOR has taken proactive steps to protect users from the increasingly prevalent threat of deepfakes. Debuted at IFA 2024, HONOR’s proprietary AI Deepfake Detection technology analyzes subtle inconsistencies often invisible to the human eye, including pixel-level synthetic imperfections, border compositing artifacts, inter-frame continuity, and consistency in face-to-ear hairstyle and facial features. Upon detecting manipulated content, the feature immediately issues a warning to safeguard users from the potential risks of deepfakes.
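HONOR has not published the internals of its detector, but the signals described above can be illustrated with a minimal, hypothetical sketch: each frame is scored for pixel-level artifacts, border compositing artifacts, and inter-frame continuity, and a warning is raised when the averaged risk crosses a threshold. All names, weights, and the threshold below are illustrative assumptions, not HONOR's implementation.

```python
# Illustrative sketch only: models a generic per-frame deepfake-risk
# pipeline of the kind described above, not HONOR's actual algorithm.
from dataclasses import dataclass
from typing import List

@dataclass
class FrameAnalysis:
    pixel_artifact_score: float   # 0..1, pixel-level synthetic imperfections
    border_artifact_score: float  # 0..1, compositing artifacts at region borders
    continuity_score: float       # 0..1, mismatch versus the previous frame

def frame_risk(f: FrameAnalysis) -> float:
    """Combine the per-frame signals into one risk score (equal weights assumed)."""
    return (f.pixel_artifact_score + f.border_artifact_score + f.continuity_score) / 3

def should_warn(frames: List[FrameAnalysis], threshold: float = 0.6) -> bool:
    """Warn when the average risk across analyzed frames exceeds the threshold."""
    if not frames:
        return False
    avg = sum(frame_risk(f) for f in frames) / len(frames)
    return avg > threshold

# Example: a clip with strong compositing artifacts versus a clean one.
suspicious = [FrameAnalysis(0.8, 0.9, 0.7), FrameAnalysis(0.7, 0.8, 0.6)]
clean = [FrameAnalysis(0.1, 0.05, 0.1)]
print(should_warn(suspicious))  # True
print(should_warn(clean))       # False
```

In a real detector the per-frame scores would come from trained neural networks rather than hand-set values, and the aggregation would likely be learned as well; the sketch only shows the score-then-threshold shape of such a real-time warning system.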