The instant messaging app Telegram, which has more than 950 million monthly active users, is taking an important step in the fight against child sexual abuse material. Under fire for its moderation policy, the app has just announced that it is joining the Internet Watch Foundation (IWF), a British organization specializing in online child protection.
A significant change of direction
For the first time in its history, Telegram will deploy IWF tools and databases alongside its own solutions. The app will notably use a system of “hashes” (digital fingerprints) to instantly identify known abuse images and videos and block them before they are distributed on the platform. This technology will apply both to real content and to AI-generated images. As a reminder, Apple announced a similar system before backpedaling entirely.
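To illustrate the general principle, here is a minimal sketch of hash-based blocking. It is not Telegram’s or the IWF’s actual implementation: real deployments rely on perceptual hashes such as PhotoDNA, which survive resizing and re-encoding, whereas the plain SHA-256 used here only catches byte-identical files, and the blocklist entry is hypothetical.

```python
# Minimal illustration of hash-based matching, NOT Telegram's or the
# IWF's actual system. Real deployments use perceptual hashes (e.g.
# PhotoDNA) that tolerate re-encoding; SHA-256, used here for
# simplicity, only matches byte-identical files.
import hashlib

# Hypothetical blocklist of fingerprints of known abusive files,
# standing in for the hash databases the IWF supplies to members.
KNOWN_ABUSE_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Check an upload against the blocklist before it is distributed."""
    return fingerprint(upload) in KNOWN_ABUSE_HASHES

# An upload whose fingerprint is on the list is rejected on the spot.
print(should_block(b"foo"))  # True  -- matches the sample hash above
print(should_block(b"ok"))   # False -- unknown file passes through
```

The key design point is that the platform only stores fingerprints of known material, never the material itself, and can reject a match before the file ever reaches other users.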
This partnership marks a turning point in Telegram’s moderation policy and puts it alongside other tech giants such as Apple, Meta, and Google, which are already IWF members. The turnaround comes a few months after the arrest in Paris of Pavel Durov, Telegram’s CEO, with prosecutors accusing the platform of inaction in the face of criminal content.
The IWF had already identified thousands of pieces of problematic content on Telegram since 2022, including particularly serious images involving children under the age of two. While the app previously deleted this content only after it was reported, it is now taking a more proactive approach with automatic detection tools. Remi Vaughn, PR manager at Telegram, highlighted that the app “already removes hundreds of thousands of pieces of abusive content every month” thanks to proactive moderation, AI, and machine learning. The IWF tools will strengthen these existing mechanisms for even more effective user protection.
Clearly, the app remains less proactive when it comes to removing content tied to weapons sales, drugs, or copyright infringement. Selective moderation, in other words.