The first beta of iOS 18.2 was made available to developers around the world yesterday evening. As is often the case, not everyone gets the same beta: while the US has access to certain Apple Intelligence features, Europe does not. Another difference appears in the beta distributed in Australia.
Children in Australia will be able to report nudity content
In the first beta of iOS 18.2 in Australia, Apple is offering children and teenagers a new option: the ability to report an iMessage that contains an adult photo or video. As the company explained to The Guardian, when a report is made, it goes directly to a service managed by Apple. Specially trained employees then verify whether the report is legitimate and not a mistake.
If the report is genuine, Apple can pull two levers:
- Permanently deactivate the Apple account of the person who sent an adult photo/video to a child or teenager
- Forward the report to the police so that the sender can be summoned for questioning at the station closest to their home
Apple also told the outlet that the report will include the messages sent just before and after the offending content, to provide context. For example, this can help determine whether the sender thought they were talking to an adult or had simply messaged the wrong person.
Why is the report landing in Australia and not elsewhere?
This feature built into the Messages app is excellent for protecting minors, so why is Apple limiting it to Australia in iOS 18.2?
This type of reporting should eventually roll out worldwide in Apple's future updates, but the company is prioritizing Australia because it must comply with a new law there that requires large companies operating instant-messaging services to better protect children and teenagers.
Since iOS 16, Apple has offered a related safeguard in France and other countries, called "Communication Safety". The experience is not the same, however: Communication Safety uses artificial intelligence to analyze a photo or video a child is about to see, hide it, and warn them. But no report can be filed, and no sanction is taken against the sender.