WhatsApp has told the IT ministry that it takes reports of child sexual abuse material on its platform "very seriously" and will continue to ensure that effective action is taken as soon as exploitative content is reported to it, according to a source. The IT ministry had last week drawn the attention of the Facebook-owned company to a recent report claiming that the platform is being used to share child sexual abuse videos, and had asked it to take steps to prevent such misuse.
The ministry's move came after a report by the Cyber Peace Foundation, which allegedly found that chat groups on WhatsApp continue to be created and used to disseminate and share Child Sexual Abuse Material (CSAM) in India. In its response, WhatsApp sought to assure the IT ministry that it will continue to be "vigilant" on the critical issue of child sexual abuse, and also expressed a keen desire to take part in any public awareness campaign the ministry organises in this regard, a source privy to the matter said.
WhatsApp has also said that it harnesses advanced technology, including artificial intelligence, and actively bans accounts suspected of circulating content that exploits children. The company has said it has a "zero tolerance" policy on child sexual abuse, and that it bans 250,000 such accounts globally every month.
When contacted, a WhatsApp spokesperson said the platform relies on the signals available to it, such as group information, to proactively detect and ban accounts suspected of sending or receiving child abuse imagery.
"We have carefully reviewed this report to ensure such accounts have been banned from our platform. We are constantly stepping up our capabilities to keep WhatsApp safe, including working collaboratively with other technology platforms, and we will continue to prioritise requests from Indian law enforcement that can help confront this challenge," the WhatsApp spokesperson said.
The spokesperson added, "WhatsApp cares deeply about the safety of our users and we have no tolerance for child sexual abuse".
Meanwhile, a source said that WhatsApp, in its response to the IT ministry, also stated that it works with other technology companies to improve its ability to detect child nudity in profile pictures. WhatsApp has said it also uses technology to assess group information in order to identify and ban group members suspected of circulating child abuse imagery.
WhatsApp also told the IT ministry that it encourages users to report problematic content, and stressed that reporting is crucial because it gives the platform the ability to investigate otherwise encrypted private group content and, in turn, take appropriate and urgent remedial action.
WhatsApp said it conducts regular investigations of the Google Play store to identify apps that share links to groups circulating such material, and works with Google to remove them. To date, over 100 such group link sharing apps have been removed from the Google Play store, WhatsApp said, adding that it has reached out to Google to remove the handful of remaining apps surfaced days ago in its most recent investigation.
WhatsApp -- which has over 200 million users in India -- has previously drawn flak over misinformation spread through messages on its platform. Such fake messages and rumours led to incidents of mob violence across the country, prompting the government to seek message traceability. WhatsApp has, however, maintained that introducing such a feature would undermine the privacy of users on its platform.