Social media platforms under scrutiny: IGAP calls for improved practices

LinkedIn offers no disclosures on its categories of content-related grievances.

By Indrani Bose | Oct 24, 2024 10:58 AM

The Internet Governance and Policy Project (IGAP) has published its latest report, "Social Media Transparency Reporting: A Performance Review", offering a detailed analysis of how Significant Social Media Intermediaries (SSMIs) are complying with India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The report assesses the performance of major platforms, including Facebook, Instagram, WhatsApp, YouTube, Twitter/X, LinkedIn, Snap, ShareChat, and Koo, focusing on their content moderation practices and grievance redressal mechanisms between June 2021 and December 2023.

As concerns over harmful online content, including misinformation and hate speech, continue to grow, IGAP’s report highlights key gaps in transparency across these platforms and calls for enhanced accountability to Indian users.

Categories of content-related grievances

Facebook and Instagram offer 5 content-related grievance categories, WhatsApp offers 1, Twitter/X offers 13, and YouTube offers 9. Through the general in-app or in-feed mechanism, as disclosed within the monthly compliance reports, Facebook and Instagram offer 13 and 12 categories for complaints respectively, Snap offers 11, Koo offers 2, and ShareChat offers 14. LinkedIn offers no disclosures on its categories of content-related grievances.

Most platforms do not offer any comprehensive disclosure of what takes place when they ‘action’ content. In the case of Facebook and Instagram, figures are provided on content that has been actioned, along with some information on what the consequences of actioning content can be. For content reported through the India grievance reporting mechanism, this can mean removing content, covering photos or videos with a warning, or restricting the availability of content in the country.

For content reported through any alternate grievance reporting mechanism, the report only states that actioning can include removing content or covering photos or videos with a warning. Meta’s Transparency Centre discloses that additional measures are taken, including measures to reduce the spread of borderline content, but these kinds of ‘actions’ are not discussed within the monthly compliance reports. In the case of WhatsApp, disclosures relate only to banned accounts; the platform has stated that safety grievances, and responses to them, are not recorded as actions taken against the grievance report.

With respect to YouTube and LinkedIn, only content removals are discussed. However, alternate actions may be taken for flagged content, including age-based content restrictions, measures to reduce the spread of borderline content, limiting the visibility of content, or labelling content. These kinds of ‘actions’ are not discussed within the monthly compliance reports. For Twitter/X, the only disclosures relate to URLs actioned, with no indication of what sort of action may have been taken.

Categories of non-content related grievances

Though the Intermediary Rules largely intended disclosures on content-related user grievances, some platforms have also chosen to disclose non-content-related grievances made on the platform. These include grievances or requests relating to users needing help accessing accounts, requesting access to personal data collected by the platform, or seeking other kinds of support with the platform’s product or safety features. Facebook offers 5 such categories of grievances, Instagram offers 3, and WhatsApp and Twitter/X each offer 4.

Information on law-enforcement requests

Though there is no obligation under the Intermediary Rules to disclose the number of law-enforcement requests received by platforms, or the number of requests that were responded to or actioned, ShareChat still chooses to disclose this information. Platforms with a global presence, including Facebook, Snap, Google, Twitter/X and LinkedIn, have chosen to share details of global government requests for information in separately issued semi-annual transparency reports, which they had been publishing even before the Intermediary Rules came into effect. These global reports also include country-specific information on law-enforcement requests received.

Information on content removed, distinguished by language

While not a mandated requirement, Koo also chooses to share data on the number of content pieces removed, broken down by language, separately reporting the number of pieces removed in English and in Indian languages. While no other platform discloses this information, it provides valuable insight into platforms’ ability to moderate content across various Indian languages. In a context where platforms have been known to neglect non-English-language users, this metric can help gauge how well platforms are performing on vernacular content moderation within India.

Disclosure on actions taken for repeated violations

Only ShareChat discloses the number of accounts that have been permanently banned, and the instances of graded, time-based penalties for repeated violations of its policies. While many platforms have a multiple-strikes rule for banning accounts that post problematic content, this aspect is not usually disclosed in most compliance reports.

Disclosure of proactive monitoring using automated tools, and the number of content categories being proactively monitored

Twitter/X discloses automated detection of violative content only in relation to terrorism, child sexual exploitation, non-consensual nudity, and content of a similar nature. WhatsApp discloses the number of accounts it has proactively deleted to prevent harmful activity.

YouTube discloses the number of removal actions it has undertaken as a result of automated detection, without specifying the categories of content for which it deploys automated detection tools. LinkedIn only discloses the number of content takedowns based on ‘content moderation’ practices, which could potentially include both human and automated moderation. Koo discloses the use of automated detection for monitoring spam and community-guideline violations.

Facebook and Instagram also disclose their ‘proactive rate’ across various content categories, indicating the percentage of violating content that the platforms detected before any user complaints were made. This is a valuable metric, as it enables regulators and users to better understand the contexts in which platforms have the most capacity to proactively detect violative content.
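For illustration, a proactive rate of this kind is simply the proactively detected share of all actioned content in a given category. The minimal sketch below shows the arithmetic; the category names and figures are hypothetical and are not drawn from any platform's compliance reports.

```python
# Minimal sketch of how a 'proactive rate' can be computed from
# per-category moderation counts. All category names and figures
# below are hypothetical, not taken from any platform's reports.

# For each category: total pieces of content actioned, and how many
# were detected proactively (i.e. before any user report).
actioned = {
    "hate_speech": {"total": 12_000, "proactive": 10_800},
    "bullying_harassment": {"total": 9_500, "proactive": 4_100},
    "adult_nudity": {"total": 20_000, "proactive": 19_400},
}

for category, counts in actioned.items():
    # Proactive rate = share of actioned content found before a complaint.
    rate = counts["proactive"] / counts["total"] * 100
    print(f"{category}: {rate:.1f}% proactive")
```

Read this way, a low proactive rate in a category signals that the platform still depends heavily on user reports to surface that kind of violating content.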

While the Intermediary Rules require only that platforms disclose the number of communication links removed as a result of proactive monitoring using automated detection mechanisms, the content categories for which such automated tools are deployed are also an important metric for understanding a platform’s performance in moderating content, as per the report.

As visible in the case of most platforms, the largest number of user complaints appears to concern bullying, harassment and sexual content. These may also be the categories of content that are most difficult to monitor through automated means. Platforms need to make clearer disclosures about the categories of content they choose to proactively monitor, so that it can be understood whether users’ most prominent concerns are being adequately addressed, the report states.

First Published on Oct 24, 2024 10:54 AM
