Tanya Sara George

Rethinking Intermediary Liability in the Age of Digital Obscenity

Introduction 

A PIL recently filed in the Supreme Court of India has shed light on an alarming trend: the proliferation of obscene content on platforms that operate under the guise of social media intermediaries. A 2023 press release by MEITY observed that several social media platforms, such as Telegram and X, exploit their classification under the IT Act to function as quasi-pornographic websites. Though labelled as social media apps in common parlance, they primarily disseminate sexually explicit content, dominate app store charts, and generate significant revenue.

Legally, these platforms fall within the ambit of an ‘intermediary’ under Section 2(w) of the IT Act, which defines the term, “with respect to any particular electronic records,” as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes.” This classification casts social media intermediaries as neutral third parties, entitling them to the safe harbour provision under Section 79. However, these platforms are saturated with content that contravenes domestic obscenity laws. The classification, though seemingly benign, therefore allows these intermediaries to evade liability under Indian law owing to gaps in the legal framework.

This article argues that the gaps in ascertaining intermediary liability under the existing domestic legal framework render it ill-equipped to handle the modern challenges posed by these platforms. The author establishes that the domestic framework does not account for the covert functions that new social media platforms undertake, such as facilitating the sale of pornographic material and promoting obscene and illegal content, rendering the twentieth-century understanding of intermediary liability obsolete in mitigating these newfound activities. The article first examines the legality of such platforms in India. Secondly, the author analyses the contemporary strategies these platforms employ, exploiting lacunae in domestic law as a means of generating revenue. Lastly, the author argues for adopting a facilitative threshold for online intermediary liability, so as to adequately ascertain the complicity of such platforms in illegal activity and thereby attune domestic legislation to these modern tactics.

How Pervasive Are These Platforms? 

The ban on TikTok in 2020 left a gap in the Indian market for video-sharing and communication apps, which has since been exploited by a dark underbelly of sexually explicit platforms that gain recognition by the day owing to the sheer profitability they offer. Many of these apps, such as Bigo Live and Chingari, while enjoying the immunity provided by §79, actively facilitate the dissemination of obscene content, directly contravening laws on obscenity. 

For instance, a recent investigation by the Economic Times found that various top-grossing apps on Google Play are based on a model wherein content creators publish videos and photos in exchange for a fee or ‘gifts’ from their audience. Bigo Live, a video-chatting app rated for users aged 12 and above, was found to host various illicit videos. Similarly, Chingari, a domestic platform with a one-on-one video feature, has been noted for acting as a front for digital prostitution. These platforms often lack age restrictions, enabling minors to access obscene material with alarming ease.

Another notable example is the case of OnlyFans in India. Records show that an overwhelming majority of content creators on the platform engage in selling sexually explicit content. Despite gaining international notoriety for its obscene content, the platform continues to act as a legally grey pornographic website in various countries, including India, owing to its unique branding and recognition. Telegram has likewise been known to host the dissemination of child sexual abuse material (CSAM) through encrypted channels. The evidence also suggests that Telegram hosts the immediate dissemination of rape videos, creating lucrative incentives for individuals to commit sexual violence against women and children. 

These platforms run unregulated in domestic markets and capitalize on the absence of adequate liability criteria and of regulations aimed at curbing their evasive strategies, encouraging an informal but highly profitable trade in sexual content under the guise of ‘content creation.’ 

Determining the Greyness of These Online Intermediaries

Domestically, the test for obscenity is drawn from Aveek Sarkar v. State of West Bengal, wherein the SC held that “the question of obscenity must be seen in the context in which the photograph appears and the message it wants to convey.” Applying this principle, it becomes evident that content uploaded on these platforms is intrinsically pornographic. Unlike artistic or cultural expression protected by the freedom of speech, this material thrives in a context explicitly designed to promote sexual content, often for financial gain. It is, therefore, a blatant violation of obscenity standards. 

Similarly, under Section 294 of the BNS, obscene content is classified as anything that is “lascivious” or “appeals to the prurient interest”. The explicit nature of these platforms, compounded by aggressive advertising strategies such as provocative push notifications like “a sexy girl is waiting for you” and the bombardment of users with sexually charged images and messages, fits squarely within this definition. The advertisements themselves, crafted to incite curiosity and demand for explicit content, go beyond merely facilitating obscenity; they actively market it. 

The July 31st DoT order is an emphatic statement of the Indian standpoint on adult content. The government stated that such content violates norms of decency and reiterated that it is classifiable as obscenity. While the order concerned pornographic websites, its reasoning would ostensibly extend to these intermediaries, which serve as tangible substitutes for such sites. This renders the use of these platforms in the country prima facie illegal, as their content satisfies the requisites of obscenity. 

The crux of the issue lies not only in the content hosted but also in the deliberate strategies these platforms employ to circumvent scrutiny while normalizing harmful behaviours. The platforms’ evasive tactics, such as concealing explicit material behind ambiguous advertising or directing users to private paywalls, make it difficult to establish clear-cut illegality. 

MEITY has, on multiple occasions, acknowledged the inherently unlawful nature of these platforms. Notices issued to intermediaries for hosting and proliferating CSAM and obscene content establish the government’s recognition that such activities constitute pornography and thereby directly contravene both obscenity laws and POCSO provisions. Yet, these acknowledgements have not translated into comprehensive enforcement, leaving significant room for exploitation. 

Evasive Advertising Strategies 

These platforms cleverly exploit other, relatively benign social media intermediaries to promote their services. Influencers, with large and often impressionable fanbases, act as informal marketers for these platforms, flagrantly violating government advisories on influencer advertising. A common tactic, known as “breadcrumbing,” involves using legally compliant platforms to redirect users to adult-content platforms like OnlyFans, Bigo Live, and Chingari. This strategy reflects a calculated effort to circumvent direct regulatory oversight while expanding consumer bases through seemingly legitimate channels. 

While this form of marketing is ordinarily inconspicuous, reports have shown the active role played by Meta’s platforms in promoting CSAM and other sexual content, which directly contributes to the promotion of these illicit channels. A report by Stanford University shows that Instagram’s algorithms connect paedophilic accounts to accounts known to sell underage sexual content, a major source of revenue for platforms such as Telegram and OnlyFans. 

Furthermore, these platforms’ advertising practices often blur the lines of legality, operating in direct violation of the ASCI Code and related guidelines. Chapter II of the ASCI Code states that advertisements must not contain any indecent or vulgar content that goes against prevailing standards of decency. Further, Guideline 7 of the Guidelines on Harmful Gender Stereotypes states that an advertisement must not sexually objectify persons to titillate viewers. Yet, these platforms routinely feature sponsored advertisements promoting adult content alongside corporate advertisements, in blatant violation of these rules. 

Further, once downloaded, these video-chatting apps push obscene notifications, such as “A sexy girl is waiting for you, go Match!”, and bombard individuals with sexually explicit imagery upon signing up, to ensure a steady source of revenue. Such notifications, together with the content that follows, wholly satisfy the thresholds for obscenity in India yet escape regulatory scrutiny, as their explicit nature is apparent only to users who have already consumed the content. 

The problem lies in the subtlety of these advertisements, which employ legally compliant platforms for their promotion. Social media influencers often use captions that, on the surface, appear to align with artistic freedom and expression. This subtlety makes it difficult to distinguish between artistic expression and the illegal promotion of obscene activities. Such posts fall short of explicit nudity or pornographic content but still function as indirect advertisements for the influencers’ accounts on these platforms, links to which are skilfully placed on their profiles rather than in the advertisement itself. These legal ambiguities are exploited by online intermediaries to shield themselves from liability.

Facilitative Thresholds  

The current domestic classification of online intermediaries under §2(w) presumes that they are passive facilitators and overlooks the distinct functions they perform, such as the substantial control they exercise over the publication of pornographic content and the promotion of illicit content. These functions render the platforms complicit in the illegality, take them outside the pigeonhole of a neutral third party, and warrant their treatment on par with accused individuals. 

Although such digital activities would ordinarily attract Sections 67, 67A and 67B of the IT Act, it becomes increasingly hard to ascertain intermediary liability owing to the massive volume of data and the inconspicuous nature of these functions. This is attributable to the widely construed definition of an intermediary in §2(w), as well as the blanket immunity conferred on ‘intermediaries’ under §79. At this juncture, the author proposes that online intermediary liability must be ascertained through facilitative thresholds to meet the evasive strategies employed by social media platforms such as Meta and OnlyFans. 

Section 79 of the IT Act insulates intermediaries from liability for third-party content, provided they function as neutral conduits. While Section 79(3) attempts to introduce a facilitative threshold by mandating that intermediaries act expeditiously upon receiving notice of illegal content, the section falls short of adequately ascertaining liability by shifting the regulatory burden onto the state. This creates a regulatory lacuna, as intermediaries like OnlyFans can claim ignorance or passivity, further complicating enforcement.

On the contrary, Article 14 of the EU E-Commerce Directive manoeuvres around this blemish by placing the onus on the intermediary. The provision was previously applied in Google’s case to determine whether the platform was ‘active’ in the alleged illegal activity, on which basis the court ascertained liability. The article lays down two thresholds: first, that the provider does not have knowledge of the illegal activity hosted on it; and second, that upon receiving such information, it acts expeditiously to remove it. 

This method allows intermediary liability to be judged on the ‘actual knowledge’ principle, followed in jurisdictions such as Australia and the UK, rather than leaving the state with the burden of establishing prior knowledge, which is ordinarily possible only after the massive dissemination of offensive content. This enables a proactive rather than merely reactive approach and thereby fosters a safe digital environment. 

The passivity presumed under §79 fails to address the contemporary strategies employed by digital platforms. Adopting a liability model similar to the EU Directive could bridge these gaps by introducing active monitoring duties and holding intermediaries accountable for negligence. This shift would not only align India’s regulatory regime with global standards but also ensure that emerging platforms are effectively governed. 

Conclusion

Modern intermediary platforms exploit the ambiguities in India’s legal framework to perpetuate and profit from illegal activities, primarily through the commodification of obscene content. Through their evasive advertising strategies, these platforms create a facade of legitimacy while systematically fostering and monetizing illegal content. The hypothesis underpinning this analysis is that the root of the issue lies in the passive liability framework under the IT Act, which shields platforms from accountability unless they fail to act upon receiving notice.

To remedy this, the article advocates adopting a facilitative liability threshold akin to the EU Directive, which holds intermediaries accountable for their active or negligent involvement in promoting illegal content and significantly diminishes the regulatory burden on the state. 

Tanya Sara George is a third-year B.A. LL.B. (Hons.) student at Maharashtra National Law University, Mumbai.

Image credits: Gustav Klimt, Dame mit Muff (1916)

The opinions expressed in the Blog are personal to the authors. The University does not subscribe to the views expressed in the article/blog and does not take any responsibility for the same.