12 August 2019

The Legality of Making Sense of Data

Data ‘sovereignty’ has become an inescapable buzzword in Indian discussions on data and its use. We’re told, quite accurately, that India with its large population generates what is essentially an untapped goldmine of data, and that we should, as a country, make the most of it. Unfortunately, it isn't entirely clear what that means in its specifics or how we could achieve sovereignty whilst protecting both national interests and individual rights without subsuming one into the other. In this context, the term 'data localisation' pops up often enough but it isn't obvious that the term is tremendously meaningful with reference to contemporary technology, particularly since it may not be easily implementable, if at all. Compounding the issue are doubts about whether we have mechanisms at our disposal comparable to the EU-US Privacy Shield which protects transatlantic data flows.

It’s been a while now since we’ve been grappling with large quantities of data: as the cost of storing data has fallen and the ease with which it can be saved has increased, so has our tendency to store all the data we can. After all, as a hoarder might explain: what’s retained just might come in useful one day, which would be all well and good if we limited ourselves to storing only our own data. That, however, is not the case: as individuals we may also store others’ data and contribute to their data being publicly exposed without their consent.


What is the enthusiastic automated suggestion to tag one’s friends in SocMed photos if not a barely-consensual breach of another person’s privacy? It’s true enough that a person can opt to disallow others from tagging them but the process is far from intuitive, and the default, under the standard terms of use of SocMed platforms, is often that such tagging is permissible. Without a certain degree of techno-legal savvy, the choice not to be tagged and to have at least that one aspect of one’s privacy protected is largely illusory.


Ultimately, the choices we make online tend to become the substance of data mining processes which find patterns in large amounts of data. The parameters by which those patterns are discerned, however, depend heavily on the beliefs of whoever designs the process: data mining is not neutral.


Consider something as simple as determining what length of hemline a dress sold in a particular area should have: you have data about the women in the region, the hemline lengths they wear, their ages, the percentage of them who are married, and their religion, all of which possibly play a role in the decisions they make. Except that maybe you fail to factor in ‘climate’ which, in that area, is the single greatest determinant of the choices women make. And, so, you present yourself with a series of assumptions about why women choose the clothes they do without realising that the length of hemlines may have far less to do with socio-sexual practices than with the simple desire to avoid either heat-stroke or hypothermia. That, of course, is a mistake which it is all too easy to imagine software techie dudebros making: the industry is not known to be especially welcoming of women.
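
To make the point concrete, here is a minimal, entirely hypothetical sketch of the omitted-variable problem described above, using synthetic data and invented variable names (temperature, married_pct, hemline_cm):

```python
# Hypothetical illustration of omitted-variable bias in data mining: hemline
# length is driven almost entirely by temperature, but temperature also happens
# to correlate with a demographic proxy. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

temperature = rng.uniform(5, 40, n)                          # the real driver
married_pct = 30 + 0.9 * temperature + rng.normal(0, 5, n)   # correlated demographic proxy
hemline_cm = 70 - 0.8 * temperature + rng.normal(0, 2, n)    # hemlines track climate only

def ols(X, y):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Model 1: the analyst forgets climate and 'discovers' a strong social pattern.
print("married_pct coefficient, climate omitted :", round(ols(married_pct, hemline_cm)[1], 3))

# Model 2: once climate is included, the demographic effect all but vanishes.
both = np.column_stack([married_pct, temperature])
print("married_pct coefficient, climate included:", round(ols(both, hemline_cm)[1], 3))
```

The first model happily attributes the climate-driven variation to the demographic variable; only the second reveals that the apparent social pattern all but disappears once climate is included.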


Big data, it could be argued, presents not so much the opportunity to eliminate our personal prejudices through the use of technology as the opportunity to express them at scale. And where data is collected indiscriminately, nobody is exempt from the consequences of attempts to analyse big data, which is why it is important that we have clear data protection rules and a transparent understanding of what we’re doing with data.




Legal Recognition

The intersection of human choice and technology is one which the law has been trying to traverse safely for some time now. In the landmark 2017 case of Puttaswamy v UoI in which the Supreme Court recognised that privacy is a fundamental right, it said:



"Data mining with the object of ensuring that resources are properly deployed to legitimate beneficiaries is a valid ground for the state to insist on the collection of authentic data. But, the data which the state has collected has to be utilised for legitimate purposes of the state and ought not to be utilised unauthorizedly for extraneous purposes. This will ensure that the legitimate concerns of the state are duly safeguarded while, at the same time, protecting privacy concerns. Prevention and investigation of crime and protection of the revenue are among the legitimate aims of the state. Digital platforms are a vital tool of ensuring good governance in a social welfare state. Information technology – legitimately deployed is a powerful enabler in the spread of innovation and knowledge.  A distinction has been made in contemporary literature between anonymity on one hand and privacy on the other. Both anonymity and privacy prevent others from gaining access to pieces of personal information yet they do so in opposite ways. Privacy involves hiding information whereas anonymity involves hiding what makes it personal. An unauthorised parting of the medical records of an individual which have been furnished to a hospital will amount to an invasion of privacy. On the other hand, the state may assert a legitimate interest in analysing data borne from hospital records to understand and deal with a public health epidemic such as malaria or dengue to obviate a serious impact on the population. If the State preserves the anonymity of the individual it could legitimately assert a valid state interest in the preservation of public health to design appropriate policy interventions on the basis of the data available to it."

The recognition of these issues, however, did not result in immediate legislative action: a contention made before the Madras High Court in a case between the Tamil Nadu Chemists and Druggists Association and Union of India, decided in 2017, suggested that the rules under the 1940 Drugs and Cosmetics Act governing the sale of medicines online were inadequate, and that "There is no guarantee for data privacy if the medicines are sold on-line. Disease and treatment are the private information of the patients, which cannot be made available for data mining and for commercial use by on-line pharmacists." Although the court did not delve deeply into the fear of data mining, an almost comparable issue made its way to the US Supreme Court in Sorrell v. IMS Health Inc., decided in 2011. In that matter, the court struck down Vermont's Prescription Confidentiality Law saying:



"Vermont law restricts the sale, disclosure, and use of pharmacy records that reveal the prescribing practices of individual doctors. Vt. Stat. Ann., Tit. 18, §4631 (Supp. 2010). Subject to certain exceptions, the information may not be sold, disclosed by pharmacies for marketing purposes, or used for marketing by pharmaceutical manufacturers. Vermont argues that its prohibitions safeguard medical privacy and diminish the likelihood that marketing will lead to prescription decisions not in the best interests of patients or the State. It can be assumed that these interests are significant. Speech in aid of pharmaceutical marketing, however, is a form of expression protected by the Free Speech Clause of the First Amendment. As a consequence, Vermont’s statute must be subjected to heightened judicial scrutiny. The law cannot satisfy that standard."

There was also a dissenting opinion in the matter by Justice Breyer who was joined by Justice Ginsburg and Justice Kagan:



"The Vermont statute before us adversely affects expression in one, and only one, way. It deprives pharmaceutical and data-mining companies of data, collected pursuant to the government’s regulatory mandate, that could help pharmaceutical companies create better sales messages. In my view, this effect on expression is inextricably related to a lawful governmental effort to regulate a commercial enterprise. The First Amendment does not require courts to apply a special “heightened” standard of review when reviewing such an effort. And, in any event, the statute meets the First Amendment standard this Court has previously applied when the government seeks to regulate commercial speech. For any or all of these reasons, the Court should uphold the statute as constitutional." 

It is not difficult to see that there are valid arguments to be made regardless of which 'side' one is on, and, sooner or later, these are questions which India will have to settle for itself. We are not going to be able to sidestep them in the push for Digital India although, so far, limited digitalisation has meant that we have been able to watch how these issues have played out in other jurisdictions without making firm commitments ourselves.




Discovery of Information in Litigation

One of the first issues that strikes one when it comes to data mining is how to balance personal privacy against public interest. For example, in litigation, are parties allowed to collect each other's data and use it in an attempt to disprove their opponents' claims? There is very little opposition to, say, insurance companies trawling through accessible images of people's holidays should they attempt to claim compensation for having their holidays ruined by a tummy bug while their online updates tell quite a different story. However, the standards we apply as a society to holiday insurance fraud are unlikely to be the same as those which would be applied to, say, rape. Would it be fair to require an alleged victim to turn over all of their communications to a third party in order to have them be sifted through to either corroborate or negate their allegations? What if those communications were anyway publicly accessible; could they then be used?

In the US, when a semi-professional basketball player claimed that he had become disabled as the result of an automobile accident in Vasquez-Santos v Mathew 2019 NY Slip Op 00541, decided on January 24, 2019, the court allowed eDiscovery noting that 'private social media information can be discoverable to the extent it "contradicts or conflicts with [a] plaintiff's alleged restrictions, disabilities, and losses, and other claims" (Patterson v Turner Const. Co., 88 AD3d 617, 618 [1st Dept 2011])' although it limited access to the plaintiff's accounts and devices in time to those items posted or sent after the accident and in subject matter to those items discussing or showing the plaintiff engaging in basketball or other similar physical activities.
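
Purely as an illustration of how such a scoped order might translate into practice, here is a hedged sketch, with invented field names, dates and keywords (Post, ACCIDENT_DATE, SUBJECT_TERMS), of filtering a set of posts down to the court-ordered time window and subject matter:

```python
# Hypothetical sketch of scoping discoverable social media items the way the
# Vasquez-Santos order did: only items after the accident date, and only items
# touching on basketball or similar physical activity. Field names, dates and
# keywords below are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class Post:
    posted_on: date
    text: str

ACCIDENT_DATE = date(2015, 2, 4)                       # placeholder date
SUBJECT_TERMS = ("basketball", "pickup game", "gym", "training")

def within_scope(post: Post) -> bool:
    """True if the post is both after the accident and about physical activity."""
    in_time = post.posted_on > ACCIDENT_DATE
    in_subject = any(term in post.text.lower() for term in SUBJECT_TERMS)
    return in_time and in_subject

posts = [
    Post(date(2014, 6, 1), "Great pickup game today!"),        # too early: excluded
    Post(date(2016, 8, 9), "Back on the court, basketball!"),  # in scope: produced
    Post(date(2016, 9, 1), "Dinner with family."),             # wrong subject: excluded
]

discoverable = [p for p in posts if within_scope(p)]
print(len(discoverable), "item(s) fall within the order's scope")
```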


At the moment, in India, we have no clear understanding of what is permissible and what isn't, much less of what should be permissible; we tend to 'play it by ear' and hope for the best, so to speak. We certainly recognise electronic documents but tend not to be certain of how to handle eDiscovery as a process.




Data Quality and Consent

Amongst the most important issues when it comes to data mining are the quality of data and consent to the data having been made available. Take the basketball player in Vasquez-Santos v Mathew, for example. The relevant court order states: "Although plaintiff testified that pictures depicting him playing basketball, which were posted on social media after the accident [which he claims made him disabled], were in games played before the accident, defendant is entitled to discovery to rebut such claims and defend against plaintiff's claims of injury. That plaintiff did not take the pictures himself is of no import. He was "tagged," thus allowing him access to them, and others were sent to his phone."


The court's order seems to indicate that the data available could be bad, in which case it would be useless to the defendant. This highlights the importance of data being kept up to date. However, apart from sparse provisions in the Privacy Rules, 2011, there is little in India which allows individuals to ensure that their data is in fact correct and up to date, and even those provisions would do little to aid anyone in a case such as this. What data quality rules exist tend to be scattered piecemeal across a number of different instruments: for example, the Drugs and Cosmetics (Amendment) Rules, 2019, were notified by the Central Government on 10 January 2019, to make the following insertion into the law:



84AB. Information to be uploaded by the licensee on online portal SUGAM. (1) The licensee granted license under this Part shall register with portal SUGAM (www.cdscoonline.gov.in) and upload information, as per the format provided in the said portal, pertaining to the licences granted for manufacture for sale or distribution of drugs and the information so provided shall be updated from time to time. (2) The information uploaded by the licensee with SUGAM portal under sub-rule (1), shall be verified by the concerned Licensing Authority.

Useful though they are in terms of helping to maintain accurate databases, disparate laws do little to enhance data quality in general, which could prove to be problematic.


The second issue which the basketball player's case highlights is one of agency and autonomy. The individual did not appear to have control over data about himself, possibly in part because he allowed himself to be tagged by others in SocMed posts. In essence, a version of his identity was being created by others.


While there are instances where one might have little sympathy for a person whose right to privacy is violated by the commentary of others on their lives – would we want an adult criminal's history to be entirely swept under the carpet, for example – the fact that an identity can be so constructed by others highlights the need to ensure a basic standard of privacy by design rather than to merely facilitate privacy by consent through standard form check-box contracts. That line of thought must, however, work side by side with an understanding of the fact that the right to control what is known of one may be outweighed by others' right to information.


In other words, one's right to be forgotten, or at least not to be indexed by a search engine, may be superseded by the public's right to know. This issue was considered by the ECJ in the case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González, which veered towards making personal information inaccessible but, the fight to have it made inaccessible having attracted as much attention as it did, the information sought to be hidden is now inescapable.


In India too, courts have tended to respect the right to privacy where there is no overriding public interest in having information made public, as was seen in the case of a rape victim who wanted her name redacted from the judgment mentioning her name that was reproduced online. There is, however, no clear statutory basis on which to enforce the right to be forgotten in India, and authorities are not limited to having information de-indexed.


Offline too, similar dynamics emerge. In the 2016 case of Mrs S Uppal vs Ministry Of Health & Family, the Central Information Commission upheld the denial of access to a doctor's service book which had been sought via an RTI application along with copies of certain pages stating:

"After hearing parties and perusal of record, the Commission observes that the query under RTI seeking copies of some pages of service book enmasse is nothing short of data mining and indeed is an invasion on the privacy of an individual. During the hearing the Appellant has stated that he seeks only those information from the service book which do not fall within the ambit of personal information of the employee and hence the personal information may be redacted while supplying remaining information. However, the RTI application filed by the appellant is not accordingly worded. Hence, it is advised that the Appellant may file fresh RTI application indicating his exact query. In terms of the celebrated decision of the Apex Court in the case of Girish Ramchandra Deshpande, there is no doubt that: "....the performance of an employee/officer in an organization is primarily a matter between the employee and the employer and normally those aspects are governed by the service rules which fall under the expression "personal information", the disclosure of which has no relationship to any public activity or public interest. On the other hand, the disclosure of which would cause unwarranted invasion of privacy of that individual...""

Cases that make it to the courts tend to be outliers but incidents in our own ordinary, everyday lives are not. It is now entirely common for a new acquaintance to run a quick online search of our names to learn more about us. Ideally, we'd like to be able to have some degree of control over what they find but that can be hard when one's doting aunt posts pictures of one covered with cake at the age of three. The photos themselves may be innocuous but they could well be images which we'd perhaps prefer not to share in, say, certain professional settings.




The Treatment of Data Mining

Thus far, Indian courts have had limited opportunity to consider data mining. In the 2016 case of Karmanya Singh Sareen v. Union of India, the Delhi High Court tangentially indicated that what would hold sway would be consent. This was in a matter that arose after Facebook's acquisition of WhatsApp gave rise to fears that users' data would be mined and misused.


Privileging consent is largely in line with current Indian jurisprudence and legal mandate: data mining is usually preceded by the acquisition of large data sets whether by buying databases or scraping websites for data. This could potentially give rise to a variety of actions under tort law, civil law, and criminal law including those related to breach of contract, commercial misappropriation and unfair competition, unjust enrichment, breaches of privacy and, possibly as a subset of contractual breaches, violations of confidentiality, and the violation of intellectual property rights not least in terms of ‘moral rights’ violations and copyright infringement (assuming the data acquired were copyrightable) as well as in terms of trade mark violations including reverse passing off.


Critically, unauthorised website scraping does seem to be frowned upon by statute, specifically Section 43 of the 2000 Information Technology Act, which forms a solid though perhaps misplaced foundation upon which to privilege consent above all else. Given that consent may come to mean little in a world where almost no one reads EULAs and other standard form contracts, and fewer still realise that doing so may be prudent, it would probably make sense to guarantee individuals baseline rights, the violation of which would render an agreement unconscionable and consequently unenforceable, so that consent does not have the opportunity to become the be-all and end-all of individual rights.


Consent is easily co-opted and choice invariably inhibited. No one should be able to accidentally consent to having their own life derailed which, given that big data is now ubiquitous, is a real fear.


(This post is by Nandita Saikia and was first published at IN Content Law.)

2 August 2019

#FOEIndiaSeries | 14. The Mechanics of Regulation

Free Speech in India

This is one of 14 articles (available via this page) through which I hope to share a sense of free speech and content law in India. Part I of this series considers the socio-legal basis of free speech law in India, Part II explores what regulation, both legal and social, says and, in some cases, what it should perhaps say while Part III, finally, looks at the processes through which free speech regulation is implemented in India.

Wherever possible, I've tried to avoid mention of matters I've been involved in myself. I've also tried to ensure that the series is accessible to non-lawyers.

The terms ‘child pornography’ and 'revenge porn' have been used simply because of how common they are, both in popular discourse and occasionally at law, even though neither term is accurate. 'Child porn' refers to indecent images of children and, where real children feature, is evidence of child abuse in and of itself. 'Revenge porn' generally refers to the non-consensual release of explicit imagery of a woman by a former partner of hers. It, too, is a manifestation of abuse, and is far more an expression of power than an expression of pornography.

Of course, none of the content of these articles is professional advice and it should not be relied on for any purpose. It is tinged with personal opinion, may not be accurate, and is incomplete.  

Posts in the Series

Part I.    The Foundations of the Law

1.    The Parameters of Indian Discourse    
2.    The Backbone of the Law    
3.    Legislative and Other Input

Part II. Regulating the Substance of Speech    

4.    Creative Content and Trade    
5.    Reputation and Honour    
6.    Keeping the State Functional    
7.    Maintaining Law in a Plural State    
8.    Women’s Existence in Patriarchy
9.    Sexual Abuse and Reportage
10.    Privacy and Rights-Based Legislation    
11.    Explicit Content: Choice, Consent and Coercion    
12.    State Paternalism and Public Interest

Part III.  The Processes of the Law    

13.    Keeping Track of Others’ Content
14.    The Mechanics of Regulation



Part III. The Processes of the Law


14. The Mechanics of Regulation


There exist laws which determine what speech is permissible; that is clear enough. There also exist formal mechanisms through which to have the law implemented, to determine what speech is permissible, and to curtail the dissemination of illegitimate speech. These mechanisms tend to function through the judicial system itself, via the use of pre-publication filters such as those of the Central Board of Film Certification, and through the aegis of various self-regulatory industry bodies which deal with various forms of content such as news, general entertainment, and advertisements.
There have also been numerous episodes where the powers that be have discouraged the exercise of free speech fearing that the dissemination of certain speech could lead to law and order problems. The Supreme Court has, however, time and time again indicated that where speech is legal, the State has a duty to protect those who express it. In a 1989 case, it went so far as to say: “We want to put the anguished question, what good is the protection of freedom of expression if the State does not take care to protect it? If the film is unobjectionable and cannot constitutionally be restricted under Article 19(2), freedom of expression cannot be suppressed on account of threat of demonstration and processions or threats of violence. That would t[a]ntamount to negation of the rule of law and a surrender to black mail and intimidation. It is the duty of the State to protect the freedom of expression since it is a liberty guaranteed against the State. The State cannot plead its inability to handle the hostile audience problem. It is its obligatory duty to prevent it and protect the freedom of expression.” [sic]
Under Article 19(2) of the Constitution which the Supreme Court referred to in this case, the freedom of speech and expression can potentially be restricted on a number of grounds but a disruption of ‘law and order’ is not one of them. With regard to the maintenance of law, it is only the possibility of a breach of ‘public order’ which can be a reason to curb free speech. ‘Public order’ is not quite the same as ‘law and order’ as the Supreme Court had already explained decades earlier, pointing out, in effect, that a breach of ‘public order’ involves a far more widespread and damaging breach of the maintenance of law than a mere breach of ‘law and order’ would be. It said:
“Does the expression "public orde[r]' take in every kind of disorder or only some? The answer to this serves to distinguish "public order" from "law and order" because the latter undoubtedly takes in all of them. Public order if disturbed, must lead to public disorder. Every breach of the peace does not lead to public disorder. When two drunkards quarrel and fight there is disorder but not public disorder. They can be dealt with under the powers to maintain law and order but cannot be detained on the ground that they were disturbing public order. Suppose that the two fighters were of rival communities and one of them tried to raise communal passions. The problem is still one of law and order but it raises the apprehension of public disorder. Other examples can be imagined. The contravention of law always affects order but before it can be said to affect public order, it must affect the community or the public at large.”

Despite its being crystal clear that the State has a duty to protect free speech and those exercising it, it is often the case that those who wish to speak keep themselves from saying all that they would like to say because of the propensity of an assorted group of non-State actors ranging from those with easily hurt feelings to those with an axe to grind rushing to court to attempt to have speech be kept from being disseminated.

Considering how vague the law itself often is in defining what constitutes legitimate speech, and how susceptible to interpretation laws which could suppress speech often are, there is an additional layer of regulation which often occurs before speech is communicated or transmitted: self-regulation. In the language of critics of the law, this is — not without cause — referred to in terms of the chilling effect laws have due to the manner in which they can be deployed, however illegitimately, by those who would prefer not to have certain speech be transmitted to anyone. After all, there is no stopping anyone who chooses to initiate legal proceedings against the dissemination of specific speech, and such proceedings are invariably time-consuming, risk-filled and expensive.

When it comes to corporations which deal with content that is essentially speech whether in the form of television programmes, films, music, or books, it isn’t at all uncommon for them to approach a lawyer before the publication of content to try to ensure that the works they intend to publish do not, at the very least, blatantly violate the law with reference to regulations that govern the content of speech, with reference to proprietary rights which determine who may disseminate specific forms of speech, and with reference to contractual obligations which could determine the permissibility of certain speech. This is simply because it can make sense for people to avoid saying what they do not feel a pressing need to say in order to try to avert the possibility of being drawn into legal proceedings of whatever nature.

Self-censorship doesn't always work as intended, even though there are fairly well laid out processes such as those of film clearance through which it may be conducted, in no small measure because the law too often lends itself to being interpreted by those determined to suppress speech in ways that are extremely hard to anticipate. This isn't entirely because of 'bad drafting' (though examples of less than stellar legislative drafting are not difficult to find) but because the law doesn't account for every possible situation which could potentially arise.

Instead, the law deals in generalities, and therefore necessarily falls short of 'direct applicability' to unique episodes in which free speech issues are raised. And, of course, with speech being what it is, and most people not repeating each other verbatim, each time a free speech issue arises, the issue tends to be unique while the relevant laws, as always, remain generalised. Anyone who deals with speech, whether to check what they themselves say or to assess what others have said, is obliged to deal with this discrepancy. The only saving grace, perhaps, is that the law requires comparable processes regardless of whether one is clearing or assessing the legitimacy of content although, of course, the outcomes can be rather different when one clears content for oneself and when an external arbiter such as a judge assesses the legitimacy of one’s content.

In the first category of content regulations appear laws which determine the fundamental permissibility of speech such as defamation law, and the so-called anti-blasphemy law. Clearing content for publication with regard to them is often a dicey affair because, although not all potentially problematic content would ultimately be held to be illegal even if legal proceedings were initiated in respect of it, it is not easy to predict how a court would view a specific instance of the expression of speech.
In the second category come laws and business practices which relate to proprietary rights: they determine whether otherwise permissible speech can be legitimately disseminated in light of who may 'own' a specific expression or manifestation of speech. For example, the speech in a book may be legal to disseminate from the point of view of purely substance-related content law but actually disseminating the words in the book could be illegal if the book were protected by copyright and one did not have authorization for dissemination either by law or from the copyright owner. And, so, from the point of view of content clearance, considering proprietary rights essentially involves attempting to ensure that those who intend to disseminate content have the right to do so and that their disseminating content would not result in the commission of infringement or in the unfair use of content.

Infringement is fairly well understood at law and involves the exploitation of statutorily-recognised proprietary rights in content without the backing of the law or the permission of rights-owners. These rights could be in the form of various intellectual property rights, and, to avoid violating them, it is important to ensure that those who disseminate the content in which they are embedded own the rights or at least have permission from the rights-owners to share the content. In other words, ‘the chain of title’ must be clear.
When content is created, it usually belongs to its author who could, for instance, be the writer of a film script. Continuing with the example, the author-owner may sell the script to a producer through an assignment deed, who may, after using it in a film, sell the rights to both the film itself and the script underlying it to a distributor who would ordinarily communicate the film in its entirety to the public. The complexity of the chain may vary and the clearance process is intended to ensure that none of the links in it are broken — any breakage may make it illegal for the ostensible owner of the final product to communicate it to the public. And, if the final product is a film, there could be several underlying elements (including music and lyrics) all of which contribute strands to what would ultimately form the chain of title.
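
By way of a purely illustrative sketch, with invented names and a deliberately simplified model of ownership, a chain of title can be thought of as a sequence of assignments in which each assignor must be the party who last received the rights; a break anywhere leaves the final 'owner' without a clear right to communicate the work:

```python
# Hypothetical, highly simplified model of a chain of title: a list of
# (assignor, assignee) links in which each assignor must be the party who last
# received the rights. Names are invented; real chains involve far more nuance
# (scope of rights, territory, term, underlying works, and so on).

def chain_is_unbroken(original_author: str, assignments: list[tuple[str, str]]) -> bool:
    """Check that each (assignor, assignee) link starts where the last one ended."""
    current_owner = original_author
    for assignor, assignee in assignments:
        if assignor != current_owner:
            print(f"Broken link: {assignor!r} purported to assign rights "
                  f"held by {current_owner!r}")
            return False
        current_owner = assignee
    print(f"Chain intact; rights rest with {current_owner!r}")
    return True

# Script written by the author, assigned to a producer, then to a distributor.
chain_is_unbroken("Scriptwriter", [("Scriptwriter", "Producer"),
                                   ("Producer", "Distributor")])

# A broken chain: the 'producer' who assigns was never actually assigned the script.
chain_is_unbroken("Scriptwriter", [("Producer", "Distributor")])
```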

In essence, unless specific content belongs to the public domain or is freely available for anyone to use, it must belong to someone. If it belongs to someone, except in a few cases specified by the law where dissemination does not require the permission of the owner, the owner’s permission must be obtained to avoid having the dissemination become illegal.

Apart from rights recognised by statute, however, are also rights recognised by business practice. So, for example, when a reality television show is produced, it usually follows a set format which is detailed in supporting documentation often called a ‘Production Bible’ developed from the stage of the conceptualisation of a show right up to the stage of its production. Business practice, now supported by judicial recognition in many jurisdictions, dictates that the original creator of the format has what are known as ‘format rights’ in the show. Although these rights are not explicitly defined by statute in India, if a production house were to develop a new reality TV show, it would generally obtain ‘format rights clearance’ through a lawyer to try to ensure that its own programme did not inadvertently violate the format rights in a pre-existing reality TV show. And, so, the proprietary rights which are recognised to subsist in content are not strictly limited to those which are defined by statute although it would almost certainly be possible to invoke copyright law in support of a format rights claim.

Proprietary rights are determined by a combination of statutory demand and contractual requirements. There is, however, a third category of law which is purely negotiated within broad limits set down by statute: contractual requirements over and above those mandated by law. For example, for a permission granted for the creation of a film or a book — whether it be permission to shoot at a particular location or to include an image in a book — the persons who grant permission may insist on an acknowledgement of some sort in the credits, or on a favourable mention made of their favourite charity; whatever is expected is usually articulated in a contract. And the limits of what the grantors of permissions may ask for are dictated only by the restrictions imposed by law and the extent to which human imagination may run. So, for example, they cannot legally ask that one commit a crime in exchange for the grant of permission but, assuming that their ‘asks’ are legal and that the relevant contract is valid, part of the clearance process is to ensure that those who receive permissions for the creation and dissemination of their works have fulfilled their contractual obligations.

And a fourth category of law attempts to ensure that statutory demands are met or, at least, that practices are followed which would preempt legitimate claims being made against or in relation to specific content. For example, the 1957 Copyright Act stipulates that there are authors and performers who have a moral right to claim credit should they not be attributed for their work. Due to this, generally, a ‘credit clearance’ would be conducted to attempt to ensure that all those who should be attributed either by law or through contract have in fact been credited in the work. In the case of authors' moral rights, the right is a right to claim credit rather than an automatic right to be credited. Nonetheless, it makes ethical and business sense to credit authors, if nothing else, to avoid a situation where an author who has the right to claim credit could perhaps stall the publication of a work by rushing to court to claim such credit.

The clearance of content for publication may take place before the content is created — every stage of the creation process may be cleared by lawyers who may look at the concept underlying the content, daily takes in the case of filmed content or drafts in the case of written content, and then at the completed content which, in these examples, would likely be a film, television programme or a book. That said, there are also times when it is only the completed product which would be subject to clearance. A distributor who had acquired a completed film, for example, or a publisher reprinting content acquired from abroad may have the work cleared locally particularly in order to attempt to avoid violating domestic content laws. In such cases, however, the content clearance process could easily become one of attempting to control the potential for damage since it would not be possible to completely rehash content even if it were problematic at such a late stage, and suggested changes would generally be limited to perhaps the deletion of a few scenes, the editing of a few words, and to obtaining licences from the owners of brands which were clearly visible.

Sometimes, content which enters the country from abroad is problematic not because there are specific issues with isolated parts of it that may violate the law. On the contrary, it is sometimes the case that there are few specific issues with the speech embedded in the content but that, from beginning to end, it is framed in a way that is inconsistent with Indian beliefs and the Indian understanding of history. Take a book set in India during the Raj, for example. If it were to speak of 1857, it may clearly view history from the point of view of imperialist white English people and be rife with descriptions of Indian savagery and the murder of innocent Europeans, completely ignoring the fact that what the English see as a mutiny against their presumably-benevolent rule in the subcontinent, Indians may well view as a war of Independence against the British and their entirely-illegitimate usurpation of power in India. In such cases, it is unlikely that the book would violate a specific content law in India, and it is entirely possible that a court would treat it as an academic work and not interfere with its author’s freedom of speech. Nonetheless, its entire framework and thesis could be so repugnant to Indian sensibilities that it could well face an unpleasant public backlash in India.

While there are good reasons not to interfere with the speech of authors because of the manner in which they frame issues — after all, it is often only through contested narratives that a semblance of ‘the truth’ may emerge — it is also worth asking whose stories are being told, for whose benefit they are being told, and whose voice is missing from narratives. Speech tends to be an instrument of the powerful to legitimise themselves, and the only legitimate way to counter the processes through which they use speech to reinforce their power tends to be by ensuring that a multitude of diverse voices are heard. Challenging received wisdom is therefore critical to ensuring that tales are not told only from the point of view of the powerful who, incidentally, are more likely to be backed by significant finances than those they have oppressed. This is rarely easy to achieve in practice especially since socio-financial capital tends to help those who are rich and well-placed to get their word out with far more ease than those who are poor and marginalised.

It is possible to ban products such as books and films from being imported into India or exported from India through a notification issued by the Central Government if the government thinks it is necessary to do so. These notifications may be partial or conditional, and they must be laid before Parliament. Also, they must be on one of the grounds listed by the statute and cannot be arbitrary; the grounds on which content may be banned include maintaining the security of India, public order, decency or morality. The statute also allows for the import or export of goods to be banned to protect copyrights, to protect national treasures of artistic, historic or archaeological value, and to prevent the dissemination of documents which are likely to prejudicially affect friendly relations with any foreign States or which are derogatory to national prestige. And, finally, bans may be effected to prevent laws from being contravened or for other purposes conducive to public interest. If an attempt is made to import or export banned goods, the goods are often liable to be confiscated.

The 1962 Customs Act, it is worth noting, deals with banning products and not with banning the speech which may be embedded in them. As such, even if the import of a book is banned, it may be possible, depending on the factual matrix, to print an Indian edition of the book and effectively bypass the notification banning its import into India from abroad. Also, although it doesn’t explicitly speak of these issues, the law is aware of the possibility of products which are part of the country’s cultural heritage being illegitimately taken out of India. This is an issue which is particularly important at a time when concerns about the repatriation of stolen artefacts and manuscripts from museums and private collections across the world, and, particularly, from States that were once colonial powers, are being voiced increasingly frequently.

Considering that many of these States even today have restrictive visa regimes, the effect of having content effectively hidden away and inaccessible to people who are not given visas could well be to impede scholarship, and to prevent a more complete understanding of historical circumstances from emerging. Although it is easy to criticise product bans and impediments to the cross-border transfer of products containing speech for their potential to curb free speech, there are also times when they may be necessary to preserve the possibility of engaging in speech that is both free and informed not least by ensuring that products in which speech is embedded do not become inaccessible by being sent to far flung shores. These are not easy waters to navigate particularly since a ban does not always suppress speech.

When concerns arise about content being banned or about free speech being illegitimately exercised, the courts can almost always be approached to address those concerns. The judicial system can, however, easily resemble a labyrinth to those unfamiliar with it. And to make matters even more complicated, free speech concerns are rarely matters of the law alone. They are informed by popular opinion which is, in turn, developed from a vast array of social, cultural, religious, economic, and historical inputs, not all of whose influence is easily discernible. It is therefore critical not to jump to conclusions about either the desirability of unrestrained free speech or the illegitimacy of restrictions placed on content particularly since speech, whatever form it takes, tends to mould society for better or worse.

(This post is by Nandita Saikia and was first published at IN Content Law.)

26 July 2019

#FOEIndiaSeries | 13. Keeping Track of Others' Content

Free Speech in India

This is one of 14 articles (available via this page) through which I hope to share a sense of free speech and content law in India. Part I of this series considers the socio-legal basis of free speech law in India, Part II explores what regulation, both legal and social, says and, in some cases, what it should perhaps say while Part III, finally, looks at the processes through which free speech regulation is implemented in India.

Wherever possible, I've tried to avoid mention of matters I've been involved in myself. I've also tried to ensure that the series is accessible to non-lawyers.

The terms ‘child pornography’ and 'revenge porn' have been used simply because of how common they are, both in popular discourse and occasionally at law, even though neither term is accurate. 'Child porn' refers to indecent images of children and, where real children feature, is evidence of child abuse in and of itself. 'Revenge porn' generally refers to the non-consensual release of explicit imagery of a woman by a former partner of hers. It, too, is a manifestation of abuse, and is far more an expression of power than an expression of pornography.

Of course, none of the content of these articles is professional advice and it should not be relied on for any purpose. It is tinged with personal opinion, may not be accurate, and is incomplete.  

Posts in the Series

Part I.    The Foundations of the Law

1.    The Parameters of Indian Discourse    
2.    The Backbone of the Law    
3.    Legislative and Other Input

Part II. Regulating the Substance of Speech    

4.    Creative Content and Trade    
5.    Reputation and Honour    
6.    Keeping the State Functional    
7.    Maintaining Law in a Plural State    
8.    Women’s Existence in Patriarchy
9.    Sexual Abuse and Reportage
10.    Privacy and Rights-Based Legislation    
11.    Explicit Content: Choice, Consent and Coercion    
12.    State Paternalism and Public Interest

Part III.  The Processes of the Law    

13.    Keeping Track of Others’ Content
14.    The Mechanics of Regulation



Part III. The Processes of the Law


13. Keeping Track of Others’ Content


The law may not only hold one responsible for one’s own speech but also for others’ speech particularly if one has a role in publishing it. There is case law which indicates that, for certain speech-related offences, it would be no defence to say, “But I didn’t write it!” if one were to be accused of a legal wrong. For example, if a book contained defamatory content, or, for that matter, were merely accused of containing such content, its publisher would not necessarily be able to escape legal liability simply because the text had been written by the author, a separate person. Publisher-author agreements may routinely contain clauses which speak of how authors will ensure that publishers do not suffer from adverse legal consequences as a result of publishing their works. However, it isn’t always clear that these clauses are meaningful in any sense of the word not least because one cannot contract one’s way out of criminal liability. And when it comes to defamation and many of the other grounds on which content may be assailed, it is criminal law which may be invoked, and not necessarily civil law.

A number of criminal laws contain explicit and almost identical provisions to tackle cases where corporate bodies, firms, and associations of individuals are accused of having committed offences. These provisions tend to state that when an offence is committed, in addition to the organisation itself, the persons who are in charge of and responsible to the organisation for the conduct of its business would be deemed to be guilty of the offence, and would consequently be liable to be punished. Such persons would only be able to escape criminal liability if they were able to demonstrate that the offence had been committed without their knowledge or that they had done what they could to prevent its having been committed. That said, if it were proved that the offence had been committed with the consent or connivance of any director, manager, secretary, partner, or any other officer of the organisation or that its commission was attributable to the officer’s neglect, the officer in question would be liable to being proceeded against and punished.

These are rather stringent provisions, and they have the potential to make life extremely difficult for individuals who work with organisations, and who are responsible for how the organisation conducts its business even if they are not necessarily involved in the nitty-gritty of its day-to-day functioning. For example, the named publisher of a newspaper could perhaps face legal proceedings for what appeared in the paper even though he may never have seen a particular image, and the decision to place it in the newspaper may have been taken by a colleague without reference to him. He may ultimately escape liability but even dealing with legal proceedings takes a toll.

When it comes to content which is placed online by users, the intermediaries who run platforms, whether they are online marketplaces or SocMed sites, have some amount of leeway which is granted to them by the 2011 Information Technology (Intermediaries Guidelines) Rules which were issued by the Ministry of Communications and Information Technology under the aegis of the 2000 Information Technology Act. These Rules place a number of obligations on intermediaries and require them to observe ‘due diligence’ — if they meet their obligations, intermediaries are entitled to be shielded from the full force of the law should users upload unlawful content on to their platforms.

In relevant part, under these Rules, intermediaries must publish their terms of use, access, and service, as well as a privacy policy. In the former document, they must inform users not to ‘host, display, upload, modify, publish, transmit, update or share’ prohibited information. Abbreviating what the Rules have to say, the Rules prohibit content which users have no right to post, harms minors, infringes proprietary rights, impersonates or misleads others in regard to the identity of the users who may also be the senders of messages, contains viruses or malware, threatens the safety or integrity of the state or the individuals in it possibly by inciting crimes. Additionally, prohibited information includes content which is ‘grossly harmful, harassing, blasphemous defamatory, obscene, pornographic, paedophilic, libellous, invasive of another's privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful’ in its nature. Intermediaries must not knowingly host or publish such information, or determine the communication of such information. If they remove content which contains prohibited information once they become aware of it either by themselves or through someone else who informs them of it in the manner contemplated by law, they would not generally be liable to be punished for the content having been uploaded or having been accessible through their platform.

The speech which is prohibited under the 2011 Information Technology (Intermediaries Guidelines) Rules falls into two categories: that which is clearly illegal under allied statutory laws, and that whose illegality would not be supported by statute. For example, there is absolutely no doubt whatsoever that paedophilic content is illegal under a number of statutes including one which specifically targets sexual offences against children ie POCSO, and the core criminal law statute: the 1860 Indian Penal Code. Neither is there any remotely credible argument to be made to the effect that such content, particularly where it features real children, should be legal in light of the harm it does. As opposed to this, consider content which is merely ‘disparaging’ — a great deal of content which is otherwise illegal also happens to be disparaging. It is difficult to imagine that defamatory speech would not usually be considered to be disparaging, for instance, particularly by those in relation to whom it was made. However, there are also cases where speech may be disparaging without otherwise being illegal.

Consider terms such as ‘wine and cheese liberal’, ‘Lutyens leftist’, or ‘Khan Market socialist’ with their references to ‘posh’ areas of Delhi. The terms essentially mock upper-class people for what may be perceived as their pretence of being ‘one with the poor’. They may, amongst the various reasons for which they are employed, be used by upper-class people who attempt to silence other people of the same class who seem to evince having a social conscience, and who purport to speak out about issues which primarily impact poor people. Alternatively, they may be used by disadvantaged or marginalised peoples to mock upper claste people who they see as simply being discriminatory or abusive, or, more insidiously, who they perceive as appropriating their voices and their experiences for their own benefit possibly by doing such things as making money by writing about ‘The Subaltern Experience’ or something analogous to it without necessarily the faintest idea of what they’re talking about.

It isn’t entirely inaccurate to refer to those who engage in such conduct as ‘cocktail activists’, and the like: it is, after all, to anyone familiar with Delhi, easy to picture the invariably upper-claste authors of such pieces scrolling through what a few marginalised people say online, picking up a few keywords, adding some likely extraneous material to those words, and pretending to have — or, perhaps, even worse, genuinely believing that they have — written empathetic and insightful text about what they believe are the trials and tribulations faced by marginalised persons despite being almost certain to have completely failed to go beyond AtrocityLite™ since their writing would invariably be entirely divorced from both their own experiences and those of marginalised people. Obviously, such an exercise is likely to be conducted at an upscale eatery whilst occasionally glancing up to thoughtfully look out at ‘the world’ (which could easily mean ‘a well-maintained attached garden’), sipping on ridiculously diluted coconut water on the rocks served looking like a cocktail, and typing away on one of the most expensive devices available in the market. It’s not a flattering picture, and it may well be a comical one, but what is almost unarguable is that there’s more than a sliver of realism embedded in terms such as ‘cocktail activist’ which inspires their adoption even though they are disparaging.

The point, here, of course, is that speech that is disparaging isn’t necessarily content which should be considered inappropriate or unacceptable. Prohibiting it forces on to everyone an aesthetic of civility which is all too often the mode through which the powerful communicate in a manner which may be seen to be temperate but which could easily and does often effect abuse. Civility is not a marker of propriety or of rightness. It is not a marker of respect in and of itself, and there is very little reason to promote the idea that non-abusive disparaging commentary is necessarily unacceptable especially when it is used by marginalised people who tend to be completely ignored when they are soft-spoken and polite. Unfortunately, this is not a view which is in perfect consonance with the law — the 2011 Information Technology (Intermediaries Guidelines) Rules do see disparaging content as problematic speech which deserves to be silenced. Of course, the term is not defined by the law, so intermediaries have a considerable amount of leeway to decide if content is actually disparaging and if it should be removed.

The Rules do not require intermediaries to remove content which is not prohibited. However, if they become aware of prohibited content and fail to remove it, they lose the protection of the shield which the Rules offer them. Due to this, even though the initial determination of whether or not content should be removed is in the hands of intermediaries, there is no guarantee that intermediaries would step in to protect free speech. They have a huge incentive to remove any speech or content in regard to which they may receive a complaint simply to ensure that they are accorded some protection should a criminal complaint be made in relation to the same content. The punishments levied by criminal law, of course, cannot be taken lightly and it is hardly fair to expect persons employed by various intermediaries to risk having to bear them. As such, the environment which is created by the structure of the Rules is hardly conducive to free speech.

The obligations of intermediaries have been weakened through clarifications made after the initial issue of the Rules which, amongst other things, lengthen the complaint response time available to them and clarify what constitutes intermediaries' knowledge of the upload of unlawful content by users. To change the environment, however, that is not enough: it is the basic structure of the Rules, which turns intermediaries into arbiters of the legitimacy of speech in the first instance, that is problematic.

As with all concerns relating to free speech though, this is not a black and white issue. There are forms of speech made online which, amongst other things, are hateful, which incite violence against specific persons and communities, and which should be removed. It isn’t at all clear what a workable and accessible alternative to requiring intermediaries to remove content falling within certain prohibited categories would be. Neither is it clear whether it would be possible to detail what would fall within the scope of prohibited content with a great deal of specificity. After all, the law cannot predict and prepare for every possible expression of thought it may encounter. Laws, by their very nature, tend to deal in generalities and hope for the best. Sometimes, that isn’t enough.

The idea of holding one person accountable for the actions or speech of another isn’t new to the law. It has been closely associated with what the law refers to as ‘vicarious liability’ and usually arises where a person commits a wrong for which the person who is responsible for his conduct may also be determined to be responsible. In addition to this, statutes sometimes recognise committing a crime as a primary offence, and facilitating the commission of that crime as a secondary offence. For example, the 1957 Copyright Act has, for decades, treated committing copyright infringement as a wrong which it has called ‘primary infringement’, and called providing a place at which to commit the wrong ‘secondary infringement’ although the statute doesn’t explicitly label the acts in those terms. Nonetheless, it has stipulated punishments for both primary and secondary infringement.

Those who ostensibly manage content are not always realistically in a position to manage what is said in groups they administer or on platforms they control. There have been attempts to hold the administrators of groups on messaging services responsible for what is said on their groups although it isn’t at all obvious that they would have much control over what members of the group post. They could conceivably try to take preventive measures such as by laying down clear ground rules, or possibly responding by removing members who post inappropriate content from groups. However, unless they actually moderate each post before it is published — and that is not how most groups work — it is difficult to see how it is fair to expect a group administrator to be able to keep potentially illegal content from being published.

As technology has developed, so too have the concerns relating to speech, and how best to regulate it particularly when that speech is generated by users of specific platforms who may number in the millions. The answers have rarely been clear, and applying standards seemingly without thinking through them or by apparently applying them without any consideration to context has led to results which are less than ideal. A platform could easily say, ‘No nudity permitted,’ in a bid to curb pornographic content, and then go on to censor images of breastfeeding women even if specific images are not explicit whilst simultaneously allowing images of shirtless men to remain visible.

There have also been attempts at implementing ham-handed measures to restrain problematic speech online. For example, suggestions have been made to censor advertisements for clinics which conduct sex-selective abortion by simply blocking the term ‘sex selection’ along with other terms allied to it. What this inadvertently does is restrict access not only to the advertisements which are problematic but to all studies relating to the problem, all reportage of the issue, and, possibly worst of all, all information relating to how such practices may be reported to the authorities if members of the public suspect that they are being carried out at a clinic. The idea of relegating an essentially social problem to ‘technology’ to solve without human input is attractive but it is very rarely workable except — in theory, at any rate — in the rare case where the solution is extremely fine-tuned to address the specific problem it seeks to solve.
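
The overbreadth is easy to see even in a toy example; the sketch below, with an invented blocklist and invented page titles, shows a naive substring filter suppressing research, reportage and a complaints helpline along with the advertisements it was meant to catch:

```python
# Hypothetical illustration of how blunt keyword blocking overblocks. The
# blocklist and page titles are invented; the point is only that a bare
# substring match cannot tell an advertisement from a study or a helpline.
BLOCKED_TERMS = ["sex selection", "sex-selective"]

pages = [
    "Clinic ad: guaranteed sex selection packages",              # the intended target
    "Working paper: trends in sex-selective abortion",           # research, overblocked
    "News report: raids on clinics offering sex selection",      # reportage, overblocked
    "How to report suspected sex selection to the authorities",  # helpline, overblocked
    "Maternity nutrition guidelines",                            # unrelated, allowed
]

def is_blocked(title: str) -> bool:
    lowered = title.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

for title in pages:
    print(("BLOCKED " if is_blocked(title) else "allowed "), title)
```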

Even with human input, it is rarely easy to regulate speech. Copyright infringement, for example, is routinely addressed by having access to links to webpages and websites (which make available infringing copies of works) be disabled usually at the ISP level. The problem with this in practice is that it is invariably effected by having a list of ‘infringing links’ be made by investigators who pass it on to lawyers who may simply (quite literally) ‘cut, copy, paste’ it into a plaint or other document asking for the links to be blocked on the ground of infringing content being available via them. If there is a mistake which is made at any stage of this process right from the time when the list is made, there is a good chance that it will not be corrected by anyone dealing with the issue due to a paucity of time and resources to keep double-checking the list which may contain several hundred items. And, so, it could well be that it is only when URLs begin to be blocked that infuriated internet users could perhaps find that non-infringing websites which they rely on may have been blocked. The simplest solution is obviously to check the list carefully before issuing orders to block sites but doing so takes resources which the authorities are unlikely to have at their disposal. And it is just as unlikely that anyone to whom an order was issued would refuse to block websites, and thereby expose themselves to legal risk.
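
A hedged sketch of the kind of sanity check the paragraph above suggests is rarely done: before a list of allegedly infringing URLs is pasted into a blocking request, even a simple pass can remove duplicates and flag entries that would block an entire well-known platform rather than a specific infringing page. The 'known legitimate hosts' and the sample URLs here are invented for illustration.

```python
# Hypothetical pre-filing check of a URL blocklist: de-duplicate entries and
# flag anything that targets a whole well-known platform rather than a specific
# page. The 'known legitimate hosts' and the sample list are invented.
from urllib.parse import urlparse

KNOWN_LEGITIMATE_HOSTS = {"archive.org", "github.com", "wikipedia.org"}

def review_blocklist(urls: list[str]) -> list[str]:
    """Return a de-duplicated list, printing a warning for suspicious entries."""
    cleaned, seen = [], set()
    for url in urls:
        if url in seen:
            continue                       # drop exact duplicates
        seen.add(url)
        parsed = urlparse(url)
        host = parsed.netloc.removeprefix("www.")
        if host in KNOWN_LEGITIMATE_HOSTS and parsed.path in ("", "/"):
            print("WARNING: entry would block an entire legitimate site:", url)
            continue
        cleaned.append(url)
    return cleaned

sample = [
    "http://piratedmovies.example/film-123",
    "http://piratedmovies.example/film-123",   # duplicate
    "https://archive.org/",                    # whole platform, almost certainly overbroad
]
print(review_blocklist(sample))
```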

The law demands that some people bear responsibility for what others say but actually trying to determine who would be responsible and to what extent involves treading on a rickety structure in a minefield. It is not easy for anyone who is involved, and it is not yet clear what the best way to regulate speech is especially when it is online. While issues like copyright infringement are primarily about the enforcement of proprietary rights, other issues such as being able to freely speak (without being attacked by armies of trolls or being unfairly governed by the terms of use of private entities) about subjects ranging from illnesses which we may suffer from to human rights abuses we may be subject to affect all our lives. What we have to say may not pass the tests of civility or otherwise be aesthetically pleasing but our speech — and our silences — influence the way our society is shaped and the manner in which we live our lives. Free speech is an issue which all of us have a stake in, although the law does not yet seem to have decided exactly what its role is or how to implement what it seeks to achieve.

(This post is by Nandita Saikia and was first published at IN Content Law.)
