Sometimes anthropomorphic and at other times decidedly not, automata have long been a source of unease to humans. They have straddled the line between the animate and the inanimate, between human beings and machines, never quite one or the other: in myth, in history which may as well be myth, and, increasingly, in what we believe to be ‘real’ life, not so much through the mechanical automata of yore as through what could be considered their contemporary digital counterparts: deepfakes, intangible and deceptive.
Automata were presented as gifts to the powerful, not least during the Enlightenment, perhaps reminding recipients that those who presented them possessed considerable skill and were not powerless themselves. In doing so, they interacted with hierarchies of social power: dismantling them, reinforcing them, rejuvenating them, and even creating them. Often seemingly benign and charming, and perhaps intended to be just so by their makers and for their contemporaries, automata can, in retrospect, be placed in historical contexts which consider them tangible exhibits of the social and cultural anxieties of their times. It is not especially surprising to learn that Marie Antoinette was presented (albeit not by French revolutionaries) with a spectacular lady musician automaton a few years before she was beheaded in the throes of the regime-changing French Revolution.
The history of automata is often murky and ambiguous. We are not sure who or what the golden maidens who helped Vulcan, the lame god, were: automata, perhaps, or merely dynamic engravings on armour; Homer’s words, passed down to us, are open to interpretation. We know of the belief in the oracles of the ancient world, which sometimes lingers on, but not always of the mechanics of their speech, of priests and speaking tubes possibly having facilitated the rendition of their prophecies. We study how automata worked their way into the arts from at least the time of the Renaissance (as references to speaking heads and the elaborate theatre props of the time indicate) and have experienced how, in what must once have been an astonishing reversal, the animate has come to emulate the inanimate not just in the arts but in everyday life. Synchronised dances performed by troupes such as the Tiller Girls and those incorporated into ballets like Swan Lake demonstrate how thin the line between automata and human beings can be, as do, in contexts with little claim to aesthetics, the routines of factory workers set to repetitive, unthinking work.
Tools of Capital, Subjects of Free Will
Human beings, except perhaps for the most privileged, were arguably reshaped into automata in service of capital and to realise the dreams of what was to become the industrial age. While it is impossible to deny that large numbers of people, particularly in the global South, are still treated as automata, the self is no longer merely transformed in physical form between the animate individual and the inanimate automaton in the public sphere: it is fragmented into infinitely reproducible virtual components of voice and likeness.
Thankfully, we do not yet have to deal with human clones off the written page. Still, what we must contend with is not the mere fragmentation of mind and personality, whether trauma-induced dissociation or the bifurcation of public and private ‘face’ prompted by supposedly good manners, which we have always known, but a stripping down of our identities into their fundamental constituent elements, all too often followed by the reproduction and monetisation of those elements, not necessarily with our consent.
Entirely unsurprisingly, the piecemeal dismantling, reconstitution, reproduction and sale of our identities, now often used to create the virtual counterparts of automata and ventriloquists’ dummies, never quite dissociated from us and sometimes going so far as to masquerade as us, is cause for considerable consternation. These practices have the potential not only to create disinformation at hitherto unseen scales in public life, as when artificially-generated images based on components of individuals’ identities are used to create political content, but also to upend private life when, for example, such images are used to portray explicit encounters which never occurred in real life. They are potentially tools of crime and fascism. They are also potentially tools of benign entertainment and instruction in educational settings. And we are forced to strike a balance between the benign and the malign, between permissible free speech about us and unjustifiable abuse of facets of our identities. ‘If it were held otherwise, an entire genre of expression would be unavailable to the general public’1 with caricature, lampooning and parody simply being excised from our linguistic florilegia by law.
The technology being used to recreate human beings in virtual forms, often as composites of real and imagined individuals indistinguishable from recognisable people, is amoral. Like most technologies, it does not, by itself, set out to create unjustifiable or illegal upheaval in either public or private spheres, but the ease with which it can be used to do so has meant that the law has desperately tried to regulate both the technology itself and its use. One of the primary mechanisms through which it has aimed to protect real human beings has been by dragging intellectual property and tort laws into the 21st century through personality rights first formulated long before the digital age.
Rights to Images of the Self
A network of rights culled from constitutional law,2 statutory law, and tort law protects so-called personality rights: it protects individuals and aspects of their personalities directly (partly through constitutional law, which protects individuals’ privacy), protects content drawn from individuals’ personalities (primarily through intellectual property laws), and protects individuals against unfair practices conducted using aspects of their personalities (through data protection, competition, and consumer protection laws). Entwined along each of these branches of law is tort law, which tends to complement rights granted by constitutional and statutory law; tort law recognises the right to privacy, the right of publicity, the right not to be cast in a false light, and the right not to have content incorrectly passed off as being associated with oneself. It also helps fill gaps in statutory law: for example, statutory Indian copyright law does not explicitly recognise the right against misattribution as a moral right, but it is possible to attempt to assert such a right through tort law.
Together, the laws which protect personality rights provide de jure, if not de facto, legal assurance of individuals' privacy and their right to be left alone on the one hand and, on the other, facilitate individuals’ right of publicity by enabling them to share chosen aspects of themselves while curtailing others from doing so.
The choice of which aspects of individuals’ identities and (private) lives may be publicly shared belongs to individuals themselves although extremely broad exceptions to this general rule exist particularly to facilitate reportage featuring public figures. One of the ‘burdens of public office’ is that it is ‘difficult to segregate the private life of the public figures from their public life’, as the High Court of Delhi recognised in relation to Maneka Gandhi.3
Thus, individuals who are not public figures tend to have a greater right to privacy than those who are public figures or who thrust themselves into the public eye, and, conversely, well-known individuals tend to have a greater or, at any rate, more-frequently asserted right of publicity than others which enables them to control the commercialisation of their identities and portrayals of their lives.
The law does not always require individuals' consent to be obtained before components of their identities or portrayals of their lives are used to synthesise and publicise content featuring them, whether as a whole or through components of their identities and, even if consent is obtained, that consent may not be adequate to legally justify such acts.4 The key to legal justifiability may lie in the perception generated by the content and in the possibility of the occurrence of unjust enrichment at the expense of the individual whose life and likeness has been used.
To an extent, disclaimers distancing content from specific individuals may help mitigate legal exposure provided that they are not misleading5 ‘else, law would be lending its imprimatur to fraud, and no less’ as the High Court of Delhi put it while pointing out that none of the decisions placed before it in a case pertaining to a film depicting the life of an actor ‘postulates, as a principle of law, that the insertion of a disclaimer, disclaiming relationship between the events and characters depicted in the film and real persons would suffice to negate the possibility of any such connection or relationship existing’.6 As a starting point, this position makes sense. However, generative artificial intelligence has significantly reshaped the landscape in which personality rights are protected, asserted, and contested.
Identifying Synthetic Content
Personality rights largely arose in relation to portrayals of individuals and their lives in a time before specific aspects of an individual's personality could easily be digitally isolated and mashed together with the fantastical. While the law does not generally require the insertion of disclaimers elucidating the nature of content which portrays individuals, such disclaimers may be demanded of content publishers, inter alia, under Rule 12(5)(c) of the 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules and, in the case of films, by the Central Board of Film Certification. While most disclaimers tend to focus on highlighting the fact that the content to which they are appended is not an accurate portrayal of real life, it has become prudent to mark synthetic content (created or generated by or with the support of artificial intelligence) as such, in a manner discernible by the content’s potential audience.
An advisory issued by MEITY in March 20247 required that the ‘inherent fallibility’ of AI and the possible ‘unreliability’ of its output be appropriately labelled in cases where the AI was undertested or unreliable (without specifying what sort of testing would be considered adequate or whether the labelling should be visible only to users of the AI or to viewers of its output as well). Still, it appears that labelling the output of AI would allay some of the ethical concerns raised by the uncertain sources of the output of generative AI, as well as future-proof the content itself should legislators and policymakers later mandate such labelling.
For now, the focus of policymakers seems to have been on identifying bad faith actors when necessary rather than on regulating the receipt and perception of synthetic speech and content: under MEITY’s March 2024 advisory, for example, deepfakes and other synthetic content should be ‘labeled or embedded with permanent unique metadata or identifier’ in cases where intermediaries permit content synthesis using their software or computer resources, and users avail of these facilities to synthesize content which could be used as misinformation or deepfakes.
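The advisory does not prescribe what a ‘permanent unique metadata or identifier’ should look like in practice. Purely as illustration, a minimal sketch of one possible approach follows: a content hash serves as the stable identifier, packaged with explicit disclosure fields that a viewer-facing label could draw on. The function name, field names, and generator string are all hypothetical, not drawn from the advisory or any standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content: bytes, generator: str) -> dict:
    """Build a hypothetical provenance record for synthetic content.

    A SHA-256 digest of the content bytes acts as a stable unique
    identifier; the remaining fields disclose that the content is
    synthetic and how it was produced. (Illustrative only: the MEITY
    advisory does not prescribe any particular format.)
    """
    return {
        "identifier": hashlib.sha256(content).hexdigest(),  # unique, content-derived ID
        "synthetic": True,                                  # explicit disclosure flag
        "generator": generator,                             # tool said to have produced it
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: label a stand-in blob of synthetic image bytes and serialise
# the record so it could be embedded in, or shipped alongside, the file.
label = label_synthetic_content(b"\x89PNG...synthetic image bytes", "hypothetical-gen-v1")
record = json.dumps(label, indent=2)
```

Deriving the identifier from the content itself, rather than assigning it arbitrarily, means the label can later be checked against the file it claims to describe, which is one way an intermediary might make such labelling ‘permanent’ in the advisory's sense.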
Further, despite this regulatory lacuna, the 2016 Rights of Persons with Disabilities Act requires government websites and other online content to be accessible, via Sections 40 and 46 respectively, buttressed, in some cases, by an advisory issued by the Ministry of Information & Broadcasting in April 2025.8 If the requirements of these provisions were implemented, a significant fraction of online content, synthetic or not, would likely come to interact with artificial intelligence if only to enhance accessibility: by automatically generating subtitles, describing the content of pictures in text (a task at which AI has not been known to excel), and the like. That being the case, a time when it becomes necessary to label synthetic content for what it is is likely not far off.
Current Legal Approaches
For the most part, legal discussion of the infringement of personality rights facilitated by artificial intelligence tends to take the form of exploring how the rights of celebrities may be protected; the discourse appears to be primarily driven not by abstruse academics but by commercial expedience. Celebrities, after all, form the section of society whose personality rights are most likely to be asserted either by themselves or by corporate entities acting with regard to them not just because celebrities’ livelihoods and corporate profits are closely tied to monetising those rights but also simply because they are amongst the few who can afford to initiate legal processes for purposes that are not always, in some way, existential.9
Court cases related to personality rights, too, tend to involve celebrities10 with the result that jurisprudence pertaining to the concerns of people who are not public figures is not as well developed even though they may also suffer egregious harms through the violation of personality rights, whether by being targeted by scammers impersonating their relatives to extract money from them, by having disgruntled former partners distribute synthetic ‘revenge’ porn in which they ostensibly feature, or in other ways which cause significant distress.
In some circumstances, it may be possible to counter the distribution of synthetic content in violation of personality rights by invoking formal legal processes which do not (immediately) refer to courts through mechanisms set up by sui generis statutes such as those pertaining to consumer protection and, possibly, data protection.
The 2019 Consumer Protection Act, for example, recognises the rights of consumers to be protected from unfair trade practices11 which it defines as the adoption of unfair methods and deceptive practices to promote the sale, use or supply of goods or provision of services12 (including by the issue of misleading advertisements13) as well as to seek redressal against both unfair or restrictive trade practices and unscrupulous exploitation.14 The statute contemplates the establishment of a Central Consumer Protection Authority to ‘regulate matters relating to violation of rights of consumers, unfair trade practices and false or misleading advertisements which are prejudicial to the interests of public and consumer’15 and sets up a three-tiered dispute redressal mechanism at district, state, and national levels as well. Further, although it supplements rather than derogates from other laws,16 the 2019 Consumer Protection Act permits competent courts to take cognizance of certain offences, such as those pertaining to misleading advertisements and punishable with imprisonment and fines under Section 89,17 only if a complaint is filed by the Central Consumer Protection Authority or a duly authorised officer.18
For the most part, however, the enforcement of personality rights is through courts, which tend to interpret traditional intellectual property laws in purposive and, occasionally, novel ways to meet the challenges presented by artificial intelligence. In legal discourse, too, conversations about the legitimacy of generative artificial intelligence understandably tend to hark back to fundamental concepts such as the idea-expression dichotomy of copyright law, and to split hairs between goodwill and reputation with reference to trade mark law: these concepts form the basis of our understanding of what is legally permissible.
The Inadequacies of the Current System
It is not just celebrities who are adversely affected by the violation of their personality rights. The ease with which those rights can now be violated using synthetic content makes everyone vulnerable to abuse and exploitation, and while our jurisprudence appears to be developing along lines one might hope for, our enforcement mechanisms lag far behind.
The transmogrification of the self into the realm of the inanimate is not a new problem; we have seen it manifest repeatedly through history, with personality and being misappropriated and identity made a plaything of power. What is new about the problem we now face is its scale and spectrum: in its newest avatar, brought into being with the support of artificial intelligence, the consequences of synthetic content being publicly distributed are potentially swift, devastating, and irreversible. We do not currently have legal mechanisms swift or efficient enough to counter it, and the need of the hour is to create such mechanisms.
The violation of personality rights is not merely a commercial issue which affects companies and celebrities. On the contrary, it has become a fight to retain the integrity of the self. It is a fight which, to a greater or lesser extent, affects all individuals in the face of what has effectively become an endless stream of potential threats from scammers to malcontent acquaintances, all of whom have the capabilities of generative artificial intelligence available to them at the press of a button.
NOTES
[1] High Court of Delhi CS (OS) 893/2002 on 29 April 2010, DM Entertainment Pvt. Ltd. v Baby Gift House, MANU/DE/2043/2010 http://student.manupatra.com/Academic/Studentmodules/Judgments/2022/June/MANU_DE_2043_2010.pdf
[2] Supreme Court of India 1994 SCC (6) 632 on 7 October 1994, R Rajagopal v. State of Tamil Nadu, https://indiankanoon.org/doc/501107/; SAIKIA, NANDITA, Business Standard on 25 August 2017, SC's Right to Privacy ruling does more for Indians than we had hoped for, https://www.business-standard.com/article/economy-policy/sc-s-right-to-privacy-ruling-to-benefit-indians-more-than-we-hoped-for-117082500450_1.html (re Supreme Court of India Writ Petition (Civil) 494 of 2012 on 24 August 2017, Justice K S Puttaswamy (Retd.) v. Union of India, https://api.sci.gov.in/supremecourt/2012/35071/35071_2012_Judgement_24-Aug-2017.pdf)
[3] High Court of Delhi AIR 2002 DELHI 58 on 18 September 2001, Khushwant Singh v. Maneka Gandhi, https://indiankanoon.org/doc/1203848/
[4] High Court of Delhi 57(1995)DLT154 on 1 December 1994, Phoolan Devi v. Shekhar Kapoor, https://indiankanoon.org/doc/793946/ although the receipt of consent may significantly diminish an individual's ability to successfully make a claim against the publication of content which features them; Also see: High Court of Madras O.A. 417 of 2011 and Application 2570 of 2011 in C.S. 326 of 2011 on 27 August 2012, Selvi J Jayalalithaa v. Penguin Books India, https://indiankanoon.org/doc/26686171/
[5] See: Supreme Court of India Writ Petition (Civil) 36/2018 on 26 March 2018, Viacom 18 Media Private Limited vs Union Of India, https://indiankanoon.org/doc/69899441/; High Court of Madras O.S.A. 75 of 2020 and C.M.P. 2945, 2946 and 9240 of 2020 on 16 April 2021, Deepa Jayakumar v. A L Vijay, https://indiankanoon.org/doc/9075307/
[6] High Court of Delhi CS(COMM) 187/2021, I.A. 10551/2021 & I.A. 14436/2021 on 11 July 2023, Krishna Kishore Singh v. Sarla A Saraogi, https://indiankanoon.org/doc/71066966/
[7] Ministry of Electronics and Information Technology, eNo.2(4)/2023-CyberLaws-3 dated 15 March 2024: Due diligence by Intermediaries / Platforms under the Information Technology Act, 2000 and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, https://web.archive.org/web/20240406133511if_/https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
[8] Ministry of Information & Broadcasting, No. DM/5/2025-DM dated 22 April 2025, Advisory on adherence of Indian Laws and the Code of Ethics prescribed under the Information Technology (Intermediary Guidelines and Digital Media, Ethics Code) Rules, 2021 - reg., https://web.archive.org/web/20250507001649if_/http://www.mib.gov.in/sites/default/files/2025-04/advisory-dated-22.04.2025-1.pdf
[9] See, e.g.: (1) High Court of Delhi CS (OS) 893/2002 on 29 April 2010, DM Entertainment Pvt. Ltd. v Baby Gift House, MANU/DE/2043/2010 http://student.manupatra.com/Academic/Studentmodules/Judgments/2022/June/MANU_DE_2043_2010.pdf; (2) High Court of Delhi CS(OS) No.2662/2011 on 26 April 2012, Titan Industries Ltd. v. Ramkumar Jewellers, https://indiankanoon.org/doc/181125261/; (3) High Court of Madras Application 735 of 2014 and Civil Suit 598 of 2014 on 3 February 2015, Shivaji Rao Gaikwad alias Rajinikanth v. Varsha Productions, https://indiankanoon.org/doc/26058025/; (4) High Court of Bombay Interim Application (L) 17865 OF 2024 in COMM IPR SUIT (L) 17863 of 2024 on 7 March 2025, Karan Johar v. India Pride Advisory Private Ltd., https://www.livelaw.in/pdf_upload/karan-johar-vs-india-pride-advisory-private-ltd-590255.pdf
[10] See, e.g.: (1) High Court of Delhi CS(COMM) 652/2023 and I.A. 18237/2023-18243/2023 on 20 September 2023, Anil Kapoor vs Simply Life India, https://indiankanoon.org/doc/113724486/; (2) High Court of Delhi CS(COMM) 389/2024 on 14 May 2024, Jaikishan Kakubhai Saraf alias Jackie Shroff v. The Peppy Store, https://indiankanoon.org/doc/192161932/; (3) High Court of Bombay Interim Application (L) 23560 of 2024 in COM IPR SUIT (L) 23443 of 2024 on 26 July 2024, Arijit Singh v. Codible Ventures LLP, https://indiankanoon.org/doc/103091928/; (4) High Court of Delhi CS(COMM) 828/2024, I.A. 40391/2024, I.A. 40392/2024, I.A. 40393/2024, I.A. 40394/2024, I.A. 40395/2024 & I.A. 40396/2024 on 1 October 2024, Manchu Vishnu Vardhan Babu alias Vishnu Manchu v. Arebumdum, https://indiankanoon.org/doc/115544225/; (5) High Court of Delhi CS(COMM) 1053/2024 & I.A. Nos. 46360/2024, 46361/2024, 46362/2024, 46363/2024, 46364/2024 & 46365/2024 on 28 November 2024, Dr Devi Prasad Shetty v. Medicine Me, https://images.assettype.com/barandbench/2024-12-02/dkbh5704/Dr_Devi_Prasad_Shetty_Vs_Medical_Me.pdf; (6) High Court of Delhi CS(COMM) 514/2025 with I.A. 13419/2025, I.A. 13420/2025, I.A. 13421/2025, I.A. 13422/2025, I.A. 13423/2025, I.A. 13424/2025 & I.A. 13425/2025 on 26 May 2025, Ankur Warikoo v. John Doe, https://www.livelaw.in/pdf_upload/highcourtorder-601919.pdf; (7) High Court of Delhi CS(COMM) 578/2025 on 30 May 2025, Sadhguru Jagadish Vasudev v. Igor Isakov, https://images.assettype.com/barandbench/2025-06-02/urxxbv0x/_Sadhguru_Jagadish_Vasudev___Anr_V__Igor_Isakov___Ors_.pdf; (8) High Court of Delhi CS(COMM) 956/2025 on 9 September 2025, Aishwarya Rai Bachchan v. aishwaryaworld.com, https://images.assettype.com/barandbench/2025-09-11/2o7mije7/Aishwarya_Rai_Bachchan_V_S_Aishwaryaworld_Com___Ors_.pdf
[11] Section 2(9)(ii), 2019 Consumer Protection Act, India.
[12] Section 2(47), 2019 Consumer Protection Act, India.
[13] Section 2(28), 2019 Consumer Protection Act, India.
[14] Section 2(9)(v), 2019 Consumer Protection Act, India.
[15] Section 10(i), 2019 Consumer Protection Act, India.
[16] Section 100, 2019 Consumer Protection Act, India.
[17] “Any manufacturer or service provider who causes a false or misleading advertisement to be made which is prejudicial to the interest of consumers shall be punished with imprisonment for a term which may extend to two years and with fine which may extend to ten lakh rupees; and for every subsequent offence, be punished with imprisonment for a term which may extend to five years and with fine which may extend to fifty lakh rupees.”
[18] Section 92, 2019 Consumer Protection Act, India.