
GenAI in Educational Institutions

Regardless of what one feels or thinks about it, GenAI is here to stay as one of the tools at the disposal of students in academic institutions. It brings with it the obvious problem of presenting fabrications as fact to the undiscerning, as well as subtler harms such as possibly depriving students of the opportunity to hone their own drafting skills.

While trading in fantasy rather than fact is, for most practical purposes, anathema in academic settings, helping students to edit their own original essays and the like is not a clear and unequivocal harm. Such a function could, for example, support students whose first language is not English and who do not have the benefit of access to a human being both able and willing to help them polish their work; given the size of many classes, the teaching staff at most educational institutions would be immensely overworked if they were to spend vast amounts of time enhancing not just the substance but also the style of each student's writing.

The concern, of course, is that, regardless of its possible benefits, the use of AI in academic settings is unethical. Unfortunately, this fear is compounded by the absence of any firm recognition that not all uses of automation or AI for content generation in academic settings are, or should be considered, unethical, even though some such uses are employed as a matter of course.

For example, tools have been used in academia to generate citations in specific formats after users feed in information about sources. It's hard to imagine how such a practice could possibly be considered unethical, and, given how commonplace it is, it isn't even necessarily thought of as ‘using AI’ by those who employ it. On the other hand, prompting GenAI to write an essay of 2000 words on a particular subject and passing it off as one's own clearly isn't ethical, and, should the output of the AI either rehash another scholar's work or be a colourable imitation of it, it could well additionally plagiarise or infringe that scholar's work.
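To put the first of those examples in perspective, a citation generator of this sort does little more than slot user-supplied details about a source into a fixed template. The sketch below is purely illustrative (written in Python, with invented field names and a deliberately simplified citation format that does not correspond to any actual tool or style guide), but it shows how mechanical the exercise is:

```python
# Purely illustrative sketch: the field names and the citation format are
# invented for demonstration and are not drawn from any real tool or style guide.

def format_citation(author: str, title: str, publisher: str, year: int) -> str:
    """Slot user-supplied details about a source into a fixed citation template."""
    return f"{author}, {title} ({publisher} {year})"

# The user supplies the facts about the source; the tool merely arranges them.
print(format_citation("A. N. Author", "A Treatise on Copyright", "Example Press", 2020))
# Output: A. N. Author, A Treatise on Copyright (Example Press 2020)
```

There is no generation of substance here at all, which is presumably why such tools rarely attract ethical scrutiny.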

At such extremes, one probably doesn't need a specific policy to determine whether academic misconduct has occurred, but much AI use falls between the two extremes, such as when AI is used simply to polish a student's own writing. Whether or not uses which do not clearly fall at either end of the spectrum are acceptable should be a matter of policy.

Drafting policy relating to copyright infringement and plagiarism is relatively straightforward in that, by and large, we know what constitutes plagiarism, what amounts to infringement, and how the two can be avoided, even though the two terms, 'copyright infringement' and 'plagiarism', are often incorrectly used interchangeably.

Of course, plagiarism and copyright infringement often overlap, but each has quite a different import; it's entirely possible to commit one and not the other (reproducing a public-domain text without attribution, for instance, would be plagiarism but not infringement, while reproducing a copyrighted text with full attribution but without permission could infringe without plagiarising). Nonetheless, in broad terms, plagiarism can be guarded against by implementing clear guidance on citation, while copyright infringement can be avoided by reproducing or adapting copyrighted content only with its owner's consent, although there are exceptions to this rule of thumb, especially in the field of education.

What is far more closely tied to plagiarism than copyright infringement is the violation of so-called ‘moral rights’.

The Indian copyright statute recognises two moral rights, the rights of paternity and integrity, which essentially secure visible authorship and protect works from being mangled in ways which prejudice their authors' honour or reputation, although authors are required to be proactive to benefit from either of these rights.

To illustrate: under the (Indian) Copyright Act, 1957, the statutory right of an author to be identified as such is framed as the right to claim authorship rather than as the right to be credited, meaning that the moral right of paternity, as that right is known, is unlikely to be violated off the bat since, in all cases, the onus is on authors to claim authorship rather than on anyone else to accord credit.

That said, whether or not moral rights are implicated, a failure to accord due credit would almost certainly amount to plagiarism, a rather more nebulous concept than moral rights for the obvious reason that, while copyright and moral rights are tied to specific works recognised by copyright law, plagiarism extends to ‘mere’ ideas, which copyright law simply does not protect unless they are expressed in the specific forms and formats it recognises.

Throwing AI into the mix as a content generator adds complexity to determining whether academic misconduct has occurred, not least because the use of AI to generate content doesn't automatically result in copyright infringement or plagiarism, or, for that matter, in the violation of moral rights, although it could do so, particularly if the output were to reproduce an author's pre-existing words, whether verbatim or not, or, alternatively, appropriate an author's ideas without credit.

And, that being the case, perhaps the need of the hour isn't so much the development of targeted AI policies as the development of a more broad-based understanding of what AI is and what constitutes acceptable uses of it, followed by an exercise to update (what one hopes are) existing policies on plagiarism and copyright infringement so that they become more AI-aware, so to speak, both in recognising the impact AI could have on a student's work and in recognising that 'AI detection' tools should be used cautiously since they may not be entirely reliable.

At the end of the day, tools which automate plagiarism detection, as it is called, rely on some form of AI too, and they can find it challenging to distinguish between the reproduction of pre-existing text as illegitimate plagiarism and its reproduction as the legitimate quotation of precedent, the latter often being necessary, especially in the legal field. Thankfully, this is an issue which human intervention can usually ameliorate once a plagiarism report is generated since the difference between the two would almost certainly be obvious to someone working in the field. It is, however, likely indicative of a larger issue which currently plagues AI in all its forms: it cannot be completely trusted when it is left to function by itself without human intervention.
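To make that limitation concrete, consider a minimal, purely illustrative sketch of overlap-based matching (in Python, using invented text and a naive word n-gram measure, not the method of any actual detection tool). A passage copied from a precedent without attribution and the same passage quoted with attribution score almost identically, which is precisely why a human needs to read the report:

```python
# Illustrative sketch only: real plagiarism detectors are far more sophisticated,
# but many ultimately rest on some measure of textual overlap like the one below.

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text, ignoring case and surrounding punctuation."""
    words = [w.strip(".,;:'\"()") for w in text.lower().split()]
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub, src = ngrams(submission, n), ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

# A passage from a hypothetical judgment that students might legitimately quote.
precedent = (
    "the rule of law requires that executive action be traceable "
    "to a valid source of legal authority"
)

# Copied without attribution: plagiarism.
copied = (
    "It is settled that the rule of law requires that executive action be "
    "traceable to a valid source of legal authority in every case."
)

# Quoted with attribution: legitimate scholarly practice.
quoted = (
    "As the Court held, 'the rule of law requires that executive action be "
    "traceable to a valid source of legal authority'."
)

# Both score high on raw overlap: the metric alone cannot tell illegitimate
# copying apart from legitimate quotation; a human reviewer can.
print(f"copied: {overlap_score(copied, precedent):.2f}")
print(f"quoted: {overlap_score(quoted, precedent):.2f}")
```

Any tool built on overlap alone will flag both passages; telling them apart calls for exactly the kind of human judgment described above.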

Requiring intervention is perhaps more of a feature than a flaw when it comes to AI. As humans, we respond to our environments and to new information we encounter in ways that non-sentient intelligence cannot be relied upon to do. And, so, it is by ensuring that there is human intervention in what would otherwise be automatic processes that we can also ensure, to a degree, that those processes are fair and flexible. Ultimately, AI should be a tool at human disposal, not one which disposes humans or their plans.


Note: This post amalgamates comments on the subject made (without reference to the Jindal case) to Hera Rizwan, who incorporated a portion of them in her article Did AI Write The Exam? Jindal Law Student’s Fight May Set Academic Rules published on 8 Nov 2024, two LinkedIn posts available here and here, and some previously unpublished content.

Follow nsaikia on LinkedIn | This post is by Nandita Saikia and was first published at IN Content Law.