Unless you have been living under a rock, you have probably heard of, or even used, the viral AI application ChatGPT. As applications like OpenAI's ChatGPT have gained traction and become the talk of the town, it is important that we discuss the potential legal issues surrounding the advancement of Artificial Intelligence ("AI"). AI, as we currently understand it, generally refers to a machine or computer program that simulates human intellect: it is coded to think and act like a human. This advancement in computational intelligence has unlocked many new kinds of work, including work that requires creativity. Generative AI can replicate a person's way of thinking and thereby generate content.
It is safe to say that AI has slowly made its way into the life of the common man through readily available consumer electronics and voice assistants such as 'Alexa' and 'Siri'. These applications perform their tasks with quality comparable to, if not better than, that of humans. Technology has evolved to the point where such programs can produce works as complex as those created by humans, even without human intervention. Generative AI systems like Midjourney, ChatGPT and DALL-E, which create text and images in response to human instructions, have recently skyrocketed in popularity.
The Advent of ChatGPT: The brainchild of OpenAI, ChatGPT is built on the GPT (Generative Pre-trained Transformer) architecture, a deep learning model that creates text resembling human speech via unsupervised learning. Because it has been trained on a vast amount of text data, it can read and produce natural language fluently and precisely. GPT-3 (Generative Pre-trained Transformer 3), the model underlying the original ChatGPT, is a state-of-the-art language processing AI model developed by OpenAI. It is capable of generating human-like text and has a wide range of applications, including language translation, language modeling, and generating text for applications such as chatbots. With 175 billion parameters, it is one of the largest and most powerful language-processing AI models to date. To delve a little deeper, we asked ChatGPT itself how it functions, and its response is set out below:
“ChatGPT works by using a large language model trained by OpenAI. This model has been specifically trained to generate human-like text based on the context of a conversation. When a user enters a message, ChatGPT processes the input and generates a response based on the words and phrases in the message. The response is generated in real-time, and ChatGPT continues to update its response as the conversation progresses. This allows ChatGPT to engage in natural-sounding conversations with users on a variety of topics.”
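The description above boils down to next-token prediction: given the words so far, the model repeatedly picks a likely next word. The following is a minimal, self-contained sketch of that core idea only, using a toy bigram frequency table rather than OpenAI's actual transformer model; the corpus and function name are illustrative assumptions.

```python
# Toy illustration of next-word prediction, the core idea behind
# ChatGPT-style text generation. A real GPT model uses a transformer
# network with billions of parameters; this uses simple bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" ("cat" follows "the" twice in the corpus)
```

A real GPT model replaces the frequency table with a learned neural network and samples from a probability distribution over its whole vocabulary rather than always taking the single most frequent word, which is why its output varies between runs.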
Advancements: Recently, OpenAI introduced GPT-4, with better functionality and performance. The important differences between GPT-3 and GPT-4 are: (a) understanding images: you can provide visual input and ask questions about it; for example, you can provide a wireframe sketched on a napkin and ask the AI to produce a fully functioning website; (b) bigger memory and longer text output; (c) multilingual capability, answering queries in different languages; (d) safety: GPT-4 is significantly less likely to respond to requests for disallowed content; and (e) reduced 'hallucinations' relative to previous models, i.e., fewer situations in which the AI makes up facts and commits reasoning errors.
Anti-AI Trackers: The developers of ChatGPT have also launched an AI classifier for indicating AI-written text. A few other applications, such as GPTZero, have been developed that can distinguish text written by a human from text generated by large language models, particularly ChatGPT and the like. This is particularly important since schools and academic institutions have voiced concerns that ChatGPT's ability to write just about anything on command could fuel academic dishonesty and hinder learning.
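Detectors such as GPTZero have publicly described relying on statistical signals, one of which is 'burstiness': human writing tends to vary sentence length and structure more than model-generated text does. The following is a deliberately simplified sketch of that single signal; the function and the sample texts are illustrative assumptions, not GPTZero's actual implementation.

```python
# Crude proxy for "burstiness": how much sentence length varies in a text.
# Real detectors combine many signals (e.g. perplexity under a reference
# model); this toy function looks at sentence-length variation alone.
import statistics

def burstiness(text):
    """Population standard deviation of sentence lengths, in words."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "This is a sentence. This is a sentence. This is a sentence."
varied = "Yes. That morning the rain would not stop falling on the tin roof. Why?"
print(burstiness(uniform) < burstiness(varied))  # True: uniform text is less "bursty"
```

Even detectors that combine many such signals are known to produce false positives, which is one reason the academic-integrity debate mentioned above remains unsettled.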
The Burning Question: This brings us to the question of how to protect creative works generated by AI applications, more so when these applications start to show signs of creativity. What can be done when AI applications go a step beyond their programming and produce work with originality and novelty? What legal provisions classify such works?
This article discusses the interplay between works created by AI applications and the difficulty of granting ownership over them in the current legal scenario, and considers whether there is an imperative need for legislation around the world to be upgraded in step with the technology itself.
INTELLECTUAL PROPERTY RIGHTS CONCERNS
One of the main legal risks of ChatGPT and similar AIs is the possible infringement of intellectual property rights. ChatGPT is trained on large amounts of text data, including books, articles, and other written materials. If this training data contains copyrighted works, the output of ChatGPT may infringe the copyright in those works. This may result in legal action against the user, who may be deemed to have contributed to the infringement.
To analyze further, we examine a few basic tenets of copyright law in detail below.
Ownership: Under the principles laid down in the Berne Convention, copyright law insists on authorship. To assess whether AI applications are eligible for authorship, we first need to examine whether an AI application can be accorded a legal identity separate from that of its human programmer.
Disputes over copyright and authorship have long been observed in a variety of cases around the world. One instance is the well-known Naruto 'monkey selfie' case, in which a copyright ownership dispute arose after a monkey took a selfie with a photographer's camera. The owner of the camera claimed ownership of the photograph, while organizations representing animal rights, such as People for the Ethical Treatment of Animals (PETA), argued that the rights to the photograph belonged to the monkey itself. Ultimately, the court held that the monkey could not be the author/creator of the photograph. Courts must now decide whether this reasoning can be applied to the question of ownership of works created by AI applications.
Legal Identity: As per Black's Law Dictionary, a legal entity is a lawful or legally standing corporation, partnership, association, trust, proprietorship, or individual. It should have the legal capacity to: (a) enter into agreements or contracts; (b) sue and be sued in its own right; (c) incur and pay debts; (d) be accountable for illegal activities; and (e) assume obligations.
In 2017, 'Sophia', an artificially intelligent robot, became the first robot in the world to be granted full citizenship, in Saudi Arabia. It is also interesting to examine the curious case of the legal identity granted to religious idols. In India, religious idols have been conferred legal identity even though they are inanimate objects and cannot themselves assume obligations such as litigation or contractual arrangements. The law instead allows agents to act as representatives of such idols, and the courts have recognized such idols as juristic persons. One may argue that similar status could also be granted to other inanimate entities such as AI applications.
DATA PROTECTION CONCERNS

ChatGPT can also reveal personal data from its training dataset to users. This functionality means that it may well breach data protection laws around the world.
Further, the FAQs on ChatGPT's website suggest that it was trained on vast amounts of data from the internet written by humans, including conversations. Given the indiscriminate way this data was collected, much of it relates to individuals, including things they have written or said over the past years or even decades in a wide range of settings: social media, personal websites, chats, and even e-mail threads (if public). Much of ChatGPT's strength lies in its ability to bring together all these disparate inputs and analyze them at a scale that was previously impractical. This will inevitably lead to the discovery of connections and associations that might not otherwise be apparent.
In the European Union, for example, the General Data Protection Regulation (GDPR) regulates the use of personal data and requires that data be collected and used only for specific, lawful purposes. Though OpenAI's FAQ asks users, "Please don't share any sensitive information in your conversation", we believe this warning may not have an adequate impact. Users will input far more personal data than they anticipate, and that data will ultimately be processed by the tool without any restrictions or limitations. As is human nature, people will inevitably enter all kinds of personal data in their "prompts" to ChatGPT to elicit fascinating answers to questions that matter to them, and once submitted, that data becomes part of the system's database. It is therefore important to identify the other legal issues that may become imminent with the growth and adoption of similar tools.
OTHER LEGAL RISKS
Bias and discrimination: Although OpenAI lists a set of content guidelines on its website, one of the biggest legal risks of ChatGPT is that it may produce offensive or defamatory content. As a language model, ChatGPT can generate text similar to human conversation, but it does not have the same ability to understand the context or meaning of the words it generates. This means that ChatGPT may generate potentially offensive or defamatory content, which may result in legal action against its users.
Inaccurate and misleading content: ChatGPT's ability to generate conversational text has also raised concerns about the potential for creating fake messages or other misleading content. This can have serious consequences, viz., damage to reputation, the spread of misinformation, or even incitement to violence. AI models are becoming infamous for 'hallucinations', i.e., providing inaccurate information and making factual errors. This could lead to misinformation, and the further dissemination of such information could have enormous implications.
Cyber Threats: With convincingly written e-mails, cybercrimes and attacks will become more sophisticated. If a chatbot like ChatGPT is used to automate phishing attempts, e-mail spoofing, e-mail bombing, and the like, the consequences could be grave.
Very recently, the US Copyright Office weighed in for the first time on whether AI-generated output is copyrightable, finding that the Midjourney-generated images in Kris Kashtanova's comic book "Zarya of the Dawn" could not be protected, though Kashtanova's text and unique arrangement of the book's elements could. In its memo, the US Copyright Office laid down that, in each case, what matters is the extent to which the human had creative control over the work's expression and actually formed the traditional elements of authorship.
There is no doubt that AI applications will become an integral part of human life in the future. However, as the discussion above shows, legislative frameworks around the world may not currently be well equipped to deal with works created by AI applications.
The key, however, is to look for ways to interpret the existing legal provisions to fit the present situation. Granting copyright in a work created by an AI application to the original programmer or developer seems unfounded, since doing so would run against a basic principle of copyright, namely the 'sweat of the brow' principle. With the unprecedented growth in the technology sector, and especially in the field of AI and GPT, it is about time that legislators debate, and courts give, a fresh interpretation of copyright law that brings clarity to this advancing field. At least until legislators and courts paint a clearer picture around the questions raised above, works generated by AI could be kept in the free public domain. From an end-user standpoint, if works created by AI applications are in the public domain, they will serve the consumers of such works in the best possible manner, which is ultimately a win-win for the world at large.