Unless you have been living under a rock, you have likely heard of, or used, the viral AI application ChatGPT. As applications like OpenAI's ChatGPT gain traction and become the talk of the town, it is important to discuss the potential legal issues surrounding the advancement of Artificial Intelligence ("AI"). AI, as we currently understand it, refers broadly to a machine or computer program that simulates human intellect, coded to think and act in ways that resemble human reasoning. This advancement in computational intelligence has unlocked many new kinds of work, including work that requires creativity: generative AI can replicate aspects of a person's way of thinking and thus generate content.
It is safe to say that AI has steadily made its way into the life of the common man through readily available consumer electronics and voice assistants such as Alexa and Siri. These applications perform their tasks with a quality comparable to, if not exceeding, that of humans. The world of technology has evolved to the point where such programs can produce complex works, rivalling those created by humans, with little or no human intervention. Generative AI systems like Midjourney, ChatGPT and DALL-E, which create text and images in response to human instructions, have recently skyrocketed in popularity.
The Advent of ChatGPT: The brainchild of OpenAI, ChatGPT is built on the GPT (Generative Pre-trained Transformer) architecture, a deep learning model that creates text resembling human speech via unsupervised learning. Having been trained on a vast amount of text data, it can read and produce natural language fluently and precisely. GPT-3 (Generative Pre-trained Transformer 3) is a state-of-the-art language processing AI model developed by OpenAI. It is capable of generating human-like text and has a wide range of applications, including language translation, language modeling, and generating text for applications such as chatbots. It is one of the largest and most powerful language-processing AI models to date, with 175 billion parameters. To delve a little deeper, we asked ChatGPT itself how it functions, and its response is set out below:
“ChatGPT works by using a large language model trained by OpenAI. This model has been specifically trained to generate human-like text based on the context of a conversation. When a user enters a message, ChatGPT processes the input and generates a response based on the words and phrases in the message. The response is generated in real-time, and ChatGPT continues to update its response as the conversation progresses. This allows ChatGPT to engage in natural-sounding conversations with users on a variety of topics.”
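The "generates a response based on the words and phrases in the message" step that ChatGPT describes is autoregressive: the model repeatedly predicts the next token given everything produced so far. The toy bigram model below is a deliberately simplified, purely illustrative sketch of that loop, substituting raw word co-occurrence counts for the billions of learned parameters a real GPT model uses; it is not OpenAI's code, and the corpus and function names are our own.

```python
from collections import defaultdict, Counter

def train_bigram(corpus: str) -> dict:
    """Count which word follows which -- a crude stand-in for 'training'."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(model: dict, start: str, max_tokens: int = 8) -> str:
    """Autoregressive loop: append the most likely next token, repeat."""
    out = [start]
    for _ in range(max_tokens):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no known continuation for the last token
        out.append(candidates.most_common(1)[0][0])  # greedy decoding
    return " ".join(out)

corpus = "the model reads text one token at a time and the model writes text"
model = train_bigram(corpus)
print(generate(model, "token", max_tokens=3))  # prints: token at a time
```

Real systems choose the next token from a probability distribution over a vocabulary of tens of thousands of tokens, conditioned on thousands of tokens of context, which is what produces fluent, topic-aware conversation rather than this rote echo of the training text.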
Advancements: Recently, OpenAI introduced GPT-4, with better functionality and significant performance improvements. The important differences between GPT-3 and GPT-4 are: (a) understanding images: you can provide visual input and ask questions or get responses; for example, you can provide a wireframe sketched on a napkin and ask the AI to produce a fully functioning website; (b) bigger memory and longer text output; (c) multilingual capability, answering queries in different languages; (d) safety: GPT-4 is significantly less likely to respond to requests for disallowed content; and (e) reduced hallucinations relative to previous models, "hallucination" referring to situations where the AI invents facts or makes reasoning errors.
Anti-AI Trackers: The developers of ChatGPT have also launched a new AI classifier for identifying AI-written text. A few other applications, such as GPTZero, have been developed to differentiate text written by humans from text generated by large language models, particularly ChatGPT and the like. This is particularly important since schools and academic institutions have voiced concerns that ChatGPT's ability to write just about anything on command could fuel academic dishonesty and hinder learning.
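Detectors such as GPTZero reportedly score text on statistical signals such as "perplexity" and "burstiness" (variation in sentence length). The sketch below illustrates only the burstiness idea and is a naive toy of our own devising, not any vendor's actual method: production classifiers are trained models drawing on far richer features.

```python
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    Human prose tends to mix short and long sentences, while model output
    is often more uniform. A toy signal only -- not a real detector.
    """
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

varied = "No. Never. But when the committee finally met, everyone agreed at once."
uniform = "The model writes essays. The model answers queries. The model drafts emails."
print(burstiness(varied) > burstiness(uniform))  # prints: True
```

Because signals like this are weak and easily fooled by editing, even the classifier released by ChatGPT's own developers was published with caveats about false positives, which is part of why academic institutions remain concerned.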
The Burning Question: This brings us to the question of how to protect creative works generated by AI applications, especially when these applications begin to show signs of genuine creativity. What can be done when AI applications go a step beyond their programming and produce work with originality and novelty? What legal provisions classify such works?
This article examines works created by AI applications, the difficulty of granting ownership over them under the current legal framework, and whether legislation around the world needs to be upgraded to keep pace with the technology.
INTELLECTUAL PROPERTY RIGHTS CONCERNS
One of the main legal risks of ChatGPT and similar AIs is the possible infringement of intellectual property rights. ChatGPT is trained on large amounts of text data, including books, articles, and other written materials. If this training data contains copyrighted works, the output of ChatGPT may infringe the copyright in those works. This may result in legal action against the user, who may be deemed to have contributed to the infringement.
To analyze further, we examine a few basic tenets of copyright law in detail below.
Ownership: As per the principles laid down under the Berne Convention, copyright law insists on authorship. For an AI application to be eligible for authorship, we must first examine whether it can be accorded a legal identity separate from its human programmer.
Disputes relating to copyright and authorship have long been observed in a variety of cases around the world. One instance is the well-known Naruto monkey-selfie case, in which a copyright ownership dispute arose after a monkey took a selfie. The owner of the camera the monkey had used claimed ownership of the photograph, while organizations representing the rights of animals, such as People for the Ethical Treatment of Animals (PETA), argued that the right to the photograph belonged to the monkey itself. Ultimately, the court held that the author/creator of the photograph could not be the monkey. The courts must now decide whether this reasoning can be extended to answer the question of ownership of works created by AI applications.
Legal Identity: As per Black's Law Dictionary, a legal entity is a lawful or legally standing corporation, partnership, association, trust, proprietorship, or individual. It should have the legal capacity to: (a) enter into agreements or contracts; (b) sue and be sued in its own right; (c) incur and pay debts; (d) be accountable for illegal activities; and (e) assume obligations.
In 2017, 'Sophia', an artificially intelligent robot, became the first robot in the world to be granted full citizenship, in Saudi Arabia. It is also interesting to examine the curious case of religious idols being granted legal identity. In India, religious idols have been conferred legal identity even though they are inanimate objects and cannot themselves assume obligations such as litigation or contractual arrangements; the legislation instead allows agents to act as their representatives, and the courts have recognized such idols as juristic persons. One may argue that a similar status could be granted to other inanimate entities such as AI applications.
ChatGPT can also reproduce personal data from its training dataset in its responses to users. This functionality means that it may breach data protection laws around the world.
Further, the FAQs on ChatGPT's website suggest that it was trained on vast amounts of data from the internet written by humans, including conversations. Given the indiscriminate way this data was collected, much of it relates to individuals, including things they have written or said over the past years or even decades in a wide range of settings: social media, personal websites, and chat or even email threads (if public). Much of ChatGPT's strength lies in its ability to bring together all these disparate inputs and analyze them at a scale that was previously impractical. This will inevitably surface clear connections and associations that might not otherwise be apparent.
In the European Union, for example, the General Data Protection Regulation (GDPR) regulates the use of personal data and requires that data be collected and used only for specific, lawful purposes. Although OpenAI's FAQ cautions, "Please don't share any sensitive information in your conversation", we believe this warning may not have an adequate impact on users. Users will input far more personal data than they might anticipate, which will ultimately be processed by the tool without restriction or limitation. As is human nature, people will inevitably enter all kinds of personal data in their prompts to ChatGPT to elicit fascinating answers to questions that matter to them, and once submitted, that data will form an inevitable part of the system's database. It is therefore important to discuss and ascertain the other legal issues that may become imminent with the growth and adoption of similar tools.
OTHER LEGAL RISKS
Bias and discrimination: Although OpenAI publishes a set of content guidelines on its website, one of the biggest legal risks of ChatGPT is that it may produce offensive or defamatory content. As a language model, ChatGPT can generate text similar to human conversation, but it does not have the same ability to understand the context or meaning of the words it generates. This means ChatGPT may generate offensive or defamatory content, which may result in legal action against its users.
Inaccurate and misleading content: ChatGPT's ability to generate conversational text has also raised concerns about the potential for fake messages or other misleading content. This can have serious consequences, viz., damage to reputation, the spread of misinformation, or even incitement to violence. AI models are becoming notorious for "hallucinations", i.e., providing inaccurate information and making factual errors. The resulting misinformation, if further disseminated, could have enormous implications.
Cyber Threats: With the ability to draft convincing e-mails, cybercrimes and attacks would become more sophisticated. If a chatbot like ChatGPT is used to automate phishing attempts, e-mail spoofing, e-mail bombing and the like, it would pose a grave problem.
Very recently, the US Copyright Office weighed in for the first time on whether AI-generated output is copyrightable, finding that the Midjourney-generated images in Kris Kashtanova's comic book "Zarya of the Dawn" could not be protected, though Kashtanova's text and unique arrangement of the book's elements could. In its memo, the US Copyright Office laid down that in each case, what matters is the extent to which the human had creative control over the work's expression and actually formed the traditional elements of authorship.
There is no doubt that AI applications will become an integral part of human lives in the coming future. However, as seen from the above discussion, legislative frameworks around the world may not currently be well equipped to deal with works created by AI applications.
The key, however, is to look for diverse ways to interpret the existing legal provisions so that they apply to the present situation. Granting copyright in a work created by an AI application to the original programmer or developer seems unfounded and incorrect: it runs against the basic principle underlying the grant of copyright, i.e., the "sweat of the brow" principle. With the unprecedented growth of the technology sector, and of AI and GPT in particular, it is about time legislators debated, and courts offered a fresh interpretation of, copyright law to bring clarity to this advancing field of technology. At least until legislators and courts paint a clearer picture around the questions raised above, works generated by AIs could be kept in the free public domain. From an end-user standpoint, if the works created by AI applications are in the public domain, they will best serve the consumers of such works, which ultimately is a win for the world at large.
"Right to be Forgotten" is the right to have private information removed from internet search engines, databases, websites or other public platforms. The right gained prominence in 2014 when a matter was referred to the Court of Justice of the European Union concerning search results for Mario Costeja Gonzalez that displayed an auction notice of his repossessed home.1 Since then, the "Right to be Forgotten" has evolved in many countries, with some having specific provisions in this regard. The most prominent such law is the European Union General Data Protection Regulation (EU GDPR), whose article 17 gives the data subject a right to erasure of personal data concerning him or her.2
In India, the Digital Personal Data Protection Bill, 2022 includes the concept of the "Right to be Forgotten" under section 14, whereby the data principal has the right to correct and erase his or her personal data, and a data fiduciary, upon receipt of such a request, shall (i) correct, complete or update (as the case may be) the data principal's personal data, or (ii) erase the personal data of a data principal that is no longer necessary for the purpose for which it was processed, unless retention is necessary for a legal purpose. The Bill has not yet been passed, and accordingly there is no specific law directly governing the "Right to be Forgotten". However, the provisions of the Information Technology Act, 2000 ("IT Act") provide some of the requisite protection. The IT Act, under sections 66E, 67 and 67A, provides for punishment inter alia for violation of privacy, publishing or transmitting obscene material in electronic form, and publishing or transmitting material containing sexually explicit acts. Further, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Rules"), published on 25th February 2021, cast an obligation upon intermediaries such as internet service providers and search engines to take all reasonable and practicable steps, within 24 hours of receipt of a complaint, to remove or disable access to content which exposes the private area of an individual, shows an individual in full or partial nudity, or shows or depicts such individual in any sexual act or conduct.
Accordingly, it can be said that the premise for protecting an individual's personal data against violations of privacy is provided under the laws of India. However, the lack of a specific enactment or provision has led to several cases being filed across the country, and the "Right to be Forgotten" has been evolving through the courts as a work in progress.
The Hon'ble Supreme Court of India, in K.S. Puttaswamy versus Union of India3, by a nine-judge bench, inter alia held that the right to privacy is a fundamental right enshrined under article 21 of the Constitution of India and is protected as an intrinsic part of the right to life and personal liberty. Privacy, in its simplest form, allows individuals to be left alone.
In Zulfiqar Ahman Khan versus Quintillion Business Media Ltd.4, before the Hon'ble Delhi High Court, articles written by the defendants on the basis of harassment complaints received against the Plaintiff in the #MeToo campaign were taken down pursuant to the initiation of the proceedings. Recognizing that the "Right to be Forgotten" and the "Right to be Left Alone" are inherent aspects of the Plaintiff's right to privacy, the Court, by an interim order, restrained, during the pendency of the proceedings, any republication of the said articles, or any extracts, excerpts or modified versions thereof, on any print or digital/electronic platform, including third-party websites.
This is a remarkable step forward for "Right to be Forgotten" cases: the Court not only granted interim relief but also restrained further republication of the articles, even in modified form.
In another case, the Plaintiff had been convicted by the Trial Court in a criminal proceeding, which conviction was overturned by the Hon'ble Madras High Court. The Plaintiff then moved the Madras High Court invoking the "Right to be Forgotten", and an interim order was passed directing redaction of the Plaintiff's name from the judgment in the earlier criminal proceedings5. In a similar case before the Hon'ble Delhi High Court6, the Plaintiff was acquitted of all charges under the Narcotic Drugs and Psychotropic Substances Act, 1985; however, the continued availability of the judgment caused harm to the Plaintiff. Accordingly, the Hon'ble Delhi High Court directed removal of the judgment from the search engines on which it was available.
In another case7, the Hon'ble Delhi High Court dealt with the issue of consent given by the Plaintiff vis-à-vis the Plaintiff's "Right to be Forgotten". The Plaintiff was lured into shooting explicit scenes of complete frontal nudity for a web series; the project fell through, and the web series was never produced. However, the videos were uploaded by the producer on its YouTube channel and website, and were taken down by the producer at the request of the Plaintiff. Meanwhile, other websites uploaded the said videos. The Hon'ble Delhi High Court, relying inter alia upon K.S. Puttaswamy (supra), Zulfiqar Ahman Khan (supra), and Rule 3(2)(b) of the IT Rules (whilst dismissing the argument that the Plaintiff was not a 'victim' because the videos were shot with the Plaintiff's consent and thus Rule 3(2)(b) of the IT Rules would not apply), held that the Plaintiff is entitled "to be left alone" and "to be forgotten". The Court categorically stated that the consent was in favor of the producer, who had acted on the Plaintiff's request and taken down the videos, and that the Defendants held no such consent from the Plaintiff. The Delhi High Court not only directed that the identity of the Plaintiff shall at no time be disclosed, including while uploading orders, but also directed the Defendants to (i) remove/pull down the said videos; (ii) stop communicating the videos or any part thereof to the public on websites, digital platforms and mobile applications, including YouTube channels as well as mirror/redirect/alphanumeric websites; and (iii) take down/delete the videos from search results pages. The order is also in the nature of a John Doe order, whereby the Plaintiff is permitted to communicate it to other electronic/digital platforms found to be streaming the said videos.
The courts have taken a holistic approach, seeking to close off every avenue of tracking a person who wants to be forgotten, or his or her actions. This objective appears to have been achieved through directions such as complete confidentiality of the identity of the person approaching the court (even when uploading the court's own orders), take-down of content, and removal of content from search engines, suggesting implementation of the "Right to be Forgotten" in letter and spirit.
While dealing with a "Right to be Forgotten" claim, the courts consider and evaluate factors such as the nature of the information sought to be taken down or removed from search engines; the impact of the availability of such information on the person seeking its removal; and the public interest in keeping such information available to the general public, amongst others.
The "Right to be Forgotten" has been a welcome development, furthering the school of thought that everyone deserves a second chance and should not be judged forever by their past actions. However, at some stage the "Right to be Forgotten" will need to be tested against the right to freedom of speech and expression, and the two rights will need to be balanced. Another interesting aspect is the apparent conflict between the "Right to be Forgotten" and the "Right to Information" available under the Right to Information Act, 2005 ("RTI Act"), whereby an Indian citizen can access information under the control of public authorities. One will have to wait and watch whether these concerns are addressed by the courts, or whether the Personal Data Protection Act (once passed) will take care of them.
Footnotes and References
1 Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja González, ILEC 060 (CJEU 2014)
2 Article 17 of EU GDPR – “The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay”
3 (2017) 10 SCC 1
4 2019 SCC Online Del 8494
5 WP (MD) No. 12015 of 2021
6 Jorawer Singh Mundy versus Union of India and Ors. – WP (C) 3918 of 2021 & CM Appl. 11767 of 2021
AI has developed substantially over the course of time and carried out feats describable as miraculous. Repeated triumphs over humans in chess, and beating a professional Go player 5-0 without any handicap, are instances of AI superseding human intelligence. The proliferation of the internet into everyday life, and our dependency on it, has driven predictive algorithms and other models to evolve into a new discipline: machine learning.
Copyright regimes globally have had limited encounters with works created through computers. Granting protection to such works was not difficult, since there was always a human 'mind' behind them. AI, however, poses a completely different challenge, as human intervention is limited and at times nearly non-existent. Of late, AI has evolved to the point of writing news articles, and even novels good enough to be shortlisted for a national prize.
While considering the copyrightability of works created by AI, the primary question is: do AI works require human intervention, or can AI generate work independently? The following categorization aids clarity in that regard:
(1) Works created by AI with human intervention (“AI assisted”).
(2) Works created by AI without (or negligible) human intervention (“AI generated”).
In the first category, AI-assisted works, the human intervention and exercise of human creativity (mostly in the form of programming the AI) make the work generated by the AI eligible for protection. The second category, however, raises the question of who owns the copyright in AI-generated work, which, unfortunately, remains unknown territory.
There also appear to be two schools of thought: one that regards AI works as dependent (partially, if not wholly) on human minds, and another that regards them as completely independent creations of the AI.
Per a report by a Senior Judge of the IPR Division of the Supreme People's Court of China, published by WIPO, China's approach has not deviated from the traditional route: it grants protection only when a work is the product of the author's intellectual creation. In a dispute over an intelligent writing assistance system called 'Dreamwriter', the Chinese court held that the generated article was a written work protected under copyright law, since it was produced by the intellectual creation of human authors (the programmers). The ownership of the copyright in the AI's work was vested in the exclusive licensee of the AI software.
Such an approach gives impetus to the theory that AI has not yet developed to the level where it is completely free from human involvement, since some level of human intervention is still involved in the use of AI applications. This theory, if adopted into current copyright jurisprudence, may bridge the gap between copyright protection and AI works. However, it raises the issue of defining the parameters of human intervention required for granting copyright protection to a work created by AI.
The copyright regime in the USA recognizes only works that are "fruits of intellectual labor" and "founded in the creative powers of the mind". In particular, the USA does not recognize copyright protection for computer-generated works without a human author. In fact, the US Copyright Office's Review Board, in its decision dated 14.02.2022, rejected copyright protection for a work created by the AI "Creativity Machine". The principal ground for rejection was failure to meet the basic requirement that an author be a human being. Over time, the USA has uniformly held that copyright protection extends only to the creations of human authors and that a nexus between the human mind and its creative expression is a prerequisite for protection. Still, the absence of a defined framework has led to conflicting decisions: the USA initially granted copyright protection to the comic book Zarya of the Dawn, created by Kris Kashtanova with the aid of the text-to-image engine 'Midjourney', only for the US Copyright Office to partially reverse that decision in early 2023.
The UK grants statutory protection to "computer-generated" works, vesting authorship in "the person by whom the arrangements necessary for the creation of the work are undertaken" for a period of 50 years from the end of the calendar year in which the work was made. Section 178 of the Copyright, Designs and Patents Act, 1988 defines a computer-generated work as one "generated by computer in circumstances such that there is no human author of the work". Canada, too, registered a copyright in a Van Gogh 'Starry Night'-inspired painting titled "Suryast" in favor of two co-authors: Ankit Sahni and RAGHAV, an AI painting app.
India momentarily granted copyright protection to an AI work, only to issue a withdrawal notice at a later stage. In 2021, the AI painting app 'RAGHAV' was registered in India as a co-author of a copyrighted work titled "Suryast"; the other co-author was Mr. Ankit Sahni, the owner of the app. The Indian Copyright Office had initially rejected an application listing the AI ('RAGHAV') as the sole author of an artwork; however, a second application, in which the owner of the AI and the AI were named as co-authors of another artwork, was allowed. Interestingly, within a year, the Copyright Office issued a withdrawal notice seeking information about the "legal status" of the AI RAGHAV, citing, inter alia, that copyright in an artistic work would vest in the "artist".
To enumerate the issues within the prevailing copyright laws: firstly, the Copyright Act, 1957 ("Copyright Act") protects "original" literary and artistic works. However, per a prevailing theory, AI is presently incapable of creating 'original' content; the work it creates is an adaptation/modification of existing information in the public domain that the AI has accessed, analyzed and been trained on. This view relies on the fact that all AI is fed data sets coloured with the biases and limitations of its human creators.
Moreover, under the Copyright Act, for a 'work' to qualify for copyright protection, it must meet the test of 'modicum of creativity' laid down by the Supreme Court in Eastern Book Co vs. D.B. Modak, which held that a 'minimal degree of creativity' is required and that 'there must be some substantive variation and not merely a trivial variation'.
Secondly, the additional statutory parameter to be satisfied is that the creator must fall within the definition of an "author" under the Act.
The Copyright Act contemplates works created by computers and designates the "person" who causes the work to be created as the author. Unfortunately, no definition of "person" is found within the Copyright Act or the rules framed thereunder. Even reliance on the General Clauses Act, 1897, which defines a 'person' as "any company or association or body of individuals, whether incorporated or not", proves inconclusive. This is problematic, since no Indian statute yet regards AI as a legal person, and the current legal framework may therefore be unable to deal effectively with works whose actual creator is neither a human nor a legal person.
Recognizing an entity other than a person, such as an AI, as capable of owning IP may also lead to copyright violations. Worse, such infringement may go unredressed under the existing law, since a bare reading of Section 51 of the Copyright Act shows that copyright can only be infringed by a "person".
If AI is treated as a separate entity, distinct from its creator/owner, the AI itself cannot be held responsible for infringement under the Act. This lends support to the school of thought that the AI is an extension of its creator, specifically for the purpose of liability in cases of infringement. It also ensures that consideration paid for the right to use the copyright flows to the owners, in turn incentivizing people to create more AI works. Even so, substantial commercial issues relating to royalties would arise, with questions as to who should receive royalties, if any are payable at all.
Lastly, there is the conundrum of who becomes the owner of the copyright: the human, or the AI system designed by him? Principally, AI is a creation of its programmer's mind, since it is the human who develops the AI's algorithms. Despite the massive developments in AI, some element of human intervention (however negligible) is still required at this stage, if nothing else than to set the AI in motion. The arrangement and selection of data inputs, trigger-condition settings, templates and corpus-style choices in an AI are done by a human programmer. It is also true that, owing to machine learning and deep learning capabilities, AI may in future form new, autonomously generated algorithms in addition to those previously set by humans, and the products of such artificially formed algorithms could be wholly AI-'generated' work.
This leads us to a chicken and egg scenario and leaves open the question of who the law would consider to be the person making the arrangements for the work to be generated. Should the law recognize the contribution of the programmer or the user of that program?
Is this, then, the right time to deliberate upon a new law for dealing with these 'intelligent' machines? And what does it mean for the Indian economy? A Parliamentary Standing Committee on Commerce (Rajya Sabha) Report dated 23.07.2021 estimates that AI-related innovations will add approximately USD 957 billion to the Indian economy by 2035. In fact, the Report specifically recommends a "separate category of rights for AI and AI related inventions" and protection of their intellectual property rights, besides a review of existing IPR legislation to "incorporate the emerging technologies of AI and AI related inventions in their ambit". As this remains to be implemented, the future of law, as understood until now, is set on a course of massive evolution.
Telecommunications service providers have enjoyed massive windfalls over the last few years, earning significant revenues and margins from new business models such as cloud, security, payments and insurance services. While revenue streams have grown, stringent regulations and the cost of licences and other fees continue to be pain points for service providers.
With the introduction of the Indian Telecommunication Bill, 2022 (bill), regulations in this and allied sectors are taking a more consumer-centric approach. This is not specific to India but is also happening in advanced jurisdictions, such as the EU and the US. The focus has shifted away from increasing compliance burdens on consumers and customers towards regulating private parties in a graduated manner. The graduation depends on the underlying technology used or built to provide services to the last-mile customer. The EU distinguishes services according to whether they rely upon a phone number or are independent of it, and therefore whether they are subject to licences, authorisations and regulatory frameworks. Telecommunication laws in the US have likewise moved away from strict regulation of customer premises equipment, which is managed by end users and is already subject to qualitative checks before being supplied.
Unfortunately, the bill as presently drafted does not provide clarity regarding customer equipment. It seems end users will have to apply for registration, authorisation or licences from providers. This runs contrary to the global consumer-centric regulatory approach and will over-regulate services that have been freed from licensing elsewhere. The bill also blurs the lines between commercial-scale services and those that merely connect two individuals through audio, video or data. A light-touch, graduated framework, akin to that in Malaysia, could be suitable for India, imposing different compliance requirements on pure software providers and cloud computing resellers. This would allow an appropriate distinction by factoring in the kind of service, the underlying technology and the intended service providers and recipients.
The rapid spread of the internet of things, connected devices already proliferating in our homes, workplaces, and the wider society in smart cities, is reliant upon 5G connectivity. To manage these devices, service providers need to be able to take advantage of the scalability and flexibility offered by the cloud against the backdrop of a nuanced regulatory framework. The success of any policy intervention will depend heavily upon capex concerns. Infrastructure and network sharing will be needed to cap the initial investment and generate value and efficiency in the deployment of the next-generation communications infrastructure. To ensure that such sharing does not morph into cartels, regulations will be vital to preventing price fixing, supply reduction, and investment limitation.
Several telco-multi-network operator alliances already exist, and the global proliferation of 5G may encourage such networks to go beyond video and audio streaming and add gaming. While the national antitrust regulator does not consider that the current situation merits investigation, legislators may propose rules and guidelines to ensure that the telecom market in India does not consolidate further.
With privacy discussions ongoing, the focus of regulations is on ensuring that consumers are highly empowered and do not have to rely upon the discretion of service providers for their rights. The Digital India dream is approaching its realisation in the form of technology and innovation, as well as the regulatory framework. The intent of the legislature seems to be recognising technological convergence and creating a future-proof, technology-agnostic law and policy environment. Discussions have taken place over reorienting the telecom, innovation and associated segments through a comprehensive legislative framework under a single Digital India Act. The government is no longer obdurate and is accommodating the private sector. There is hope that this conversation between public and private parties will result in a coherent, cohesive, forward-looking, consumer-friendly framework. The future of the law lies in simplicity, and in not duplicating efforts beyond what is necessary.
Technology has assumed a big role in delivering healthcare services, particularly in the wake of the pandemic. The healthcare infrastructure of the country was brought to its knees due to the caseload during the Covid period.
Historically discouraged by vast technological, financial and legal barriers, telehealth has now undergone a massive overhaul. Falling data costs, growing internet penetration and improving user confidence have led to the steady adoption of telehealth services, a trend that continues to rise.
However, some long-standing concerns remain in the minds of patients, such as the lack of transparency about doctors’ credentials, improper patient diagnosis, and the potential misuse of a patient’s health information.
Though the practice of telemedicine has been legal in the country, these issues (including an inherent lack of trust on the part of patients, as well as practitioners) continued to persist. In 2020, the Union government came up with a framework for facilitating health-related services. It drafted legislation for public consultation in 2022 to bring further reforms to the existing law.
The Telemedicine Practice Guidelines, 2020 (TPG), were issued to assist healthcare professionals in adopting telemedicine and to provide protocols for physician-patient relationships. The TPG focuses on patient evaluation and management, continuity of care, referrals for emergency services, and the privacy and security of patient records and correspondence, among other considerations.
As per the law, medical practitioners are required to adhere to the same professional and ethical norms applicable in traditional in-person care and exercise their professional judgment to determine the efficacy of teleconsultation, in the interests of the patient. Furthermore, the practitioners must be aware of any shortcomings of a particular mode of communication and they should inform patients of the same.
If the treatment cannot be administered digitally, it must be “paused” or “validated” with any required diagnostic reports, laboratory investigations or a local referral to a physical facility for examination.
The law enables the practitioner to discontinue and disengage from an ongoing consultation if they feel teleconsultation does not serve the purpose.
During digital consultation, the law prevents practitioners from receiving any information from the users without their explicit consent. The professional is not allowed to assume anything, instead, explicit consent is mandated under data privacy legislation for the usage and processing of health information.
The TPG imposes inherent restrictions on the ability of a healthcare practitioner to prescribe medications: drugs should be prescribed only when physicians are confident that they have relevant and adequate information. There is, however, considerable dissatisfaction among practitioners about the limited list of drugs that can be prescribed over teleconsultation.
In addition to this, the patient continues to be the focal point, and nothing should be done without documentation. A practitioner I have been working with says, “digital consultations premise themselves on ‘documentation’; it is to keep both the patient, as well as the practitioner, safe and aware at all times.”
As digital platforms cater to an ever-increasing user base, service providers and regulators are increasingly cognizant of the addictive nature of online services and their delivery to end users. Children are now the focal point of discussions on these practices, which has catapulted regulatory scrutiny and policy-making initiatives.
The rapid build-up of digital support structures, owing to social distancing measures and the onset of Covid-19-related lockdowns, was felt across social media channels, digital gaming, online chatrooms, and wearable and connected devices. The mere ease and convenience afforded by these “alternatives” induced reliance, dependence and, consequently, addiction to these new trends. For children, these changes are more pronounced and seemingly render them susceptible to irreversible physical, psychological, social and economic harms.
Human Rights Watch issued a dedicated report on the data collection practices of EdTech platforms, which indicated excessive data collection and data sharing by EdTech platforms without any sight of user or guardian consent. These findings recur across territories and demonstrate the conscious or passive use of invasive technologies to profile children and make them targets for personalized marketing schemes. All the information generated to this end creates a vulnerability for the user (in this case, a child) to become searchable and reachable.
Child Sexual Abuse Material
At this juncture, it would be highly improper not to consider the absolute and real threat of the availability of child sexual abuse material (CSAM) over the internet. At a time when the internet is evolving into a system which replicates the physical experience in the virtual world, good and bad experiences will co-exist, and the susceptibility to harm, in the form of sexual humiliation, is real. These concerns are exacerbated in the case of children’s interactions, where the end objective is to ensure that their psychological and physical well-being is preserved.
Recent studies report a spike of about 25% in demand for child sexual abuse material (CSAM), which could be attributed to a rise in the number of digital products and services directed at and availed of by children. With the increased ease of access afforded to sex offenders online to engage with children, these findings point towards the urgent need for coaction between guardians, online service providers and regulators alike to sanitize the digital space and create an age-appropriate environment for all users.
As regulators prepare for discussions on pending statutory proposals to ensure online safety, there is an urgent need to assess the technologies available at hand to address such concerns against the risks they carry to the fundamental right to data privacy of children online.
In the United States, providers are obliged under US law to report to the National Center for Missing and Exploited Children (NCMEC) when they become aware of child sexual abuse on their services. EU law as it stands today (as an interim measure until August 03, 2024) provides only for voluntary reporting, and member states have therefore taken it upon themselves to prepare national rules to fight online child sexual abuse. Learning from these experiences, the Indian legislators implemented the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules), which mandate social media intermediaries with a significant user base to deploy automated tools to identify CSAM and similar profane images, while having regard to the interests of free speech and expression and the privacy of users. To acknowledge the need for accelerated redressal and removal of CSAM, the IT Rules have shortened the timeline for removal of such content and propose the use of technologies to enable identification of the originator of such content.
The European Commission has proposed new legislation (EU Proposal) to combat child sexual abuse online, which imposes an obligation on service providers to assess the risk of their services’ misuse for the dissemination of child sexual abuse material or for the solicitation of children (grooming), and to propose risk mitigation measures. Chat rooms, voice calls and livestreams are service-agnostic features of websites today, and allow predators to initiate contact with young children and direct them to indulge in inappropriate acts in real time with minimal digital tracing. Further, designated national authorities in the EU may, upon review of the risk assessment reports, issue a detection order, which will require the service provider to use EU-recognized technologies to screen their platform for CSAM. Detection orders are limited in time and intended to target a specific type of content on a specific service. The rules further require application stores to assess the risk of onboarded applications being used for CSAM dissemination and grooming, and to take reasonable measures to identify child users and prevent them from accessing such applications. The Regulation imposes the obligation upon service providers to determine the methods for such detection exercises, and to use technologies which are the least privacy-intrusive and in accordance with the state of the art in the industry.
In addition to such targeted legislation, online service providers are required to comply with their obligations under the General Data Protection Regulation (GDPR) and similar data protection statutes, and to use privacy-centric tools to ensure legitimate and proportional collection of children’s information. Drawing from the requirements of the GDPR, EU member states and entities would conduct impact assessments and related exercises to make these determinations and perform the balancing act.
The detection and removal of CSAM is an issue of public interest, and lawmakers have sought to address concerns and impose obligations in a targeted manner, having regard to the nature of services and their intended audience. However, are there technological tools available to address such concerns, or is privacy-centric detection a painful oxymoron?
Technology at Hand
Anonymization, encryption and cloud storage allow offenders to circulate CSAM and evade detection by law enforcement agencies; the connectivity presented by the Internet of Things (IoT) offers opportunities for interaction between sex offenders and young children, and grants offenders access to information on the personality traits, behavior and location of children.
Platforms implement artificial intelligence and machine learning systems to ensure efficiency and accuracy in the detection, monitoring and removal of CSAM. On the enforcement side, cryptographic hash algorithms are used for file identification and evidence authentication in digital forensics, by assigning a hash, or numeric value, to the content. By creating databases of hashed CSAM, new material can quickly be matched against already known files.
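The hash-matching workflow described above can be sketched as follows. This is a deliberately simplified illustration using exact-match SHA-256 digests; production systems typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, which exact cryptographic hashes do not. All file contents here are hypothetical placeholders.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Assign a numeric value (SHA-256 digest, as hex) to a file's contents."""
    return hashlib.sha256(data).hexdigest()

# A database of hashes of already-known material (illustrative values only).
known_hashes = {file_hash(b"known-file-1"), file_hash(b"known-file-2")}

def is_known(data: bytes) -> bool:
    """Match newly uploaded material against the database of known hashes."""
    return file_hash(data) in known_hashes
```

Because only digests are stored and compared, the database never needs to retain the underlying material itself, which is part of why hash lists are shared between providers and law enforcement.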
Search engines and similar service providers use automated web crawlers to search for and index CSAM content and to implement risk mitigation steps. Anti-grooming technologies evaluate and review conversations between users to detect toxic user behavior or potential grooming actions, and are widely used by service providers with child-targeted offerings (for example, in the gaming industry).
CSAM detection requirements are service-agnostic and call for widespread and excessive screening of content by service providers under the applicable laws. This requires online platforms to sidestep end-to-end encryption, which will invariably have a negative impact on users’ privacy. Access to communication content on a general basis, to detect CSAM and grooming, is excessive, disproportionate and liable to mismanagement. Recently, the report of a father being flagged by a CSAM tool deployed by Google unfurled a long, winding investigation in which the father’s activities across his Android/Google-linked accounts (contacts, images, e-mails) were accessed. This occurred after the family attempted to share an image of their own child’s groin area with a healthcare practitioner for the purpose of medical care, and they ended up ensnared in an algorithmic system designed to flag people exchanging CSAM. It is important to note that while the images were explicit in nature, they were certainly not exploitative; it was the lost context that led to an innocent man being reported to law enforcement.
It is also important that, alongside the deployment of such tools, there is human intervention to make an effective determination before criminal investigations or law enforcement measures are initiated against an individual for possession of CSAM.
When Apple proposed its own suite of tools for scanning images on a user’s phone before upload to the cloud, designed so that the entire device would not be subject to constant scanning or unwarranted intrusion into the user’s device or cloud storage accounts, it faced considerable backlash. The proposal, which would have matched on-device images against a database of hashed CSAM images, had to be withdrawn amid surveillance concerns raised by regulators and privacy activists alike.
The use of age verification and age assessment measures to identify child users, introduced by the EU Proposal, may be a proportionate scheme to address CSAM and grooming concerns; however, identification and enforcement will present a challenge to authorities, with children and young adults inclined to misrepresent their age online to avail of services and offerings which restrict access. Technologies which use facial or audio recognition to estimate user age are historically error-prone, whereas age verification against government-issued identification or credit information is not a foolproof methodology either. All these methods have varying success, but none has yet mastered the combination of privacy, efficiency and affordability.
Given the converging nature of services and the roles of service providers online, the law must be technology-agnostic and future-proof in order to create a uniform standard of responsibility for service providers. To that end, the EU Proposal proposes the establishment of an EU Centre, which will collaborate with industry stakeholders and lawmakers to develop standards and make available technologies for content detection; this will alleviate the burden on smaller providers. Furthermore, the EU Centre will give feedback on the accuracy of reporting and help service providers improve their internal processes.
The use of age tokens, single-use QR codes created by verifying the age of an individual against government records, is being trialed in Australia to grant users access to gambling, alcoholic beverage and pornographic websites. Solutions which allow these codes to be created and implemented across sectors by interoperable platforms will enable smaller players to onboard these tools without requiring market presence, technical wherewithal or financial capabilities comparable to those of the tech giants.
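A minimal sketch of how such an age token could work is set out below. This is an illustrative assumption, not the Australian trial’s actual design: a hypothetical issuing authority, having verified age against government records, signs a payload carrying only an over-18 attribute and a random nonce (no identity), and a relying website checks the signature and rejects any reuse, making the token single-use. A real deployment would use asymmetric keys rather than a shared HMAC secret.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical issuing authority's secret key (assumption; a real scheme
# would use an asymmetric key pair so relying sites never hold the secret).
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(over_18: bool) -> str:
    """Issue a token after age is verified against government records.
    The payload carries only the age attribute and a nonce - no identity."""
    payload = f"over18={int(over_18)}&nonce={secrets.token_hex(8)}&ts={int(time.time())}"
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}&sig={sig}"  # this string would be rendered as a QR code

_redeemed: set[str] = set()  # single-use: the verifier tracks redeemed tokens

def verify_age_token(token: str) -> bool:
    """A relying website checks the signature, rejects reuse, and reads
    only the over-18 attribute."""
    payload, _, sig = token.rpartition("&sig=")
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or token in _redeemed:
        return False
    _redeemed.add(token)
    return payload.startswith("over18=1")
```

The design choice of putting only an attribute and a nonce in the payload is what makes the scheme privacy-preserving: the relying site learns that the bearer is over 18, and nothing else.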
Organizations must take proactive steps towards implementing a privacy-by-design technical infrastructure within their networks, to ensure proportionality of data collection for minor and major users alike. Taking a leaf out of the existing policy structure around data privacy, which requires that new tools and measures be implemented only after a thorough impact assessment, it is important that private and public entities carry out similar exercises in making determinations about: (i) the efficacy of the tool in monitoring and detecting CSAM; (ii) any surveillance or intrusion percolating into users’ lives; (iii) whether there is an effective mitigation measure to overcome any erroneous determinations; and (iv) whether human involvement in making a final determination is necessary.
Human involvement is necessary because the tech giants are acting as sentinels for the purpose of disclaiming liability, in turn allowing their suites of tools to make the determination to flag an individual, deny them access, or make reports to the enforcement agencies. Performing an appropriate balancing act is the order of nature, and that too must be replicated in the online realm.
Child Sexual Abuse Directive; and Regulation (EU) 2021/1232 of the European Parliament and of the Council of 14 July 2021 on a temporary derogation from certain provisions of Directive 2002/58/EC as regards the use of technologies by providers of number-independent interpersonal communications services for the processing of personal and other data for the purpose of combating online child sexual abuse.
Rethinking the Detection of Child Sexual Abuse Imagery on the Internet, WWW ’19: The World Wide Web Conference, pp. 2601–2607, available at: https://dl.acm.org/doi/10.1145/3308558.3313482 (last accessed September 10, 2022).