I. Introduction:

At the inception of the fourth industrial revolution, the world acknowledged that technology, artificial intelligence, data and the ‘internet of things’ obliterate territorial boundaries, empowering individuals to metamorphose into global citizens. Since then, the unprecedented potential of artificial intelligence (AI) to add significant value to a wide array of sectors, including healthcare, agriculture, finance, education, transportation and logistics, manufacturing, energy, retail and sales, and the development of smart cities, along with upgrading the quality of human life, has been recognised worldwide, including in India. However, the extraordinary benefits of AI are accompanied by certain intrinsic issues that go to the core operations and functionality of AI-driven systems. Inaccuracies and inventor-induced bias are among the key challenges that plague AI systems. In this article, the author briefly discusses and analyses AI bias and the means to mitigate it, so that the power of AI can be harnessed to its complete potential.

II. Is AI Bias Real or Speculative?

In 2018, Amazon was alleged to have deployed a flawed hiring system. It turned out that the cause was a biased AI recruitment algorithm that chose men over women, even where women candidates had equal credentials. Despite attempts to edit the program to make it neutral to the term ‘women’, there was no guarantee that the gender bias could be completely done away with. Left with no option, Amazon scrapped the entire AI recruitment software.[1] The author will delve into the reasons for this bias later in this article.

Imagine certain scenarios: (a) a credit rating system giving preference to a certain class of individuals on the basis of their family lineage, as opposed to an individual who hails from a historically weaker economic background; (b) a recidivism algorithm which identifies persons of a particular colour[2] or from a minority community/race as criminals; (c) a medical algorithm which favours white patients over black patients[3]; (d) a government-adopted algorithm which disadvantages any individual not conforming to a binary gender classification.

The aforesaid scenarios, though unsettling, are not improbable. The illustrations are nothing but the outcome of prejudiced algorithms, otherwise termed AI bias or algorithmic bias.

The key factor responsible for algorithmic bias relates to data collection/input and the resultant discrimination and asymmetry in data aggregation. The possibility that inventors of AI solutions incorporate their individual ideological and emotional judgment into the system cannot be ruled out. These biases may relate to gender, sexual orientation, race, geographic region, ethnicity, and economic as well as social factors, as demonstrated in the illustrations above.

Data collection bias can take multiple forms, such as: (a) systematic bias, which is a result of faulty equipment and machinery, or (b) response bias or social desirability bias, which arises from inaccurate or false responses by participants in a demographic subset who purposefully conceal socially undesirable traits.

In addition to absorbing and self-learning human bias, algorithmic bias can be the result of an incomplete data set or limited data selection, otherwise known as ‘selection bias’. One of the key illustrations of selection bias is the Amazon recruitment software. The algorithm's biased judgment was a result of Amazon's hiring pattern over the preceding ten years, during which most of the employees hired were men. The software picked up this discriminatory trait, depriving eligible women candidates of the recruitment opportunity. In most situations, data input is tailored, comprising only a part of the demographics without including all possibilities. The minority groups whose data sets are not fed to the AI system fall prey to AI bias.
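The mechanics of selection bias can be illustrated with a minimal, hypothetical sketch: a naive screening "model" trained only on historically skewed hiring records ends up scoring candidates by how often their group appears among past hires, rather than by merit. The data and the scoring rule below are invented purely for illustration and do not reflect Amazon's actual system.

```python
from collections import Counter

# Hypothetical historical hires: 90% men, 10% women (a skewed training set).
historical_hires = ["man"] * 90 + ["woman"] * 10

# A naive "model" that scores applicants by the frequency of their group
# among past hires -- it learns the historical skew, not candidate merit.
group_frequency = Counter(historical_hires)
total = sum(group_frequency.values())

def screening_score(applicant_group: str) -> float:
    """Score an applicant purely from historical group frequency."""
    return group_frequency[applicant_group] / total

# Two equally qualified applicants receive very different scores.
print(screening_score("man"))    # 0.9
print(screening_score("woman"))  # 0.1
```

The point of the sketch is that nothing in the code mentions merit at all: the discrimination enters entirely through the composition of the training data.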

AI bias can also be a result of data processing. Illustratively, filtering of data may lead to a state where the data set fails to represent the target population. In some cases, if the data set has missing values, as in the case of surveys, the data processor may fill in certain variables against the missing values, thereby nudging the data towards a particular conclusion.[4] In certain other cases, AI bias may be the result of ‘confirmation bias’, i.e., the inclination to focus on one's preconceived notions at the stage of analysis. In a given situation, if an analyst believes that there exists a strong relationship between drunk driving and road accidents, and chooses to focus only on those data sets which conform to the hypothesis, the same would amount to an instance of confirmation bias.
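The drunk-driving illustration above can be made concrete with a small, hypothetical sketch. The records are invented for illustration: on the full data, accidents also occur without alcohol, but an analyst who keeps only the records that confirm the hypothesis makes the link appear absolute.

```python
# Hypothetical driving records as (drunk, accident) pairs -- invented data.
records = [
    (True, True), (True, False), (True, True),
    (False, True), (False, True),
    (False, False), (False, False), (False, False),
]

def accident_rate(data):
    """Fraction of records in which an accident occurred."""
    return sum(acc for _, acc in data) / len(data)

# Unbiased analysis: compare drunk vs. sober drivers on the FULL data set.
drunk = [r for r in records if r[0]]
sober = [r for r in records if not r[0]]
print(round(accident_rate(drunk), 2))  # 0.67
print(round(accident_rate(sober), 2))  # 0.4

# Confirmation bias: the analyst retains only the records that support the
# hypothesis "drunk driving causes accidents", making the link look absolute.
confirming = [r for r in records if r[0] and r[1]]
print(accident_rate(confirming))  # 1.0
```

A model trained on the filtered subset would inherit the analyst's preconception, not the pattern actually present in the population.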

The most disconcerting part of AI systems is that they not only replicate human bias, but also propagate and amplify it. Despite multiple attempts to correct the recruitment software at Amazon and prevent the system from screening applications on the basis of the term ‘woman/women’, the software continued to discard applications by women owing to its inherent bias based on other variables.

III. Mitigating AI Bias:

One of the chief problems associated with AI is the absolute trust placed in algorithmic output. This becomes particularly harmful when the algorithmic system is installed by unscrupulous individuals to disseminate ideological bias. Consider a situation where criminal-tracing software is designed by a coder with a bias against a minority community. In such a case, people from the minority community will be at a higher risk of being labelled as offenders and violators of law without any actual involvement, solely on account of the ideological bias inherited by design. This underscores the importance of mitigating algorithmic bias.

Alleviating AI bias is critical, as algorithms on regularly frequented platforms like Facebook, Amazon, Instagram, etc. can influence user personalities, predict user behaviour and, in certain instances, also make automated decisions without any human intervention. Moreover, curbing AI bias is crucial because bias diminishes the potential of AI by creating mistrust and distorted results.

Data forms the primary driver of AI solutions. The productivity and accuracy of AI predictions are dependent, to a large extent, on the data sets. Therefore, appropriate data input, handling, safety and security are important to obtain the best AI-driven solutions. First and most importantly, historical bias in data input should be eradicated so that progressive AI tools do not become a means to carry forward the ideological bias of the past.

Another way to tackle AI bias is by detecting the source of the bias so that corrective measures can be undertaken. Illustratively, if the source of the bias is selective data sampling, it may be mitigated by adding data representative of the larger target population. Constant monitoring, updated training and frequent testing of the system help in early recognition of bias, which can be followed by devising corrective measures. Testing AI in a regulated space prior to its implementation in a real-world scenario, and the creation of an AI regulatory sandbox, go a long way in mitigating algorithmic bias.
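Detection and correction of this kind can be quantified with a fairness metric. The sketch below computes ‘disparate impact’, the ratio of the selection rate of the unprivileged group to that of the privileged group; a ratio below roughly 0.8 is a common red flag (the ‘four-fifths rule’ used in US employment testing). The loan-approval numbers are invented for illustration.

```python
# Hypothetical loan-approval outcomes per group -- figures invented
# purely to illustrate the metric, not drawn from any real system.

def selection_rate(approved: int, total: int) -> float:
    """Fraction of applicants in a group who received a favourable outcome."""
    return approved / total

def disparate_impact(unpriv_rate: float, priv_rate: float) -> float:
    """Ratio of unprivileged to privileged selection rates (1.0 = parity)."""
    return unpriv_rate / priv_rate

# Before mitigation: a skewed data set under-represents the unprivileged group.
priv_rate = selection_rate(60, 100)      # 0.6
unpriv_rate = selection_rate(24, 100)    # 0.24
print(round(disparate_impact(unpriv_rate, priv_rate), 2))  # 0.4 -- well below 0.8

# After adding data representative of the wider target population,
# the measured selection rate for the unprivileged group rises.
unpriv_rate_after = selection_rate(54, 100)  # 0.54
print(round(disparate_impact(unpriv_rate_after, priv_rate), 2))  # 0.9 -- within threshold
```

Monitoring a metric like this over time is one concrete way the "constant monitoring and frequent testing" described above can surface bias early.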

Ensuring inclusion and fairness in data systems is an effective means of mitigating AI bias.[5] One of the most advocated methods of mitigating AI bias is human intervention at requisite checkpoints where consequential decisions are made. Audit of algorithms by external third parties or internal red teams is an effective means to reduce AI bias by bringing about greater transparency.

Further, explainability by design ensures fairness and accountability, thereby aiding in the mitigation of AI bias. Explainability by design guarantees transparency, as the explanations required to understand the data and the AI model are built in at the stage of designing the system. Further, explainability and public accessibility of AI systems are crucial for assessing bias and unfair results, as well as for challenging an AI system legally.[6]

Certain leading AI tech companies, such as Google AI[7] and IBM with its AI Fairness 360[8], have issued recommended practices to mitigate AI bias which can be adopted to devise effective mitigation strategies. Google AI's responsible practices include designing AI models with concrete goals for fairness and inclusion, training and testing of the model, systematic and regular checks for unfair bias, and analysis of performance. Similarly, IBM's AI Fairness 360 offers an open-source toolkit for users to examine, report and mitigate discrimination and bias in machine learning models through fairness metrics and bias mitigation algorithms.

IV. AI Bias and Regulatory Perspective:

Globally, countries are attempting to create guidelines for responsible and ethical use of AI, with a focus on mitigating AI bias and discrimination.

a. European Union:

The European Union issued the European Strategy on AI[9] in 2018, followed by the European Union's High-Level Expert Group on Artificial Intelligence's draft Ethics Guidelines for Trustworthy AI (Ethics Guidelines)[10] in 2019 and the White Paper on AI[11] published in 2020. While providing a non-exhaustive list of requirements for a responsible and trustworthy AI system, the Ethics Guidelines offer a detailed mechanism to mitigate AI bias, such as: (a) inclusion and diversity throughout the AI system's life cycle, (b) avoiding unfair historic bias, (c) incorporating universal design principles to include the widest possible range of users, (d) allowing human oversight of data systems, (e) privacy and data protection norms to enable trustworthy data gathering, (f) protocols to ensure accessibility of data, (g) auditability of data systems, (h) ensuring data transparency by way of traceability and explainability, and (i) ensuring quality and integrity of data input. The Ethics Guidelines recommend the implementation of the aforesaid requirements from the earliest design phase of AI-driven systems to bring out the best potential of AI and mitigate bias.

In April 2021, the European Union released its Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts[12] (Proposed Regulations). The Proposed Regulations address AI bias, amongst other things, and provide rules to ensure AI compatibility with fundamental rights. They facilitate the enforcement of legal rules by creating requirements for trustworthy AI and imposing proportionate obligations on all value chain participants. The Proposed Regulations adopt a risk-based approach where AI-associated risks are classified into four categories, namely unacceptable, high, limited, and minimal risk.

AI systems with ‘unacceptable risk’, acting as a clear threat to the safety, livelihoods and fundamental rights of people, are proposed to be banned. Prohibitions under this category cover practices having a significant potential to manipulate or exploit vulnerable groups, such as children or persons with disabilities, by materially distorting their behaviour to cause harm to them or a third person. AI systems identified as ‘high risk’ find use in sectors such as critical infrastructure, education, employment, essential private and public services, law enforcement and administration of justice; these will be subject to strict obligations before being introduced to the market. All remote biometric identification of natural persons is also classified as ‘high risk’. High-risk AI systems are subject to: adequate risk assessment and mitigation systems; high-quality datasets feeding the system to minimise risks and discriminatory outputs; detailed documentation and transparency providing all information necessary for authorities to assess the system's compliance; clear and adequate accessibility of information to the user; appropriate human oversight measures to minimise risk and avoid ‘automation bias’[13]; and a high level of robustness, security and accuracy. Further, ‘high risk’ AI systems must ensure training, validation and testing of data sets, with examination of possible biases in the system. AI systems with ‘limited risk’ attract minimum transparency obligations to avoid the risk of manipulation. Illustratively, when chatbots are used, users must be informed that they are interacting with a machine. All other ‘minimal risk’ AI systems can be developed and used subject to existing legislation, without additional legal obligations.

The creation of the proposed European Artificial Intelligence Board would further facilitate implementation and drive the development of standards for AI.

b. Singapore:

Much like the European Union, Singapore's Personal Data Protection Commission issued the Model AI Governance Framework in 2019, followed by a second edition in 2020.[14] This framework also provides for training, testing and validation of data sets to curb systematic bias. Further, periodic review and updating of datasets can aid in early detection and mitigation of potential AI bias.

c. India:

In India, the National Institution for Transforming India (NITI Aayog) has issued guiding policy documents such as the National Strategy for Artificial Intelligence 2018 and the Working Document: Towards Responsible #AIForAll - Part 1,[15] seeking public comments. In February 2021, NITI Aayog released the Approach Document for India, Part 1 - Principles for Responsible AI (Approach Document).[16] The Approach Document fosters principles for the responsible management of AI in India in sync with Indian constitutional values. Responsible AI management would be the outcome of seven guiding principles, namely: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and protection as well as reinforcement of positive human values. In order to mitigate AI bias, the Approach Document advocates human intervention at every stage of consequential decision-making so that systematic exclusion as well as automation bias can be avoided. The Approach Document also favours accountability of AI systems so that appropriate legal actions can be initiated.

In the absence of laws that specifically target or govern AI systems, legal action cannot be taken against AI systems for discrimination or bias. In their present form, Indian laws are not equipped to regulate AI; therefore, there is a need for a major legal overhaul. The Approach Document provides for the creation of anti-discrimination legislation to regulate decisions arrived at through the use of AI. Further, to effectively mitigate AI bias, it is also important to enforce data protection laws which mandate consent from users/data principals prior to the processing of data, including automated processing.[17]

V. Conclusion:

To date, scientists have not been able to devise an effective mechanism to eradicate algorithmic bias, owing to the large datasets processed by AI systems as well as the non-deterministic and mathematically complex nature of those datasets.[18] AI systems cannot be entirely devoid of bias, as the specific logic by which a system reaches a particular output may be neither transparent nor predictable. However, algorithmic bias can be mitigated by appropriate safeguards, as demonstrated above. Active participation from a multi-disciplinary group of scientists, ethicists, sociologists, regulators, legal professionals and other stakeholders can yield a comprehensive means of mitigating AI bias.

AI is increasingly being deployed by both private and state players. Therefore, to ensure the productivity of AI and retain trust in it, it is important for regulators to enforce principles for a responsible AI model, with a focus on non-discrimination and the mitigation of AI bias.

Author: Atmaja Tripathy, Senior Associate, TMT Law Practice

Atmaja is pursuing litigation at different courts and tribunals in Delhi, including the Supreme Court of India, High Court of Delhi, Telecom Disputes Settlement and Appellate Tribunal, the Competition Commission of India and the National Company Law Appellate Tribunal. Atmaja is enrolled with the Delhi Bar Council. At law school, she won multiple scholarships for academic excellence, including the Nanhi Palkhiwala Scholarship for Constitutional Law, the Ram Jethmalani Scholarship and the Director's Gold Medal for Outstanding Excellence in the graduating batch. Her interests in technology, media and telecommunication laws, competition law and constitutional law have led her to pursue prestigious moots and essay competitions. Atmaja has also published articles on contemporary legal issues in reputed international and national journals like the European Competition Law Review, Kluwer Business Law Journal, BRICS Law Journal, All India Reporter and Company Law Journal.
