AI & Ethics Part 2 - How to keep social biases from being embedded in and amplified by AI

Photo credit: Possessed Photography on Unsplash

How can the world ensure the ethical and fair use of Artificial Intelligence? Is it even possible, given that AI tools become more democratised and ubiquitous with each passing day? Does the world need an AI ethics code?

In this two-part Indepth report, Adgully attempts to answer these questions with insights from a cross-section of industry experts. Part 1 of the report asked the pertinent question – does Artificial Intelligence need an ethical code? Part 2 dwells on AI and social biases.

The tech world and international NGOs like Human Rights Watch, Amnesty International, and the Electronic Frontier Foundation (EFF) are already responding to the ethical aspects of AI. In 2018, more than 4,000 workers at Google opposed the company’s decision to associate with Project Maven, a defence department initiative for developing better AI for the US military. Employees petitioned Sundar Pichai, CEO, Google and Alphabet, asking for the cancellation of the project.

Also read:

Does Artificial Intelligence need an ethical code? - Part 1

“There’s an unholy alliance between government and the tech industry, because so many governments see tech as the solution to their economic woes,” observes Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at the University of Birmingham.

Dealing with biases

AI systems take decisions based on training data fed by humans, whose biases invariably creep in. Algorithms may be able to reduce the impact of human biases, but what if they instead contribute to the problem by applying those biases at scale in crucial applications and use cases? A well-documented case came to light in a 2016 ProPublica investigation: an algorithm-based criminal risk assessment tool used in Broward County, Florida, was found to wrongly label African-American defendants as “high risk” far more often than white defendants. Now, with the industrialisation and commercialisation of AI at scale, the risks are increasing.

So, what do AI ethics even entail? How do we keep social biases from being embedded in and amplified by AI? “AI ethics, to my mind, in its crude form is about templatising acceptable cultural or societal norms,” says Munavar Attari, Managing Director, Fleishman Hillard India.

“It is also about enabling technology to understand the difference between ‘is’ and ‘ought’. The only way to mitigate social biases in mass AI technology products will perhaps be to find the least common denominator as far as moral issues are concerned: a fair sense of morality, justice, and transparency that is accepted by all societies. This could mean that a body like the UN may have to relook at global proclamations such as the Universal Declaration of Human Rights, or progressively provide a global ethical framework put together on the basis of an intra-national public consultation process. In short, we may have to put a ‘check on the checker’. And communications will play the role of the cog in the wheel for the success of AI adoption by societies at large,” says Attari.

Ethical AI implies that the use and adoption of AI should be transparent, accountable, responsible, and sustainable, notes Codvo.ai Managing Partner Amit Verma. If a system makes decisions on our behalf and gives us suggestions, those decisions should be justifiable and explainable, he says.

According to him, our social biases are inherent in the past data. “If that is the data fed into the AI model without correction, the output will also be biased. For unbiased AI systems, we need periodic monitoring of AI algorithm performance. By keeping a tab on the results of AI algorithms, we can understand if the output is biased. Additionally, firms should make conscious choices about the customers and partners they work with, the composition of data science teams, the data they collect, and how they use it,” Verma adds.
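To make the kind of periodic monitoring Verma describes concrete, here is a minimal sketch, not drawn from Codvo.ai’s practice, that compares a model’s rate of positive decisions across demographic groups from logged output. The column names, the toy data, and the 0.8 “four-fifths rule” threshold are all assumptions made for illustration.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive model decisions per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return rates.min() / rates.max()

# Illustrative data: each row is one model decision logged in production.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   0,   1,   0,   0,   1,   0,   1],
})

rates = selection_rates(decisions, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common (and here purely illustrative) alarm threshold is the "four-fifths rule".
if ratio < 0.8:
    print("Warning: possible bias in model outputs; review before the next release.")
```

Run periodically on logged decisions, a check like this only flags that outputs differ by group; deciding whether the gap is justified still requires the human review the experts describe.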

“AI ethics is a framework that helps discern between use and misuse of the technology. It is a set of guidelines that advises on the design and outcomes of artificial intelligence,” says Devang Mundhra, Chief Technology & Product Officer at KredX. Over the past few years, says Mundhra, there has been a lot of deliberation over how human biases can impact artificial intelligence systems – with harmful results.

“At this time, companies are looking at deploying AI systems by implementing the measures necessary to avoid risks and misuse of the technology. To avoid social biases, business leaders should ensure that they stay up to date on this fast-moving field of research. They should consider using a portfolio of technical tools, as well as operational practices such as internal review teams or third-party audits. Moreover, engaging in fact-based conversations around potential human biases helps: running algorithms alongside human decision-makers, comparing results, and using explainability techniques to point out what leads the model to a decision, in order to understand why there may be differences. Additionally, considering how humans and machines can work together to mitigate biases, including through ‘human-in-the-loop’ processes, investing more, providing more data, and taking a multi-disciplinary approach to bias research while respecting privacy, would help the field keep advancing. Lastly, a more diversified AI community would be better equipped to anticipate, review, and spot biases and to engage the communities affected,” says Mundhra.

At the same time, he adds, it is equally important to be very careful about training data and feedback loops. This means adding enough examples of all kinds of data, noting the actual human or historical biases that would have shaped that data, and then explicitly building careful counters to any historical prejudices baked into the model.
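One widely used counter of this kind, offered here as a generic sketch rather than as KredX’s own approach, is “reweighing” the training data so that the protected attribute and the historical label become statistically independent before a model is fitted. The column names and toy data below are invented for the example.

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Per-sample weights that make group and label independent in the weighted data
    (the classic 'reweighing' pre-processing idea for historically biased labels)."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        # expected joint probability under independence / observed joint probability
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Illustrative historical data where group "B" was rarely given the positive label.
history = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,
    "label": [1, 1, 1, 1, 0, 0,   1, 0, 0, 0, 0, 0],
})
history["weight"] = reweighing_weights(history, "group", "label")
print(history.groupby(["group", "label"])["weight"].first())
# These weights can then be passed to most estimators,
# e.g. model.fit(X, y, sample_weight=history["weight"]).
```

Under-represented (group, label) combinations get weights above 1 and over-represented ones get weights below 1, which is one way to build the explicit counter to historical prejudice that Mundhra describes.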

In many ways, AI needs to be built on top of an accurate representation of human society, and anything built in a way that does not respect differences and the multi-faceted dimensions of identity would, in my books, be unethical, explains Siddharth Bhansali, Founder, Noesis.Tech and CTO at XP&DLand and Metaform.

He pinpoints the crucial issue of ownership of AI creations. In a far less esoteric manner, Bhansali adds, the question arises when you are creating or co-creating with AI. There is a lot of talk about DALL·E 2, OpenAI’s image generation tool, where you give it a prompt and it creates art based on the data it has been trained on.

“Now here, when you create or, let’s say, co-create anything with AI, who is the owner? Is it the AI? Is it the creative who gave it the prompt? Is it distributed across all the billions of terabytes of data it was trained on? This whole question of ownership is a real big problem for a lot of people to solve. That is around co-creating with AI that has been trained on the creations of other people. Similar to DALL·E 2, in the software engineering world there is an AI assistant called Copilot, developed by GitHub. GitHub is the world’s best-known repository hosting the largest number of open source projects: developers save their code into GitHub and it becomes accessible to everyone. GitHub’s Copilot has been trained on these open source libraries and materials, contributed by millions and millions of developers out there. So tomorrow, if I am building an application and I use the assistance of Copilot to co-create a module, who is the actual owner of the application, and who is the owner of the technology? So the ethics is very grey, both from the point of view of who owns the output of something generated by AI, as well as how the AI is being trained. Because if I really wanted to create an AI that fulfilled a particular worldview, it’s very easy: I just need to control the data set. So how do we ensure that the data sets being used are governed by an independent and able body that can recognise that the data set used to train this AI is a) legally sourced, data that is allowed to be used for the training of AI bots, and b) representative of a larger world view, and not just that of the creators of that technology company,” Bhansali elaborates.

According to DaveAI CTO & Co-founder Dr Ananth, the biggest source of bias in AI is the data itself. “It is important to have data collection sources scrutinised by the population at large, which allows different individuals and companies to explore opportunities to understand and fix these biases in the data.”
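As a minimal illustration of scrutinising the data itself, and not something taken from DaveAI’s tooling, one can compare the group shares in a collected dataset against a reference population distribution such as census figures. The group labels and reference shares below are made up for the example.

```python
import pandas as pd

def representation_gap(data: pd.Series, reference: dict) -> pd.DataFrame:
    """Compare group shares in a collected dataset against a reference
    population distribution (e.g. census figures) to expose sampling bias."""
    observed = data.value_counts(normalize=True)
    report = pd.DataFrame({
        "observed_share": observed,
        "reference_share": pd.Series(reference),
    }).fillna(0.0)
    report["gap"] = report["observed_share"] - report["reference_share"]
    return report.sort_values("gap")

# Illustrative example: hypothetical region labels and a made-up reference split.
collected = pd.Series(["urban"] * 80 + ["rural"] * 20, name="region")
print(representation_gap(collected, {"urban": 0.35, "rural": 0.65}))
```

A large negative gap for a group is a prompt to collect more data from that group, or to document the limitation, before the dataset is used for training.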
