Does Artificial Intelligence need an ethical code? - Part 1

Scenario No. 1

Some two weeks ago, Jason M Allen of Pueblo West won first prize in the digital category at the Colorado State Fair for his artwork “Théâtre D’opéra Spatial”. It was no ordinary artwork. Allen created it with Midjourney, an artificial intelligence (AI) programme that converts text prompts into hyper-realistic graphics.

The art world remains divided over the ethics of such AI-generated art, with some purists expressing indignation at the way technology is encroaching on human artistry and originality. Do the purists have a point? Will we see machines overtaking humans in every sphere, including the sublime realm of art?

Scenario No. 2 

Killer robots are no longer a cinematic construct! In Ukraine, Russia apparently used the AI-powered Kalashnikov ZALA Aero KUB-BLA loitering munition (a so-called kamikaze drone). Though official confirmation is difficult to come by, especially from a regime like Russia, unmanned AI-backed robotic weapons have become an unpalatable reality. In fact, there is a global race to amass patents in Artificial Intelligence and military robotics. Between 2005 and 2015, the US topped the list (26%), followed by China and Russia, according to research by PAX for Peace. Since 2016, China has overtaken the US in gathering such patents.

Such lethal autonomous weapons systems (LAWS), which are capable of independently spotting and killing human targets, do exist, even though they contravene the Geneva Convention.

Will we see AI-powered arms replacing humans?

Scenario No. 3

Earlier this year, a deepfake video of Ukraine President Volodymyr Zelensky surfaced on social media, in which he was seen exhorting his soldiers to surrender during the country’s conflict with Russia. (Deepfakes are artificially made videos of real people saying things they never actually said, produced with a blend of AI and machine learning.) Such deepfake videos have already become ubiquitous, attaining notoriety for their potentially lethal effects on society.

Darker uses

While something as innocuous as an AI-generated digital artwork may not pose a serious threat, the darker uses of AI are bound to pose enormous challenges for the world at large. Despite the benefits and opportunities thrown up by Artificial Intelligence, nefarious elements put it to harmful ends: disinformation, deepfakes, and autonomous weapons systems capable of using lethal force without human intervention, among others.

How can the world ensure the ethical and fair use of AI? Is it even possible, given the democratisation and growing ubiquity of AI tools with each passing day? Does the world need an AI ethics code?

In this two-part in-depth report, Adgully attempts to answer these questions with insights from a cross-section of industry experts.

Artificial Intelligence has the potential to bring a lot of positive impact to our lives, but there is another side to the coin: its unethical use has become a big challenge, agrees Codvo.ai Managing Partner Amit Verma. In the wake of the infamous Cambridge Analytica scandal and others, he says, companies big and small are working on identifying how to prevent the unethical usage of AI.

According to him, ethical AI practice rests on just three principles:

  1. It has to protect individual rights and privacy.
  2. It has to be non-discriminatory.
  3. It has to be non-manipulative.

“Legally and in regulatory terms, we are not yet that strong against the unethical uses of AI. Hence, it is the responsibility of organisations that develop and provide AI tools to follow ethical AI principles. That means we should set best practices and guidelines for users, set up social scoring, and continuously monitor usage to ensure there are no violations.”

All kinds of technologies have the potential to be misused and abused, says Devang Mundhra, Chief Technology & Product Officer at KredX. “Despite the extraordinary advances that the world is witnessing in Artificial Intelligence, there is growing concern regarding the ethics and the extent to which algorithms can be imbued with moral values. The primary reasons for the misuse of this technology are the lack of transparency, poor accountability, and the bias that creeps into these automated tools,” he adds.

Mundhra feels that awareness creation is of paramount importance: to counter these challenges, it is crucial to create awareness about the technology and how it can be misused.

“Organisations need to be transparent (both internally and externally) about how they’re using AI. It is important for firms to undertake necessary measures to ensure that AI ethics are taken seriously. For example, hiring ethicists who work with corporate decision-makers, developing a code of AI ethics, developing AI audit trails, etc. It is also crucial that firms ensure that the data they're using is not biased. Creating better data and algorithms is not just an opportunity to use AI ethically, it’s also a way to try to address biases (racial and gender for example) in the world on a larger scale,” explains Mundhra.

Most importantly, he asserts, organisations need to focus on developing explainable AI, understanding how the AI makes decisions, and being able to explain those systems. “Moreover, firms need to adhere to regulations. Given the increase in the misuse of AI, it is suggested that firms create a body that can evaluate the ethical concerns.”
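
To make the idea of explainable AI concrete, here is a minimal sketch of one common technique: permutation importance, which measures how much each input feature actually drives a trained model's decisions. The loan-approval scenario, feature names, and synthetic data below are hypothetical illustrations, not drawn from any system described by Mundhra.

```python
# Illustrative sketch of model explainability via permutation importance.
# All data and feature names are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
# Hypothetical applicant features: income, years of credit history, existing debt
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.integers(0, 20, n),          # years of credit history
    rng.normal(10_000, 5_000, n),    # existing debt
])
# Synthetic "approve/reject" label driven mostly by income and debt
y = ((X[:, 0] - 0.8 * X[:, 2]) > 40_000).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Ask: which features does the model actually rely on when it decides?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(["income", "credit_history_years", "existing_debt"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

A report like this, attached to each model release, is one simple form the "AI audit trail" Mundhra mentions could take.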

Siddharth Bhansali, Founder of Noesis.Tech and CTO at XP&D Land and Metaform, is a proponent of strong regulations in this regard. He says, “As much as I'm a proponent of technology companies being able to regulate themselves, develop products keeping in mind social values and social safety, and design to prevent the misuse of their products, AI or otherwise, I unfortunately believe that we're going to need very strong and tight regulations on AI to create products. Primarily because, in existing products out there in the market like Facebook or Instagram, the scope of what AI does is quite limited, and we've already seen how damaging even that is in its ability to create echo chambers, promote particular points of view, and get people to believe that a piece of information is a commonly accepted fact when it may not be so.”

He advocates transparent and good public and private partnerships in this regard. “Even though AI is quite limited in its implication within products, we have tried to let companies regulate themselves on how they can design their products to be good for society and prevent those products from being misused. When AI starts taking over more aspects of the product, especially when AI is the product itself and not just a part of it, you're going to need government or global regulatory authorities to ensure that it can only be used in the right ways, in the right manner, and that the biases in its development have been controlled and taken care of. So, I think the only way to ensure the responsible use of AI is through very transparent and good public and private partnerships, where regulation is not looked at as limiting innovation but as a safeguard for safe innovation in artificial intelligence as this technology spreads,” Bhansali adds.

AI is a tool that can be used for great benefit as well as great harm, remarks DaveAI CTO & Co-founder Dr Ananth. He sees two ways to take this up. One line of research work is creating AI that can differentiate between AI-generated artefacts and human-created artefacts. The other is making sure that the data sources used to train AI are ethically sourced.
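
Dr Ananth's first point can be read as a classification problem: train a model to tell AI-generated content from human-created content. The sketch below is a deliberately tiny, hypothetical illustration of that framing using a bag-of-words text classifier; real detectors rely on far larger corpora and stronger models, and the in-line "dataset" exists only to keep the example self-contained.

```python
# Hypothetical sketch: framing "AI-generated vs human-written" detection
# as a simple text-classification task. The toy samples are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# label 1 = AI-generated, 0 = human-written (toy placeholder data)
texts = [
    "The sunset was a symphony of colours across the ancient harbour.",
    "In conclusion, the aforementioned factors demonstrate significant impact.",
    "honestly i just grabbed coffee and ran for the 8am train, total chaos",
    "my cat knocked the plant over again, third time this week lol",
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["Furthermore, the results indicate a notable improvement."]))
```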

The Solutions?

What can be done at the industry, government, and societal levels in this regard? What will be the challenges in such an endeavour? What is the role of the government and policymakers?

Policy always plays catch-up to technology innovation, and the race is getting harder and harder for users and regulators alike. Like any other technology, AI is ‘value-neutral’, says Munavar Attari, Managing Director, Fleishman Hillard India, a consultancy that advises corporates on their internal and external stakeholder communication strategies, an area continuously impacted by rapidly changing technologies.

It is culture and society that attach morals and dogma to its usage. Governments need to urge technology providers to self-regulate, learning from over three decades of experience in creating mainstream technology products. Corporates need to start integrating AI exposure and usage into their ‘code of ethics’ for employees and avoid being laggards, as many were with social media usage.

According to Attari, what is genuinely missing is mass public awareness and communication campaigns from governments, civic-tech bodies or civil society about the “AI deluge that is already upon us in so many ways”.

Attari raises some deep issues and pertinent questions with regard to AI. “More than social media, AI creates the fundamental need in societies to define ‘what is good’, ‘what is to be protected’ or ‘what is virtuous’ – in other words, societies have to redefine or relook at their objective moral frameworks and rediscover themselves and their collective consciousness. When AI nudges employers about whom to hire or fire, or profiles individuals on the basis of characteristics, features, etc., it can be perceived as racist, misogynist and discriminatory in one society but not in another. Therefore, a public debate on societal ethics and its reflection in AI-powered products will be critical. Will each society or culture have its own version of truth, or of what it allows or disallows? If yes, how? These are the deep fundamental questions that need to be asked before the tactical issues related to implementation and usage are discussed,” he explains.

Amit Verma feels that we need a broadly accepted AI governance model. As the interaction of AI technologies with our socio-political and economic environment increases, we cannot quantifiably estimate its consequences, Verma says.

“It means we don't know what we are getting into, which is the biggest challenge as I see it. We are at a very nascent stage in both the private and public sectors with setting governance standards in the development and the use of AI. We need a broadly accepted AI governance model, which is sector/ technology/ algorithm agnostic with data governance best practices. Several countries have started implementing national data governance bodies, which usually have representatives from corporations, public entities, and NGOs. Organisations must become early adopters of such frameworks and share feedback to improve the governance model over time. The government should also develop a risk assessment and prediction framework to gauge the psychological impact of various AI tools on human beings,” he explains.

Devang Mundhra feels that it is important for firms, educational institutions, and governments to create awareness about the technology and the downsides associated with its unethical and unfair use.

According to Mundhra, governments need to bring in regulation against discrimination in AI and build an ethics framework that ensures privacy and data protection. “These regulations need to focus on the ethical considerations that come with AI technology. Policymakers need to strike the right balance within their policies to tackle ethical issues such as bias, privacy, and discrimination. They should not view regulation as presenting an either/or choice; instead, they should strive to craft regulations that accomplish both goals of promoting innovation and fostering ethical use. Collectively, we need to ensure that the development of AI systems is diverse and inclusive.”

Siddharth Bhansali also proposes a public-private partnership. “We really do need a tight integration of public and private sponsorship and partnership in developing AI for consumer-grade products and services. What this means is that governments are going to have to create the routes within which companies can operate. But at the same time, governments will have to, in their own way, enforce the way AI is developed and trained, so that it is done through a representative set of data, that it's not just trained from a particular perspective, and that minorities especially, their perspectives and their needs, are addressed as part of those training sets as well. So we have to look at developing AI in an inclusive manner, unlike any other technology that has come before. Think about societal inclusivity when you develop cars, for example: now you do think about accessibility, but when cars were first developed, it really wasn't something that was top of mind for anyone,” Bhansali says.

He feels that a global perspective is essential. “Even if we're developing something as simple as a ballpoint pen, whether you develop it for America or for India, for Japan or for China, it's the same ballpoint pen; there’s not much that needs to change. But AI is going to be global; AI is going to be influencing things on a global landscape. So, you have to have a global perspective when you're building these things, and that should not just come down to the data you are using and making sure that the data is representative of the globe at that point of time. I truly believe that we must enforce diversity in the leadership teams of AI companies, because that's the only way we can really ensure that all perspectives are going to be considered. So I think that's what needs to happen at a government and society level. It's got to be inclusive, it's got to be diverse, it's got to include people from various walks of life. If we truly look at AI for its strengths, the point is that it can push humanity forward, but it can equally marginalise certain sections of humanity if it's not done correctly,” explains Bhansali.

The private sector

How do we ensure the private sector develops AI technology ethically? What is the role of the sector in formulating a code of ethics for AI?

The private sector is more market-driven, so if investors and consumers demand ethical AI, the sector will follow, feels Devang Mundhra. “It is similar to how market responses forced the use of biodegradable containers or ethically sourced raw ingredients. Having said that, the private sector can establish bodies that recognise the potential uses and misuses of AI, and take steps to avoid the predictable misuse of AI by committing to a code of ethics. Enterprises should incorporate value-based principles for internal AI activities to guide their conduct, and these initiatives should be binding and involve voluntary compliance by companies using the technology,” he adds.

Attari maintains that there is no silver bullet, but citizens’ trust and ecosystem credibility are critical, and everyone has a role to play. According to him, the answer has been the same for all previous technologies, and that is trial and error. “The only issue is that the stakes are exponentially higher now, and the ecosystem has so far fallen short of communicating the pros and cons to the layperson on the street. For instance, how will AI-users (companies) explain to end-users how and why they were (automatically) categorised as ‘high-risk’ for an insurance cover or some other mortgage? The communication, public acceptance, and adoption of the technology will be far more challenging than actually creating it,” he says.

The first place to start is to have the right objective, says Amit Verma. “While setting up goals for your AI/ML-based solution, your teams should align on zero negative impact on society. Some ethical AI technology objectives could be identifying fake news, generating fashion designs, diagnosing rare diseases, inventing a new vaccine, etc. The second step is to check for data biases. AI models can be a great tool in aiding decision-making and are not inherently biased. However, by inputting inappropriate data, inherent biases can emerge, leading to flawed guidance. For example, when datasets reflect inadequate social representation, the AI model can reproduce historical human biases. Identifying biases in the system and training the models on more diverse data, to include all types of race, region, gender, or other attributes, can largely avoid any negative impact of AI solutions,” he explains.
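
Verma's second step, checking for data biases, can begin with something as simple as comparing outcome rates across demographic groups in the training data. The sketch below uses hypothetical column names and a toy table to compute per-group selection rates and flag large gaps; it is a rough first-pass check in the spirit of his advice, not a complete fairness audit.

```python
# Rough first-pass bias check on training data: compare positive-outcome
# rates across groups and flag large disparities. Column names, the toy
# table, and the threshold are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "hired":  [0,    1,   1,   1,   0,   0,   1,   0],
})

rates = df.groupby("gender")["hired"].mean()
print(rates)

# Disparate-impact style ratio: lowest group rate vs highest group rate.
ratio = rates.min() / rates.max()
if ratio < 0.8:  # the informal "four-fifths rule" used as a rough threshold
    print(f"Warning: selection-rate ratio {ratio:.2f} suggests possible bias; "
          "consider rebalancing or augmenting the training data.")
```

A check like this on the training set is only a starting point; the same comparison should also be run on the model's own predictions before deployment.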

Siddharth Bhansali feels that the private sector needs to be backed by the public sector. It’s going to come down not to limiting regulation, but to enabling regulation, he says.

“We need regulation that enables the creation of AI technologies that don't have biases baked in and don't take a marginal perspective on what they're trying to do. So the private sector needs to be inspired and supported by the public sector, in both being enabled as well as being tested. One of the largest challenges that AI companies face today, and will continue facing, is knowing whether or not whatever they have built is yielding the outcomes they were designing for. In a small, enclosed testing group, you might say, ‘given these parameters, the AI is doing what we expected it to do’. But that assurance doesn't hold when it's something as large as this. So that, too, is where the private sector needs to be enabled by the public sector,” says Bhansali.

(Tomorrow: Artificial Intelligence and social biases)
