2024 AI Odyssey Part-1: AI’s dark underbelly and the road to regulation

Recently, a clip circulated on an online messaging board in which English actor Emma Watson appeared to be reading Hitler’s manifesto, ‘Mein Kampf’. Well, you guessed it right: it was the handiwork of some online crook (or crooks) using an Artificial Intelligence (AI) tool that can create digital replicas of people’s voices. When ElevenLabs, the AI company behind the voice-cloning tool, noticed the misuse, it imposed restrictions, including putting the feature behind a paywall. But such limits could not put an end to the barrage of AI-created voices.

Generative AI has taken the world by storm, but it has not been without challenges and consequences.

How do you deal with a misinformation-spouting AI chatbot? Who will take responsibility? Is it the AI machine, or those who have fed data into the machine, or the tech company behind that Large Language Model (LLM)? Adgully attempts to find answers to these questions and more in this two-part series.

Digital sweatshops!

The prevailing narrative that AI tools are dished out by shiny happy people inside gleaming Silicon Valley glass towers is just a modern-day bubble. An August 2023 investigative report by The Washington Post burst that bubble, exposing the dark underbelly of AI: the exploitation of workers in the Global South by tech firms like OpenAI and Meta, and the “digital sweatshops” in the Philippines, where AI models are trained on the back of low labour costs. The report highlights the often-overlooked human labour involved in maintaining the shiny new toy that is AI.

According to the report, payments were routinely delayed or cancelled at Scale AI, which provided services for Meta, OpenAI, Microsoft, and the US Department of Defense. “While AI is often thought of as human-free machine learning, the technology actually relies on the labour-intensive efforts of a workforce spread across much of the Global South and often subject to exploitation,” says the report.

ChatGPT gained global acclaim for its innovative AI capabilities, boasting over a million users within a week of its November 2022 release. However, a TIME investigation revealed that OpenAI used outsourced labour in Kenya, paying workers less than $2 per hour to label explicit content, highlighting the sinister side of the AI industry’s reliance on hidden human labour in the Global South to ensure the safety of AI systems.

“Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organisations to which OpenAI belongs, as quoted in the TIME report.

Global efforts

There is no doubt that the misuses and risks associated with AI are international in nature, and as such they call for an international response. Last year, none other than the top leaders at OpenAI, including CEO Sam Altman, called for an international watchdog, akin to the International Atomic Energy Agency, to regulate AI.

In 2023, we saw certain efforts in this direction.

The Bletchley Declaration, signed at the first international AI Safety Summit, hosted by the UK in November last year, outlined the commitment of countries (India included) to harness the potential benefits of AI while addressing the associated risks. The Declaration emphasises the importance of safe, human-centric, trustworthy, and responsible development and use of AI, and highlights the need for international cooperation to address the risks AI poses.

Then there was the G7 Leaders’ Statement on the Hiroshima AI Process, which underscored the potential of advanced AI systems while stressing the need for a policy framework for safe and trustworthy AI.

Big tech monopoly

This year, we may witness the reinforcement of Big Tech’s monopoly through the AI revolution, raising concerns about collusion and coordination among major players in the AI industry.

A slew of AI start-ups has flourished in recent times, but the stark reality is that they depend on Big Tech: Google, Amazon, and Microsoft dominate cloud computing, while Nvidia dominates the chips required to build AI tools.

All of this points to a growing consensus on the need for global regulation in the AI space. Industry experts underscore the importance of regulatory mechanisms to address risks, ethical considerations, and the potential concentration of AI power in a few large companies.

We need look no further than last year’s Hollywood writers’ strike and The New York Times’ lawsuit against OpenAI and Microsoft to understand why there is a growing consensus around the need for global regulations, observes Minhyun Kim, CEO of AI Network.

“Concerns over replacement and lost revenue will only continue to get louder, and the only recourse will be litigation. This is not the hallmark of a healthy industry. At the same time, AI’s growth has already created chip shortages, allegations of clickworker exploitation, concerns over training data bias, and serious privacy issues. With the advent of deepfakes, it has also become a powerful mis- and disinformation tool. People both inside and outside the industry have found these issues hard to ignore. The sense is that something needs to be done to ensure AI develops in accordance with the best interests of society,” Kim says.

Today, AI is being used to make crucial decisions, says Sheshgiri Kamath, CEO and Co-founder of Kapture CX. He believes that the global consensus on the need for AI regulation stems from unclear accountability for such decisions, which could have serious implications for individuals or society as a whole. Moreover, he adds, the lack of regulations, standards, or best practices poses risks in sectors such as healthcare, banking, and financial services, which are seeing increasing adoption of AI and deal with sensitive data.

The growing consensus on global AI regulation stems from concerns about ethical use, accountability, and potential risks, says Archit Agarwal, Founder and CEO, Tikshark Solutions.

“The absence of regulation poses challenges, risking unchecked deployment, privacy violations, biased algorithms, and a lack of standardised safeguards, impacting industries and societies worldwide. Unregulated AI development risks biases, privacy breaches, and safety issues. A global regulatory mechanism could establish ethical standards, ensure accountability, and foster innovation by providing a framework that promotes responsible AI practices and international collaboration in the industry,” he adds.

As the world recognises the global impact of artificial intelligence, the push for international regulation is gaining momentum, says Devdatta Potnis, CEO, Animeta.

“The consensus stems from the need for uniform standards, ethical concerns, and a level playing field. Without such regulation, industries and societies worldwide grapple with challenges posed by the uncharted terrain of unregulated AI. Unchecked AI development brings forth risks, including biases, privacy concerns, security vulnerabilities, and job displacements. The remedy lies in a global regulatory framework, enforcing transparency, fairness, privacy guidelines, security measures, and accountability,” adds Potnis.

Regulatory mechanism

And then there are the potential risks associated with unregulated AI development and deployment. The world needs a globally acceptable regulatory mechanism that could address these concerns while fostering innovation and collaboration in the industry.

In 2023, the European Union moved towards adopting the AI Act, first proposed by the European Commission, which will regulate AI applications across all 27 member states. The Act envisages an ‘AI Office’ to enforce and supervise its rules. However, some researchers point out gaps in the Act, such as assumptions about low-risk AI and a lack of reviewable criteria for classifying applications. Concerns include developers’ self-assessment of high-risk AI systems and the need for independent verification mechanisms.

Minhyun Kim sees two big risks. The first, according to him, is that control of the technology will ultimately reside with a handful of large companies.

“Control, in this case, means siloed ownership of the models, computing resources, and talent, with virtually no accountability other than to the bottom line. The result will be AI serving the profit needs of these companies first, not remaining people first. As a corollary, we risk repeating the mistakes of the social media era, where the lack of regulation resulted in a few companies making hundreds of billions of dollars in profit from user data and behaviour,” he explains.

The second risk, according to him, is deepening inequality. While the jury is still out on whether and when it will happen, rapid mass unemployment would be catastrophic to an already-widening wealth gap. This is because the jobs most likely to be replaced by AI are often held by the people least prepared to handle sudden job loss.

“Regulation can prevent monopolistic control of the technology. It can ensure that AI models are developed in the open source, that copyright laws are respected and data owners are adequately compensated, and that the technology is governed transparently outside of traditional corporate structures. It can also provide the foundation for a transition plan and the creation of programs, such as universal basic income and reskilling, to ease the burden of sudden job loss. And I would argue that regulations like these would do more for collaboration and innovation than an unregulated industry,” Kim says.

Sheshgiri Kamath is certain that unregulated application of AI poses risks such as unclear accountability, human rights violations, and moral value disparities globally. “A necessary solution is a globally acceptable regulatory mechanism, such as standardised agreements or responsible-use frameworks. Collaborative leadership, notably from the US and EU, is vital to balance innovation and address concerns, ensuring ethical AI development,” says Kamath.

(Tomorrow, Part 2 of this report will dwell on the principles and ethical considerations central to a regulatory framework for AI.)
