Human abuse of AI ought to be our focus, not AI taking over the world

Anxiety over AI outsmarting and killing humans must not distract us from its here-and-now risks, cautions Jaspreet Bindra, Founder, The Tech Whisperer Ltd., UK.

Perhaps it was watching ‘himself’, or rather his deepfake, that prompted Joe Biden to push the AI regulation ‘nuclear button’. “I’ve watched one of me,” Mr Biden said, as reported in The New York Times, referring to an experimental deepfake of himself that his staff showed him, one convincing enough to pass for a genuine ‘presidential statement’. “I said, ‘When the hell did I say that?’”

Every week brings a slew of launch announcements in Artificial Intelligence (AI). This past week, however, was marked by a rush of declarations on how to regulate it. It began with the US surprising the world with Joe Biden’s Executive Order, which requires AI majors to be more transparent and careful in how they develop their models.

The most impactful takeaway from Biden’s Executive Order is that Big Tech and other large foundation model developers such as OpenAI, Google and Microsoft must divulge the results of the ‘red teaming’ safety tests they are required to run on each new model before it is released to the public. These results will be vetted against a high bar of standards by the National Institute of Standards and Technology (NIST), putting AI on par with chemical, biological and nuclear infrastructure testing. The requirement to watermark AI-generated content, so that it is clearly marked as such, is welcome for everyone and should help people discern deepfakes (like the one Biden was shown).

The US Order was followed by the AI Safety Summit convened by Rishi Sunak, attended by 28 countries (China included) and boasting the star power of Elon Musk, Demis Hassabis and Sam Altman; it produced a joint communique on regulating Frontier AI. The EU is racing to be next, China has already put out rules of its own, and India is making the right noises. OpenAI has announced a team to tackle Superalignment, declaring that “we need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

The race to develop AI has turned into a race to regulate it. There is certainly cause for optimism here: governments and tech companies are awake to the dangers this remarkable technology can pose to humankind, and one cannot help but applaud the fact that they are being proactive about managing the risks. Perhaps they have learnt their lessons from the ills that social media begat and want to do better this time. Hopefully, we will not need an AI Hiroshima before people sit up and take notice of the dangers.

On closer inspection, most of this concern and regulation seems to be directed towards what is loosely called Frontier AI – that point in the future when AI becomes more powerful than humans and perhaps escapes our control. Most of the narrative around regulating AI is focused on this future worry. My belief, however, is that we need to worry far more about the here and now, and the problems AI already has. Today’s large language models (LLMs) often hallucinate, playing fast and loose with the truth. AI-powered ‘driverless’ cars cause accidents, killing people. Most GenAI models are riddled with racial and gender biases, having been trained on biased datasets. Copyright and plagiarism problems abound, with disgruntled human creators filing lawsuits in courts for redressal. And the training of these humongous LLMs spews out CO2 and degrades the environment.

Instead of superintelligence-caused doomsday scenarios, which have a comparatively tiny probability, we need to focus on the many immediate threats of AI. The likelier danger is a malevolent state actor using deepfakes and false content at scale to subvert democracy, or a cornered dictator turning to AI-based lethal autonomous weapons to win a war he is losing.

AI might not harm us, but a human using AI could. We need to regulate humans using AI, not AI itself.
