Almonds Ai CEO Abhinav Jain on the need for reining in AI

In this exclusive interview with Adgully, Abhinav Jain, CEO and Co-Founder, Almonds Ai, delves into the critical topic of global AI regulation. Jain sheds light on the imperative need for international collaboration in shaping ethical guidelines for AI development. With 2024 being a defining year for AI and for India’s role in GPAI, Jain emphasises the challenges posed by the absence of global regulations and discusses the potential risks associated with unregulated AI advancement.

Should there be global regulation in the AI space, and what specific challenges does the absence of such regulation pose for industries and societies worldwide?

2024 is a defining year for both AI and India. As Lead Chair of the Global Partnership on Artificial Intelligence (GPAI 2024), India clearly outlined in the ministerial declaration its commitment to collaborative AI, supporting projects that promote equitable access to the critical resources needed for AI advancement and R&D. The rapid development of AI brings both great possibilities and serious problems. Though AI can transform industries and enhance our lives in many ways, the absence of worldwide regulation has dire consequences for individuals, enterprises, and societies.

One of the major worries about unfettered AI is its capacity for bias and discrimination. Algorithms trained on prejudiced data can reinforce injustice and produce abusive outcomes. Consider a hypothetical recruitment AI that favours specific demographic groups based on historical hiring trends: it could inadvertently exclude equally qualified candidates from marginalised communities.

Another significant issue concerns privacy and security. AI systems that gather and process huge volumes of personal data are prone to hacking and misuse. Consider the risks of identity theft, targeted surveillance, and even the manipulation of individuals and entire populations. In addition, AI enables rapid automation, which could render millions of jobs redundant across sectors such as manufacturing and transportation. Unless proper safeguards and reskilling initiatives are in place, this may worsen social inequality and cause economic instability. These are only some of the challenges arising from the lack of global AI regulation. What is needed is an internationally coordinated effort that covers all bases and ensures AI benefits people rather than harms them.

What are the potential risks associated with unregulated AI development and deployment, and how could a globally acceptable regulatory mechanism address these concerns while fostering innovation and collaboration in the industry?

Imagine self-driving cars, sold on the promise of flawless algorithms, malfunctioning and causing accidents, or making discriminatory decisions that favour certain individuals over others. Or consider powerful AI systems falling into the wrong hands and being used to commit crimes.

To address these concerns, a globally acceptable regulatory mechanism should focus on several key principles:

  1. Developers and deployers of AI systems should be accountable for the impacts of their creations and transparent about how those systems operate.
  2. AI systems should be developed and implemented so that they do not reinforce or intensify prevailing social prejudices.
  3. Individuals’ data should be protected from unauthorized access and misuse through robust safeguards.
  4. AI systems should never operate without human oversight and control.

However, effective risk management balances risk reduction with innovation. There should be a flexible regulatory framework that can keep pace with the speed at which AI develops. It could include creating international standards and best practices, promoting industry-driven initiatives, and supporting cooperation between governments, researchers, and practitioners. By incorporating these measures, we can ensure that AI is created and used responsibly, laying the foundation for humanity’s future with this powerful technology.

In your opinion, what key principles or ethical considerations should be central to any regulatory framework for AI? How can these principles balance the need for innovation with the imperative to protect individuals and societies from the potential harms of unchecked AI advancement?

In my view, the AI regulatory framework should focus on two main things: human well-being and ethics. Here are some key principles that should be central to such a framework:

Human control: Machines should never operate autonomously without human oversight and responsibility. AI systems must remain under the strict control of humans, who should always play the decisive role.

Transparency: We must be able to understand how AI algorithms operate, what data they are trained on, and how they arrive at their decisions.

Fairness: AI systems should not reinforce or intensify existing biases in society. AI should be used fairly and in an inclusive manner so that it benefits everyone equally.

Privacy & security: The data privacy and security of individuals need to be fiercely guarded. Measures should be in place to prevent unauthorized access, misuse, and even discrimination based on personal data.

Accountability: Developers and deployers of AI systems should be responsible for their creations. They should be accountable for any damage their systems cause and ensure those systems are used for good.

To balance innovation with the protection of individuals and societies, the framework should also ensure that:

  1. Regulations promote building safety and risk-mitigation practices into the design process, rather than stifling creativity.
  2. Principle-based guidelines leave ample room for varied, tailored applications across fields and scenarios.
  3. Global collaboration and knowledge sharing around challenges and best practices stimulate responsible innovation.
  4. Independent oversight bodies conduct risk-based appraisals that help advance applications while safeguarding people.

What steps can be taken to ensure the acceptance and enforcement of AI regulations on an international scale, considering diverse cultural, economic, and technological landscapes?

Achieving both wide acceptance and enforcement of AI regulations internationally requires a multi-faceted approach that recognises and navigates the enormous variety of cultural, economic, and technological landscapes across the globe. Effective global governance will need the following:

  1. Inclusive initiatives such as the Global Partnership on AI, where representatives from various sectors and regions collectively draft guidelines and standardised frameworks.
  2. Governments adapting their regulatory frameworks to national needs under the umbrella of internationally accepted ethical standards.
  3. Firms investing in internal ethics boards and self-regulatory practices based on globally endorsed standards, alongside global platforms for sharing regulatory best practices and sector-specific applications.
  4. Involvement of stakeholders such as civil society, academia, and technical experts to keep policymakers informed about societal impacts and emerging challenges.
  5. Public outreach and communication about the purpose, benefits, and protections that regulations provide.
  6. Risk-based oversight through regulatory bodies, algorithmic impact assessments, and accountability mechanisms.

International regulation of AI is a complex field, but its diverse landscape must nevertheless be navigated collectively. By focusing on collaboration, engagement, and adaptability, we can develop a global structure that encourages responsible AI development while protecting people and societies from the risks of unconstrained progress. Working together, we can ensure that AI becomes a force for good in our interconnected world.
