Regulating AI Part 1: Ground realities, trust & privacy concerns and how to bell the cat

With the growing adoption of Artificial Intelligence (AI) in all spheres of life, including business, science and technology, defence and medical science, the time has come to monitor this growth and put in place rules to govern what is fast becoming a new ecosystem.

Also Read: Regulating AI Part 2 - How the world is doing it and what India can do

The fear of AI models acquiring human-like agency and becoming sentient is no longer confined to the realm of science fiction. A recent news report brings the dangers of AI to the fore: a virtual test carried out by the US Air Force reportedly went awry when an AI-driven drone decided to “kill its operator” to remove any potential move to control or impede its mission.

Today, this concern is not limited to fictional narratives; even tech mavens share apprehensions about it. Who takes the responsibility when an AI model goes rogue – the algorithms or the humans who fed those algorithms into the systems in the first place?

These potential risks, along with numerous other challenges linked to AI, underline the necessity of establishing a comprehensive regulatory framework. Determining the appropriate boundaries, however, is a delicate line to tread. On the one hand, there is a need for regulations to govern AI; on the other, it is equally important to ensure that these regulations do not hinder or impede innovation in the field.

The Indian scenario

What are the key components of a regulatory framework that India should have in place to effectively govern AI technology and its applications across industries? 

To effectively govern AI technology and its applications across industries, India needs a comprehensive regulatory framework that addresses key components such as ethical and responsible AI, data protection and privacy, algorithmic transparency, standards and interoperability, cybersecurity, IP rights, and compliance and enforcement.

AI is a complex field that is evolving with new kinds of applications. India’s mixed sectoral economy needs a regulatory framework that encourages technological advancement in a flexible, trusted environment, opines Shayesta Shahzabeen, Strategy Development & Innovation Lead, BC Web Wise.

She feels that sector-specific regulatory frameworks for different industries are crucial, as is encouraging cross-industry collaboration through a common platform featuring industry experts, thought leaders, technocrats and others. Other important components are a data protection and privacy governance framework, transparency in algorithms, and accountability of systems.

To effectively govern AI in India, the regulatory framework should encompass several key components, says Pavan Punjabi, Chief Integration Officer, Makani Creatives.

“Clear ethical guidelines need to be established, ensuring that AI technology aligns with societal values, privacy protection, and fairness. Strengthening data protection laws is essential to safeguard personal and sensitive data, emphasizing informed consent, anonymization, and secure storage. Accountability and transparency should be prioritized, requiring organisations to disclose the logic behind AI algorithms, promoting transparency and accountability. Safety and security regulations are crucial to ensure that AI systems undergo testing, certification, and protection against malicious use,” he adds.

Regulatory sandboxes

Creating regulatory sandboxes can enable controlled experimentation and innovation in AI while providing a safe environment for testing and validating new AI applications. This can help regulators better understand emerging technologies and adapt regulations accordingly.

Regulatory sandboxes are controlled environments or frameworks created by regulatory authorities to facilitate the testing and experimentation of innovative products, services, or technologies, such as AI. These sandboxes provide a safe space for companies and developers to trial their novel ideas, products, or services in a controlled manner without being subjected to full-scale regulatory compliance.

The primary purpose of regulatory sandboxes is to strike a balance between regulatory oversight and fostering innovation. By allowing participants to operate under relaxed regulatory requirements or exemptions for a specified period, regulatory sandboxes encourage the development and deployment of new technologies while enabling regulators to understand potential risks, challenges, and implications associated with these innovations.

The regulatory framework for AI needs to be comprehensive and will require intervention on multiple fronts as the technology evolves, points out Gyan Gupta, Product Evangelist, Bada Business. According to him, the industry is already struggling with data governance, and AI makes it trickier.

According to him, a comprehensive data governance structure needs to be in place to monitor how data is collected, stored, shared and processed, and how privacy rights are protected. He adds that this will ensure that all data carries user approvals and has been secured, and that no privacy rights have been breached.

“An element of trust is very critical in the scheme of things and that can be established by incorporating complete transparency. All organisations in this domain have to be made answerable on the above parameters. Algorithm accountability also needs to exist, wherein organisations will have to define how the algorithms are being used to remove biases in the system because continuous usage of AI in the system creates biases. Ethical guidelines are very crucial and need to be well defined for generative AI. As each sector will have its own challenges, there can be sector-specific regulations to align with the product framework. The government needs to create a national testing environment, a sandbox for everyone to test applications.”

Responsible AI development

India’s emergence as a global technology powerhouse has led to a rapid expansion in the development and deployment of AI technology across various sectors. However, amidst this technological advancement, concerns have arisen regarding the responsible and ethical use of AI in the country. To address these concerns, it is crucial to identify and focus on specific areas within India’s current regulatory framework that require improvement or enhancement. By strengthening these areas, India can foster an environment that promotes the responsible and ethical development and deployment of AI technology, ensuring its benefits are harnessed while minimizing potential risks and societal challenges.

The next few years will see a host of AI applications across different industries and sectors, says Shayesta Shahzabeen. “Let’s take the use case of Generative AI in the media and advertising industry. This will lead to questions on ownership, IPR and other issues which can lead to a set of challenges,” she adds.

India’s current regulatory framework for AI requires enhancement in several specific areas, opines Pavan Punjabi. Strengthening enforcement and compliance mechanisms is crucial to prevent data breaches and unauthorised access, ensuring robust data protection. Introducing guidelines to address ethical considerations and mitigate biases in AI systems is necessary to promote fairness and non-discrimination. Regulatory oversight can be enhanced by establishing specialised bodies equipped with AI expertise, enabling effective compliance monitoring and addressing emerging challenges. Furthermore, investing in AI education and training programs will empower professionals with the necessary skills for responsible AI development and deployment, fostering a knowledgeable workforce capable of navigating the complexities of AI technology.

In the realm of AI, ensuring accountability and regulation is paramount to address potential risks and ethical concerns. As AI systems become increasingly pervasive, it is crucial to establish a central regulatory agency, says Gyan Gupta.

Concurring with Gupta on this, Pavan Punjabi also feels the need for the establishment of a central agency. “A new framework is being created nationally and globally. There are improvement areas like data governance, transparency, algorithm accountability and definition of ethics. We have to be mindful of our country’s landscape, how do we balance the ethics and manage the biases that we create. India, being a multicultural country with diverse perspectives and biases, requires careful consideration of individual biases in the development and deployment of AI systems. To ensure accountability in this process, the establishment of a central agency becomes crucial. This agency could be placed under the central administrative portfolio of a ministry, such as the Ministry of Science and Technology. By doing so, India can effectively regulate the creation of AI systems, taking into account the unique cultural context and individual biases, thus fostering responsible and ethical development in the field of AI,” he states.

Potential consequences

The absence of a robust regulatory framework for AI in India could have significant consequences across various aspects, including economic growth, privacy, security, and more.

Without proper regulations, the potential risks associated with AI, such as algorithmic bias and discriminatory practices, may go unchecked. (Remember the 2019 study, which found that a health care risk-prediction algorithm in the US showed racial bias.) This could lead to adverse societal impacts, exacerbating existing inequalities and biases. It could also undermine public trust in AI systems, hindering their widespread adoption and hampering economic growth driven by AI technology.

Privacy concerns also become more pronounced without adequate regulations. AI systems often rely on vast amounts of personal data, and without proper safeguards there is an increased risk of data breaches, unauthorised access and misuse of personal information. This could compromise individuals’ privacy rights and erode trust in digital technologies.

Moreover, the absence of a robust regulatory framework may also impact national security. AI applications can be vulnerable to exploitation by malicious non-state actors for cyberattacks, misinformation campaigns, or other nefarious activities. In the absence of stringent regulations, the potential for such security breaches and their subsequent consequences increases.

Without a robust regulatory framework for AI in India, there is a heightened risk of societal harm, compromised privacy, increased security vulnerabilities, and hindered economic growth.

India’s current regulatory framework for AI is still evolving, points out Shayesta Shahzabeen. According to her, the following are the areas that need strengthening:

  1. Data protection and privacy, along with ethical principles to address issues like bias and discrimination.
  2. Sector-wise regulations and principles.
  3. Testing mechanisms to ensure adherence to safety and reliability.
  4. Capacity building and a collaborative environment.

According to her, in the absence of a strong framework for AI-related data privacy and protection, there’s a risk of certain players taking advantage of open AI systems to their benefit.

A weak regulatory framework for AI in India can have significant consequences across various domains, points out Pavan Punjabi. It can impede economic growth by introducing uncertainty and limiting the adoption of AI, hindering innovation, productivity, and overall economic development across industries, he adds.

“Inadequate regulations also pose risks to privacy and security, potentially leading to breaches of personal data, erosion of trust, and hampering the progress of AI applications reliant on data. Insufficient guidelines increase the likelihood of biases, contributing to discriminatory outcomes in crucial areas such as hiring, lending, and law enforcement. Moreover, the absence of robust regulations raises ethical concerns, including the potential for the unethical use of AI technology, such as surveillance, manipulation, and infringement of individual rights,” Punjabi adds.

According to him, neglecting safety considerations further exacerbates the situation, as AI systems without proper regulations may pose risks to human safety, public health, and critical infrastructure. He stresses that it is essential for India to develop and enforce a strong regulatory framework to mitigate these potential consequences and foster responsible and beneficial AI development and deployment.

The absence of a regulatory framework will have many ramifications, warns Gyan Gupta.

“One would be ethical concerns arising from not having a framework in place, leading to discrimination and bias being incorporated into the system. Inherent biases will skew decisions when these tools are used in decision-making. Privacy risks will be very high if we don’t have a strong data protection framework in place, making individuals very vulnerable, as AI can collect vast amounts of data, sometimes even without the user’s knowledge. Unauthorised use and misuse will be rampant if no checks and balances are there. Lack of accountability is yet another potential outcome when there is no one to monitor organisations, developers and users, and it can backfire and diminish the positive growth that AI can bring about,” Gupta explains.

According to him, when a regulatory authority is absent, it leads to uncertainties regarding the legal and ethical obligations of organisations. In such a scenario, organisations may face a dilemma: they can either take the wrong path by disregarding ethical considerations or err on the side of caution by holding back from fully embracing and investing in AI technology. This hesitation and lack of clarity in decision-making can have significant economic implications, he adds.

He further adds that security concerns are also huge, as such systems are prone to vulnerabilities, leading to limited consumer protection. International collaborations, too, need clear definitions, which won’t be possible in the absence of regulatory frameworks.

(Tomorrow Part 2: How does India's regulatory framework compare to international standards and best practices in governing AI technology, and how can India strike a balance between promoting innovation and technological advancement in AI?)
