Decoding ChatGPT Part 2: The language conundrum, intelligence, and misuse

As of now, ChatGPT is far from perfect, as Bloomberg journalist Joe Weisenthal realised when he asked it to write his obituary. In the days to come, can we expect an error-free ChatGPT (or other chatbots with far greater sophistication or human-like intelligence) that will overwhelm us with its intelligence? Or will an intelligent chatbot with a mind of its own remain confined to the pages of science fiction and the unbridled fantasies of futurists?

Also read:

Decoding ChatGPT Part 1 - Will it dethrone Google? Not really, say experts

The short answer, says Nitin Raj, CEO and Co-Founder, Riverum, is that while we may expect some progress in AI technology in the coming days, the reality is that an error-free and perfect ChatGPT (or any other chatbot) is still a long way off. He observes that while AI technology is advancing at a rapid pace, it is still limited in its ability to understand and respond to human emotions, which is essential for it to truly be considered an intelligent chatbot.

“ChatGPT (or any other chatbot) will require vast amounts of data, both informational and conversational, to have any chance of developing the kind of intelligence needed to respond accurately to human emotions. This requires not just data, but also time and effort to properly train the system to recognise and interpret emotional context. In addition, there are still many unknowns in terms of the exact type of data needed, making it difficult to predict just how long it will take to train a system to the level of a perfect ChatGPT. That said, there is hope that in the near future, AI technology will become more advanced and able to better understand and respond to human emotions. However, we are still a long way off from creating a perfect ChatGPT with human-like intelligence,” says Raj.
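What “training a system to recognise and interpret emotional context” looks like at its very simplest is a small supervised classifier. Below is a purely illustrative sketch, assuming scikit-learn is installed; the texts and labels are invented, and the fact that four examples are nowhere near enough is exactly the data gap Raj describes.

```python
# A toy illustration of training a model to recognise emotional tone.
# Assumes scikit-learn is installed; the texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I am so happy with this!", "This made my day",
    "I am furious about the delay", "This is terrible and upsetting",
]
labels = ["positive", "positive", "negative", "negative"]

# Bag-of-words features plus a linear classifier: real emotion recognition
# needs far more data, context and nuance, which is the gap Raj describes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Likely "negative" here, since "delay" and "terrible" were seen in
# negative examples; sarcasm or irony would defeat a model this crude.
print(model.predict(["The delay is terrible"]))
```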

The sentient factor

Eyebrows were raised when Google engineer Blake Lemoine, then on the company’s Responsible AI team, said that its conversational AI, LaMDA, was ‘sentient’. He found that it has “feelings, emotions and subjective experiences,” a claim refuted by Google, which took action against Lemoine for breaching confidentiality.

Can AI have human emotions?

Artificial Intelligence can never replicate or produce human emotions, asserts TRA CEO Chandra Mouli. He maintains that emotions, humour, sarcasm and irony are subtle, distinctly human means of communication that are difficult to reduce to algorithms, and that AI is yet to decipher them.

“AI’s inability to experience these human specifics is the reason it cannot generate them with originality, which comes from experience and not just learning. This is an essential aspect that will continue to separate human intelligence from artificial intelligence,” he points out.

The language factor

Another key factor is the machine’s inability to comprehend the complexity of multiple languages, just as it is unable to capture the subtleties and nuances of human emotions in their entirety. The Atlantic columnist Ian Bogost, for instance, found that ChatGPT doesn’t “truly understand the complexity of human language”. Any responses it generates, he notes, “are likely to be shallow and lacking in depth and insight”.

It is unlikely that ChatGPT or any other language model will ever be completely error-free and perfect, asserts Nikita Bhate, Director - Digital Integration Strategy at Chimp&z Inc. She feels that language is a complex and nuanced system, and there are always going to be edge cases and exceptions that the model may not be able to handle. “Additionally, as language and usage of language are constantly evolving, it will require the model to be retrained and updated to reflect the new information. However, OpenAI and other research organisations are constantly working to improve language models like ChatGPT and make them more accurate and versatile,” she adds.

Concurring with Bhate on the language part, Hitarth Dadia, Chief Marketing Officer, NOFILTR.GROUP, says that we ourselves don’t fully understand the complexities of language as a concept. “The same pronunciation can have a different impact in different places, and there is vast cultural diversity when it comes to language. There is no universal language yet; we all assume the values of certain sentences and emphases simply because of how we are wired, not because we purposely think about it and make those decisions; it’s just how we have evolved to communicate. An AI, on the other hand, is tasked with optimising itself again and again, synthesising a sheer quantity of data that no human possibly could. So obviously, if there is any room for improvement, it has to be in AI. Ironically enough, if anything could explain depth in terms of language, literature, vocabulary and communication, it would be AI. The problems we face while communicating through AI or with AI will help us articulate our language better; we will understand why something works and why it doesn’t. It won’t happen anytime soon, since we are also constantly evolving in our ways, and the data that AI has to synthesise keeps changing with us. So it’s going to evolve at a much faster rate,” explains Hitarth Dadia.

And this is particularly pertinent in a country as diverse and disparate as India, points out Anand Chakravarthy, Chief Growth Officer, Omnicom Media Group India. According to him, this is a major lacuna with respect to the usage of language in the Indian context.

“The future of text and voice search in India is vernacular. As a result, in the Indian context, language is going to be a challenge for ChatGPT as it has been trained in English. While Google Search in languages is not yet perfect, it has made significant progress in this area. So, for ChatGPT to get a large share of searches in India, language is key, and it looks like this progress is some time away,” says Chakravarthy. 

Mitesh Kothari, Co-founder and CCO, White Rivers Media, feels that the technology is in its nascent stage, and as such is error-prone. He believes that human behaviour is convoluted, and when something is meant to function in sync with it, errors are bound to happen. He adds that it is common for any technological revolution like ChatGPT to undergo this initial phase of errors: error code 1020, busy servers, inapt responses, built-in biases, and the like.

“It needs to be up to the minute for us to utilise the latest knowledge and come up with trending, fresh content. In the current form, it is also sensitive to tweaks in input phrasing, so the same thing asked differently may yield different responses. We need to wait more as the technology is still in the learning phase. Rectifying steps taken in the precise direction can set things right and we can have a disruptive technology to work with,” notes Kothari.
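Kothari’s observation about sensitivity to input phrasing is easy to demonstrate in practice. Below is a minimal sketch, assuming OpenAI’s Python client is installed and an API key is set in the environment (the model name is illustrative): the same question is asked in two phrasings with sampling noise turned off, so any difference in the replies comes from the wording alone.

```python
# A minimal sketch of prompt sensitivity. Assumes `pip install openai`
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two paraphrases of the same question: small wording changes
# can steer the model toward noticeably different answers.
prompts = [
    "Is ChatGPT reliable for factual questions?",
    "Can I trust the facts ChatGPT gives me?",
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # remove sampling noise; differences come from phrasing
    )
    print(prompt, "->", reply.choices[0].message.content[:120])
```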

The pitfalls

At the same time, we should also be talking about the darker side of chatbots like ChatGPT, such as their ability to write malware and phishing emails. Critics have already pointed out its inherent biases, including declaring that the best scientists are white and male. Another potential danger is that the likes of ChatGPT can be misused or manipulated by those hellbent on running disinformation and hate campaigns.

There is only one way to keep a check on such transgressions: Human intervention!

Technology can sometimes be scary because of its highly advanced potential to be modulated in any direction, points out Mitesh Kothari. According to him, this is true for any tech innovation and can be solved with human intervention, which is the key to regularising the mechanisms and functioning, especially in the initial phase. He feels that we still need to wait more to see how things unfold for ChatGPT.

Systems based on AI depend on the training pool to generate responses, and because humans are involved in selecting and correcting responses, human biases also become visible in the AI model, says Chandra Mouli. According to him, “One key factor which will help in reducing biases is that the AI trainers should be as diverse as possible. Even very basic, incorrectly and inconsistently worded phishing mails fool people, and AI software will make those phishing scams seem more real. So it is important that humans become more aware of how the scams of the future will change, and be prepared.”

Everyone agrees that it is prone to misuse. So, it is important to have strong deterrent measures in place.

ChatGPT is an extremely powerful conversational chatbot; however, no one discounts the fact that it runs on an AI language model whose algorithm reads enormous volumes of data and text, and it can be programmed or manipulated, says Chinmay Chandratre.

“We are already in an era where data privacy is being questioned, and we are aware of how the evolution of social media has changed people’s behaviour. It definitely manipulates your opinions, views and sentiments. Along similar lines, there is a high chance that ChatGPT can be altered. Hence, it becomes extremely crucial to protect these AI systems with ethical guidelines, robust regulations and safety measures to prevent their misuse,” adds Chandratre.

Nitin Raj believes that ChatGPT and other AI systems have the potential to be incredibly powerful tools, but they also come with a host of potential risks. On the one hand, he points out, AI systems like ChatGPT can be used to create more personalised and efficient communication experiences, providing faster and more accurate responses.

“On the other hand, these systems have the potential to be manipulated to spread disinformation, or to reflect existing biases in the data they are trained on. I am particularly concerned about the inherent biases of ChatGPT and the way it can be manipulated by agents of disinformation. ChatGPT is trained on certain patterns, and these patterns can be biased towards certain groups or interests. This can lead to a situation where certain narratives become more widely accepted, without any real consideration of their accuracy,” Raj elaborates.

According to him, AI can be programmed to create malware and phishing emails with ease and at a much faster rate than humans ever could. “If this happens with dedicated effort and funding, the foreseeable outcome could be a surge in cyber attacks and data breaches, with potentially catastrophic consequences in the worst-case scenario,” he adds.

With every great technology come the greatest of cons, says Hitarth Dadia. “This is one of the biggest issues when it comes to AI, because an AI is only as good as the data-set it is given, and the data-set is only as good as the agent or human being programming it into that AI. So if anyone had malicious intent, they could program a similar data-set into an AI, and the AI would respond accordingly. It is going to amplify our best and worst elements as a civilisation, as a society and as human beings. If we are scared of all those things, we should be extremely cautious,” he says.

Dadia stresses that the AI industry has to be extremely cautious. According to him, whoever gets the power to program an AI has to be unbiased, open, and transparent.

“So, as long as proper hygiene is maintained in the kind of data-sets given to AI and the kind of programming done to it, we don’t have anything to worry about. But obviously we don’t live in a utopian scenario, and there will be a couple of bad apples. Still, the good that comes out of this is going to outshine the cons. There will be a lot of hiccups along the way, but in terms of its sheer intelligence, AI is going to open up so many doors we haven’t even thought about that the cons will be a very small part of it. People smarter than us are definitely working on it. We shouldn’t be too scared of malicious intentions, because they come with the territory. Social media itself has so many rotten apples, but we’re still dealing with it. It’s going to be a similar story for AI as well,” he explains.

The threat is certainly real, says Praveen Yeleswarapu, Head - Product Marketing & Engagements, BluSapphire. “ChatGPT and similar systems aren’t capable of distinguishing truth from falsehood or of understanding human emotions; they simply work the way their baseline AI models are trained. Hence, building robust governance around model training programmes is of utmost importance. At the same time, bad actors are always around, so building strong cyber resilience frameworks to contain ever-evolving cyber threats is a necessity,” he adds.

The right and the wrong

What does generative AI like ChatGPT get right, and what does it get wrong? Nikita Bhate breaks down its limitations this way:

  1. Lack of understanding: Generative AI models like ChatGPT are based on statistical patterns, which means that they don’t have a deep understanding of the text they are generating. This can lead to nonsensical or irrelevant responses (a toy sketch after this list illustrates points 1 to 3).
  2. Bias: Generative AI models can perpetuate and amplify biases present in the data they are trained on. This can lead to unfair or discriminatory responses.
  3. Lack of creativity: Generative AI models can only produce text based on patterns they have seen before, which means they cannot generate truly novel ideas or responses.
  4. Lack of context: Generative AI models may not be able to understand the nuances of a conversation or the context in which a statement was made, which could lead to inappropriate or offensive responses.
  5. Lack of accountability: Generative AI models like ChatGPT are not able to take responsibility for their actions, which makes it difficult to hold them accountable for any harm they may cause.
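
To make points 1 to 3 above concrete, here is a toy bigram text generator, a drastically simplified stand-in for models like ChatGPT, used only to illustrate the principle: it can only recombine word sequences it has seen, and any skew in its training text surfaces directly in its output.

```python
import random
from collections import defaultdict

# Toy "training corpus", deliberately skewed, like biased web text.
corpus = (
    "the scientist is brilliant . the scientist is famous . "
    "the assistant is helpful . the scientist is brilliant ."
).split()

# Learn bigram statistics: which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # sample only patterns seen in training
    output.append(word)

print(" ".join(output))
# The generator can never produce anything outside its training patterns
# (points 1 and 3), and "scientist is brilliant" dominates simply because
# it is over-represented in the corpus (point 2).
```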

Despite these challenges, Bhate believes that generative AI like ChatGPT has enormous potential to assist in many tasks, such as creating text, stories, poetry and other forms of creative content, as well as natural language processing tasks such as translation, summarisation and dialogue systems. It is important to continue researching and developing the technology to address these challenges, and to use it responsibly and ethically.

Concurring with Bhate, Nitin Raj says that while recognising and embracing the potential of AI, including ChatGPT, it is important that measures are taken to ensure the technology is not misused or abused. We must ensure that AI is developed and used responsibly, and that appropriate safeguards are in place to prevent its misuse, he concludes.
