BW Businessworld

'Simplest Guardrail For AI Hallucinations? Be Skeptical, Double Check Outcomes & Don't Anthropomorphise AI'

In an interaction with BW Businessworld, Jitendra Putcha (Global Head, Data, Analytics and AI, LTIMindtree) talks about Generative AI and how it's here to stay but requires guardrails and regulations. He also shares LTIMindtree's vision about the impact and newer avenues Generative AI could help create broadly in three segments for the company


Jitendra Putcha, Global Head, Data, Analytics and AI at LTIMindtree



Enterprises have worked with AI for a while now, but the spotlight on it has never been brighter. Are companies increasing their efforts with AI-backed services and tools? 

Companies are increasingly adopting AI services and tools. This adoption can be viewed through two lenses: one is more exploratory and evaluative in nature, while the other is about immediate value realisation (revenue and cost). Automation and data-driven business models are whitespaces where the value of AI is being realised quickly. On the other hand, improved decision making, better customer experience, product and services innovation, and data storytelling are areas where enterprises are treading with caution, given the longer time to value realisation and the intangible, long-term nature of the benefits. 

How important is it for tech companies to hop on the Generative AI bandwagon? 

Generative AI fosters the creation of original material from existing content such as text, code, audio recordings or images. This technology has the potential to reimagine almost all aspects of software applications and can impact industries and services across consumer goods, retail, fintech, media and entertainment, marketing, the metaverse, etc. While GAI is still evolving, drawing the most from its potential will require organisations to imbibe the philosophy of HITL (Human In The Loop) and remain mindful of the security issues, privacy concerns and inherent biases of these technologies. By building guardrails – principles and ethical standards, factoring in public inputs, establishing governance and regulation, and following legislation – tech companies can ride the wave of innovation sooner and gain the edge this technology can offer. These guardrails will also be necessary for tech companies wanting to protect themselves while making the next big leap.

A lot of companies are scrambling to adopt and deliver Generative AI-based services. But there are dangers such as ‘hallucinations’ to worry about. Don’t you think there need to be guardrails around developments in the Generative AI space? 

There are always side effects when a shift of this nature emerges. A significant downside of GAI models is that they might produce content with falsehoods embedded, or that stretches the truth – distorting reality. These outputs are referred to as hallucinations. The simplest guardrail is to be sceptical, be willing to double check an outcome, and not anthropomorphise AI. But what if we fail to understand the source of the content itself? This poses a bigger problem! To abate such situations, globally accepted foundational standards will need to be instated, clearly delineating laws and ethical regulations. It must be a global agreement with mutually accepted stipulations, not the siloed effort of technology fraternities or governments. While the objective is not to create a “one-size-fits-all” directive, this overarching framework could set the base for country-specific regulations. 

Could you describe the problems that could come to the fore with AI hallucinations?

While tech companies are rushing to infuse AI into almost all product suites, this excitement carries a significant weakness. Making subtle changes to images, text or audio can deceive these systems into perceiving things that aren’t there. There are umpteen examples, both hilarious and horrifying, of GAI tripping up, which may not pose a “threat” if taken in the right spirit. But the problem lurks when a student takes a GAI response as “true” and believes it to be fact (academics), or machines identify rifles as helicopters (security), or machine-written code for a critical application goes live without peer review (technology), or translations turn pathological (linguistics). Such situations arise because little is understood about how these systems function, how they break and, if they break, how they would behave. We have experienced this in some form and shape since the advancement of AI over the last few years in business, politics, movies and elsewhere. 

How can hallucinations be prevented by companies?

While hallucinations are a relatively new and unwelcome byproduct of GAI and large language models, active efforts are being made to prevent the syndrome. Research is underway to detect and predict hallucinated content in neural sequences, tackle overfitting, protect IP, and “watermark” content generated by GAI. Approaches include Reinforcement Learning from Human Feedback (RLHF), solving quantitative reasoning problems with language models, knowledge graphs, etc. But these frameworks are still evolving and not without limitations. They take a considerable amount of time to build, which may be a luxury considering the pace of evolution. 
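One hallucination-detection idea from the research direction described above – checking a model's self-consistency – can be sketched simply: sample several answers to the same prompt and flag low agreement as a sign the model may be guessing. The sketch below is purely illustrative (it is not LTIMindtree's method), and the token-overlap similarity is a deliberately crude stand-in for real semantic comparison.

```python
# Illustrative self-consistency check: if several sampled answers to the
# same prompt disagree strongly, the response may be a hallucination.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two answers (1.0 = same tokens)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency_score(answers: list[str]) -> float:
    """Mean pairwise similarity across the sampled answers."""
    pairs = [(i, j) for i in range(len(answers))
             for j in range(i + 1, len(answers))]
    return sum(jaccard(answers[i], answers[j]) for i, j in pairs) / len(pairs)

def looks_hallucinated(answers: list[str], threshold: float = 0.5) -> bool:
    """Low agreement across samples suggests the model is guessing."""
    return consistency_score(answers) < threshold
```

In practice the answers would come from repeated calls to a language model at non-zero temperature, and the similarity measure would be semantic rather than lexical; the structure of the check, however, stays the same.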

What is your reading on Indian companies and their work on Generative AI in the past few months? What are the conversations like amongst your peers?

GAI is a disruptive force, and it comes with its share of dazzle and concerns, but it is here to stay! Automation will increase and legacy services will shrink with the intervention of these technologies, and Indian companies need to gear up to address the new kind of work arising from it. 

Acting fast is the key – AI upskilling for employees, collaborating with clients to create an AI roadmap, and partnering with academia and the startup ecosystem are some avenues to invest in. This is “the” inflection point. The time is ripe for Indian IT companies to up their game and reimagine the way they do business, gearing towards innovation and deep-tech consulting. The analysts’ sentiment is that GAI may slow down market share gains and deflate pricing for Indian IT companies in the short term, but only time can prove (or disprove) this hypothesis.

What is LTIMindtree working on in the space of Generative AI?

At LTIMindtree, we have made early investments in building GAI capabilities in our products and services business. The team consists of distinguished data scientists and researchers (Masters and PhDs) who are involved in tracking, learning about and researching these developments, including actively engaging with open-source communities and leading cloud providers. Our areas of research have been around report/content generation, intent discovery, appropriate content identification, etc. We are working with our partners to further propel these ideas across the client base, helping them identify and prioritise industry-wise use cases, create narratives and build niche solution offerings, guided by our principles of responsible and ethical AI. LTIMindtree is working to infuse GAI capabilities across proprietary industry-aligned solution offerings. 

How is LTIMindtree ensuring that its AI-backed services/products are safe and secure?

At LTIMindtree, we keep tabs on data and model drift. Temporal drift makes models decay over time and their predictions less reliable. Recalibrating models on new data and updating security layers and software are tactics to ensure these models stay safe from a technology and business perspective. Further, we are cautious while using off-the-shelf models and prefer to train models from the ground up so that we know what goes in, what happens inside and what comes out. We also work to understand possible biases the models may exhibit and consciously try to rectify them or communicate them to our clients. If open-source or vendor models trained in public clouds are used for further training and inferencing, we are extremely cautious about where the client data is going and how it will impact their ecosystem. AI scrutiny and forensics, as a discipline, is still taking its baby steps. Safety remains the frontier and the milestone to achieve.
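The drift monitoring described above has a well-known, simple instantiation: comparing the distribution of a live feature against its training distribution, for example with the Population Stability Index (PSI). The sketch below is an illustrative minimal version, not LTIMindtree's pipeline; the 0.2 threshold is a common rule of thumb, not a universal standard.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between training data and live data.
    Values above ~0.2 are commonly read as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            idx = min(int((x - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1  # clamp values outside the training range
        # smooth empty buckets to avoid log(0)
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def has_drifted(expected: list[float], actual: list[float],
                threshold: float = 0.2) -> bool:
    """Flag the feature for model recalibration when drift is significant."""
    return psi(expected, actual) > threshold
```

A monitoring job would run such a check per feature on a schedule and trigger the recalibration mentioned in the answer when drift is flagged.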

What is LTIMindtree’s plan for the future in Generative AI?

At LTIMindtree, we are excited and positive about the impact and newer avenues GAI could create, broadly in three segments. We will continue to work with our leading partners to co-create IP, develop and share knowledge bases, invest in nurturing the right talent pool, tools, technologies and frameworks, and create cost-effective solutions for client problems. We are amping up our work on identifying the right use cases, embedding the right technology to unleash the power of GAI, and propagating accurate information to our clients. Within the organisation, Generative AI will be extensively leveraged in our internal processes and systems to improve employee experience, productivity and quality. Our proprietary Fosfor product suite (solving for the data-to-decisions lifecycle) is at the forefront of our Generative AI efforts, and we continue to advance it with the latest developments to ensure its adoption is seamless for clients.

LTIMindtree remains committed to monitoring the ethical and legal implications as they mature, guided by our principles of responsible, explainable and ethical AI.

