
Srinath Sridharan

Independent markets commentator. Media columnist. Board member. Corporate & Startup Advisor / Mentor. CEO coach. Strategic counsel for 25 years, with leading corporates across diverse sectors including automobile, e-commerce, advertising, consumer and financial services. Works with leaders in enabling transformation of organisations which have complexities of rapid-scale-up, talent-culture conflict, generational-change of promoters / key leadership, M&A cultural issues, issues of business scale & size. Understands & ideates on intersection of BFSI, digital, ‘contextual-finance’, consumer, mobility, GEMZ (Gig Economy, Millennials, gen Z), ESG. Well-versed with contours of governance, board-level strategic expectations, regulations & nuances across BFSI & associated stakeholder value-chain, challenges of organisational redesign and related business, culture & communication imperatives.

BW Businessworld

Why Guardrails For AI Ethics Are An Important Ask

Artificial Intelligence is not a new subject: it has been around for over six decades as a formal discipline that brings together sciences, theories and techniques, including mathematical logic, statistics, probability, computational neurobiology and computer science


The famous science-fiction writer Isaac Asimov foretold the shape of artificial intelligence (AI) to come, long before the idea was incubated into commercial technology. He devised the Three Laws of Robotics as an antidote to the challenges of autonomous machines.

  • “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  • “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
  • “A robot must protect its existence as long as such protection does not conflict with the First or Second Law.”

In his later years, he added a fourth law (or rather a Zeroth Law, since he wanted it to precede the earlier three) in his fiction – a plot in which robots take over the governance of planets and humans.

  • Zeroth Law - "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

AI 101

Artificial Intelligence “is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

- John McCarthy, an early proponent of AI

Artificial Intelligence is not a new subject. It has been around for over six decades as a formal discipline that brings together sciences, theories and techniques (including mathematical logic, statistics, probability, computational neurobiology and computer science). Of late, we are seeing the beginning of the commercialisation of AI in retail settings – with the launch of ChatGPT from OpenAI, as well as Bard from Google, it is even becoming an everyday conversation.

But all of us have been using AI tools regularly for quite some time now, simply because they are embedded in many of the technologies and gadgets we use – maps that navigate our cars, facial recognition on our phones or the attendance system at work, chatbots in various apps, digital assistants like Siri, voice-based assistants that enable work productivity, and video gaming. The important technologies that enable AI are natural language processing, machine learning and deep learning.

Artificial Intelligence is a human-made technology, designed to simulate, replicate, replace or add to human intelligence. Artificial Intelligence tools are built to ingest voluminous amounts of differentiated data (structured as well as unstructured), analyse it and derive insights. Here is a word of caution: AI projects that lack an effective design, or that use incorrect or partial data, could end up with wrong and even harmful outcomes. In many cases, the inability to reverse-engineer why an outcome was derived becomes a challenge in the field of AI. Even so, researchers are not worried about the hypothesis of AI overtaking human intelligence in the immediate future.

Concerns around Ethics

That AI will have a larger presence in our daily lives is a given. The critical debate we need to have is how AI should be deployed in the functioning of individuals, communities, enterprises and governments. The cause and effect of AI, and how to guardrail its ethical boundaries, is ripe for discussion, mainly because of the way digital technology's intersection with human society has influenced human values, societal identity, human rights, and our interaction with the rest of the world. Having a framework around what constitutes AI ethics is critical from the point of view of societal existence. Such an ethics framework would be able to segregate what constitutes risky outcomes or behaviours from what would be acceptable as benefits of AI tools.

This is where the entire ecosystem of AI stakeholders and user industries has to come together and build a set of moral tenets, and socially responsible, measurable techniques for using AI for social good. This would necessitate that the stakeholders understand the various concerns and ultimately even question what makes us human. This foundational question would have to guide the ethical expectations of AI – especially in how it should influence society, human behaviour, industrial outcomes and fundamental rights. This alone will put to rest concerns that the automated learning systems of AI could create a new, machine-generated form of consciousness or behavioural outcomes of its own. It also has to offer assurances that AI development will avoid privacy violations, bias and discrimination.

An ideal state of AI ethics will have to offer traceability, inclusiveness, and responsible usage of resources and data. It will offer safety and security considerations for the world. For example, we are users of AI solutions like facial recognition; an ethical AI framework would develop and transparently publish guidelines on how such a system uses that data and what its capabilities are. A few principles of ethical guidelines would include:

Fairness: The data sets used, and the algorithms deployed, especially around personal information, have to ensure that there are no biases in terms of race, gender or ethnicity.
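One simple, illustrative way to probe for such bias is to compare a model's decision rates across groups (a check sometimes called demographic parity). The data below is entirely hypothetical, and this is only a minimal sketch of the idea, not a complete fairness audit:

```python
# Hypothetical sketch: checking demographic parity of a model's decisions.
# The decisions and group labels below are illustrative, not real data.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]            # 1 = approved, 0 = declined
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

def approval_rate(group):
    """Share of approvals among applicants in the given group."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

# A large gap between groups is a signal that the data or model
# deserves closer scrutiny for the biases described above.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"approval-rate gap between groups: {gap:.2f}")
```

A gap near zero does not prove fairness on its own, but a large gap is a concrete, measurable red flag of the kind an ethics framework could require teams to report.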

Explainability: For any AI bias or error, one needs the ability to trace back through the maze of complicated and asymmetrical algorithmic systems and data structures. Users of AI should be able to explain the source and authenticity of the data, and what their algorithms do.
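The simplest systems that satisfy this traceability requirement are transparent scoring models, where every output can be decomposed into per-feature contributions. The weights and applicant values below are hypothetical, and this is only a sketch of the principle (real explainability tooling for opaque models is far more involved):

```python
# Hypothetical sketch: a linear scoring model whose output can be traced
# back to per-feature contributions - a simple form of explainability.
weights   = {"income": 0.6, "debt": -0.3, "age": 0.1}   # illustrative weights
applicant = {"income": 1.0, "debt": 2.0, "age": 0.5}    # illustrative inputs

# Each feature's contribution to the final score is directly inspectable.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Report contributions from most to least influential.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>6}: {c:+.2f}")
print(f" score: {score:.2f}")
```

When a decision is challenged, such a breakdown lets a reviewer point to exactly which input drove the outcome – the kind of audit trail an ethical AI framework would demand.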

Transparency: This has to be maintained for every action, and for the usage of all inputs, across the entire AI process.

Misuse: Who will have the expertise and authority to ensure that AI is not misused? After all, AI algorithms and systems may end up with bad non-state actors (or even state actors at times) who use them to produce a negative impact.

Responsibility: The participants should take responsibility for all usage and outcomes of AI. This has to be developed within the framework of existing or new laws, the regulations of various nations and regulators, and institutional mechanisms. Any negative outcome could mean loss of life or health, harm to individual or public safety, loss of financial resources, and more.

Ethical Dilemmas

India is rapidly becoming a hub for artificial intelligence (AI) research and development, with a growing number of startups and companies working in the field. As such, there is increasing interest in developing AI regulations that can help guide the responsible use of AI in India.  In 2018, the NITI Aayog launched the National AI Strategy, which aims to promote the development and adoption of AI in India. The government has also launched initiatives to train and upskill Indian workers in AI-related technologies. Artificial Intelligence is being used in a variety of applications in India, including healthcare, education, finance, agriculture, and transportation. For example, AI is being used to improve healthcare outcomes by analysing medical data and assisting with diagnosis, while in agriculture, AI is being used to optimise crop yields and reduce waste.

There is a school of thought that it would be impossible, not just difficult, to design broadly adopted ethical AI systems. This view holds that ethics are generally hard to codify and, in a modern-day context, even harder to implement. Context matters when it comes to explaining ethics-based norms, and a greater doubt is whether the power and authority of the actors involved would override them. In addition, social and cultural norms and standards are constantly changing. Will we be able to develop an umbrella ethical code that covers all of the above?

Apart from the development of an ethical framework, questions abound on who decides on what’s ethical. What’s the grievance redressal mechanism for any such breach? What’s the overall governance mechanism? Who supervises the ethical regulations?  This is why this will be a tough journey in shaping policies, frameworks and regulations around AI.

Will humans lose control over AI, and will it lead to machines managing humans? One argument against this hypothesis is that human domination of the world is owed to human intelligence. From the inception of humankind till now, humans have evolved to create and use tools and techniques to control other species, drawing on their cognitive abilities. Can they continue doing so without AI overtaking them with its superior speed, accuracy and intellect-based capabilities? This is what the Singularity is about – an irreversible point at which technology's supremacy over humans is set. Another way of understanding it is to accept that the human species would no longer be the most intelligent one on this planet.

This is why the topic of "ethical AI" is important. Ensuring society's ability to build ethical AI is essential for its long-term stability and positive impact. AI surely has tremendous potential for social good, with many commercial applications that could alleviate social issues – be it in healthcare, education, governance, or any of the other sectors that shape how we live and what we do. With its abundant emerging tech talent, India could benefit from the commercialisation possibilities open to its techpreneurs. This is why India has to gear up well in terms of AI regulations. This is one topic where it is better to err on the side of abundant caution, and proactively build technical capability into its policy, regulatory and governance mechanisms.

Srinath Sridharan is Policy Researcher & Corporate Advisor

Tags assigned to this article:
Srinath Sridharan magazine 22 April 2023