AI Fear Factor - How Much Do They Know?
With AI and the commercial interests of surveillance capitalism business models, this deep, driving desire is at peril
Modern societies are built on concepts of free will and self-governance: societal authority rests on the opinions and feelings of people. While people share common ideas around universal notions like "the voter knows best" and "the customer is always right", free will itself is subconsciously shaped by many data points: cultural, spiritual, personal and more. In this age, AI can automatically manage and manipulate these data points at massive scale, in effect hacking human feelings, attitudes, beliefs and behaviours, and in essence changing the very fibre of society itself.
AI built on deep learning neural networks improves by training on more and more data and establishing complex associations and patterns. Algorithms learn to collect data and combine it with seemingly irrelevant data in new and mysterious ways to build a detailed relationship with a person. Recognizing objects on the road and making appropriate driving decisions, identifying customer behaviour patterns to spot fraud or recommend products, or even inferring sexual, political or religious leanings in order to capitalize on them are all within the realm of AI.
Principles of persuasion have always been applied in technology to keep people engaged with products and services. Historically the price of this persuasion was not that high, but AI and advances in machine cognitive capabilities make it extremely effective and scalable to massive levels, influencing billions of people. Case in point: the recent US elections, events in Myanmar, and the fact that around 70% of YouTube videos watched are AI-recommended.
Through phones, wearables, home appliances, IoT devices and more, data is constantly being fed to the cloud. With the amount of personal data out there, the availability of supercomputing power, and advances in AI and the cognitive capabilities of machines, there are additional challenges in making sure that the individual and the larger society are protected.
With the rise of AI, data privacy and protection need to be looked at very seriously in a new light.
Regulatory framework to protect individual and society
Typically, regulations cover a) the type of data being collected, b) the purpose of data collection and the use of the data, and c) rules on data retention. Companies have been required to use collected data only to provide core services and to anonymize personal data. With AI in the picture, these purpose-limitation, data-minimization and data-retention regulations (a, b, c) are inadequate, and enforcing them without completely overriding the goodness of AI is a challenge. With AI systems, establishing a data path showing how a decision was made is difficult, and how a decision will change with more data is equally hard to predict. For example, personal data can be removed as per data-retention rules, but it cannot be unlearned by a model already trained on it. Anonymizing the data does not help either, since with enough big-data segments from various sources one can re-identify almost everybody. The problems with current regulation multiply because AI algorithms are black boxes that evolve beyond the understanding of their creators; they are not deterministic, they are stochastic.
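The re-identification risk is easy to demonstrate even without AI. The following is a minimal sketch, using entirely made-up names and records, of a classic linkage attack: an "anonymized" dataset is joined to a public one on quasi-identifiers such as ZIP code, date of birth and gender, and the names come back.

```python
# Toy illustration (hypothetical data): why anonymization alone fails.
# Names are stripped from the sensitive dataset, but quasi-identifiers
# remain and can be matched against a public record such as a voter roll.

anonymized_health = [  # name removed, sensitive attribute kept
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "X"},
    {"zip": "90210", "dob": "1980-01-15", "sex": "M", "diagnosis": "Y"},
]

public_voter_roll = [  # freely available, contains names
    {"name": "Alice", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "Bob", "zip": "90210", "dob": "1980-01-15", "sex": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "dob", "sex")):
    """Link rows that agree on every quasi-identifier key."""
    index = {tuple(p[k] for k in keys): p["name"] for p in public_rows}
    return [
        {"name": index[tuple(a[k] for k in keys)], **a}
        for a in anon_rows
        if tuple(a[k] for k in keys) in index
    ]

matches = reidentify(anonymized_health, public_voter_roll)
print([m["name"] for m in matches])  # both "anonymous" records re-identified
```

With just these three attributes, each record links to exactly one person; in practice, a handful of such attributes uniquely identifies most of a population, which is why removing names is not the same as anonymity.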
Regulatory frameworks should instead start looking at business outcomes and derivative work products. With AI, one can combine massive amounts of data of various types and come out with mysterious linkages and outcomes, making seemingly innocuous content more disruptive than explicit content.
The primary focus of regulation should be on how the data being collected affects the revenues of the organization collecting it, and on which entities have access to the AI models trained on that data.
Cost of free
When Gillette offers a free safety razor, the common understanding is that it is because of the lifetime cost of the cartridges. When an online service or social media platform is free, not much attention is paid to understanding its business model: how does it make money? Usually it is a matter of 'surveillance capitalism', where personal data, and the insights AI extracts from it, are what drive the revenues.
Loosely defined phrases in companies' privacy policies, such as "data could be used by associates and subsidiaries for offering services", should not make the cut any more. Any work derived from the data should remain with, and be used only by, the company that collected it, with the users' consent.
Having insight into the data being collected may not help much by itself, but it would help to know how that data impacts the business in terms of value.
Personal consent and anonymity
How can one give personal consent for data collection and utilization if the outcome cannot be clearly defined? Providing data for a critical healthcare service is in the interest of the user, but transparency about how the resulting machine learning models are utilized is absent today. It is very difficult for a regular user to understand the impact of sharing such data. The only way to remedy the situation is to be more stringent about the utilization of the data and to give the user an option of anonymity.
Currently, incognito browsing applies only to the local session on the device the user is connecting from; companies are still free to identify and collect data from these sessions. This should become truly anonymous, i.e. the user should have complete control over whether anyone out there can identify them, and companies should adhere to that strictly, under fear of being penalised by regulatory authorities.
"You are what your deep, driving desire is. As your desire is, so is your will. As your will is, so is your deed. As your deed is, so is your destiny." - Brihadaranyaka Upanishad
With AI and the commercial interest of surveillance capitalism business models, this deep driving desire is at peril.
Disclaimer: The views expressed in the article above are those of the authors' and do not necessarily represent or reflect the views of this publishing house. Unless otherwise noted, the author is writing in his/her personal capacity. They are not intended and should not be thought to represent official ideas, attitudes, or policies of any agency or institution.