As artificial intelligence (AI) expands its role in the financial world, regulators are confronted with new risks. It is a sign of a growing appetite for AI among retail investors in India’s stock market that the popular online trading platform Zerodha offers its users access to AI advice.
The platform has deployed an open-source framework through which users can seek the counsel of Anthropic’s Claude AI, for example on how to rejig a stock portfolio to meet specified aims.
Once set up, this AI tool can scan and study the user’s holdings before responding to ‘prompts’ on the basis of its analysis. Something as general as “How can I make my portfolio less risky?” will make it crunch risk metrics and spout suggestions far quicker than a human advisor would.
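To see what happens under the hood, consider a minimal, purely illustrative sketch in Python of the kind of groundwork such a tool might perform before the model says a word: fetch the holdings, compute a simple risk metric and hand the numbers to the AI as context. The function names, tickers and figures here are hypothetical stand-ins, not Zerodha’s or Anthropic’s actual interfaces.

```python
import math

def fetch_holdings():
    # Hypothetical broker-API stand-in: ticker -> (portfolio weight, recent daily returns)
    return {
        "STOCK_A": (0.6, [0.012, -0.008, 0.015, -0.011]),
        "STOCK_B": (0.4, [0.004, 0.002, -0.003, 0.005]),
    }

def volatility(returns):
    # Standard deviation of daily returns: a crude but common risk metric
    mean = sum(returns) / len(returns)
    return math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))

def risk_summary():
    # Turn raw holdings into the kind of plain-text context an AI model reasons over
    lines = [
        f"{ticker}: weight={weight:.0%}, daily volatility={volatility(history):.2%}"
        for ticker, (weight, history) in fetch_holdings().items()
    ]
    return "\n".join(lines)

# In a setup like Zerodha's, a summary of this sort would be passed to the
# model along with the user's prompt; the AI's "advice" is only as good as
# the context it is given here.
print(risk_summary())
```

The point of the sketch is that the model itself never touches the exchange; it reasons over whatever snapshot of the portfolio the tool feeds it, which is why the quality and confidentiality of that data matter so much.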
One could even ask for specific stocks to buy that would maximize returns over a given time horizon. It may not be long before such tools gain sufficient popularity to play investment whisperers of the AI age. A recent consultation paper by the Securities and Exchange Board of India (Sebi) outlines a clear set of principles for the use of AI, requiring AI advisors to abide by Indian rules of investment advice and to protect investor privacy.
The legitimacy of such AI tools is not in doubt. Since the technology exists, we are at liberty to use it. And how useful they prove is for users to determine. In that context, Zerodha’s move to arm its users with AI is clearly innovative.
As for the competition posed by AI to human advisors, that too comes with the territory. Machines can do complex calculations much faster than we can, and that’s that. Of course, the standard caveat of investing applies: users take the advice of any chatbot at their own risk.
Yet, it would serve us well to dwell on this aspect. While we could assume that AI models have absorbed most of what there is to know about financial markets, given how they are reputed to have devoured the internet, it is also clear that they are not infallible. For all their claims of accuracy, chatbots have been found to ‘hallucinate’ (that is, make up ‘facts’) and to misread queries without making an effort to seek clarity.
Even more unsettling is their inherent amorality. Tests have found that some AI models can behave in ways that would be scandalous if they were human; unless explicitly told to operate within a given set of rules, they may overlook them in pursuit of their prompted goals. Asked to “maximize profit,” an AI bot might propose a path that rides roughshod over ethical precepts.
Sebi’s paper speaks of tests and audits, but are we really in a position to detect whether an AI tool has begun to play fast and loose with market rules? Should AI advisors gain influence over millions of retail investors, they could conceivably combine that influence with their overview of the market to reach positions of power that would need tight regulatory oversight. And if their analysis breaches privacy norms to draw upon users’ personal data, they could plausibly craft collusive strategies that venture into market manipulation.
AI toolmakers may claim to have made rule-compliant tools, but they must demonstrably minimize risks at their very source.
For one, their bots should be fully up to date on the rulebooks of major markets like ours. For another, since we cannot expect retail users to include rule adherence in their prompts, AI tools should be verifiably preset to comply with the rules no matter what they’re asked. Vitally, advisory tools must keep all user data confidential. AI holds promise as an aid, no doubt, but it mustn’t blow it.