Artificial Intelligence did not arrive in Indian law with a bang. It entered quietly—first as a research assistant, then as a drafting companion, and now, increasingly, as a silent influence on decision-making. Unlike previous technological changes, AI does not merely improve speed or efficiency. It changes how legal thinking itself is shaped.
In India, the conversation around AI and law often swings between two extremes: fear of replacement and blind optimism. Both miss the real issue. AI is not replacing lawyers or judges. It is reshaping responsibility, and Indian law is absorbing this change in phases—some visible, others unfolding quietly in the background.
Today, most advocates encounter AI as a tool that helps them read faster, draft quicker, and research deeper. Legal databases, summarisation tools, and drafting assistants are already embedded in everyday practice. This phase fits comfortably within existing legal frameworks. The advocate remains accountable; the machine has no voice of its own. Under the Advocates Act, 1961, responsibility cannot be delegated to an algorithm. If an AI tool produces an incorrect citation or a fictional judgment, the liability rests squarely with the lawyer who relied on it.
Yet, even at this stage, cracks are visible. AI hallucinations—confidently stated falsehoods—have already surfaced in court filings worldwide. Indian courts will not be immune. The danger is not malice, but misplaced trust. When speed overtakes verification, the quality of justice suffers.
A more complex phase is now emerging. AI is beginning to influence legal decision-making indirectly. Predictive tools suggest likely outcomes. Analytics estimate bail probabilities or litigation risks. While judges and lawyers may insist that machines do not decide cases, the psychological influence of algorithmic suggestions is real. When patterns generated from past data begin to guide present judgment, questions of fairness arise.
India’s constitutional framework is particularly sensitive here. Articles 14 and 21 demand equality, fairness, and due process. Historical legal data is not neutral; it carries the imprint of social, economic, and institutional bias. An AI trained on such data may unknowingly reinforce the very inequities the Constitution seeks to correct. If an algorithm nudges a decision, who is responsible for examining its reasoning? Unlike a witness, an algorithm cannot be cross-examined.
Beyond courtrooms, AI is increasingly shaping judicial administration. Case allocation systems, cause-list prioritisation, delay prediction tools, and court management dashboards are becoming part of the system. These technologies do not decide guilt or innocence, but they determine when and how a litigant is heard. Access to justice is no longer influenced only by human backlog, but by algorithmic logic—often invisible to the litigant.
The challenge here is opacity. When administrative discretion is replaced or supplemented by automated systems, transparency becomes essential. Without disclosure and auditability, justice risks becoming efficient yet inscrutable.
The next phase, already visible in regulatory spaces, is perhaps the most consequential. AI systems are increasingly used to flag suspicious financial transactions, detect tax anomalies, and trigger regulatory scrutiny. In such cases, AI does not merely assist—it initiates action. For the citizen or business affected, the first encounter with the State may now be an automated alert.
Indian law currently lacks a clear framework to address this. There is no comprehensive statute governing AI accountability, no enforceable right to demand algorithmic explanations, and no settled jurisprudence on liability when automated systems err. As regulatory reliance on AI expands, constitutional challenges are inevitable.
Looking further ahead, India will eventually confront questions that once belonged to science fiction. When AI-generated legal advice causes harm, who pays damages—the developer, the deployer, or the professional who relied on it? Can AI systems be audited with the rigour applied to expert witnesses? Should certain high-risk uses of AI be prohibited altogether?
These questions cannot be postponed indefinitely. Law has always evolved in response to power. AI is not merely technology; it is a new form of power—quiet, scalable, and persuasive.
Indian courts, including the Supreme Court of India, have repeatedly shown that constitutional principles adapt to changing realities. The same clarity will be required now. India does not need to reject artificial intelligence in law. But it must insist on accountability, transparency, and human oversight.
The future of Indian law will not be decided by machines. It will be decided by how wisely humans choose to use them.
Author
Sumanth Kumar Garakarajula
Advocate | Founder, Sumantu Law Associates
Sumanth Kumar Garakarajula is an Indian advocate focusing on technology law, constitutional issues, and emerging legal risks at the intersection of AI and governance. Through litigation, advisory work, and public legal education, he examines how law must evolve without surrendering accountability to automation.
