By Sumanth Kumar Garakarajula
Advocate | Former Media Professional | Executive Member, Central Board – ISAIL
Artificial Intelligence (AI), particularly Large Language Models (LLMs), is steadily entering legal practice and judicial administration. From legal research and drafting to translation of judgments and case management, AI tools promise speed and efficiency. However, beneath this promise lies a little-known but serious risk—prompt injection—which has the potential to undermine the fairness, integrity, and credibility of the justice delivery system.
What Is Prompt Injection?
Prompt injection is a method of manipulating an AI system by embedding carefully crafted instructions within user inputs. These instructions can override or bypass the system’s original safeguards, causing the AI to generate biased, misleading, or procedurally unsound outputs.
In a legal context, this could mean that an AI tool intended to provide neutral legal assistance is induced to selectively interpret law, emphasise certain precedents, or suppress critical legal principles—often without any visible indication that manipulation has occurred.
A Timely Judicial Warning
The gravity of this issue was recently highlighted by Justice P. S. Narasimha of the Supreme Court of India, who cautioned that artificial intelligence has the potential to replace human thinking if not properly understood and regulated. He emphasised the need for the legal fraternity to engage with AI before it begins to shape, and possibly take over, the professional lives and reasoning processes of advocates and judges.
This warning assumes added significance when viewed alongside vulnerabilities such as prompt injection, which operate silently and can influence legal outcomes without overt detection.
Real-World Cybersecurity Concerns
The risks associated with prompt injection are not theoretical. Recent cybersecurity incidents reported globally have shown how AI systems, including widely used commercial platforms, were targeted through prompt-injection–based attacks. These incidents demonstrated how malicious inputs could extract restricted information, override safety controls, or alter AI behaviour.
Such developments underscore a critical reality: if leading AI platforms are susceptible to prompt injection, the consequences for AI-assisted legal and judicial systems—which demand far higher standards of accuracy, neutrality, and accountability—could be severe.
Why It Matters to Law and Courts
Law is founded on neutrality, consistency, and reasoned adjudication. If AI tools are used for summarising cases, identifying precedents, or supporting judicial workflows, prompt injection can subtly influence outcomes by:
- Highlighting favourable judgments while marginalising binding precedents
- Introducing skewed or incomplete legal reasoning
- Distorting factual summaries and case timelines
Such manipulation directly threatens natural justice, equality before law under Article 14, and procedural fairness guaranteed by Article 21 of the Constitution.
Risks for Advocates and Litigants
The increasing reliance of advocates on AI-assisted drafting and research tools also raises concerns. Compromised AI outputs may result in:
- Incorrect legal advice
- Omission of mandatory statutory provisions
- Misleading or fabricated citations
This exposes advocates to ethical and professional liability and places litigants at serious risk of prejudice.
Judicial Use of AI: A Double-Edged Sword
India’s digitisation efforts—through eCourts and AI-driven translation and research tools—are progressive and necessary. However, judicial reliance on AI brings with it the danger of automation bias, where AI-generated outputs are trusted without sufficient human scrutiny.
When combined with prompt injection vulnerabilities, such reliance can institutionalise errors and make accountability difficult to trace.
Legal Vacuum and Accountability
Existing statutes, including the Information Technology Act, 2000, do not specifically address manipulation of AI systems used in legal or judicial processes. This raises critical unanswered questions:
- Who is liable for a manipulated AI output?
- Can prompt injection amount to interference with the administration of justice?
- Could such manipulation, in appropriate cases, constitute contempt of court?
The Way Forward
To ensure AI strengthens rather than weakens the justice system:
- AI must remain assistive, not determinative
- Human oversight must be mandatory
- Judicial use of AI should be transparent and auditable
- Dedicated legal and judicial AI ethics committees should be institutionalised
Conclusion
Prompt injection is not merely a technical flaw—it is a constitutional and institutional challenge. Justice P. S. Narasimha’s caution, coupled with recent cybersecurity incidents involving AI platforms, makes it clear that the legal profession must understand artificial intelligence before it begins to reshape legal reasoning, advocacy, and adjudication itself.
As India moves toward AI-enabled justice, vigilance, regulation, and ethical clarity will be essential to preserve judicial independence, fairness, and public trust.
