New Delhi— Chief Justice of India (CJI) Bhushan R. Gavai on Monday acknowledged growing concerns over the misuse of Artificial Intelligence (AI) tools, including the circulation of morphed images of judges, but made it clear that regulating such technology lies within the executive’s domain, not the judiciary’s.
“We have seen our morphed pictures too,” the CJI remarked while hearing a Public Interest Litigation (PIL) seeking a legal framework to govern the use of Generative AI (GenAI) in judicial and quasi-judicial institutions. “This is essentially a policy matter. It is for the executive to take a call,” he added.
The bench, also comprising Justice K. Vinod Chandran, observed that the governance of emerging technologies is a matter of policymaking, not judicial intervention. At counsel's request, the matter was adjourned for two weeks.
The PIL, filed by advocate Kartikeya Rawal and argued with the assistance of advocate-on-record Abhinav Shrivastava, urged the Centre to create legislation or a policy ensuring the “regulated and uniform” use of GenAI in courts. The petition differentiated GenAI from conventional AI, stressing that its ability to autonomously generate new data, text, and reasoning patterns introduces serious risks, such as “hallucinations” — instances where the system fabricates case citations or invents legal principles.
“GenAI’s black-box nature creates ambiguity within the legal system,” the plea stated, warning that its opaque functioning could produce fake case laws, biased interpretations, or arbitrary reasoning — potentially violating Article 14, which guarantees the right to equality.
The petitioner emphasized that judicial systems rely on precedent and transparent reasoning, whereas GenAI's decision-making processes often remain inscrutable even to their developers. This opacity, the plea argued, makes meaningful regulatory oversight nearly impossible. It also cautioned that GenAI models, trained on real-world data, can amplify existing social biases, potentially undermining citizens' right to information under Article 19(1)(a).
Additionally, the petition highlighted the threat of cyberattacks on AI-driven judicial platforms, warning that automating court processes without robust cybersecurity measures could expose sensitive data and compromise judicial integrity.