The misuse of Grok, the artificial intelligence (AI) chatbot of X (formerly Twitter), to generate sexualised images of women without their consent has once again brought India’s weak AI guardrails into sharp focus. Experts underscored that such incidents highlight not just the absence of robust safeguards on AI platforms but also the urgent need for clearer liability frameworks and stricter penalties for harms caused by generative AI systems.

Echoing this, Akshaya Suresh, Partner, JSA Advocates & Solicitors, said there is an urgent need for strong content and safety controls on AI systems. These include prompt-level checks to screen and block unlawful prompts, as well as system-level safeguards to ensure that models do not generate illegal or obscene content even when such prompts are attempted (see the illustrative sketch below).

Suresh noted that under the Information Technology Act and the Intermediary Guidelines, the first step is the takedown of unlawful content following a government order. “More sanctions would follow and the intermediary will lose its safe harbour if the takedown notice is not adhered to. Conversely, where the unlawful content is taken down within prescribed timelines, stricter sanction may not follow,” she added.

She said that while India does not yet have a standalone AI law, a patchwork of existing statutes, including the IT Act, can be used to regulate AI-driven harms.
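To make Suresh’s distinction concrete, here is a minimal, illustrative sketch of the two layers she describes: a prompt-level check that screens a request before it reaches the model, and a system-level check on the generated output. The keyword lists and the `classify_output` and `generate` stubs are hypothetical placeholders; production systems rely on trained safety classifiers rather than keyword matching, and nothing here reflects how Grok or any particular platform is actually implemented.

```python
# Illustrative sketch only: two independent safety layers, assuming
# hypothetical blocklists and stub functions in place of real classifiers.
from dataclasses import dataclass

# Hypothetical examples of categories a deployer might block.
BLOCKED_PROMPT_TERMS = {"undress", "nudify", "remove clothes"}
BLOCKED_OUTPUT_LABELS = {"sexual_content", "non_consensual_imagery"}

@dataclass
class Result:
    allowed: bool
    reason: str = ""

def screen_prompt(prompt: str) -> Result:
    """Prompt-level check: refuse unlawful requests before generation."""
    lowered = prompt.lower()
    for term in BLOCKED_PROMPT_TERMS:
        if term in lowered:
            return Result(False, f"blocked prompt term: {term!r}")
    return Result(True)

def classify_output(output: str) -> set[str]:
    """Stand-in for a trained output-safety classifier (hypothetical)."""
    return set()  # a real system would return predicted harm labels

def moderate_output(output: str) -> Result:
    """System-level check: withhold unlawful content even if the prompt passed."""
    labels = classify_output(output) & BLOCKED_OUTPUT_LABELS
    if labels:
        return Result(False, f"blocked output labels: {sorted(labels)}")
    return Result(True)

def generate(prompt: str) -> str:
    """Placeholder for the actual generation model."""
    return f"<model output for: {prompt}>"

def handle_request(prompt: str) -> str:
    pre = screen_prompt(prompt)
    if not pre.allowed:
        return f"Request refused ({pre.reason})."
    output = generate(prompt)
    post = moderate_output(output)
    if not post.allowed:
        return f"Output withheld ({post.reason})."
    return output

if __name__ == "__main__":
    print(handle_request("draw a landscape"))            # passes both checks
    print(handle_request("undress the person in photo")) # refused at prompt level
```

The design point is that the two checks are independent: even if a harmful prompt slips past the first layer, the output check can still withhold the result, which is the redundancy that system-level safeguards are meant to provide.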