India Artificial Intelligence Governance Guidelines
The Ministry of Electronics and Information Technology (“MeitY”) has released the comprehensive ‘India AI Governance Guidelines’ (“Guidelines”), establishing the foundational policy for the development and deployment of Artificial Intelligence (“AI”) in India. Developed by a Drafting Committee (“Committee”) set up by MeitY in July 2025, the framework takes a strategic, coordinated, and consensus-driven approach intended to balance rapid AI innovation with accountability and safety. It is anchored in 7 (seven) core principles, or “sutras”, adapted from the FREE-AI Committee Report issued by the Reserve Bank of India (“RBI”). These sutras (explained below) establish an agile, balanced, and pro-innovation philosophy that leverages existing statutes and sectoral regulators rather than proposing a new, all-encompassing AI law at this stage. The ultimate objective is to harness AI as a force multiplier in achieving the national aspiration of Viksit Bharat by 2047, ensuring that its benefits are inclusive, safe, and globally competitive.
Foundational principles: The 7 (seven) sutras
The governance philosophy is grounded in 7 (seven) guiding principles (sutras), adapted for cross-sectoral applicability:
| Sutra | Core mandate | Legal and corporate relevance |
| --- | --- | --- |
| Trust is the foundation | Essential for innovation and adoption at scale. | Underpins the need for strong safety and transparency measures. |
| People first | Human-centric design, oversight, and empowerment. | Requires human-in-the-loop mechanisms at critical decision points. |
| Innovation over restraint | Responsible innovation is prioritised over cautionary restraint. | Sets the permissive, non-restrictive tone of the framework. |
| Fairness and equity | Promote inclusive development and avoid discrimination. | Requires bias detection, fairness testing, and culturally representative datasets. |
| Accountability | Clear allocation of responsibility and enforcement of regulations. | Central to establishing the graded liability system. |
| Understandable by design | Provide disclosures and explanations to users and regulators. | Mandates transparency frameworks and explainability. |
| Safety, resilience, and sustainability | Safe, secure systems that are robust and environmentally sustainable. | Encourages resource-efficient, lightweight models and robust system design. |
The policy architecture: Pillars of governance
The Committee’s recommendations span the following 3 (three) domains:
- Enablement (infrastructure and capacity building): Focuses on expanding access to foundational resources, driving skill development, and integrating AI with digital public infrastructure for scale and inclusion.
- Regulation (policy and risk mitigation): The strategy favours relying on existing laws over creating a new AI Act at this stage. It emphasises developing an India-specific risk assessment framework based on empirical evidence of harm.
- Oversight (accountability and institutions): Requires organisations to adopt a graded liability system proportionate to their function and risk, and clarifies the roles of the various actors in the AI value chain and their respective obligations to comply with applicable law.
Mandatory and voluntary risk mitigation
Compliance will involve a combination of mandatory legal adherence and voluntary transparency measures.
- Content authentication (deepfakes): Given the acute threat of deepfakes, the industry is directed to explore techno-legal solutions like watermarks and unique identifiers, aligning with global standards (e.g., C2PA), to authenticate AI-generated content and establish provenance.
- Data protection: The forthcoming implementation of the Digital Personal Data Protection Act, 2023 is expected to resolve key AI-related issues, such as exemptions for training models on publicly available personal data and the compatibility of consent and purpose limitation principles with modern AI workflows.
- Copyright: The Committee suggests that the dedicated committee on AI and copyright constituted by the Department for Promotion of Industry and Internal Trade (“DPIIT”) consider a Text and Data Mining (“TDM”) exception to copyright law, to enable large-scale, innovative model training while protecting content creators.
- Transparency and auditing: Firms need to publish transparency reports (which may be shared confidentially with regulators, if sensitive) and establish effective, multi-lingual grievance redressal mechanisms. The Guidelines also encourage algorithmic auditing and self-certifications as accountability mechanisms.
The institutional architecture for oversight
The framework mandates a ‘whole-of-government’ institutional approach, which includes:
| Institution | Role and function | Corporate interface |
| --- | --- | --- |
| AI Governance Group (“AIGG”) | High-level coordination: Permanent inter-agency body chaired by the Principal Scientific Adviser to coordinate policy across all ministries and regulators. | Sets the strategic direction; oversees cross-sectoral governance issues. |
| Technology and Policy Expert Committee | Advisory body: Provides expert inputs on frontier technologies, emerging risks, and global policy to the AIGG. | Technical input into policy development (e.g., risk classification, legal amendments). |
| AI Safety Institute (“AISI”) | Technical validation: Conducts research, risk assessment, safety testing, and develops draft standards. | Source of technical guidance, standards, and safety tools for the industry. |
| Sectoral regulators | Domain enforcement: Agencies such as the RBI, the Securities and Exchange Board of India, the Insurance Regulatory and Development Authority of India, and the Indian Council of Medical Research retain the lead role in issuing sector-specific rules, providing guidance, and enforcing applicable laws. | Direct regulatory and compliance relationship for domain-specific AI applications. |
Conclusion
The Guidelines signal a clear direction: corporate India must immediately focus on embedding the 7 (seven) sutras into its AI lifecycle management, prioritise transparency, and prepare for differentiated accountability enforced by the respective sectoral regulators. The flexibility offered through these Guidelines is a strategic window for responsible self-regulation before mandatory, binding rules are introduced.
This Prism has been prepared by:
Probir Roy Chowdhury
Yajas Setlur
Pranavi Pera
Shivani Bhatnagar