India’s AI Governance Guidelines
If you have questions or would like additional information on the material covered herein, please contact:
Seema Jhingan, Founding Partner
sjhingan@lexcounsel.in
Saher Gandhioke, Associate
sgandhioke@lexcounsel.in
On November 5, 2025, the Ministry of Electronics and Information Technology (“MeitY”) released the India Artificial Intelligence Governance Guidelines (“the Guidelines”), India’s first comprehensive framework for the governance of Artificial Intelligence (“AI”). Issued under the IndiaAI Mission, the Guidelines do not introduce separate AI legislation but instead establish a governance framework that relies on existing laws, institutional oversight and phased regulatory development. This approach reflects a clear policy choice: prioritise innovation while progressively strengthening accountability as AI deployment expands across sectors.

RBI’s Suggestions on Responsible and Ethical AI

The Guidelines draw on the principles published by a committee set up by the RBI in August 2025 to develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence, and are anchored in seven principles to guide AI governance: trust, people-centric deployment, innovation over restraint, fairness, accountability, understandable by design and system safety. Taken together, these principles signal that AI is not merely a technological issue but a matter of public infrastructure, administrative oversight and rights-sensitive regulation. The framework is organised across enablement and regulation, linking capacity building and infrastructure expansion with risk management and institutional supervision. These principles may serve as interpretative aids for sector-specific regulators and courts when assessing compliance with statutory obligations already in place and when dealing with issues of negligence, due diligence and violation of fundamental rights.

Indian Laws and AI

A notable feature of the Guidelines is the decision to situate AI governance within existing Indian laws.
MeitY has clarified that several foreseeable AI-related harms can be addressed through various existing statutes, such as the Information Technology Act, 2000 (“IT Act”), the Bharatiya Nyaya Sanhita, 2023, the Digital Personal Data Protection Act, 2023 (“DPDP Act”) and consumer protection legislation, among others. This approach reinforces reliance on existing statutory duties, such as reasonable security practices under Section 43A of the IT Act and the data fiduciary and significant data fiduciary obligations under the DPDP Act. However, the Guidelines also acknowledge certain regulatory gaps, particularly with respect to:

(i) The IT Act: drafted more than two decades ago, the IT Act now requires an update in how it classifies digital entities, specifically in the context of AI systems. For example, there is a need to clearly define the roles of developers, deployers, users, etc., and how they will be governed under the current definitions (’intermediary’, ‘publisher’, ‘computer system’, etc.).

(ii) The DPDP Act: since the DPDP Act governs the collection and processing of all digital personal data in India, its impact on AI development needs examination, including the scope of the research and ‘legitimate use’ exceptions for AI development and risk mitigation. If the existing regulations are unable to tackle emerging risks to individuals, there may be a need to introduce additional rights or obligations, such as data portability rights giving individuals more control over their data, and legally backed standards to authenticate AI-generated content and address deepfakes.

(iii) Copyright: the copyright treatment of training data needed to enable the large-scale training of AI models and its implications, the copyrightability of works produced by generative AI systems, and a review of international practice to propose a balanced copyright framework suited to India’s needs, while ensuring adequate protection for copyright holders and data principals.
The Guidelines also promote a techno-legal approach to support specific policy objectives and governance. Instead of relying primarily on regulatory instruments, the Guidelines suggest drawing on technology-enabled solutions in areas such as content authentication, privacy preservation and bias mitigation.

Compliance and Accountability

From a compliance perspective, the Guidelines introduce India-specific risk frameworks covering malicious use and discrimination, transparency failures, systemic market risks, loss of control over autonomous systems and national security concerns. The Guidelines emphasise that effective governance includes not just regulation but other forms of policy engagement, including education, infrastructure development, diplomacy and institution building, and is organised around six pillars: (i) Infrastructure; (ii) Capacity Building; (iii) Policy & Regulation; (iv) Risk Mitigation; (v) Accountability; and (vi) Institutions. The accountability model under the Guidelines is one of graded liability, with obligations scaling according to the nature of the stakeholder’s involvement and the precautions adopted. Transparency reporting, grievance redressal mechanisms and enforcement through existing regulatory bodies form the core of this structure, with an explicit acknowledgment that responsible innovation requires tolerance for certain probabilistic failures despite due diligence.

Collaborative Approach

The framework establishes a whole-of-government approach under which ministries, sectoral regulators and other public bodies work together to develop and implement AI governance frameworks. Under this approach, the AI Governance Group is positioned as the central inter-ministerial decision-making body, supported by a Technology and Policy Expert Committee and an AI Safety Institute that will provide technical expertise on safety issues and testing, standards development and international cooperation.
Additionally, sector-specific regulators would retain enforcement authority within their respective domains, while MeitY functions as the nodal ministry overseeing coherence across the AI governance ecosystem. The Guidelines illustrate a shift away from prescriptive, compliance-heavy regulation towards institutionalised oversight, techno-legal solutions and a phased action plan. The success of this model ultimately depends on the operationalisation of the new institutions, the development of common standards and the translation of voluntary frameworks into enforceable mandatory compliances.

India AI Impact Summit

To further promote AI development and ensure proper governance, India is set to host the India AI Impact Summit 2026 on February 19-20 in New Delhi, centred around the theme “welfare for all, happiness for all.” The summit will bring together global stakeholders to deliberate on global AI governance, safety standards, ethical deployment and cooperative frameworks. From a legal and regulatory standpoint, the summit is expected to facilitate dialogue on the harmonisation of AI and safety standards, cross-border data governance and the building of robust liability frameworks, so as to lay down an effective action plan for the future.
Disclaimer: LexCounsel provides this e-update on a complimentary basis solely for informational purposes. It is not intended to constitute, and should not be taken as, legal advice, or a communication intended to solicit or establish any attorney-client relationship between LexCounsel and the reader(s). LexCounsel shall not have any obligations or liabilities towards any acts or omission of any reader(s) consequent to any information contained in this e-newsletter. The readers are advised to consult competent professionals in their own judgment before acting on the basis of any information provided hereby.