Digital jurisprudence in India, in an AI era

Generative AI (GAI) is rapidly transforming society in unprecedented ways. However, existing legal frameworks and judicial precedents, designed for a pre-AI world, struggle to govern this swiftly evolving technology effectively.

  • As GAI becomes more integrated into various aspects of life, it brings forth complex legal and ethical challenges that necessitate a thorough re-evaluation of current digital jurisprudence.

GS-01, GS-03 (Society; Science and Technology)

Dimensions of the Article:

  • What is Digital Jurisprudence?
  • Issues Related to Generative AI
  • Its Effects on Society
  • Causes
  • How to Prevent

What is Digital Jurisprudence?

  • Digital jurisprudence refers to the body of law and legal principles that govern the digital and technological landscape. It encompasses regulations, court rulings, and legislative measures that address issues arising from digital technologies, including data protection, intellectual property rights, and intermediary liabilities.
  • In the context of Generative AI, digital jurisprudence aims to create a legal framework that balances innovation with the protection of individual rights and societal interests.

Issues Related to Generative AI

  • Safe Harbour and Liability Fixation:
    • The landmark Shreya Singhal judgment (2015) upheld Section 79 of the IT Act, 2000, which grants intermediaries ‘safe harbour’ protection against liability for hosted content, provided they meet due diligence requirements. Applying this framework to Generative AI tools, however, is challenging.
    • Whether GAI tools qualify as ‘intermediaries’ or are mere conduits for user prompts remains contested, complicating the fixation of liability for content these tools generate.
  • Classification Challenges:
    • The Delhi High Court, in Christian Louboutin SAS vs Nakul Bajaj and Ors (2018), held that safe harbour protection applies only to ‘passive’ intermediaries. Distinguishing between user-generated and platform-generated content, however, is increasingly difficult in the context of Large Language Models (LLMs).
  • Legal Conflicts and Defamation:
    • Generative AI outputs have already triggered legal conflicts, such as a defamation lawsuit filed against OpenAI in the United States over content produced by ChatGPT. The ambiguity in classifying GAI tools complicates courts’ ability to assign liability, particularly when users repost AI-generated content.
  • Copyright Conundrum:
    • Section 16 of the Indian Copyright Act, 1957 does not explicitly accommodate AI-generated works. Critical questions remain: should existing copyright provisions be revised; is co-authorship with a human mandatory; and who should be recognised as the author of an AI-generated work?
  • Privacy and Data Protection:
    • The K.S. Puttaswamy judgment (2017) established a strong foundation for privacy jurisprudence, leading to the Digital Personal Data Protection Act, 2023 (DPDP). However, GAI complicates privacy protections such as the “right to erasure” and the “right to be forgotten,” since AI models cannot truly ‘unlearn’ information absorbed during training.
  • Data Acquisition and Licensing:
    • The process of acquiring data for GAI training needs an overhaul. Developers must ensure proper licensing of, and compensation for, the intellectual property used in training models. This raises complex licensing challenges and points to the need for centralised data-licensing platforms.

Its Effects on Society

  • Misinformation and Defamation: GAI can produce false or defamatory content, leading to misinformation and potential harm to individuals and society.
  • Intellectual Property Issues: AI-generated content raises questions about ownership and rights, complicating the enforcement of intellectual property laws.
  • Privacy Concerns: The integration of personal data into AI models raises significant privacy issues, as individuals may lose control over their data.
  • Economic Impact: GAI could disrupt job markets, with automation affecting employment and economic stability.


Causes

  • Rapid Technological Advancements: The swift pace of AI development outstrips existing legal frameworks and regulatory measures.
  • Ambiguity in Legal Classifications: Unclear classifications of GAI tools complicate the assignment of liability and enforcement of laws.
  • Lack of Comprehensive Legislation: Current laws are not equipped to handle the unique issues arising from AI, necessitating updates and new regulations.
  • Insufficient Data Governance: Inadequate data protection and licensing mechanisms create legal and ethical challenges in AI training and deployment.

How to Prevent

  • Updating Legal Frameworks: Revising existing laws and creating new regulations tailored to the complexities of AI technology.
  • Clear Classification of AI Tools: Establishing clear definitions and classifications for GAI tools to facilitate legal enforcement and liability assignment.
  • Enhanced Data Governance: Implementing robust data protection and licensing mechanisms to ensure ethical AI training and usage.
  • Public Awareness and Education: Promoting awareness and understanding of AI-related issues among the public and stakeholders to encourage responsible use.

Way Forward

  • Implement a sandbox approach, granting GAI platforms temporary immunity from liability. This allows responsible development while gathering data to inform future laws and regulations.
  • Overhaul data acquisition processes for GAI training, ensuring legal compliance with proper licensing and compensation for intellectual property used in training models. Solutions could include revenue-sharing or licensing agreements with data owners.
  • Create centralised platforms for data licensing, similar to stock photo websites, to simplify access for developers and ensure data integrity against historical bias and discrimination.
  • Develop comprehensive AI legislation that addresses the unique challenges posed by GAI, including liability, intellectual property rights, and data protection.
  • Encourage judicious interpretations by constitutional courts to balance the benefits of AI technology with the protection of individual rights and societal interests.
  • Adopt a holistic, government-wide approach to AI governance, ensuring coordination among various regulatory bodies and stakeholders.
  • Promote public participation in the development of AI regulations and ensure accountability mechanisms to protect against misuse and harm.
  • Engage in international cooperation to establish global standards and best practices for AI governance, ensuring consistent and effective regulation across borders.