
AI and Data Privacy: What the Law Says

Abstract

Artificial Intelligence (AI) has transformed the way industries function by enabling large-scale data processing, predictive analysis, and automated decision-making. At the same time, this dependence on massive datasets has triggered crucial debates on privacy, fairness, and accountability. This article explores the Indian legal framework governing AI and data privacy, with comparative insights from global standards such as the EU’s GDPR, the US’s CCPA, and China’s PIPL. It evaluates constitutional principles, statutory frameworks, and judicial interpretations while highlighting regulatory gaps in India. Finally, it suggests reforms and AI-specific legislation to ensure that technological progress does not undermine the fundamental right to privacy.


Introduction

Artificial Intelligence (AI) has quickly emerged as one of the defining forces of the 21st century. From revolutionizing healthcare diagnostics and automating financial transactions to streamlining governance and reshaping education, AI’s influence is far-reaching. Its strength lies in analyzing vast pools of data to recognize patterns, make predictions, and even take autonomous decisions. However, this reliance on personal and sensitive data raises difficult questions about data privacy and protection.

While AI promises efficiency and innovation, it also risks intruding into areas of informational privacy, autonomy, and human dignity. Around the world, lawmakers and courts are grappling with a delicate balance: how to encourage technological advancement without compromising fundamental rights. This paper examines India’s legal framework on AI and privacy, comparing it with approaches in other jurisdictions, including the European Union (EU), the United States (US), and China. It analyses legislation, judicial pronouncements, and emerging regulatory challenges, and concludes by offering practical reforms.


Legislative Framework on Data Privacy and AI


1. Indian Legal Regime

(a) Constitutional Right to Privacy

A turning point in Indian privacy law came with the Supreme Court’s landmark judgment in Justice K.S. Puttaswamy v. Union of India (2017). The nine-judge bench unanimously recognized the right to privacy as a fundamental right under Article 21. The Court emphasized that in the digital era, individuals need protection against unauthorized use of their personal information, making informational privacy a core constitutional value. This ruling laid the foundation for regulating emerging technologies, including AI systems that rely on profiling, data collection, and surveillance.


(b) Information Technology Act, 2000

The Information Technology Act, 2000, along with the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, was India’s first attempt at data protection. The Rules define sensitive personal data or information (such as health, financial, and biometric data) and mandate consent before it is processed. Section 43A holds companies liable to pay compensation for negligent handling of such information. However, the IT Act was primarily enacted to deal with cybercrime and e-commerce transactions. It does not address the unique challenges posed by AI, such as algorithmic bias, automated decision-making, or transparency in data use.


(c) Digital Personal Data Protection Act, 2023

After years of debate, the Digital Personal Data Protection (DPDP) Act, 2023, became India’s first comprehensive privacy law. The Act adopts a consent-centric framework, recognizing limited exceptions for certain legitimate uses. Some of its key provisions include:

  • Section 4: Processing must be based on consent or certain legitimate uses.

  • Section 8: Data fiduciaries are bound by principles of purpose limitation and data minimization.

  • Sections 12 and 13: Grant individuals rights of correction, erasure, and grievance redressal, while Section 6 permits consent to be withdrawn at any time.

  • Section 16: Permits cross-border transfer of personal data except to countries restricted by government notification.

The DPDP Act represents a leap forward but stops short of regulating AI-specific issues such as automated profiling, explainability, or bias. Thus, while progressive, it does not fully address the complex interplay of AI and privacy.


2. International Legal Approaches

(a) European Union: GDPR

The EU’s General Data Protection Regulation (GDPR), in force since May 2018, remains the gold standard for global privacy law. It is particularly relevant for AI due to its provisions on automated decision-making. Article 22 grants individuals the right not to be subjected to decisions based solely on automated processing, including profiling, if such decisions significantly affect them. Recital 71 adds safeguards against discriminatory outcomes and guarantees a right to human review. Article 5 further mandates data minimization and purpose limitation, curbing excessive use of personal data. GDPR enforcement has been rigorous, with fines imposed on tech giants like Meta, Google, and Amazon for violations.


(b) United States

The US does not have a single federal privacy law akin to the GDPR. Instead, it follows a fragmented, sector-specific approach. Notable examples include the Health Insurance Portability and Accountability Act (HIPAA), 1996, for healthcare data, the California Consumer Privacy Act (CCPA, 2018), and the California Privacy Rights Act (CPRA), which amended and expanded it with effect from 2023. These laws grant individuals rights of access, deletion, and opting out of data sales. Importantly, the CPRA introduces oversight of automated decision-making, making California a leader in AI-related privacy regulation. Additionally, the Federal Trade Commission (FTC) enforces consumer protection laws and has acted against companies for deceptive AI and data practices.


(c) China

China’s Personal Information Protection Law (PIPL), passed in 2021, is one of the strictest privacy laws globally. Strongly influenced by the GDPR, it includes explicit provisions on AI. Article 24 empowers individuals to demand transparency and fairness when subject to automated decision-making, making it one of the few frameworks to directly regulate algorithmic governance.


(d) Other Jurisdictions

Other nations are also strengthening AI governance. Canada’s proposed Artificial Intelligence and Data Act (AIDA), introduced under Bill C-27, aims to regulate AI applications directly. Brazil’s General Data Protection Law (LGPD), 2018, mirrors the GDPR in many respects, providing individuals with rights against automated data processing. These global efforts highlight the recognition that AI requires specialized regulation to safeguard privacy.


Judicial Approaches to AI and Data Privacy

1. India

Although Indian courts have yet to directly confront AI-related privacy disputes, several precedents provide guiding principles. In Justice K.S. Puttaswamy v. Union of India (2017), privacy was firmly established as a fundamental right. In People’s Union for Civil Liberties v. Union of India (1997), the Supreme Court held that surveillance measures require procedural safeguards. Similarly, in Internet and Mobile Association of India v. RBI (2020), the Court applied the proportionality principle to restrictions on financial data use. These rulings suggest that if AI systems involving facial recognition or predictive policing are challenged, courts will test them against principles of necessity, reasonableness, and proportionality.


2. Global Jurisprudence

Global courts have actively scrutinized mass surveillance and AI-driven data collection. The Court of Justice of the European Union (CJEU) in Schrems II (2020) invalidated the EU-US Privacy Shield, finding it insufficient to protect EU citizens’ data from US surveillance. In Digital Rights Ireland (2014), the CJEU struck down the Data Retention Directive as disproportionate. Meanwhile, the US Supreme Court in Carpenter v. United States (2018) held that warrantless access to cell-site location data violated the Fourth Amendment. These cases reflect a growing judicial consensus that unchecked data collection, particularly by AI systems, requires heightened legal scrutiny.


Challenges in Regulating AI and Privacy

  1. Opacity of Algorithms: The “black box” nature of AI makes accountability difficult.

  2. Profiling and Discrimination: Algorithmic bias can lead to unfair outcomes in hiring, policing, or lending.

  3. Cross-Border Data Transfers: With AI relying on global datasets, jurisdictional conflicts often arise.

  4. Informed Consent: Users seldom understand the full scope of how their data is being processed by AI.

  5. India’s Gaps: Despite progress, the DPDP Act does not yet address automated decision-making or algorithmic accountability.


Comparative Analysis

  • India (DPDP Act, 2023): A comprehensive privacy law but silent on AI-specific issues.

  • EU (GDPR): Provides robust safeguards against profiling and automated decision-making.

  • US: Lacks uniformity but California has taken notable steps with the CPRA.

  • China (PIPL): Explicitly regulates AI-driven decision-making.

In comparison, India is at an early stage of privacy regulation and must adapt quickly to keep pace with international standards.


Suggestions and Way Forward

  1. AI-Specific Legislation: Draft a dedicated Indian law focusing on algorithmic accountability, fairness, and transparency.

  2. Expand DPDP Act: Incorporate explicit provisions on automated profiling and the right to explanation.

  3. Strengthen the Data Protection Board: Empower it with oversight authority over AI systems.

  4. International Alignment: Harmonize Indian law with GDPR to ensure smoother cross-border data flows.

  5. Ethical Guidelines: Adopt frameworks like the EU’s Ethics Guidelines for Trustworthy AI.

  6. Judicial Training: Equip judges with specialized training to evaluate AI-related disputes effectively.


Conclusion

Artificial Intelligence offers immense opportunities but also introduces serious privacy risks. The DPDP Act, 2023, is a milestone for India but remains inadequate in addressing AI-specific challenges such as explainability, profiling, and bias. In contrast, the EU’s GDPR and China’s PIPL have stronger safeguards, while the US remains fragmented though dynamic. For India, the path forward lies in striking a balance between innovation and individual rights. By enshrining transparency, accountability, and proportionality into its AI governance framework, India can unlock the benefits of AI without compromising the right to privacy.


References

  1. Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.

  2. People’s Union for Civil Liberties v. Union of India, (1997) 1 SCC 301.

  3. Internet and Mobile Association of India v. Reserve Bank of India, (2020) 10 SCC 274.

  4. Regulation (EU) 2016/679, General Data Protection Regulation (GDPR).

  5. Schrems II, Case C-311/18, Court of Justice of the European Union (2020).

  6. Digital Rights Ireland, Joined Cases C-293/12 and C-594/12, CJEU (2014).

  7. Carpenter v. United States, 138 S. Ct. 2206 (2018).

  8. Information Technology Act, 2000, and IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

  9. Digital Personal Data Protection Act, 2023 (India).

  10. Personal Information Protection Law, 2021 (China).

  11. California Consumer Privacy Act, 2018 (U.S.).

  12. California Privacy Rights Act, 2023 (U.S.).

  13. Bill C-27, Artificial Intelligence and Data Act (Canada).

  14. Lei Geral de Proteção de Dados (LGPD), 2018 (Brazil).


Disclaimer: The content shared in this blog is intended solely for general informational and educational purposes. It provides only a basic understanding of the subject and should not be considered as professional legal advice. For specific guidance or in-depth legal assistance, readers are strongly advised to consult a qualified legal professional.
