
“Who Owns Artificial Intelligence (AI)-Created Work? Law versus Technology”

  • Writer: Harsh Mishra
  • Aug 18
  • 7 min read

Updated: Aug 18

INTRODUCTION:

The ownership of work created by artificial intelligence is a complex issue with significant legal and technological implications. Current copyright law generally requires a human author for copyright protection, meaning AI-generated works may not be eligible for copyright unless a human exercised sufficient control over the creative process. This raises questions about whether the user, the developer, or even the AI itself should own the rights to the work.


BODY  

  1. CORE IDEA 

The question of ownership over AI-generated works highlights a conflict between existing legal frameworks, which were built for human creators, and modern AI capabilities, which allow machines to create autonomously.

  2. LEGAL PERSPECTIVE

*Human authorship principle: Most copyright, patent, and IP laws assume the creator is human.

*Purely AI-created works: Usually not protected by copyright in many countries (e.g., the US, EU, and Japan) because no human author exists.

*Human–AI collaboration: If a human provides significant creative control, the human’s contribution can be protected.

*Different jurisdictions: Some countries (e.g., the UK) assign rights to the person who made the arrangements for the AI creation.

  3. TECHNOLOGICAL PERSPECTIVE

*AI capabilities: AI systems can produce text, art, code, and inventions without human intervention.

*Speed versus law: AI development outpaces legal adaptation, leading to gaps in ownership rules.

*Practical use vs. protection: AI works can be commercially valuable but may be ineligible for legal protection, risking unclaimed or public-domain status.

  4. THE LAW–TECH GAP

*Mismatch: Laws are human-centered; technology now enables non-human creativity.

*Risks: Ownership uncertainty for businesses and individuals; enforcement challenges when AI outputs infringe on rights; and difficulty proving originality or authorship.

*Need for reform: Debate over whether the law should

  1. Recognize AI as a creator.

  2. Assign ownership to developers, users, or commissioners.

  3. Keep AI works in the public domain.

  5. CONCEPTUAL CONFLICT

Law: Protects human creativity with clear rights and responsibilities.

Technology: Produces autonomous creativity, blurring the lines of authorship.

The clash forces society to rethink ‘authorship’, ‘ownership’, and the role of human intent in creativity.


SOME IMPORTANT LAWS, ACTS AND SECTIONS

INDIA

Copyright Act, 1957 

 ·  Recognizes "computer-generated works" but does not consider AI as an author.

·  Section 2(d)(vi): Defines the author as the person who causes the work to be created—typically the human developer or user.

·  Section 17: In employer-employee or commissioned contexts, the employer or commissioning party is deemed the first owner unless an agreement says otherwise.

·  Section 51 (infringement): Using copyrighted works to train AI without permission may constitute infringement; no broad fair-dealing exception applies to such training.

·  Patents Act, 1970, Section 6: Only human inventors, or legal entities claiming through them, may apply for a patent—AI cannot be named as an inventor.

·  Emerging Legal Activity:

ANI v. OpenAI (2024): A landmark Delhi High Court case examining unauthorized use of copyrighted content to train AI.

A 2025 expert panel is evaluating amendments to clarify definitions of AI-generated works, ownership, and training data licensing.

UNITED STATES

U.S. Copyright Law:

17 U.S.C. § 102 (Copyright Act of 1976): Requires human authorship for copyright protection. 

Work made for hire doctrine: Under 17 U.S.C. § 101 & § 201(b), an employer or commissioning party may be considered the author if the work was created as part of employment or a commissioned project.

Registration Practice:

The U.S. Copyright Office refuses registration for works created solely by AI, a position upheld in Thaler v. Perlmutter.

Works with meaningful human creativity or arrangement can be protected; purely prompt-based outputs generally are not. 

Legislation:

The Generative AI Copyright Disclosure Act (H.R.7913) (2024) proposes mandatory disclosures of copyrighted works used for training AI models, submitted to the Copyright Office in advance. 


UNITED KINGDOM

Copyright, Designs and Patents Act 1988:

Section 9(3) (computer-generated works): If no human author exists, the "author" is the person who made the arrangements for the work's creation.

CHINA

The Li v Liu (2023) case: The Beijing Internet Court granted copyright to a user who provided creative input, parameter-setting, and iterative refinement, even though the output was AI-generated. This sets a precedent for recognizing human-directed AI-assisted works as protectable.


KEY JUDICIAL DECISIONS

India — ANI v. OpenAI (Delhi High Court, 2024–2025, ongoing high-profile hearing)

Case: Asian News International (ANI) v. OpenAI (Delhi High Court).

What happened so far: ANI alleges OpenAI used ANI’s news content to train models without permission; the Delhi HC is hearing important issues around text-and-data-mining (TDM), training data, attribution, and whether such use infringes copyright. Proceedings and submissions are shaping Indian judicial thinking on AI training and infringement.

Why it matters: It could become a leading Indian precedent on training-data liability, attribution, and remedies for AI-generated misinformation, with direct effects on who “owns” or is liable for AI outputs. (Ongoing as of 2024–2025.)

United States — Naruto v. Slater (9th Cir., 2018) — non-human claimant precedent

Case: Naruto v. Slater (the “monkey selfie” case).

What happened: The courts held that an animal cannot own copyright. Although not an AI case, courts have cited it when addressing claims by non-human entities seeking IP rights.

Why it matters: It is often relied on as an analogy that only humans (or legal persons acting through human agency) can be authors or owners under existing law.


THE PROBLEM AND RELATED SOLUTIONS

1. The Core Problem

A. Law vs. Technology Mismatch

Law’s view: Intellectual property laws (copyright, patents, designs) were built around human creators and tangible works.

Technology’s reality: Modern AI can autonomously generate content, code, designs, music, and inventions with minimal or no human creative involvement.

Conflict: The law struggles to fit non-human “authors” or “inventors” into frameworks designed for humans.


B. Specific Issues

1. Authorship / Inventorship Gap

Copyright: In most jurisdictions (US, India, UK, EU), the author must be a natural person. Purely AI-generated works fall into the public domain, unless substantial human input exists.

Patents: Inventor must be a natural person. AI like DABUS cannot be named, even if it created something novel.

Problem: This creates uncertainty for businesses investing in AI R&D — who gets the economic benefit if AI itself cannot be the rightsholder?


2. Training Data Ownership

AI models are trained on massive datasets (often scraped from the internet).

Legal risk: Content owners claim infringement if copyrighted material is used without permission.

Tech difficulty: AI developers may not know exactly what data was ingested; “black box” training makes tracing sources hard.


3. Attribution and Moral Rights

Without a human author, moral rights (like the right to be credited or object to distortion) are in limbo.

When humans do contribute (e.g., prompting, editing), disputes arise over whether their role is creative enough to earn rights.


4. Enforcement Challenges

AI outputs can be indistinguishable from human works.

Determining authorship and ownership requires tracing the creative process — something law is not equipped to verify technically.


5. Cross-Border Conflicts

Different jurisdictions have contradictory approaches:

China has granted copyright to human-directed AI works (Li v. Liu).

US/UK have refused protection to AI-only works (Thaler v. Perlmutter, DABUS cases).


This creates uncertainty in global markets.


2. Potential Solutions

A. Legal Reforms

  1. New Category of Rights

Create a sui generis “AI-generated work right” with shorter protection terms, granted to the human(s) or entity directing the AI.

Example: Similar to database rights in the EU.


  2. Clarify “Substantial Human Involvement”

Define clear thresholds for when human input (prompt engineering, iterative editing) turns AI output into a protectable work.


  3. Contractual Default Rules

Where AI is used in employment or commissioned work, contracts can pre-assign ownership to avoid disputes.

Example: Amend laws to presume that the employer/client owns AI-assisted works unless stated otherwise.


  4. Training Data Licensing Rules

Require AI companies to disclose and license training data for commercial use.

Could include a statutory text-and-data-mining exception with compensation mechanisms.


B. Technological Solutions

  1. Provenance Tracking

Embed metadata or blockchain-based “content passports” to track:

Who prompted

What model was used

What edits were made

This can help establish ownership and originality.
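
To make this concrete, here is a minimal sketch of what such a content passport could look like, assuming a simple JSON record keyed to a SHA-256 hash of the final output. The field names and the build_passport helper are illustrative only, not an existing standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_passport(output_text, prompt, model, prompted_by, edits):
    """Assemble an illustrative 'content passport' for one AI-assisted output.

    Ties a hash of the final work to who prompted it, the model used, and the
    human edits made, so a later ownership claim can point to this record.
    """
    return {
        "content_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
        "prompted_by": prompted_by,          # who prompted
        "model": model,                      # what model was used
        "prompt": prompt,
        "human_edits": edits,                # what edits were made
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    passport = build_passport(
        output_text="Final, human-edited article text ...",
        prompt="Draft a 500-word explainer on AI authorship.",
        model="example-llm-v1",            # hypothetical model name
        prompted_by="author@example.com",  # placeholder identity
        edits=["rewrote introduction", "added two case citations"],
    )
    # Store the JSON alongside the work, or anchor its hash on a blockchain/registry.
    print(json.dumps(passport, indent=2))
```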


  2. Watermarking AI Outputs

Invisible digital watermarks could identify AI-generated content and link it to a usage log.
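
As a toy illustration of the idea for text outputs, the sketch below hides a usage-log identifier in zero-width characters appended to the text, so content found later can be matched back to a log entry. Real AI watermarking schemes are far more robust and harder to strip; the names here (embed, extract, the "LOG-42" identifier) are invented for illustration.

```python
# Toy illustration: hide a usage-log identifier in zero-width characters.
# This only shows the concept of linking output to a usage log; it is
# trivially removable and not a production watermarking technique.

ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def embed(text, log_id):
    """Append the log identifier, encoded as invisible characters, to the text."""
    bits = "".join(f"{byte:08b}" for byte in log_id.encode("utf-8"))
    return text + "".join(ONE if b == "1" else ZERO for b in bits)

def extract(marked):
    """Recover the hidden identifier (empty string if no mark is present)."""
    bits = "".join("1" if ch == ONE else "0" for ch in marked if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

if __name__ == "__main__":
    marked = embed("An AI-generated paragraph.", log_id="LOG-42")
    print(marked == "An AI-generated paragraph.")  # False: mark present but invisible when rendered
    print(extract(marked))                          # "LOG-42" -> look this up in the usage log
```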


  3. Version Control for AI Creation

AI platforms could log prompt histories and model settings so ownership claims can be audited.
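
A minimal sketch of such an audit log follows, assuming each entry records the prompt, model settings, and a hash of the output, and is chained to the previous entry by hash so the recorded history cannot be silently rewritten. The CreationLog class and its fields are illustrative, not a real platform API.

```python
import hashlib
import json
from datetime import datetime, timezone

class CreationLog:
    """Append-only log of prompts, model settings, and output hashes.

    Each entry embeds the hash of the previous entry, so later tampering with
    the recorded history breaks the chain and can be detected in an audit.
    """

    def __init__(self):
        self.entries = []  # list of dict entries, oldest first

    def record(self, user, prompt, model_settings, output):
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
            "model_settings": model_settings,
            "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")
        ).hexdigest()
        self.entries.append(entry)
        return entry

if __name__ == "__main__":
    log = CreationLog()
    settings = {"model": "example-model", "temperature": 0.7}  # hypothetical settings
    log.record("author@example.com", "Draft a logo concept", settings, "draft v1 ...")
    log.record("author@example.com", "Make the mark more geometric", settings, "draft v2 ...")
    print(len(log.entries), "entries; last hash:", log.entries[-1]["entry_hash"][:12])
```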


C. Hybrid Governance Approach

Short-term: Use contracts + attribution logs to bridge the gap.

Long-term: International treaty or WIPO-led agreement on AI authorship, similar to Berne Convention for copyright.


3. Example of a Balanced Model

Law: AI outputs with no human creative input → public domain; AI outputs with substantial human direction → owned by that human or their employer.

Tech: AI platforms automatically generate an authorship report showing the human contribution and model metadata.

Contracts: Default clauses in employment/freelance agreements assigning AI-assisted IP to the paying party.


CONCLUSION

The question of who owns AI-generated work sits at the intersection of outdated legal definitions and rapidly evolving technology.

Law’s stance today: In most jurisdictions, ownership requires human authorship or inventorship. Purely autonomous AI outputs fall into the public domain or are unprotectable.

Technology’s reality: AI systems can produce original-like works and inventions with minimal human involvement, challenging the human-creator requirement.

Conflict: The legal system protects human creativity, while technology is increasingly capable of creativity-like processes without a human “hand on the pen.” This mismatch leaves businesses, artists, and researchers uncertain about rights, profits, and liabilities.

Key Legal–Tech Gap

Authorship / Inventorship: Law only recognizes humans; AI cannot be the rightsholder.

Training Data Use: Technology’s scale makes it hard to track and license every piece of content, while law still treats unauthorized use as infringement.

Attribution: Law’s moral rights framework doesn’t map neatly to collaborative human–AI creation.

Balanced Solution Path

Legal reforms

Define “substantial human involvement” so human-directed AI works can be protected.

Create a new right (sui generis) for AI-generated works with limited protection, granted to the directing party.

Mandate transparency & licensing for training data, possibly with a global text-and-data-mining framework.

Technological tools

Provenance tracking (logs, metadata, blockchain) to prove authorship chain.

Watermarking AI outputs to identify origin.

Version control in AI systems to record prompts and edits.

Hybrid governance

Contracts should pre-assign ownership where AI is used in employment or commissioned work.

International standards (WIPO-led) could harmonize rules across borders to avoid conflicting claims.


Final Thought

In its current form, the law views AI as a tool — like a camera or paintbrush — with rights vesting in the human directing it. But as AI autonomy grows, relying solely on old definitions risks leaving valuable creations in a legal void. A blended approach — combining legal clarity, technological proof systems, and contractual certainty — offers the most practical route to resolving ownership disputes and keeping pace with innovation.


REFLECTIVE QUESTIONS


  1. Human Involvement Threshold – How much human input (e.g., prompting, editing, supervising) should be required for an AI-assisted work to be eligible for intellectual property protection?

  2. Public Domain Default – If purely AI-generated works fall into the public domain, what incentives or disincentives will this create for creators, businesses, and AI developers?

  3. Technological Safeguards – What role could provenance tracking, watermarking, or AI usage logs play in resolving disputes over ownership and authorship?

  4. Global Consistency – In a world where different jurisdictions take opposing views on AI authorship, should there be an international treaty or WIPO-led framework to harmonize ownership rules?


 
 
 
