We have recently written articles on AI in digital asset management (DAM) and on the use of LLMs in DAM. As artificial intelligence (AI) and Large Language Models (LLMs) continue to advance, their implications for copyright law have become a significant focus of legal discussions worldwide.
In Australia, the integration of AI technologies in content creation and management raises complex questions about copyright ownership, infringement, and fair use. LLMs, which can generate human-like text and analyse vast amounts of data, challenge traditional notions of authorship and intellectual property rights. This introductory exploration delves into how AI and LLMs intersect with copyright law in Australia, highlighting the evolving legal landscape and the need for regulatory frameworks that balance innovation with the protection of creators' rights.
The integration of AI technologies into content creation has sparked a crucial discussion around copyright law. With the rise of AI-generated content, there are growing concerns about infringement and claims that content has been obtained through improper channels. As a result, copyright laws are undergoing significant shifts to address these challenges and to ensure fair use and attribution in this evolving landscape. Let's dive deeper into this multifaceted realm, where AI and human collaboration are reshaping not only work dynamics but also the legal frameworks governing creative output.
The difference between a computer-generated work and a computer-assisted work lies in the level of human involvement. How does that work?
Making the arrangements for the creation of a computer-generated work can involve various legal, ethical, and practical issues, such as ownership and attribution.
Determining who owns the rights to a computer-generated work can be complex. In traditional creative processes, the creator of the work typically holds the copyright. However, with computer-generated works, the situation may be less clear, especially if the work is generated by an AI system. Legal frameworks may need to evolve to address questions of ownership and attribution in these cases.
Well, it is indeed complex and multifaceted.
Arguments in Favour of Disclosure:
Some argue that disclosure of computer-generated works promotes transparency and ensures that consumers and audiences are aware of the origin of the content they are consuming. This transparency is particularly important in contexts where the distinction between human-created and computer-generated content may influence perception or valuation.
Providing disclosure of computer-generated works can be viewed as a form of consumer protection, ensuring that consumers are not misled or deceived about the nature of the content they are consuming.
Some argue that requiring disclosure of computer-generated works can provide legal clarity and guidance, helping to address potential copyright issues, ownership disputes, and attribution concerns that may arise from the use of automated or algorithmic content creation techniques.
Arguments Against Disclosure:
Implementing a requirement for disclosure of computer-generated works may pose practical challenges, particularly in cases where the distinction between human-created and computer-generated content is not clear-cut. Defining clear criteria for what constitutes a computer-generated work and enforcing disclosure requirements could be complex and subjective.
Artists should have the flexibility to choose their preferred tools and techniques without being compelled to disclose specific details about their creative process.
Does generative AI violate copyright law? It depends: generative AI may violate copyright law when the program has access to a copyright owner's works and generates outputs that are "substantially similar" to the copyright owner's existing works, according to the Congressional Research Service.
However, there is no settled federal standard in the US for determining substantial similarity.
Well, let's look at some policy developments:
Australian Policy developments:
The paper identifies the issues of greatest significance as:
The paper proposes that the government establish a "standing mechanism for ongoing engagement" to:
UK Policy developments:
In the chapter on copyright, the committee recommends:
Canadian Policy developments:
If generative AI models continue to go unchecked, many experts in this space believe it could spell big trouble, not only for the human creators themselves but for the technology too.
“When these AI models start to hurt the very people who generate the data that it feeds on — the artists — it’s destroying its own future. So really, when you think about it, it is in the best interest of AI models and model creators to help preserve these industries, so that there is a sustainable cycle of creativity and improvement for the models.”
In the U.S., much of this preservation will be incumbent on the courts, where creators and companies are duking it out right now. Looking ahead, the extent to which U.S. courts protect and weigh human-made inputs in generative AI models may come to resemble the approaches we have seen globally, particularly in other Western nations.
The United Kingdom is one of only a handful of countries to offer copyright protection for works generated solely by a computer. The European Union, which has a much more pre-emptive approach to legislation than the U.S., is in the process of drafting a sweeping AI Act that will address a lot of the concerns with generative AI.
So, can AI-generated content itself be copyrighted? The answer is “No”, since it isn't considered to be a work of human creation.
It has long been the position of the U.S. Copyright Office that there is no copyright protection for works created by non-humans, including machines. Therefore, the product of a generative AI model cannot be copyrighted.
Moreover, these AI systems — including image generators, AI music generators and chatbots like ChatGPT — cannot legally be considered the authors of the material they produce. Their outputs are simply a culmination of human-made work, much of which has been scraped from the internet and is copyright-protected in one way or another.
“One thing you have to know about copyright law is, for infringement of one thing only — it could be a text, an image, a song — you can ask the court for $150,000. So imagine the people who are scraping millions and millions of works.” - Daniel Gervais, Professor at Vanderbilt Law School.