
Who Is the Author? – AI Images and the Question of Creative Responsibility

26 • 03 • 19 • Benedek Szabó

As AI-powered image-, music-, and text-generation tools become increasingly widespread, a growing number of dilemmas surface alongside them. The copyright categories currently applied to content created with artificial intelligence stem from a legal framework that predates AI, leading to unclear outcomes in some cases. Among the most recent concerns are the ethical and legal questions surrounding the use of AI-generated images, an area so new that no precise regulatory framework yet exists. As a result, individuals using generative AI can often create content at their own discretion, raising questions about responsible creative conduct.

In both online—and increasingly offline—media spaces, artificially generated images appear with growing frequency, and they have also gained ground in the art world. In connection with such content, copyright issues arise ever more frequently, along with questions about how responsibility should be divided between artificial intelligence and its user. This latter issue becomes particularly pressing when the content in question raises moral or legal concerns. Who owns the copyright to the finished work, and who bears responsibility for the resulting content? The person generating the material using the program, or the program itself, which enables virtually anyone to produce almost any kind of content?

If we approach the matter from a legal perspective, we find that the law does not recognize a distinct AI category, though it does provide a relevant starting point. In Hungary, copyright protection and liability are regulated by Act LXXVI of 1999 on Copyright. Section 1 protects literary, scientific, and artistic works, and copyright protection also extends to computer programs, the rights to which may be exercised by the legal or natural person who created them. It follows that software simulating human-like intelligence is itself protected by copyright.

However, the question of authorship and legal responsibility for works created with AI programs remains unclear. In light of the above, two possible legal interpretations emerge.

According to one line of reasoning, copyright belongs to the person who created the work. The law recognizes this person as the author, whose work is protected by law. From this perspective, it does not matter what tools the creator used—just as neither the manufacturer of a camera nor the developer of image-editing software may exercise copyright over the images produced with those tools.


Generated with Getimg.ai by Gáspár Kéri


Generated with DALL·E 3 by Gáspár Kéri

The other interpretation holds that if a unique work results from the collaboration of two parties, copyright belongs jointly and equally to the co-authors. While AI software itself is protected by copyright, determining who owns the rights to its product—the generated image—is more difficult. The decisive question is whether AI can qualify as a co-author. To answer this, we must determine whether the given program is sufficiently advanced and sufficiently creative. This distinction is crucial because copyright protects only independent, creative products, and such products can only be created by independently thinking agents—something only so-called strong AI would be capable of.

To assess this, we must clarify what is meant by ‘weak’ and ‘strong’ AI. The distinction was introduced by the American philosopher John R. Searle. Weak AI refers to systems that behave as if they were intelligent, without necessarily possessing actual understanding. Strong AI, by contrast, would genuinely think and possess its own consciousness. Based on current technological development, today’s artificial intelligences operate at the level of weak AI; they do not demonstrate the degree of autonomy required to be regarded as consciously acting agents. Consequently, they are not capable of independent creative activity, since they select from predefined operations within predetermined frameworks and objectives. For this reason, they cannot qualify as co-authors under the law. As a result, nearly all legal rights and responsibilities related to generated images rest with the user of the AI.

There is, however, one theoretical exception. Although the issue has not yet received substantial treatment in Hungarian legal scholarship, it has been discussed in international academic discourse. In this scenario, the AI program errs independently of the user, potentially raising questions about the program's own responsibility. In such cases, the error, or legal violation, occurs for reasons beyond the user's control. Since liability under the law requires at least negligence, the user might, in principle, be exempt from liability. Such a situation could arise if a generated image infringes upon legal rights. The problem was first formulated in this context by the Italian legal scholar Giovanni Sartor, who argues that the user should always bear responsibility for the program's behavior, regardless of foreseeability. This reasoning applies the logic of strict liability for hazardous activities to artificial intelligence: the user must assume responsibility for damages arising from the AI's operation.

As we can see, AI behavior reaches a legally critical point when interaction occurs between the system and its user, and a new product emerges from this interaction. From another perspective, while the user may bear the greater burden of responsibility, they retain copyright—including rights of distribution and commercialization.

Despite the current gaps in the legal framework, organically evolving forms of regulation may emerge. AI development companies, for example, may build safeguards into their programs to anticipate legal or moral concerns. Such mechanisms could enable generative systems to refuse to create content that violates specific laws—such as the right to one’s likeness or the right to reputation.

Another advantage of such self-regulation, provided that AI companies succeed in implementing it, is the prevention of viral controversies. Some users, for example, employ AI to imitate the style of other creators. Media reports have covered cases in which image-generation software was used to produce works in the visual style of Studio Ghibli or of the world-renowned photographer Ansel Adams. In both instances, rights holders objected and lodged copyright complaints.

In the coming years, the relevant legal repertoire will undoubtedly continue to evolve, just as generative models themselves will advance. It is possible that more practical legislation will emerge. As technology develops, AI tools may eventually become legally recognized agents—and if that occurs, the question of responsibility will once again require reconsideration.