The evolving use of AI in AEC projects raises complex legal questions about authorship, copyright, and ethical responsibility.
A recent legal case has highlighted surprising common ground between AEC projects and a selfie photographed by a monkey: both are governed by the laws of intellectual property, and both confront novel questions about authorship. In the United States, copyright law is a key component of intellectual property rights, serving the dual role of protecting creators’ original works from unauthorized use and reproduction while also promoting the advancement of knowledge and innovation. As technology and processes evolve, architecture, and by extension the AEC industry, has long been at the forefront of copyright law’s development and is now navigating a new critical practice issue: artificial intelligence.
In considering AI’s application to our industry, it is crucial to remember that for a work to be copyrightable in the U.S., there must be authorship and the author must be human. U.S. copyright protects original works of authorship that are fixed in a tangible medium of expression. As a general rule, the author is the original person or people who actually create the work. However, authors can assign, give away, or sell their copyrights to new owners, and for works made for hire, authorship can automatically vest in other parties.
The case of the photogenic monkey named Naruto, known as the “monkey selfie case,” illustrates the importance of authorship. Naruto took a selfie using a camera set up and provided to him by a British photographer named Slater. Slater published the photos, and they went viral. Wikipedia uploaded the photo and tagged it as being in the public domain, reasoning that a monkey could not own a copyright. This initiated a legal battle over authorship, which settled in the U.S. but could receive different treatment in the U.K. That divergence posed problems for Wikipedia’s handling of the photograph and raises a broader question: how do we reconcile differing approaches to authorship across jurisdictions when the internet is worldwide?
While the monkey selfie case is a light-hearted exploration of questions of authorship, the stakes become much higher when these questions are applied to emerging technologies using artificial intelligence that are rapidly becoming embedded in daily life. Because these technologies are continually evolving, this is an emerging area of law with far more questions than answers at this time. In AEC, as in many other creative industries, conversations abound about ownership, creation, how the design process is affected, and how to use these powerful tools responsibly and ethically.
For example, it is not clear whether AI output is copyrightable at all: is the output created by the algorithm (non-human), or by the software developer (human) who wrote the algorithm that in turn generates the output? If it is copyrightable, the next question is who owns the copyright. Can ownership derive from works that are already copyrighted, or is the AI infringing those existing copyrights? And if authorship resides in the software developers, can it truly be assigned to the person who has obtained a license to use the software?
In the meantime, many generative AI platforms are doing their best to hedge against, and punt on, these questions in their terms of use, stating that they will grant the license holder rights if, or to the extent that, they have ownership to grant. Others have taken a more definite position that they do grant ownership, though only with the purchase of an upgraded subscription, of course. As we wait for clarity on these legal questions, we must navigate a gray area of whether and how to use these transformative tools in our projects without potentially infringing on another party’s copyright.
This gray area also extends to our clients. The concept of works made for hire arises often in AEC projects, where the employer or commissioning party is considered the author and copyright owner. If an individual is employed by a firm, the owner of the individual’s work product, or creative output, is the company and not the employee. Similarly, because a client pays or commissions a firm to create a project, the creative output may be considered a work made for hire, which would mean that the firm producing the design is not the actual author. Understandably, this can create some cognitive dissonance: these clients in no way create the work, and yet they can be considered the author. When AI is involved in a project, we must consider how to deliver work product to clients if authorship or ownership cannot vest in the design team or the client.
While each company will have a different level of comfort with utilizing AI tools operating in this legal gray area, until the law catches up and provides clarity, it is important for each firm to be informed in its decision-making and to set up appropriate guardrails in line with its risk tolerance. Company policies for AI use vary in detail and scope, ranging from general guiding principles around responsible use to the implementation of AI task forces and detailed protocols for specific tools. When forming these policies, firms should consider the ethical use of AI, the management of confidential data and intellectual property rights, and quality assurance. Firms should also place controls around which AI tools are utilized, as new technologies are released every day and not all are of the same quality or effectiveness.
Architects and engineers will not stop innovating and pushing the state of the art, whether with AI or other technologies, and in doing so they may well inform and push innovation in the state of the law as well.
Irina Rice, Esq., LEED AP is general counsel at FXCollaborative. Connect with her on LinkedIn.