AI may speed up rendering and iteration in AEC, but human judgment still determines accuracy, compliance, and meaningful design innovation.
As new AI tools have proliferated in recent years, much of the associated media coverage has focused on generative tools derived from Large Language Models (LLMs) and on image generators built on machine learning. Within the AEC industry there has been a similar focus, given the central role that imagery and renderings play in how building designs are typically marketed, published, and eventually absorbed by the general public. After all, any tool perceived to offer significant efficiency gains is likely to capture widespread attention in a field where speed is often critical. But if the fundamental promise of AI tools lies in their potential to free up bandwidth for humans to focus more time and energy on other kinds of tasks, we need to think seriously about where that bandwidth is best redirected. In other words: where in the process is human time and creativity best spent, and where is automation most beneficial?
When Google released a version of its Nano Banana image generation tool in August 2025, many of the reactions on social media focused on its ability to generate seemingly accurate architectural drawings and diagrams from a partially obscured street-view image of a specific building or urban setting. It was also able to insert furniture into a picture of an empty room while matching the perspective and lighting with striking accuracy. The pure novelty of early AI image generators seemed to be evolving into something that might become useful for a wider range of applications, and at scale.
The limits of generative AI in AEC
It is tempting, then, to see the development of large computational models as a potential boon to the productivity of designers, who increasingly sell imagery as much as their professional services. But while “good enough” might work for an early concept rendering, it is not a useful target for imaginative design, code compliance, life safety, or even basic coordination. What client (or AEC professional) would trade accuracy for speed when the risks of a mistake far outweigh the incremental benefit of a slightly faster design process?
In AEC practice, accuracy is both essential – design work must be precise, after all – and calibrated to the task at hand: an early conceptual rendering can be intentionally vague in a way that a final construction detail cannot. Much as other industries have adopted AI tools to produce “good enough” options to serve as the basis for more precise work performed by humans, AEC professionals have learned that AI-generated imagery can save time otherwise spent “inventing” convincing concept renderings before a project is more fully developed. As conceptual work becomes more “real” through the input of stakeholders, engineers, and others, an appropriate level of precision can be folded in over time.
Why human judgment still matters in design
But a key limitation of all generative AI tools is that they create content based on statistical probability and their training datasets, and therefore cannot actually create new ideas from scratch. If the resultant imagery often feels generic or derivative, that's because, by definition, it is. AEC professionals require tools that allow for a finer grain of human control in shaping and refining their outputs.
Proprietary AI tools developed specifically for architecture and planning can offer rapid iteration of viable layouts for parking, workstations, or apartments based on given parameters and geometry – these involve repetitive, modular components where optimizing efficiency is usually paramount. But the complexity of local regulations often limits the degree to which the output of these tools can be trusted – eventually, a human being needs to assess their accuracy before moving forward with the detailed design, which takes time. AI tools often need to be confined to a kind of “sandbox” – used to test early iterations and validate assumptions, but then filtered through experienced human knowledge workers who are familiar with the highly contextual regulatory and technical parameters that might otherwise be missed or conflated.
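The kind of rapid, parameter-driven iteration described above can be sketched in a few lines of code. The toy example below (all names, dimensions, and parameters are hypothetical assumptions, not drawn from any real product or building code) enumerates workstation-row options for a floor plate; crucially, it only produces candidates, which a human reviewer would still need to vet against actual regulatory and accessibility requirements:

```python
# Toy sketch of parameter-driven layout iteration. All dimensions and
# desk modules here are illustrative assumptions, not regulatory values.

def desk_layout_options(room_width_m, desk_widths_m=(1.4, 1.6, 1.8),
                        aisle_m=1.5):
    """Enumerate candidate desk-row layouts for a given room width.

    Reserves one circulation aisle, then computes how many desks of
    each module width fit in the remaining space. Returns options
    sorted by desk count, for a human to evaluate against the actual
    local code and accessibility requirements.
    """
    options = []
    usable = room_width_m - aisle_m  # reserve one circulation aisle
    for desk in desk_widths_m:
        count = int(usable // desk)          # whole desks that fit
        slack = usable - count * desk        # leftover width in the row
        options.append({
            "desk_width_m": desk,
            "desks_per_row": count,
            "slack_m": round(slack, 2),
        })
    # Most desks first; ties left to the reviewer to resolve.
    return sorted(options, key=lambda o: o["desks_per_row"], reverse=True)

# Example: a 12 m wide floor plate yields 7 desks at the 1.4 m module.
layouts = desk_layout_options(12.0)
```

A real tool would of course handle geometry, egress, and many more constraints, but the structure is the same: fast mechanical enumeration in the "sandbox," followed by slower human judgment.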
Automation, risk, and professional liability
Automation itself is of course not a new idea in digital practice. Architecture and engineering require a number of highly repetitive drawing tasks, so within the field of “computational design,” the use of customized software in the form of scripts and plugin tools like Grasshopper has long been a means of reducing time spent by humans performing such tasks. The concept of an AI “agent” that is empowered to make decisions within an iterative process and interpret its results takes computational design even further, into a realm where it encounters the realities of risk management and professional liability insurance. In other words, there is a practical and enduring limit to how much human agency can be removed from even the most menial digital design process before the legal risk becomes unmanageable for any practice.
A smarter approach to AI in architecture and engineering
Taking a step back from assessing the merits of any particular AI tool, AEC professionals should carefully consider what goals they seek to prioritize when adopting new technologies. Within limited circumstances, AI image generation tools have already demonstrated their value by saving time (and therefore money) for AEC firms. But for the value proposition to benefit a wider segment of the industry, AEC professionals should focus on tools that use rapid iteration – not machine-learned mimicry – as a basis for optimizing systems and processes at a range of scales, giving humans more time to evaluate options, solve complicated problems, and lead by applying design creativity to achieve their clients’ goals.
John McGill, AIA is a project manager at FXCollaborative with experience in residential, commercial office, workplace, and higher education projects. Connect with him on LinkedIn.
