AI isn’t a tech problem – it’s a trust problem

Apr 26, 2026


The real AI conversation in AEC isn't about tools or speed; it's about preserving trust, judgment, and accountability.

Across the country, in AEC firms of every size, the same conversation is unfolding behind conference room doors. The calendar invitation might read “AI policy discussion” or “AI tool evaluation.” But if you sit in those meetings long enough, it becomes clear that the real discussion isn’t about technology or software.

It’s about trust.

Artificial intelligence is advancing at a pace that feels both exhilarating and destabilizing. Clients are asking what firms are doing with it. Competitors are experimenting publicly. Staff are quietly testing tools on their own. No leader wants their firm to appear behind the curve, and at the same time, no leader wants to compromise professional standards that exist for a reason.

In AEC, we do not move fast and break things. We sign and seal them.

That distinction matters. In many industries, AI adoption is framed as a productivity race. In engineering, it is a governance question. The tension is not whether the technology works. It often does. The tension is whether its use preserves the defensibility, judgment, and accountability that define professional practice.

I went looking for AI adoption best practices. Over the past year, through webinars, articles, candid conversations with AEC professionals, and a review of dozens of peer AI policies, I expected to find a model that firms could confidently replicate. Instead, what became clear was that no single template exists. Adoption is not defined by one decisive move, but by a series of choices shaped by risk tolerance.

The variation between firms is not ideological. It reflects culture, client mix, project type, and the depth of a firm’s review bench. Some policies are cautiously permissive, grounding AI use within existing QA/QC frameworks. Others are restrictive to the point of near prohibition. Nearly all include some version of the same line: “AI may assist, but it does not replace professional judgment.”

That line is not boilerplate. It’s a boundary. And once that boundary is acknowledged, the rest of the conversation unfolds almost predictably. While no definitive best-practice model has emerged, five recurring tensions consistently surface across firms. They vary in expression, but they point to the same underlying trade-offs shaping how the profession is approaching AI:

  1. Innovation versus liability. On one side is practical urgency. Schedules are compressed. Teams are leaner. Clients expect iteration at a pace that traditional workflows strain to match. Firms already rely on advanced modeling software, automation, and integrated design platforms to deliver complex work efficiently. AI feels like the next natural step in that evolution, and waiting for absolute certainty is unrealistic.
    On the other side is institutional memory. Anyone who has participated in a claims review understands how thin the line can be between efficiency and exposure. In engineering, faster output only has value if it is defensible years later. Our deliverables are not judged solely at submission. They are judged in audits, disputes, public forums, and sometimes courtrooms. A shortcut that cannot be clearly explained under scrutiny is not an advantage. It is a liability.
    This is not a debate between progress and fear. It is a debate about professional responsibility.
  2. Speed versus review capacity. AI undeniably accelerates the front end of production. Narratives can be drafted in minutes. Background research can be compiled instantly. Meeting summaries and preliminary analyses can appear almost as quickly as questions are asked. The productivity gains are real.
    But acceleration raises an operational question that few policies fully answer. If the volume of draft material increases, what happens to review? Traditional QA/QC systems were built around human production speed. They assume a certain cadence of output. When that cadence changes, oversight cannot remain static.
    Most AI policies correctly state that human review is required. Fewer define what that review entails. Does every AI-assisted section require line-by-line validation? Is sampling appropriate for low-risk content? How does a project manager balance efficiency gains with review bandwidth in an already constrained environment?
    Without clarity, firms risk falling into one of two traps. They either over-review everything and negate efficiency, or they under-review because the output appears polished. In both cases, trust is strained.
  3. General tools versus purpose-built platforms. Inside many firms, the AI debate sounds practical: Do we grant broad access to general tools, or limit use to purpose-built platforms embedded in approved systems?
    General tools such as ChatGPT are already woven into daily workflows. Staff use them to organize notes, brainstorm approaches, and refine communication. These tools are fast and flexible. They are also difficult to contain. Their data boundaries are opaque, their outputs uneven, and their audit trails inconsistent.
    Purpose-built AI within engineering software offers the opposite appeal. It operates within defined parameters, aligns with QA/QC procedures, and can be piloted, licensed, and monitored. It feels governable. It feels contained. But it is narrower and slower to evolve and often comes with higher cost and implementation complexity.
    It is not surprising that firms frame AI strategy as a choice between these two models. Across the policies reviewed, language regarding AI approval structures is common: “All AI tools must be approved by the project manager prior to use on any project.”
    The instinct is familiar: evaluate the software, approve it, control access. But while that instinct worked for prior technology cycles, it does not fully work here. AI is not a single platform waiting for enterprise rollout. It is already embedded in browsers, search engines, productivity suites, and design systems. Even firms that restrict standalone tools are using AI-enhanced features inside existing software. The debate between general access and specialized pilots assumes AI can be cleanly bounded. In practice, it cannot.
    And so this is not simply a question of which tools to authorize. It is a question of whether governance built around AI as a contained software rollout can keep pace with a capability that now spans the entire digital environment.
  4. Firm-wide policy versus project-specific reality. The AI debate does not end at the firm’s front door. It extends to clients, contracts, and differing appetites for risk. One client may welcome innovation and rapid iteration. Another may prohibit AI use explicitly. Some agreements remain silent. Others are highly prescriptive. One project may be low-risk and internal while another may carry significant public consequence.
    AEC firms do not operate within a uniform risk landscape. Yet most attempt to craft a single, firm-wide policy to govern that variability. Writing a policy broad enough to protect the organization but flexible enough to account for project-by-project nuance is inherently difficult.
    Many firms emphasize confidentiality and data protection, which is essential. Fewer provide clear guidance for navigating client-specific restrictions, disclosure expectations, or situations where contract language conflicts with internal practice. When that nuance is absent, teams are left to interpret. In matters of client trust, interpretation is rarely enough.
  5. Transparency versus fear of misunderstanding. Internally, many leaders support identifying AI-assisted work so reviewers understand the workflow. Externally, disclosure feels more complicated. Some worry that mentioning AI will imply corner cutting. Others worry that failing to mention it will appear deceptive if uncovered later.
    Professional services firms trade on credibility. Clients are not merely purchasing a deliverable. They are purchasing judgment. If AI alters the process, firms owe clients clarity about what changed and what did not. Framed correctly, that conversation strengthens rather than weakens trust. Accelerating early-stage drafting or research does not diminish rigor if final outputs are reviewed under the same professional standards and QA/QC protocols clients have always expected.
    This is not a linear compliance question. It is an attempt to govern a capability that reshapes how work is produced, reviewed, and trusted across the entire industry.

The leadership test

What stands out to me across this collection of industry policies is not inconsistency; it's the recurring theme of caution paired with experimentation. Firms are building guardrails while acknowledging that the road ahead is still forming. Most policies are strong on principles and lighter on operational detail. That gap is understandable. The technology is evolving quickly, and governance often trails innovation.

The firms most likely to navigate this moment successfully will not be those with the flashiest tools. They will be those that build clarity into their operating model. They will classify AI use by risk level, define review expectations proportionate to that risk, and assign ownership rather than dispersing responsibility. Most importantly, they will recognize that AI adoption is not a software rollout to be managed through procurement and access controls. It is a leadership exercise governed through culture, accountability, and professional standards.

For those steering this transition, the risk lies in narrowing the conversation through a single lens. A firm composed solely of enthusiasts may move quickly but overlook exposure, while a firm composed solely of skeptics may preserve comfort but lose competitive ground. The healthiest organizations will hold both instincts at once. They experiment within boundaries and modernize without abandoning discipline.

The question is not whether AI will influence practice. It already is. The question is whether its integration will preserve defensibility, transparency, and professional judgment.

In the end, AI adoption in engineering is not primarily a technology problem. It is a trust problem. And trust, unlike software, cannot be patched after the fact.

Julia Moroney, MA, PCM, is vice president and director of marketing at French & Parrello Associates. Connect with her on LinkedIn.
