Creating an AI company policy

Oct 05, 2025

Practical insights and recommendations for creating, communicating, and maintaining an effective AI policy.

When we first explored artificial intelligence at HLB, like many in the architecture, engineering, and construction industry, we approached it with excitement and caution. Implementing AI offered incredible potential, but the risks quickly became evident. If your firm does not yet have an AI policy, learn from our experience and establish one as soon as possible. Here are practical insights and recommendations to guide you through creating, communicating, and maintaining an effective AI policy.

Planning.

In the wake of ChatGPT's release in November 2022, when the world saw just how transformative AI could be, we moved quickly. By February 2023 we had assembled a dedicated AI team that blended HLB's strategic vision with deep technical and educational expertise. Within weeks, this small group began researching trends, evaluating tools, and translating insights into clear, actionable guidance. Having one point of contact within the AI team drove consistency and kept our strategy aligned and accessible, accelerating HLB's understanding of the technology and its potential.

Next, the team drafted and deployed our AI policy with a focus on speed over perfection. We knew it was more important to get a solid baseline in place, something clear but not overly granular, than to wait for every detail to be perfect. Policy creation was not limited to leadership or IT; open, cross-disciplinary conversations, together with advice from our legal team, brought forward diverse perspectives on risks, opportunities, and practical needs. Through that broad engagement and the insights we gathered, we shaped a policy that anticipated how teams might work while leaving room to refine and expand as new tools emerged. We also made sure the policy's scope covered not only chatbot interactions but also AI image generation and AI meeting note-takers, across both internal and external communications.

Finally, before diving into specific tools or technologies, we made sure our AI team understood AI's current capabilities and its rapid trajectory. We then organized collaborative workshops and pilot programs, carefully selecting participants who were quick learners and natural cheerleaders for the initiative. These sessions combined workflow mapping with the development and testing of carefully constructed prompts, helping us outline a solid strategy. We collected data throughout the pilots, measuring time taken, effort level, and output quality against our baseline methods. Through this hands-on exploration we identified where AI could boost speed and innovation, and where we needed to keep human expertise firmly at the core of decision-making and quality control. These exercises gave us context and insights that informed later revisions of the policy.

Implementation.

We quickly discovered that an instructional PDF doesn't drive change. Real implementation came only when we committed to onboarding, ongoing education, demonstrating approved use cases, and creating space for discussion. That integration of policy into everyday practice made a meaningful difference. It wasn't about getting signatures on policy documents; it was about building a shared understanding of the reasoning behind each provision.

When it came to AI-generated content, we communicated a clear analogy: treat outputs like junior staff work. Even when something looked polished, it could contain hidden inaccuracies, hallucinations, bias, or missing context. To protect quality and maintain trust, we implemented independent review processes for AI-assisted work regardless of the user’s experience level.

One important decision during the development of our AI policy was aligning the consequences of misuse with our existing IT and HR policies. The policy complements those documents; for example, it states that "AI should not be the primary decision-maker for any employment-related judgments." By keeping the disciplinary structure consistent, we eliminated ambiguity and reinforced that AI use is held to the same professional standards as other core responsibilities.

We also learned that hard-coding specific tools into the policy didn't work. The pace of AI software evolution made that approach impossible to maintain. We replaced it with a dynamic list of approved tools that lives outside the main document and is revised regularly. This gave us flexibility while maintaining oversight and consistency.

Data privacy was a foundational pillar. We clearly defined what information could never be shared with AI systems, with separate rules for public LLMs and secure internal environments, and spelled out the distinction between the two. We cautioned teams that while public AI tools are easy to access, they often harvest inputs to improve their models, so using unauthorized chatbots can expose sensitive company data. Prompts themselves were treated as intellectual property, protected just like code, specifications, and client deliverables.

Underlying all of this is one core principle: human judgment remains essential. Our policy makes it clear that AI is a support tool, not a replacement for expertise. Every AI-assisted output must be reviewed for accuracy, bias, and alignment with our values, ensuring the work we deliver remains thoughtful, ethical, and high quality.

Follow-through.

A rigid policy can stifle creativity, while a vague one invites risk. We invest time in finding language that sets clear expectations and still encourages thoughtful exploration. Teams are invited to evaluate new workflows within safe boundaries, so innovation flourishes without compromising data security, oversight, or quality.

We hope these insights help you as you develop your own policy. Ours remains a work in progress: we are still identifying gaps, rolling out additional training to ensure alignment across offices, and preparing to revise our guidelines as both market tools and our in-house solutions evolve.

If you are debating whether to start, do not wait. You do not need perfection on day one. Give your team a structure for responsible AI use, gather feedback, and iterate. As the technology matures, your plain-language instructions will evolve into automated workflows that drive geometry, populate schedules, and produce client-ready deliverables. Decide early where AI adds speed and insight, and where human intuition remains non-negotiable. Teach emerging staff the reasoning behind these choices so they build the judgment that senior reviewers rely on during QA/QC. The richer the context you supply, the more accurate, personalized, and valuable the AI's response becomes, keeping you firmly in control while extending your reach throughout every project phase.

Erik Stroemberg is an associate director and BIM manager at HLB Lighting. Connect with him on LinkedIn.
