#069 - AI in Architecture, Engineering and Construction | 2025 BST Global Conference Recap
Reflecting on Status of the Engineering and Construction Sector and the Pace of AI Adoption
I recently attended the BST AI Global Summit in beautiful, sunny Florida.
The conference itself was outstanding: professionally executed, efficient, and well organized. The content was relevant and clearly presented. They really did an excellent job, and I hope to return next year.
I heard a common theme from folks across global engineering firms: a clear acceptance that our industry is primed for significant disruption. Many know they need to adapt, but the how of practical implementation is still buffering.
Like any event centered on a trending technology, there was a wide breadth of content. My goal in this article is to distill it down to actionable intelligence for practicing engineers.
My day-to-day work at Knight Piésold is hands-on with frontier AI models in a secure environment. We're past the initial hype and into the weeds of implementation challenges. This gives me a specific lens, but my aim here is to cut through the broader noise and offer insights that sharpen your thinking and effectiveness, wherever your firm is on its AI journey.
The Unvarnished AEC & AI Landscape
An AI Strategy Isn't Optional Anymore. Clients are starting to ask pointed questions. If your firm can't articulate how it's using AI (or planning to) to enhance value, manage risk, and improve project delivery, it signals unpreparedness; this is becoming a basic dimension of perceived competence. Some sectors are moving faster than others: commercial/residential is leading the charge, while energy and infrastructure projects are moving a little slower, with growing pockets of application.
Data: Still the Foundation, Still the Bottleneck. This isn't news, but AI forces the issue with brutal clarity. You cannot get value from AI without high-quality, well-governed, accessible data, preferably in the cloud. Garbage in, garbage out still applies; AI just produces the garbage faster and potentially makes it look more convincing. Sure, you can clean and sort data using AI tools, but in my experience this has been a fool's errand.
Many firms are still bogged down by data silos and inconsistent quality. Retroactively sorting or upgrading your data is an attractive proposition, a historical gold mine of proprietary data, but in practice it is difficult to implement meaningfully. I am treading carefully here, working on smaller pilot projects to validate concepts.
Proving Value & Managing Risk: The potential of AI is clear at this point. Translating that into measurable ROI and project improvements? This requires its own detailed breakdown, which I'll try to tackle in the future. Simultaneously, the risks are substantial and non-negotiable:
Cybersecurity: AI models introduce new attack surfaces.
Ethics & Bias: Biased training data leads to biased outputs. This is an engineering integrity issue.
Accuracy & Hallucinations: LLMs, in particular, confidently invent nonsense. Verification is critical. The NIST AI Risk Management Framework is a decent starting point for structuring risk thinking, but practical, AEC-specific methodologies for validation and risk mitigation are nascent at best. Expect this to be a major focus area.
Client AI Adoption: As clients start using AI for their own internal analyses, expect two pressures:
They'll come with narrower, pre-digested scopes, reducing our traditional analysis role.
They'll assume we're using AI for massive internal efficiencies and expect more sophisticated deliverables for the same (or lower) fees. We must get better at defining and justifying the premium value our AI-augmented expertise provides. Otherwise, margins will erode.
Upskilling is the Elephant in the Room: The gap in AI literacy across the industry is vast. Effective adoption requires more than teaching people how to write prompts. It demands a fundamental shift in how engineers approach problems, integrate tools, and critically evaluate outputs. Generic, one-size-fits-all training is largely a waste of time and resources. This cultural and educational hurdle is arguably the biggest barrier to widespread, effective AI use right now. The spectrum of adoption and resistance is wide, running from pious zealotry to steadfast refusal.
Data Products: Credible New Revenue, But Think Long Term: Beyond internal efficiency, there's also a push towards monetizing expertise through client-facing data services (e.g., AI-driven predictive maintenance models, site optimization tools). This is a potentially significant growth area, but only for firms with the right data infrastructure, technical talent (AI/ML + domain expertise), and product development discipline. It's not a simple add-on; it requires strategic commitment and long-term maintenance. Like many of these tools, they are easy to build, quick to become obsolete, and tricky to maintain.
The SME AI Gap & Big Firm Investment. Small to medium-sized firms (sub-500 staff) often seem hesitant, or limited to basic, off-the-shelf tools like Microsoft Copilot, frequently lacking a clear strategic application. In contrast, large firms (Arup, Stantec, Parsons, etc.) are building dedicated AI/data science teams and making more substantial, integrated investments. This points to a widening capability gap. SMEs need to find focused, high-leverage niches or risk falling further behind.
Disruption is Coming: While firms are cagey about specific AI budgets, the resource shift in major players is a fact. Tech startups are targeting AEC inefficiencies, and Big Tech is actively acquiring related capabilities. Our industry's historical resistance to rapid tech adoption makes it a prime target. AI is the catalyst. The old, comfortable pace of change is over.
The All-Knowing Internal Knowledge Base? The dream of feeding decades of project files into an LLM and getting instant, accurate answers remains largely aspirational. Yes, there's progress on narrow tools (proposal generators, resume parsers), but reliably querying vast, unstructured, legacy project data is a hard technical problem that no one, including the giants, has truly solved at scale, as far as I can tell. There are promising developments with RAG, agentic flows, and the more recent MCP tooling, but this is a massive undertaking. If you've tried this, then you know.
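To make the RAG idea concrete, here is a minimal, self-contained sketch of the retrieval step. Everything in it is illustrative: the documents are invented, and crude keyword overlap stands in for the vector embeddings and LLM call a production system would use.

```python
import re
from collections import Counter

def chunk(text, size=40):
    """Split a document into overlapping word chunks (toy stand-in for a real chunker)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def score(query, passage):
    """Crude lexical-overlap score; real systems use vector embeddings instead."""
    q = Counter(re.findall(r"\w+", query.lower()))
    p = Counter(re.findall(r"\w+", passage.lower()))
    return sum((q & p).values())

def retrieve(query, documents, top_k=2):
    """Return the top_k most relevant chunks across all documents."""
    passages = [c for doc in documents for c in chunk(doc)]
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_k]

# Hypothetical project files, for illustration only:
docs = [
    "Tailings dam inspection report 2019: seepage observed at the north abutment ...",
    "Geotechnical site investigation: borehole logs indicate soft clay to 12 m depth ...",
]
context = retrieve("seepage at dam abutment", docs)
# The retrieved context would then be prepended to the LLM prompt:
prompt = "Answer using ONLY this context:\n" + "\n".join(context) + "\nQ: Where was seepage observed?"
```

The hard part at firm scale is not this loop; it's cleaning decades of inconsistent files so the chunks are worth retrieving in the first place.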
The Path Forward: Optimization, Not Tool Accumulation
For engineers, the highest leverage isn't chasing every new LLM. It's about optimizing the tools and data we already have access to (at the moment, I favour Gemini 2.5 Pro and Claude 3.7). Some things to think about:
Define Real Objectives: What specific engineering problem are you trying to solve or improve with AI? What does measurable success look like? Answering this is difficult, and it will differ by context, but avoiding the question leads to wasted effort. We must define these targets.
Measure Impact Rigorously: "It feels faster" or "it just works better" isn't good enough, and frankly, the industry as a whole is still weak here. This isn't an excuse to avoid it; it's a call to develop better methods. We need quantifiable metrics: time saved, errors reduced, quality improved, risk mitigated. (Developing robust frameworks for this is non-trivial; it's something I'm actively working on.)
Align AI with Business Reality: Ensure AI initiatives directly support core business goals and address genuine client needs, not just internal curiosity projects. I heard the term "inventor syndrome": somebody who can't help building stuff without a clear strategy. I could certainly be diagnosed with this.
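On the measurement point above, even a basic before/after comparison beats "it feels faster". Here is a minimal sketch of what tracking one recurring task might look like; the task, field names, and numbers are all hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskLog:
    """One completed instance of a recurring task (e.g. drafting a site memo)."""
    minutes: float
    errors_found_in_review: int
    ai_assisted: bool

def impact_summary(logs):
    """Compare AI-assisted runs against the manual baseline for one task type."""
    base = [l for l in logs if not l.ai_assisted]
    ai = [l for l in logs if l.ai_assisted]
    if not base or not ai:
        return None  # can't compare without both samples
    return {
        "time_saved_pct": 100 * (1 - mean(l.minutes for l in ai) / mean(l.minutes for l in base)),
        "error_delta": mean(l.errors_found_in_review for l in ai)
                       - mean(l.errors_found_in_review for l in base),
        "n_baseline": len(base),
        "n_ai": len(ai),
    }

# Invented numbers, for illustration only:
logs = [
    TaskLog(120, 2, False), TaskLog(100, 1, False),
    TaskLog(70, 2, True), TaskLog(60, 3, True),
]
summary = impact_summary(logs)
```

Even this toy version surfaces the trade-off that matters: time saved means little if review errors climb, which is why both are logged.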
Where Individuals and Teams Should Focus:
Master AI Best Practices & Own the Risk: Data governance and output validation now extend past your IT team; they are engineering responsibilities. Practical things to consider:
Learn Effective Prompt Engineering: Precision matters. Treat it like writing clear specifications. This makes all the difference.
Cultivate Extreme Skepticism: Verify everything AI produces. Your engineering judgment is more critical, not less. Assume output is wrong until proven right. If you are using LLMs to research facts or data, you are completely insane. Get a grip.
Understand Model Limits: Know the strengths and weaknesses of the specific tools you use. Don't apply an LLM where a deterministic calculation is required.
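On the "treat it like writing clear specifications" point, a prompt can literally be assembled like a spec: role, scope, inputs, constraints, deliverable. A minimal sketch follows; the template structure and the example content are my own illustration, not a standard.

```python
def build_prompt(role, task, context, constraints, output_format):
    """Assemble a prompt the way you'd write a spec: role, scope, inputs, constraints, deliverable."""
    sections = [
        f"ROLE: {role}",
        f"TASK: {task}",
        "CONTEXT:\n" + context,
        "CONSTRAINTS:\n" + "\n".join(f"- {c}" for c in constraints),
        f"OUTPUT FORMAT: {output_format}",
    ]
    return "\n\n".join(sections)

# Hypothetical review task, for illustration only:
prompt = build_prompt(
    role="Senior geotechnical engineer reviewing a draft report",
    task="List factual inconsistencies between the summary and the borehole data below.",
    context="(paste summary and borehole table here)",
    constraints=[
        "Cite the exact sentence for each issue",
        "Do not infer values not present in the data",
        "Reply 'none found' if there are no inconsistencies",
    ],
    output_format="Numbered list, one issue per line",
)
```

Two of the constraints above ("do not infer", "reply 'none found'") exist purely to blunt hallucination; explicit escape hatches like these reduce confident invention far more than politeness ever will.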
Targeted, Role-Specific Training: Generic "Intro to AI" workshops are low value. Engineers need hands-on sessions using your firm's approved tools for their specific tasks. Drafting, modeling, analysis, report writing, project management: each needs tailored guidance. Short, high-impact, practical sessions.
Client Communication: If AI is part of your workflow, articulate how it benefits the project. Does it enable more complex simulations? Reduce specific risks? Deliver insights faster? Help clients (and internal stakeholders) understand the tangible value. We're all in the same boat: clients are leaning on AI more and more to gain an edge, so help them out where you can.
Drive Focused, Bottom-Up Innovation: Often, the most valuable AI applications solve granular, annoying problems:
Share What Works: Create simple channels (like a dedicated Teams channel or wiki page) for sharing effective prompts, small wins, and lessons learned.
Empower Junior Staff: They often pick up new tools quickly and see non-obvious applications. Give them room to experiment.
Share Code: Use internal repositories (we use GitLab) for small, reusable scripts and tools. Enable engineers to solve their own micro-problems efficiently.
Why a "Digital Strategy" Document Should Be a Real Thing
I can feel your eyes rolling upwards; just hold on a second. I know this sounds like marketing waffle, but a good document helps clarify your thoughts and formalizes a strategy. The act of writing it down forces you to consider real implementation and scope. Advantages include:
A Shared Target: What are we collectively trying to achieve with AI?
Reduced Wheel-Spinning: Aligns individual efforts, preventing redundant or low-value experiments.
A Common Playbook: Standardizes best practices, improving quality and managing risk consistently.
Justification for Resources: Clear objectives and metrics help secure budget and time for valuable work (including your time).
Guidance for Skills: Informs what training is actually useful versus what's just hype.
The intent is to direct collective energy effectively, not constrain ingenuity.
Final Thought
The AI shift in AEC is real, and it's accelerating. It's easy to feel overwhelmed or tempted to dismiss it. My takeaway from the summit, filtered through practical application, is the need for focused, pragmatic action. Much easier said than done.
I am deeply interested in this topic on a personal level, so it's easy for me to consume this type of information. But for many busy engineering professionals, the prospect of digesting an entirely new way of doing things is abhorrent. I get that; everyone is balls-to-the-wall busy. But the flexibility gained by learning the basics of prompt engineering for LLMs cannot be overstated, and that's just scratching the surface.
The challenges (data readiness, risk management, upskilling, proving value) are significant. But the opportunities are equally real for engineers and firms who can think clearly, adapt systematically, and apply these tools with discipline and critical judgment.
The tools are evolving rapidly but the fundamental principles of good engineering endure.
Focus on understanding the fundamentals, rigorous application, and measurable value. That's how we navigate this.
Thanks for your time.
See you in the next one.
James