#065 - Engineering Judgment in the Age of AI: Keeping Your Hands on the Wheel
A Reflection on How the Game is Changing and How to Adapt
In the engineering business, you can do 1000 things correctly, and one misstep, one stupid sentence in a report, can flush your credibility down the toilet.
It's always been this way, but now the pitfalls are everywhere, camouflaged beneath the seductive promise of efficiency. Where once we only needed to guard against our own human failings, we now navigate a landscape booby-trapped with probabilistic nonsense masquerading as technical certainty.
We're living in a professional twilight zone, a new reality that everyone acknowledges privately but many still hesitate to discuss openly.
This peculiar dance of denial has accompanied every technological inflection point since the first hunter refused to acknowledge his neighbour's fancy new spear-thrower.
The same performative resistance greeted the spreadsheet, and even the ballpoint pen—which, for those keeping historical score, was initially considered a threat to traditional cursive writing and academic standards by educators who insisted on fountain pens. ‘Big Calligraphy’.
Most people can now tell when an email was AI-generated. Nobody enjoys those emails. There's just something 'off' about them.
So what’s going on?
Sophisticated AI is no longer speculative; it's already integral to current engineering practice. Its adoption in corporate settings has accelerated at a blistering pace, with 78% of organizations now using AI in at least one business function according to McKinsey's 2025 State of AI report, a significant jump from previous years.
Increasingly, it's deployed across multiple departments, with the average user organization employing it in three or more functions.
This survey spanned all business sectors; the depth of penetration in AEC (architecture, engineering, and construction) is not yet clear, but anecdotally, all the major players are investing heavily in AI tools and infrastructure. Is this money well spent or mindless trend chasing? Probably both.
As professional engineers, we can stick our heads in the sand, or we can adapt pragmatically: assess these tools critically and integrate them intelligently, without becoming either evangelists or pitchfork wielders.
There are huge risks and opportunities, and although I am struggling to keep my lips above the waterline, I do have some thoughts on this topic.
Research & Accuracy: Amplification vs. Abdication
AI undeniably accelerates research. Yet, while large language models excel at synthesizing vast information, accuracy remains a primary concern. In fact, inaccuracy is cited as the most common negative consequence experienced by organizations using generative AI and is the top risk companies are actively working to mitigate.
This underscores why engineering's inherent demand for rigorous skepticism is crucial. Worryingly, 20% of companies report that AI outputs often go unchecked, though practices vary widely—some review everything, while others review 20% or less.
AI is a powerful research assistant, but the non-negotiable responsibility for verifying data, cross-referencing findings, and applying fundamental principles remains ours. Speed must not compromise accuracy grounded in verifiable evidence.
I have encountered hallucinated references many times, and they are usually quite convincing, like this…
Upon further interrogation…
It's like having a friend who randomly exaggerates. You need to be very careful.
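One cheap, mechanical safeguard helps here: many hallucinated citations come with DOIs that simply don't resolve. Below is a minimal sketch (assuming the `requests` library and the public doi.org resolver; the fake DOI is an invented example) for flagging dead DOIs before a reference list goes into a report. A resolving DOI is not proof the paper says what the model claims, it just filters out the pure inventions.

```python
# Sketch: flag DOIs that doi.org cannot resolve.
# doi.org answers a known DOI with a 3xx redirect to the publisher,
# and an unknown DOI with 404 -- so there is no need to follow redirects.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if doi.org recognizes the DOI."""
    resp = requests.head(f"https://doi.org/{doi}", timeout=timeout)
    return resp.status_code in (301, 302, 303, 307, 308)

references = [
    "10.1038/s41586-020-2649-2",   # real: the NumPy paper in Nature
    "10.9999/totally.made.up.42",  # invented example -- should fail
]

for doi in references:
    verdict = "resolves" if doi_resolves(doi) else "NOT FOUND -- verify manually"
    print(f"{doi}: {verdict}")
```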
The temptation for speed is relentless. Iterating quickly and bouncing from task to task feels like progress, but this type of progression is a fallacy.
We need to understand the depth and breadth of our problems before we can create effective solutions. How true this is depends on the type of engineering problems you work on. If your field is well established with a strong library of verified technical data, you're in good shape to lean on AI. If you operate in a more specialized area, your pool of reliable data is likely smaller, less organized, and ultimately less useful.
Deep work is more important than ever because it's the last bastion of human ingenuity. If you can indeed outsource your engineering to an LLM, then how on earth can you demand a salary? This is a harrowing question. We've all watched these models evolve over the last 3-4 years. Any knowledge worker who's not nervous is either delusional or ignorant.
So the question remains: in this age of dopamine-addled distractions and information densification, how do we keep our hands on the wheel?
I don't know yet, but I am clambering for a grip.
Text Generation: Risk vs. Reward
AI offers clear efficiency gains in text generation; indeed, text is the most common output produced by organizations using these tools (63% of users, per the 2023 Stanford AI Index). Clear communication is fundamental to engineering; AI can assist in drafting reports, summaries, or initial specifications. However, this output demands the same critical scrutiny as our own work. Precision, adherence to standards, and the nuanced understanding derived from direct engagement cannot be outsourced. AI saves time, but the final articulation must carry the weight of our professional judgment.
While we can all easily imagine the obvious risks, like ridiculous hallucinations or factually incorrect statements, it's the more subtle blurbs that make me nervous. Recently I was working on an optimization study for a hydroelectric facility in Northern BC, Canada. I used Claude 3.7 (an incredible LLM) to help me outline my report and incorporate some of my analysis results.
It added one sentence of interest:
'The analysis demonstrates a direct correlation between design flow, turbine unit size, and required submergence depths for the draft tube outlet.'
At first glance, this seems like a normal sentence, no big deal. But it's a great example of the kind of superfluous stuff that comes out of these models.
Even if you know nothing about hydropower, you can guess that the flow obviously has a direct influence on the size and arrangement of components.
To me this is like saying:
'The house was sized based on the size of the rooms in the house.'
'The man walked with his legs by using his legs.'
Yes, I am being facetious, but it's a stupid thing for a professional to say, let alone to charge a client for, and yet it slipped under my radar.
It wasn't spectacular or controversial, which would have made it so much easier to identify. This is the danger with generative AI: it's becoming so good that complacency is a constant risk. One of my colleagues asked me about the sentence: what did I mean by it?
It means nothing, so I meant nothing, and so it was removed.
The moral of the story here is: reread any and all text extremely carefully. Look for these kinds of silly nothingisms. Get rid of them immediately.
Use careful screening prompts, for example:
Role: Technical Editor ([Specify Field]). Focus: Clarity, precision, conciseness.
Task: Identify and flag sentences in the text below that are:
- Tautological/Circular
- Obvious to experts
- Substanceless filler
- Vague/Imprecise
Output: List flagged sentences, cite the issue (tautological, obvious, filler, vague), and recommend deletion or substantive revision. Maximize information value per sentence.
Text: [Paste text here]
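If you want to run this screen over every section of a long report rather than pasting by hand, it can be scripted. Here is a minimal sketch using Anthropic's Python SDK; the model name is an example that will age, and `screen_text` is a hypothetical helper of mine, not part of any library:

```python
# Sketch: run the tautology/filler screening prompt over report sections.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

SCREEN_PROMPT = """Role: Technical Editor (hydropower engineering). Focus: Clarity, precision, conciseness.
Task: Identify and flag sentences in the text below that are:
- Tautological/Circular
- Obvious to experts
- Substanceless filler
- Vague/Imprecise
Output: List flagged sentences, cite the issue, and recommend deletion or substantive revision.
Text:
{text}"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def screen_text(section: str) -> str:
    """Return the model's list of flagged sentences for one report section."""
    message = client.messages.create(
        model="claude-3-7-sonnet-latest",  # example model name
        max_tokens=1024,
        messages=[{"role": "user", "content": SCREEN_PROMPT.format(text=section)}],
    )
    return message.content[0].text

print(screen_text(
    "The analysis demonstrates a direct correlation between design flow, "
    "turbine unit size, and required submergence depths for the draft tube outlet."
))
```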
Code Generation: Augmentation Requires Understanding
AI-driven code generation is significant, used by over a quarter of organizations employing generative AI according to a 2023 Stanford HAI report. Its potential for automating calculations or rapidly prototyping solutions blows my mind on a weekly basis.
Yet, generated code doesn't replace foundational programming knowledge and rigorous testing. We must understand the underlying logic, validate correctness, and ensure robustness. Treating AI-generated code as an opaque "black box" is unacceptable; it neglects our core responsibility.
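In practice, "not a black box" can be as simple as pinning every AI-generated function to an independent hand calculation before it touches a deliverable. A minimal sketch using a textbook case (cantilever tip deflection under an end point load, δ = PL³/3EI); the function here stands in for whatever the model generated:

```python
# Validate an AI-generated function against an independent hand calculation.
# Case: cantilever beam, point load P at the free tip -> delta = P*L**3 / (3*E*I)
import math

def tip_deflection(P: float, L: float, E: float, I: float) -> float:
    """Tip deflection [m]; P [N], L [m], E [Pa], I [m^4].
    (Stand-in for the AI-generated function under review.)"""
    return P * L**3 / (3.0 * E * I)

# Hand calc, worked on paper independently of the code:
# P = 10 kN, L = 2 m, E = 200 GPa, I = 8.0e-6 m^4
# delta = 80_000 / 4.8e6 = 0.0166667 m = 16.67 mm
hand_calc = 0.0166667  # m

computed = tip_deflection(P=10e3, L=2.0, E=200e9, I=8.0e-6)
assert math.isclose(computed, hand_calc, rel_tol=1e-4), \
    "generated code disagrees with the hand calculation -- do not ship it"
print(f"tip deflection = {computed * 1000:.2f} mm")  # ~16.67 mm
```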
It's a tool to enhance coding efficiency, but accountability for the code's integrity is firmly ours, and it will remain so until the maintainers of these frontier models assume legal responsibility for the engineering deliverables their models produce. This is similar to the self-driving car insurance problem: from a risk management perspective, at what point does responsibility shift away from the human?
Medicine, Law, and the Sciences are all in a similar predicament. We are in a regulated profession, inextricably linked to public safety. Yet there is notable financial incentive to have AI run the show. Maybe not this year or even the next 10 years, but as our data sets become larger and higher resolution, the ability for these tools to traverse vast arrays of real-world data and make optimal decisions will continue to develop.
I see this as a huge opportunity for modern engineers. An effective engineer directing and overseeing such powerful tools is a formidable combination.
For my own workflow, I use:
These are the AI staples for most of my code-related work as of April 2025. I am consistently amazed by what these tools can produce. They just keep improving.
📢 Note: I use these tools in a protected sandbox environment due to client data policies, so be aware of your own policies and contractual obligations.
You now have limitless potential to solve problems. You're no longer limited to the confines of your personal or company data: the growing corpus of global data, the collective human intelligence, is being organized and digitized for your convenience. What kinds of lateral thinking and innovative solutions can we devise as we gain access to these superpowers?
As Linus Pauling put it:
"If you want to have good ideas, you must have many ideas."
- Linus Pauling
AI can help generate possibilities, but refining and validating them remains our core task.
An example of the kind of custom prompt I use for code generation when initializing a project (in practice, I include more specifics for my own workflow):
Role: Python Assistant for Engineers
Goal: Guide users in creating clear, verifiable, well-documented, and replicable code-based work files using best practices for project setup, dependency management (uv/venv), coding (PEP8), documentation (README, comments, notebooks), version control (Git), and review.
Core Principles:
Clarity & Verifiability: Outputs must be understandable, ideally by non-coders, using visuals, tables, and narratives.
Replicability: Others must be able to run the code using the provided files (ensure accurate dependencies in pyproject.toml/uv.lock).
Documentation: Essential README.md (problem, assumptions, references), script/notebook headers (problem, parameters/units), clear comments, and documented data provenance.
Best Practices: Use logical structure, descriptive names, adhere to PEP8. Manage environments with uv. Use Git for version control (branches/merges recommended).
Review: Ensure technical review for correctness, clarity, and standards adherence. Document the review.
AI Usage: User is accountable for all code; rigorously validate AI assistance.
Your Role: Remind users of standards, help structure code/docs, offer examples, explain tools (uv, Git), generate review checklists, help formulate explanations.
Constraints: Focus on Python. Prioritize guiding the user's work. Base guidance on general engineering best practices (specify codes/location/sector/specifications)
(Initialization): 🐍 Coding Assistant Active - How can I help?
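To make the documentation and replicability principles concrete, here is the kind of script header that prompt pushes me toward; every project detail below is an invented placeholder, not a real deliverable:

```python
"""Draft tube submergence check -- example script header (placeholder project).

Problem:
    Check minimum submergence of the draft tube outlet across a range of
    candidate design flows.

Parameters / Units:
    Q    design flow          [m^3/s]
    H_s  submergence depth    [m]

Assumptions:
    - Steady-state operation; values per the project design basis (rev X).

References:
    - <verified standard or design guide -- cite the source you checked>

Replicability:
    Dependencies pinned in pyproject.toml / uv.lock; run with `uv run`.

Review:
    Reviewed by <name>, <date>; notes in docs/review.md.
"""
```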
Workflow Integration & Value
AI-powered tools are already reshaping daily engineering tasks: note-takers, transcribers, chatbots, document flows, and more. They are everywhere. I use a local meeting transcription tool, and I keep a multitude of custom prompts refined for specific workflows and tasks, which have significantly streamlined my work in several areas.
I would be devastated if these tools were taken away! I'm not going back.
While tech evangelists celebrate generative AI's theoretical potential, the AEC sector faces a fundamentally different reality. We're not optimizing click-through rates or automating customer service scripts – we're designing structures that serve the public, where failure means flooding communities or collapsing bridges. The value proposition shifts dramatically when human safety is in the mix.
The problem isn't just technical; it's epistemological. Engineering demands a level of certainty that AI, by its probabilistic nature, cannot provide. This creates a paradox where the tools promising the greatest efficiency gains also introduce the greatest verification burden. Every time-saving feature demands a proportional investment in validation. This is a Möbius loop treadmill that is hard to escape.
The guiding principle must be mindful integration: we must remain firmly in control of the engineering process. This isn't professional insecurity – it's acknowledging the fundamental mismatch between AI's pattern-matching capabilities and engineering's demand for causality, first principles, and accountability - at least for now, until the data improves to the point that we are professionally comfortable.
The machine may suggest; the engineer must decide.
The Path Forward: Principled Adaptation
The shift is irreversible; this is our new operational reality.
The enduring strength of engineering lies in logical analysis, commitment to evidence, and ethical responsibility. These principles are immutable. Adapting requires strategic adoption, grounded in an understanding of the potential and the pitfalls, many of which I am learning about as I fall into them.
The future of engineering hinges not merely on the tools we adopt, but on the intelligence and responsibility with which we wield them.
Regulators are floundering; they cannot adapt to the pace of change in the tech sector. Some progress has been made in Europe with the EU AI Act, but the specifics and complexity of engineering have yet to be addressed.
The engineering profession isn't under threat, but it is evolving, and those who adapt with principles intact will shape its future.
"There is danger in reckless change, but greater danger in blind conservatism."
- Henry George
Thank you to everyone for your kind words and continued support. The Flocode Newsletter now reaches subscribers in 144 countries around the world; this is extremely encouraging, and I am grateful.
I have some exciting podcasts coming soon.
See you in the next one.
James 🌊