#097 - The Water Line
Professional Engineering and AI - intuition, transparency, and learning to swim in the current.
I recently listened to a conversation between Ezra Klein and Jack Clark, co-founder of Anthropic, about where AI is headed and what it means for white-collar work. It’s one of the best discussions I’ve heard on the topic, and I’d encourage you to seek out the full conversation. It left me thinking for days, not about some abstract future, but about what I’m already experiencing in my own practice and what I think is coming for our profession.
I lead our AI initiatives at Knight Piésold, where I work. I spend a significant portion of my time thinking about how these tools integrate into complex engineering workflows. I am, by any reasonable definition, a technologist who is enthusiastic about this shift. And even so, there are days when I feel like I’m barely keeping my lips above the water line.
If that’s true for someone actively swimming in this current, I can only imagine what it feels like for engineers who have been heads-down delivering projects for the past decade and have only peripherally registered that something significant is happening.
This article is for both of us.
What I’m actually feeling
The thing nobody in our industry seems willing to say plainly is this: your clients know you have more capability now. They know because they have more capability too. The tools have been democratized. A project manager on the owner’s side can generate a competent-looking technical memo in twenty minutes. They can interrogate your deliverables with AI and come back with sharper, more ruthless questions than they would have asked two years ago. The bar for what constitutes valuable specialized consulting is rising, fast.
I feel this pressure directly. To justify the rates we charge, to prove that hiring an experienced engineering consultant is worth it, you increasingly need to push these tools to their limits in creative ways. And that doesn’t just apply to technical analysis. It applies to onboarding new staff, training, writing proposals, business intelligence, risk assessment, strategic planning, project management. Every surface area of the business is exposed to this shift. The firms and individuals who develop the intuition and creativity to wield these tools effectively will pull ahead at a pace that surprises everyone, including themselves.
That’s the part that keeps me up at night. Not that AI will replace engineers, but that the gap between engineers who engage with these tools seriously and those who don’t will become a chasm before most people realize it’s opened.
What makes us different (for now)
There are aspects of professional engineering that provide some insulation from the first wave of white-collar disruption, and they’re worth being clear-eyed about.
We interface with the physical world. This matters more than people appreciate. Every AI tool is only as good as the context and data you feed it, and in our profession, we are the ones who specify how that data gets generated. We design the drilling programs. We define the hydrologic monitoring networks. We scope the structural inspections, the surveying, the sampling. We curate and manage that data, decide what’s relevant, understand its limitations, and expose it to analytical tools with judgment about what it means. For now, we remain an essential ingredient in that chain.
Professional regulation adds another layer. In many jurisdictions, a licensed engineer must stamp the work, carry the liability, exercise the professional judgment. That’s a real barrier to full automation. Though I’ll note that even this barrier appears to be weakening in some places, and I wouldn’t build a career strategy around regulatory protection alone.
But here’s the honest part: “for now” is doing a lot of heavy lifting in those sentences. The window of security that comes from physical-world interface and professional regulation is real, but it’s not permanent. It buys us time. The question is what we do with that time.
How I see it
I spend less time on busywork now than I did two years ago. Meaningfully less. The hours I used to burn on formatting reports, chasing down reference standards, drafting boilerplate correspondence, setting up calculation templates... a significant portion of that is handled by AI, or at least accelerated to the point where it barely registers as work.
What’s replaced it is more time thinking. More time ideating, refining approaches, interrogating assumptions, exploring alternatives I wouldn’t have had the bandwidth to consider before. For me, personally, it’s a clear net positive. I feel like a more effective engineer than I was before these tools existed.
I understand that many people see this differently. There’s a legitimate concern that widespread AI use will slowly erode our collective ability to think clearly, to develop unique perspectives, to produce work that isn’t a grey, cookie-cutter approximation of what an algorithm considers adequate. The worry that we’re training ourselves out of originality is not irrational. I’ve seen AI-generated engineering content that is technically correct and completely devoid of insight. More of that is not a good outcome for anyone.
But I think these concerns and the optimism I feel are opposite sides of the same coin. The tools themselves are neutral. They amplify whatever you bring to them. If you bring shallow understanding, you get polished mediocrity. If you bring genuine expertise and creative intent, you get something that would have taken you five times as long to produce on your own, with space left over to think about whether it’s actually the right answer.
I choose optimism because of the tangible benefits I experience daily. But I hold that optimism alongside real concern, particularly about the next generation.
The part I can’t resolve
Am I worried about how AI might influence the development of my two young kids as they grow up in a world with ubiquitous AI tools? Absolutely. The interview touched on this, and it’s the dimension of all this that I find hardest to think clearly about.
There’s something fundamentally different about learning to think in a world where a system that appears to know everything is always available to answer your questions, finish your sentences, and tell you your ideas are interesting. The friction of not knowing, of having to sit with confusion and work through it, is where a lot of genuine understanding gets built. I’m not sure how you preserve that in a world designed to eliminate friction at every turn.
But the genie is out of the bottle. These tools exist. They will become more capable. The trajectory is not in question. What remains in question is how thoughtfully we adapt to it, which has always been the thing that separates humans who thrive from humans who get swept along.
Building intuition the only way it gets built
I’m not going to pretend I have a tidy framework for navigating all of this. But I do have a growing conviction about what actually works, and it’s simpler than most people want it to be.
You have to use the tools. Regularly, on real problems, with genuine curiosity about where they succeed and where they fall apart. There is no shortcut to this. Reading about AI, attending webinars about AI, having opinions about AI... none of it substitutes for the direct experience of sitting with a problem you understand deeply and working through it alongside one of these systems. That’s where intuition gets built. You start to feel where the boundaries are. You learn what kinds of prompts produce useful output and what kinds produce confident nonsense. You develop a sense for when to trust and when to verify, which is really just engineering judgment applied to a new domain.
The curiosity piece matters more than people realize. The engineers I’ve seen progress fastest with these tools are not the most technically sophisticated. They’re the most willing to experiment, to try something that might not work, to ask “what if I approached this completely differently?” That experimental mindset, treating every interaction as a small hypothesis to test, compounds remarkably quickly. Within weeks you develop instincts that would take months to acquire from reading alone.
One principle I keep coming back to is borrowed from how Anthropic themselves approach building. They’ve made a deliberate choice to be transparent about their systems, to make their processes visible, to publish what they’re learning even when it’s uncomfortable. The reasoning is straightforward: you can only improve what you can see. If your workflows are opaque, even to yourself, you have no mechanism for making them better.
I think this applies directly to how we should adopt AI in engineering practice. Make your AI-assisted workflows explicit. Document what you’re delegating to AI, what you’re reviewing, where the human judgment enters the chain. Not because someone is going to audit you, but because that visibility is what allows you to improve. It’s the difference between “I use AI sometimes” and “I have a clear process for how AI supports my work, and I know where the weak points are.” The first is experimentation. The second is engineering.
This also means being honest with your team and your organization about what you’re doing. The instinct to quietly use AI and not mention it is understandable but counterproductive. If one person on a team figures out that AI can cut proposal preparation time in half, and they keep it to themselves, the firm doesn’t benefit. If they share the workflow openly, everyone improves. The transparency creates a feedback loop where better practices propagate and get stress-tested by people with different perspectives. That’s how institutional knowledge has always been built. The tools are new, but the mechanism isn’t.
Where this lands
I’m genuinely curious how people in our profession feel about this. Not the surface-level takes about whether AI will replace engineers, but the deeper questions about what it’s doing to how we think, how we develop judgment, how we define value in our work. The obvious risks get plenty of airtime. I’m interested in the ones that aren’t obvious yet.
In any case, we are finding out in real time.