
AI Is Doing the Work. But You’re Still Carrying the Risk.


I was recently involved in an adjudication where one thing became clear almost immediately. One party had relied heavily on AI to prepare their submissions—the payment claim, the adjudication claim, and even the reply. At first glance, the documents appeared strong. The structure was clean, the language was polished, and the arguments were presented with a level of confidence that made the overall submission read as though it had been prepared by someone experienced.


However, as I worked through the material in detail, a very different picture began to emerge. Key contractual requirements had been misunderstood, critical assumptions were incorrect, and several parts of the claim did not align with the actual facts of the project. More concerningly, some of the clauses being relied upon did not even exist within the contract. Each of these issues, taken individually, may have been manageable. But when combined, they created a position that was fundamentally difficult to defend. What stood out most was not the presence of errors; it was the genuine belief that everything had been done correctly.





This brings us to a broader point that is becoming increasingly relevant across the industry. If this work had been prepared by a junior team member, a consultant, or a subcontractor, it is highly unlikely that it would have been submitted without careful review. Most professionals would, at a minimum, take the time to interrogate the assumptions, check the work against the contract, and confirm that it accurately reflected what had occurred on the project. That is basic due diligence. Yet when the same work is produced by AI, that level of scrutiny is often reduced or, in some cases, removed entirely.

The question we need to ask is simple: why are we treating AI differently?


Part of the answer lies in how AI presents information. It produces work that is fast, structured, and highly convincing. Tasks that previously required hours of effort, such as drafting claims, preparing submissions, or summarising contractual positions, can now be completed in minutes. The outputs are coherent and well-articulated, often giving the impression that the underlying reasoning is equally sound.


As a result, trust is increasingly being placed not on verification or understanding, but on how persuasive the document appears. This is where the risk begins.

The issue is not AI itself, nor is it the use of AI-generated content. The issue lies in the confidence that AI projects and the ease with which that confidence is adopted by the user.


Human confidence is typically grounded in competence, developed through experience and shaped by an awareness of potential risks and past mistakes. AI confidence, by contrast, is inherent. It does not hesitate, it does not signal uncertainty, and it does not distinguish between situations where its output is highly reliable and those where it is not. It presents information with the same level of certainty regardless of context. When that confidence is mistaken for correctness, the consequences can be significant. Once AI’s confidence becomes your confidence, you place yourself in a position where any oversight or error becomes yours to carry, particularly if it is not identified early.


This distinction becomes critical in construction, and even more so in the context of disputes. Outcomes are not determined by how well an argument is written or how persuasive it sounds. They are determined by whether the contractual process has been properly followed, whether notices have been issued in accordance with the contract, whether timing requirements have been met, and whether the claim is supported by credible evidence. AI can assist in presenting an argument clearly and coherently, but it does not independently verify whether those underlying requirements have actually been satisfied. Nor does it exercise judgement in the way that a qualified professional or adjudicator would. It cannot reliably interpret how a specific contract should be applied in context, assess how the facts of a project influence entitlement, or evaluate how a decision-maker may view issues such as compliance, causation, or adequacy of evidence. As a result, the real risk is not that AI produces poor work, but that it produces convincing work without confirming whether the position itself is valid.



The Shift You Need to Be Aware Of


None of this is to suggest that AI should be avoided. On the contrary, it is an extremely powerful tool when used appropriately. It can improve clarity, assist with structuring complex information, accelerate drafting processes, and provide useful perspectives that may not have been initially considered. In many ways, it raises the baseline quality of written communication across the industry. However, there remains a fundamental difference between presenting an argument effectively and having an argument that can withstand scrutiny. AI operates well in the former. Construction disputes are decided in the latter.


What is changing, and what professionals need to be aware of, is the shift in how risk presents itself. Historically, weaker claims were easier to identify through poor structure, inconsistent reasoning, or unclear language. That signal is now diminishing. We are increasingly seeing submissions that are well-written and logically structured, yet still contain significant gaps in contractual compliance, misunderstandings of entitlement, or insufficient supporting evidence. This creates a more subtle and potentially more dangerous form of risk: work that appears credible enough that it is not questioned.



The Risk Sits With You—Not the Tool


Ultimately, regardless of how a document is produced, the responsibility does not shift. When a claim is submitted, it carries your name, your organisation stands behind it, and your position is judged based on its content. AI is not accountable for that outcome. It does not bear the consequences of an incorrect assumption or a non-compliant process. You do. That is why the standard of review should not change simply because the work was generated more efficiently.


AI should therefore be used as a tool to support thinking, not replace it. It can act as a useful sounding board, a way to test ideas, or a means of improving how information is communicated. However, it should not be treated as the final authority on whether a position is correct. Where the stakes are high, whether due to the value of a claim, the complexity of the dispute, or the potential impact on a business, additional care is required. This may involve seeking input from a specialist, consulting an experienced professional, or at the very least, independently verifying the basis of the position being advanced.


AI is doing more of the work. That much is clear. But the responsibility has not moved.

You are still carrying the risk.


And if there is one principle to take away from this, it is this:

If you would not submit someone else’s work without reviewing it, you should not do so with AI. And where the consequences matter, don’t hesitate to seek advice from someone who understands the risk.


Bridging the Gaps. Build with Confidence.

© 2025 Emmolina May. All Rights Reserved.
