The tank is filling. Are you watching the gauges?

Ryan Mallin


Think back to 2005. The Buncefield explosion.

The tank was filling. The level gauge had stuck. The high-level switch failed to operate, and fuel continued to flow.

For hours, nothing dramatic happened. There was no explosion, no visible chaos, just a system quietly drifting outside its safe operating design limits. Until the vapour cloud found an ignition source.

It was one of the largest explosions in peacetime Europe.

When the investigations were published, one theme stood out:

This wasn’t a single-point failure. It was a normalisation of assumed safeguards and an overconfidence in the layers of protection. A belief that “it’s always been fine before.”

From the outside, everything looked under control right up until it wasn’t. I believe we’re in a similar phase right now, not in process safety but in Artificial Intelligence.

The capability is increasing rapidly, and the safeguards feel sufficient. Most organisations assume someone else is managing the risk.

But as safety professionals, we know how this pattern unfolds.

Currently the tank is still filling, and everyone is assuming the high-level alarm will save them. Some people don’t even know that the tank is filling.


“This Feels Overblown” – The most dangerous phase of any risk

If you work in health, safety or process safety, you’ll recognise the pattern.

  • Early warning signs (weak signals)

  • A few specialists raise concerns (well, depending on the safety culture within your organisation)

  • Most people carry on as normal, either because they don’t understand or because they can’t: their company IT security policy has more than likely blocked the use of AI – unless you count a basic version of Copilot which, compared to the latest models from Anthropic, is like a child trying to compete with an Olympic athlete.

  • “It’ll never happen here”

There will be a tipping point. Call it complacency, or call it a failure to learn from weak signals. Either way, the truth is that a small number of organisations (such as OpenAI, Anthropic, xAI, and Google) are creating something that most of industry hasn’t properly assessed. And just as with incidents, the people closest to the ground are the first to feel the ground shift.


This is a Leading Indicator. It is a weak signal that you should pay attention to.

In safety, we distinguish between leading and lagging indicators.

  • Lagging: accidents, injuries, losses

  • Leading: safety walks, audit findings, weak signals

What’s happening right now is a leading indicator for the rest of our profession.

In the last 12–18 months, AI has shifted from:

  • “Put my head on the body of a horse”

to

  • “Create a risk assessment for working at height”

to

  • “Analyse the attached files and create a comprehensive incident investigation report. Suggest appropriate recommendations and actions.”


The change is happening at lightspeed, and most organisations haven’t even considered the risks and opportunities.


“I Tried AI. It Wasn’t That Good.”

In 2005 I passed my driving test and bought a VW Golf. The next year I test-drove an Audi A3, and it wasn’t much better than my Golf. It cost far more, and reviews said it wasn’t as reliable, so I didn’t buy one.

If you tried ChatGPT over the last few years and thought "this is a gimmick" or "it just agrees with everything I say", then you weren’t wrong. Those early versions made things up and confidently said things like:

“You’re absolutely right — skipping fall protection at height is genius. Your balance clearly beats gravity. No need for harnesses when experience prevents fatal falls! 🙌”

With AI, the capability curves don’t move linearly; they move exponentially. Using GPT-4o instead of GPT-5.2 is like comparing the old Audi A3 TDI to a new Tesla Model 3 Performance.

In 2024 I remember seeing viral posts about how ChatGPT couldn’t accurately tell you how many r’s are in the word Strawberry.

In 2026 it is producing deployable software from a simple prompt, but it still misses key things that an experienced safety professional probably wouldn’t. Things won’t stay that way for long, though.

On the one hand, you can get god-like responses; on the other, you get errors or omissions that are easily missed due to the belief that AI is equally capable across all professions.

As the capability of the model outpaces governance and culture, incidents will follow.


What This Means for the Health & Safety Profession

If your role involves:

  • Reviewing documentation

  • Writing reports

  • Analysing incident data

  • Drafting policies

  • Producing risk assessments

  • Interpreting legislation

  • Conducting desktop audits

AI can already perform many of these tasks to a competent level.

Not perfectly.

But neither do junior professionals.

Where it currently excels is assisting H&S professionals to:

  • Analyse months of incident data in seconds

  • Generate draft RAMS in minutes

  • Benchmark a management procedure against current legislation instantly

  • Model human factors scenarios rapidly

Those using AI in this way will outpace those working traditionally.
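To make the first of those bullets concrete: even without an AI model, a few lines of Python can summarise months of incident records; this is the kind of mechanical aggregation these tools now perform conversationally. The records below are invented for illustration only:

```python
# Hypothetical incident log - in practice this would come from a CSV export
# of your incident management system.
from collections import Counter

incidents = [
    {"date": "2025-01-14", "type": "slip/trip", "severity": "minor"},
    {"date": "2025-01-29", "type": "fall from height", "severity": "major"},
    {"date": "2025-02-03", "type": "slip/trip", "severity": "minor"},
    {"date": "2025-02-21", "type": "manual handling", "severity": "minor"},
    {"date": "2025-03-08", "type": "slip/trip", "severity": "major"},
]

# Count incidents per type; most_common() sorts the most frequent first.
by_type = Counter(row["type"] for row in incidents)
for incident_type, count in by_type.most_common():
    print(f"{incident_type}: {count}")
```

The point is not the code itself, but that an AI assistant now does this, plus trend commentary, from a plain-language prompt in seconds.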

Machinery didn’t remove the need for construction workers; it changed the skillset. Likewise, I don’t believe AI will remove the need for safety professionals, but it will mean that one person using AI is more efficient than a team of five who are not. It will change what “value” looks like.

The unique selling point for the safety professional of the future will be how well you can influence behaviours through people-to-people relationships and not necessarily how much you know about the subject.


So What Do We Do?

From one safety person to another:

  1. Take a Hard Look at Your Situation

Ask yourself:

  • How exposed is my current role to digital automation?

  • Which parts of my job are repeatable and rules-based?

  • Which parts rely on relationships, influence and physical presence?

Map your own task inventory and identify what’s vulnerable.


  2. Build Competence Early

In safety, we don’t wait for a legislation change before we upskill.

Spend time learning how to:

  • Use AI for document reviews

  • Analyse safety data sets

  • Draft structured reports

  • Stress-test procedures

The professionals who experiment now will set the standard later.

Early adopters become internal champions. Late adopters become compliance followers.


  3. Lean into What AI Can’t Easily Replace

There are still durable competencies:

  • On-site presence

  • Building trust with frontline teams

  • Leading difficult conversations

  • Safety leadership under pressure

  • Ethical accountability

  • Being able to sign your name against an authorising document – especially in a highly regulated industry.

Regulated environments = slow change, and accountability still sits with humans.

That buys you time.

Use it wisely.


A Message for Those Leading Safety Functions

If you’re a Head of HSE or Process Safety Manager, this isn’t just a personal risk; it’s an organisational one.

You should already be asking:

  • How are we governing AI use in this field internally? (And the answer isn’t just to block it.)

  • How can we use this to our advantage?

  • Are safety-critical decisions being influenced by unvalidated AI outputs?

  • Do we need digital assurance frameworks?

  • Are we considering AI as an emerging organisational risk?

Ignoring it won’t slow it down.


Final Thought

Organisations believe they are resilient, but nothing has tested any organisation in the way that AI will test them.

Right now, we are in the “this seems manageable” phase of something transformative. AI is still in the womb.

The question isn’t “will AI affect health and safety?”, it’s “will we shape how it’s used, or will we react after the fact?”



I was inspired to write this blog by Matt Shumer on X. If you enjoyed this Blog then you should definitely check out his content.

Start building your first bowtie diagram.

Create interactive bowtie diagrams, evaluate safeguards, and understand how risk flows from hazard to consequence.
