The United Nations World Day for Safety and Health at Work campaign launches on 28 April 2025 with a focus on new technologies and how they are transforming occupational health and safety (OHS). At the heart of this transformation lies artificial intelligence (AI), offering powerful opportunities but also introducing serious challenges.
As AI-driven platforms are increasingly adopted into workplace processes, it’s important to pause and consider the balance between human expertise and technology that is essential for delivering safe, high-quality outcomes.
The Advantages and the Risks
AI brings undeniable benefits. It can generate educational content at speed, reduce costs and rapidly analyse vast amounts of data to inform compliance and risk assessments.
However, in the safety-critical sector, the potential risks cannot be overlooked. While AI is a remarkable tool, it lacks real-world experience and emotional intelligence. It doesn’t know what it feels like to work under pressure, nor can it anticipate the unpredictable nature of human behaviour in an emergency. Effective safety training goes beyond delivering facts; it needs to resonate emotionally. That resonance comes from human experience, not algorithms; from subject matter experts, not just a consensus of data.
Without expert oversight, AI-generated content also risks missing contextual nuances, leading to misinterpretations that could have life-threatening consequences. Take terms like ‘take 5’, ‘hot work’ or ‘confined space’: these aren’t just industry jargon. They represent critical protocols, and misunderstanding them can compromise safety.
Control and Governance
At Mintra, we’ve implemented clear parameters to control and govern the use of AI. Our internal AI Policy was created following a comprehensive risk assessment and defines which tools are permitted and how they should be used. We’ve put these measures in place to maximise the benefits of AI while protecting our staff, our data and the integrity of Mintra’s products and services.
In the broader landscape, the safety-critical sector is at an inflection point, learning how to regulate and manage AI. Governance of AI in training standards remains in its infancy, and so far we’ve not seen guidance filtering through from standards and accreditation bodies. That means the onus is currently on individual organisations to ensure that their use of AI meets the highest safety and training standards. Until regulatory frameworks catch up, self-governance, transparency and deep subject matter expertise are essential to prevent gaps in quality and accountability.
The Rise of In-House Training
Since the COVID-19 pandemic, there has been an increase in organisations creating their own HSE training in-house, and AI will be an attractive tool to assist the process. There are clear pros and cons to this approach: it allows businesses to draw directly on in-house subject matter expertise and tailor content to their specific needs, but the quality controls, processes and governance are often far less rigorous than those of an accredited training provider.
AI can certainly accelerate development, but without human oversight and expertise, key terminology, tone and emphasis can be misinterpreted. Equally, the text content might be accurate while the imagery is implausible or contains abnormalities, such as additional fingers.
Credibility and authenticity are essential, especially in safety-critical training, where accurate interpretation of information is not just a quality concern but can have life-threatening consequences.
As businesses take more training in-house, they must also ask:
• How are we governing the use of AI?
• Do we have quality assurance measures in place?
• Are the people reviewing and verifying the AI content qualified and experienced?
• Are we operating to the same high standards expected by regulatory bodies?
Until external guidance emerges, organisations must take it upon themselves to ensure that in-house training built with AI doesn’t compromise on quality, compliance or impact.
The Human Element: Disclosure and Verification
At Mintra, our team have spent years operating in the safety-critical sector. This deep understanding enables us to blend storytelling, interactivity, and emotional relevance into every course.
Yes, AI can support us by enhancing content, streamlining processes and offering additional insights, but it should support, not lead. The magic happens when we combine AI’s capabilities with human creativity, instructional design and domain expertise, delivering training that educates and resonates, driving lasting behaviour change.
Looking Forward: Striking the Right Balance
The future is bright if we can strike the right balance. Imagine a dynamic learning ecosystem where data from digital training feeds into a real-time feedback loop. AI identifies patterns and personalises content, while human experts ensure it remains accurate, meaningful, and emotionally engaging.
As AI continues to evolve and adoption increases, robust governance and subject matter expertise have never been more important. Quality must remain paramount. The use of artificial intelligence in training must always be subject to supervision and independent evaluation. In safety-critical industries, AI can’t be left unsupervised.
MINTRA SERVICES is a provider of digital Learning and Human Capital Management software for safety-critical industries.