Pro Tips

“Safer Workplaces Through Smarter Technology” Hearing

What we learned from the “Building an AI-Ready America: Safer Workplaces Through Smarter Technology” Congressional Hearing

On February 11, 2026, the House Subcommittee on Workforce Protections held a hearing on “Building an AI-Ready America: Safer Workplaces Through Smarter Technology.” The message was consistent across the opening statement, witness testimony, and the committee’s press release: AI is moving from an interesting concept to a practical set of tools that can prevent injuries, not just document them after the fact.

But the hearing also made something else clear. The hard part is not “getting AI.” The hard part is making AI trustworthy inside real safety workflows: validating effectiveness, keeping humans accountable, protecting privacy, and using data to coach and prevent rather than punish.

The hearing witnesses included leaders from MCAA, NAW, and Samsara.

Below are the clearest learnings for EHS and safety professionals.

Learning 1: AI is pushing safety from reactive reporting to preventive control

The Subcommittee framed AI-powered safety tools as a shift from incident-based safety management to a preventive, data-driven model, with examples like wearables for heat stress and predictive analytics that surface risk before an accident occurs.

Witness testimony reinforced that this is already happening in the field across industries. 

What this means for EHS leaders: if your safety program is still primarily built around lagging indicators, AI adoption will not be a “tool upgrade.” It is a workflow redesign. Your job is to turn detection into prevention, which means defining what happens after an alert or incident, who reviews it, and how it becomes a change in controls. At Haven, we refer to this as a shift from “closing” incidents to “understanding” them.

Learning 2: The most effective safety AI is task-based and domain-specific, not generic

One of the strongest patterns across testimony was that safety value comes from narrow, operationally grounded use cases:

  • MCAA emphasized their AI tools were built specifically for masonry workflows and standards, built on decades of industry knowledge, and designed to run on devices used in the field.

  • NAW emphasized that wholesaler-distributors are usually deployers, not developers, and described practical categories of use: computer vision (disembodied AI), digital twins for predictive maintenance (predictive AI), wearables for guidance (human-centered AI), and automation-assisted systems that reduce walking, lifting, and repetitive motion.

  • Samsara emphasized “task-based, situational, preventive” AI in physical operations, not broad generative productivity tools.

This aligns with how we think about AI in high-stakes safety workflows at Haven: the goal is not generic text generation. The goal is operational trust, defensibility, and closed-loop execution inside the safety system. 

What this means for EHS leaders: you will get more value by choosing one or two high-consequence workflows and implementing AI tightly inside them, rather than buying a general AI feature and hoping it “improves safety” broadly.

Learning 3: Human oversight is not optional, and AI cannot inherit accountability

The opening statement raised the right question: these tools can be invaluable, but employers should maintain space for human oversight and be wary of delegating ultimate responsibility for worker safety to AI. If AI is not grounded in safety fundamentals and an ethical framework, it can introduce significant physical and psychological hazards into the workplace.

At Haven, we often describe AI in safety as leverage, not replacement. That framing is practical: your team still owns decisions, but AI can increase speed, quality, and consistency across sites.

What this means for EHS leaders: treat AI as a decision-support layer. Keep humans “in the loop” with explicit decision rights, review gates, and accountability, especially when corrective actions have operational tradeoffs.

Learning 4: Trust determines adoption, and trust is built through transparency and privacy safeguards

The opening statement explicitly flagged worker privacy and stakeholder understanding as central to adoption and long-term success. Safety data collection can fail if workers fear retaliation or discipline for honest mistakes or for speaking up. The ethical principles cited from safety organizations center on trust, transparency, equity, and privacy.

What this means for EHS leaders: if AI deployment feels like surveillance, you will lose the workforce. If you deploy it in a way that is clearly about prevention, coaching, and engineering improvements, adoption gets easier and outcomes improve.

Learning 5: Validation, auditing, and drift control are the real work

The Subcommittee asked directly: how can the effectiveness of these tools be validated?

The witnesses pointed to practical strategies including making AI understandable to users, collaborative system evaluations, transparency that enables auditing, certification concepts, and building an evidence base for safety implications.

This echoes a reality we see across organizations: building AI pilots is easy; running them in production is hard. Trust requires evaluation, audit trails, and continuous monitoring, not just a demo that looks good. (Refer to our blog on building vs. buying AI tools.)

What this means for EHS leaders: you need acceptance criteria, auditability, and lifecycle ownership from day one, including how you will measure false positives, missed detections, and performance changes over time.
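To make “measure false positives and performance changes over time” concrete, here is a minimal, hypothetical sketch of how a safety team might track alert precision month over month and flag drift. The field names, data shape, and 10-point drop threshold are our own illustration, not something prescribed by the hearing or any specific product:

```python
# Illustrative only: turning alert-review outcomes into acceptance metrics.
# Data model and thresholds are hypothetical examples.

from dataclasses import dataclass

@dataclass
class AlertReview:
    month: str        # review period, e.g. "2026-01"
    confirmed: bool   # did a human reviewer confirm a real hazard?

def precision(reviews):
    """Share of AI alerts that reviewers confirmed as real hazards."""
    if not reviews:
        return 0.0
    return sum(r.confirmed for r in reviews) / len(reviews)

def monthly_precision(reviews):
    """Group reviews by month and compute per-month precision."""
    by_month = {}
    for r in reviews:
        by_month.setdefault(r.month, []).append(r)
    return {m: precision(rs) for m, rs in sorted(by_month.items())}

def drift_alerts(trend, drop=0.10):
    """Flag months where precision fell more than `drop` vs. the prior month."""
    months = list(trend)
    return [m2 for m1, m2 in zip(months, months[1:])
            if trend[m1] - trend[m2] > drop]

reviews = [AlertReview("2026-01", True), AlertReview("2026-01", True),
           AlertReview("2026-01", False), AlertReview("2026-02", True),
           AlertReview("2026-02", False), AlertReview("2026-02", False)]
trend = monthly_precision(reviews)
print(trend)                # per-month confirmed-alert rate
print(drift_alerts(trend))  # → ["2026-02"]: precision fell, investigate
```

The point is not this particular metric: it is that someone owns the numbers, the review loop that produces them, and the threshold that triggers investigation. Missed detections need the same treatment, typically by sampling incidents that generated no alert.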

Bottom line

The hearing’s strongest takeaway is not that AI is coming. It is that AI is already here in practical, deployable forms, and EHS leaders now have a window to shape how it gets used.

If you lead the rollout with prevention, transparency, and governance, AI can help you close the loop faster: detect risk, correct it, and verify the control works. If you roll it out as surveillance or as a shortcut around the hierarchy of controls, it will erode trust and create new hazards.


Keep up to date with our latest insights and news

Haven - ICAM investigations

Pro Tips

Leveraging Haven to supercharge ICAM-based investigations

Using Haven as the execution engine to power ICAM investigation framework

Mar 3, 2026

Analyzing CAPA Effort vs. Impact

Pro Tips

AI-Assisted CAPA Assessment and Planning

A Practical AI-Assisted CAPA System That Honors the Hierarchy of Controls Without Ignoring Effort and Reality

Feb 16, 2026

Building an AI-Ready America

Pro Tips

“Safer Workplaces Through Smarter Technology” Hearing

What we learned from the “Building an AI-Ready America: Safer Workplaces Through Smarter Technology” Congressional Hearing

Feb 12, 2026

Haven Officially Launched

Updates

Haven Safety AI Officially Launched

After a year and a half of design, build, and field validation, today we are officially launching Haven Safety AI.

Feb 10, 2026

Build vs. Buy in EHS AI

Pro Tips

From Build to Buy in Enterprise Safety AI

Why vertical COTS is increasingly the rational default for safety intelligence.

Feb 1, 2026

RCA Quantity vs. Quality

Pro Tips

Quality Beats Quantity in RCAs, But AI Lets You Have Both [Part 2]

With AI support, you do not need to choose between quantity and quality (2/2)

Jan 15, 2026

RCA Quality Beats Quantity

Pro Tips

Quality Beats Quantity in RCAs, But AI Lets You Have Both [Part 1]

With AI support, you do not need to choose between quantity and quality (1/2)

Jan 3, 2026

State of AI in the EHS industry

Pro Tips

The State of AI in the EHS Industry - Q4 2025

AI in EHS has moved past the “innovation theater” phase. AI is now productized, embedded, and increasingly measurable.

Dec 1, 2025

family picture

Updates

Why Are We Building Haven?

Every incident report represents a real person. A parent. A provider. Someone whose life might split into before and after because something small was missed.

Jun 2, 2025


See Haven in Action

Experience how AI-powered safety intelligence can transform your workplace. Book a demo to see our platform in action.