AI System Safety
Risk Perception in Complex Systems: A Comparative Analysis of Process Control and Autonomous Vehicle Failures
As intelligent systems increasingly operate in high-risk environments, understanding how they perceive and respond to hazards is critical for ensuring safety. In this study, we conduct a comparative analysis of 60 real-world accident reports, 30 from process control systems (PCSs) and 30 from autonomous vehicles (AVs), to examine differences in risk triggers, perception paradigms, and interaction failures between humans and artificial intelligence (AI). Our findings reveal that PCS risks are predominantly internal to the system and detectable through deterministic, rule-based mechanisms, whereas AV risks are externally driven and managed via probabilistic, multi-modal sensor fusion. More importantly, despite these architectural differences, both domains exhibit recurring human–AI interaction failures, including over-reliance on automation, mode confusion, and delayed intervention. This study highlights the need for a hybrid risk perception framework and improved human-centered design to enhance situational awareness and responsiveness. Although the PCS incidents examined did not involve AI, this work interprets their human–automation failures as indicative of challenges in human–AI interaction that may arise in future AI-integrated process systems. Implications extend to developing safer intelligent systems across industrial and transportation sectors.
Executive Impact Summary
Our comparative analysis reveals critical insights into risk perception and human-AI interaction across process control and autonomous vehicle systems, highlighting shared vulnerabilities and actionable pathways for enhancing safety.
Deep Analysis & Enterprise Applications
Each module below presents a specific set of findings from the research, reframed for enterprise application.
Study Methodology Workflow
This framework illustrates the methodological workflow of the study, from data collection to comparative analysis.
Distinct Risk Typologies
PCS: Risks predominantly initiated by internal technical anomalies (system/equipment degradation, instrumentation drift, chemical reactivity) in structured, deterministic environments.
AV: Risks dominated by external, dynamic, and uncertain factors (pedestrian/cyclist interactions, other vehicle behavior, sensor misperception) in open, unpredictable environments.
PCS vs. AV Risk Perception Paradigms
A summary of the fundamental differences in how Process Control Systems (PCSs) and Autonomous Vehicles (AVs) perceive and interpret risks.
| Feature | PCSs | AVs |
|---|---|---|
| Sensor Modalities | Single-modal (e.g., pressure, temperature) | Multi-modal (LiDAR, radar, vision) |
| Sensor Fusion | Low-frequency, fixed logic | High-frequency, adaptive fusion |
| Detection Method | Rule-based, deterministic | ML-based, probabilistic |
| Adaptability | Low | Medium-High |
| Transparency | High (explainable) | Low (black-box models) |
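To make the contrast in detection methods concrete, the following is a minimal Python sketch of the two paradigms. The pressure setpoint, sensor names, fusion weights, and thresholds are hypothetical illustrations, not values from the study.

```python
# Illustrative contrast between the two perception paradigms summarized above.
# All thresholds, sensor names, and weights are hypothetical placeholders.
from dataclasses import dataclass

# --- PCS-style detection: single-modality, fixed-logic, deterministic ---
PRESSURE_TRIP_KPA = 850.0  # hypothetical high-pressure trip point

def pcs_detect(pressure_kpa: float) -> bool:
    """Rule-based check: exceeding a fixed setpoint is unambiguously a hazard."""
    return pressure_kpa > PRESSURE_TRIP_KPA

# --- AV-style detection: multi-modal, fused, probabilistic ---
@dataclass
class SensorReading:
    lidar_obstacle_conf: float   # 0..1 confidence from LiDAR clustering
    radar_obstacle_conf: float   # 0..1 confidence from radar tracking
    vision_obstacle_conf: float  # 0..1 confidence from a vision model

def av_detect(reading: SensorReading, threshold: float = 0.7) -> tuple[float, bool]:
    """Weighted fusion of noisy, partially redundant sensors into a risk score."""
    weights = (0.4, 0.3, 0.3)  # hypothetical fusion weights
    score = (weights[0] * reading.lidar_obstacle_conf
             + weights[1] * reading.radar_obstacle_conf
             + weights[2] * reading.vision_obstacle_conf)
    return score, score >= threshold  # probabilistic call with a tunable threshold

if __name__ == "__main__":
    print("PCS trip:", pcs_detect(pressure_kpa=880.0))           # deterministic True
    print("AV risk:", av_detect(SensorReading(0.9, 0.4, 0.6)))   # (score, decision)
```

The deterministic rule is trivially explainable, while the fused score depends on weights and a threshold that must be tuned and can be wrong in unfamiliar scenes, which mirrors the transparency and adaptability rows of the table.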
Decision & Response Mechanisms
PCS: Structured, transparent, rule-based interpretations with immediate automated responses (e.g., shutdowns). Predictive mechanisms largely absent.
AV: Higher complexity, lower transparency (black-box AI). Risk interpretation often implicit/ambiguous, with responses variable and often delayed.
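The same contrast carries into the response layer. The fragment below is an illustrative sketch, not the study's implementation: the PCS side maps a tripped condition directly to a shutdown, while the AV side selects among graded maneuvers based on a risk score and a time budget, both of which are hypothetical.

```python
# Hypothetical response logic: an immediate PCS interlock versus an AV-style
# planner that chooses among graded responses. Names and values are illustrative.
import enum

class AvResponse(enum.Enum):
    CONTINUE = "continue"
    SLOW_DOWN = "slow_down"
    EMERGENCY_BRAKE = "emergency_brake"

def pcs_respond(high_pressure_trip: bool) -> str:
    # Deterministic interlock: one tripped condition maps directly to one action.
    return "EMERGENCY_SHUTDOWN" if high_pressure_trip else "NORMAL_OPERATION"

def av_respond(risk_score: float, time_to_collision_s: float) -> AvResponse:
    # Graded, context-dependent response: the same risk score can yield different
    # actions depending on the estimated time budget (cutoffs are illustrative).
    if risk_score >= 0.9 or time_to_collision_s < 1.5:
        return AvResponse.EMERGENCY_BRAKE
    if risk_score >= 0.6:
        return AvResponse.SLOW_DOWN
    return AvResponse.CONTINUE

if __name__ == "__main__":
    print(pcs_respond(high_pressure_trip=True))                   # EMERGENCY_SHUTDOWN
    print(av_respond(risk_score=0.7, time_to_collision_s=3.0))    # AvResponse.SLOW_DOWN
```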
Recurring Interaction Failures
Both PCS and AV domains exhibit recurring human–AI interaction failures, including over-reliance on automation, mode confusion, and delayed intervention. These are often rooted in interface limitations, delayed cueing, or incomplete system understanding, rather than human incompetence.
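One way to counter passive monitoring and delayed intervention is to treat an unacknowledged alarm as a failure in its own right and escalate it. The sketch below assumes a hypothetical acknowledgment deadline and escalation ladder; it is one possible mitigation pattern, not a mechanism described in the accident reports.

```python
# Sketch of an escalation pattern for unacknowledged alarms; the timeout and
# escalation steps are assumptions for illustration.
import time

ACK_TIMEOUT_S = 10.0  # hypothetical per-step acknowledgment deadline

def run_alarm(ack_received, poll_s: float = 1.0) -> str:
    """Escalate through increasingly forceful cues until the operator acknowledges;
    if no acknowledgment arrives, fall back to an automated safe state."""
    for cue in ("visual_alert", "audible_alarm"):
        print(f"escalation step: {cue}")
        deadline = time.monotonic() + ACK_TIMEOUT_S
        while time.monotonic() < deadline:
            if ack_received():                       # e.g., operator presses acknowledge
                return f"operator acknowledged during '{cue}'"
            time.sleep(poll_s)
    return "no acknowledgment; engaging automated safe state"

if __name__ == "__main__":
    print(run_alarm(ack_received=lambda: True))      # acknowledged immediately
```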
Failure Type Distribution
Percentage distribution of dominant failure types across PCS and AV systems.
| Failure Type | PCS (%) | AV (%) |
|---|---|---|
| Human Error | 26.7 | 20.0 |
| Over-reliance on Automation | 20.0 | 30.0 |
| Passive Monitoring | 16.7 | 36.7 |
| Mode Confusion | 13.3 | 10.0 |
| Maintenance/Inspection | 23.3 | 3.3 |
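For reference, a minimal sketch that re-plots the table above as a grouped bar chart; the percentages are copied directly from the table, and the labels and styling are incidental choices.

```python
# Grouped bar chart of the failure-type distribution reported above.
import matplotlib.pyplot as plt
import numpy as np

failure_types = ["Human Error", "Over-reliance", "Passive Monitoring",
                 "Mode Confusion", "Maintenance/Inspection"]
pcs = [26.7, 20.0, 16.7, 13.3, 23.3]
av = [20.0, 30.0, 36.7, 10.0, 3.3]

x = np.arange(len(failure_types))
width = 0.35
fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(x - width / 2, pcs, width, label="PCS (%)")
ax.bar(x + width / 2, av, width, label="AV (%)")
ax.set_xticks(x)
ax.set_xticklabels(failure_types, rotation=20, ha="right")
ax.set_ylabel("Share of cases (%)")
ax.legend()
plt.tight_layout()
plt.show()
```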
Chemical Manufacturing Explosion (PCS)
Summary: A fatal explosion occurred during a batch chemical mixing operation. An operator mistakenly added 10% potassium hydroxide (KOH), an incompatible base, into a batch tank containing XL 10 (a silicon hydride-based polymer) and TD 6/12 Blend. The chemical reaction released hydrogen gas, which ignited, causing an explosion that killed four employees and destroyed the production facility.
Risk Trigger: Internal: Inadvertent mixing of incompatible chemicals (10% KOH with XL 10), leading to hydrogen gas generation.
Sensor & Detection: No hydrogen or flammable gas detection system was operational; catalytic gas detectors had been trialed previously but failed. Hazard recognition relied on manual observation by operators, with no automated detection, and detection latency was several minutes.
System Response: Manual ventilation and evacuation were attempted but remained incomplete because the incident escalated rapidly, leaving insufficient time for a full evacuation.
Outcome: Fatalities (4), total destruction of production building, offsite damage.
Design Weaknesses:
- Inadequate hazard analysis and process safety system.
- Similar appearance of incompatible chemical drums.
- Absence of effective gas detection and alarm systems (see the illustrative sketch after this list).
- Weak safety culture, with reliance on procedural safeguards over engineering controls.
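For illustration only, here is a sketch of the kind of rule-based gas detection and interlock logic the report found missing. Hydrogen's lower explosive limit of roughly 4% by volume is a standard figure, but the alarm setpoints and protective actions are assumptions, not drawn from any particular standard or from the incident report.

```python
# Illustrative rule-based hydrogen detection and interlock logic; setpoints and
# actions are hypothetical.
H2_LEL_PERCENT = 4.0  # approximate lower explosive limit of hydrogen in air (% by volume)

def evaluate_h2_reading(h2_percent_by_volume: float) -> list[str]:
    """Map a gas-concentration reading to deterministic protective actions."""
    actions = []
    if h2_percent_by_volume >= 0.10 * H2_LEL_PERCENT:   # 10% of LEL: early warning
        actions.append("activate_local_alarm")
        actions.append("start_emergency_ventilation")
    if h2_percent_by_volume >= 0.25 * H2_LEL_PERCENT:   # 25% of LEL: escalate
        actions.append("isolate_ignition_sources")
        actions.append("initiate_evacuation")
    return actions

if __name__ == "__main__":
    # 0.5% H2 is 12.5% of the LEL, so only the early-warning actions fire.
    print(evaluate_h2_reading(0.5))
```

In the incident described above, no equivalent automated layer existed, so detection depended entirely on operators noticing the release, which is exactly the latency the case summary highlights.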
Your Path to Safer AI Systems
We guide you through a structured approach to integrate AI responsibly, minimizing risks and maximizing operational benefits.
Phase 1: Discovery & Assessment
Comprehensive analysis of your existing systems, identifying key risk areas, automation opportunities, and human-AI interaction points based on our comparative research.
Phase 2: Hybrid Framework Design
Development of a tailored risk perception framework, integrating deterministic safety protocols with adaptive AI intelligence and human-centered design principles.
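A minimal sketch of what the hybrid layering in Phase 2 could look like: a deterministic safety envelope that always takes precedence, wrapped around an adaptive risk model. Function names, thresholds, and the model interface are assumptions for illustration, not a prescribed implementation.

```python
# Hybrid risk perception sketch: hard deterministic limits override a learned,
# probabilistic risk estimate. All names and values are illustrative.
from typing import Callable

def hybrid_risk_decision(
    sensor_values: dict[str, float],
    hard_limits: dict[str, float],
    ml_risk_model: Callable[[dict[str, float]], float],
    risk_threshold: float = 0.8,
) -> str:
    # 1) Deterministic layer: any violated hard limit trips the safe state directly.
    for name, limit in hard_limits.items():
        if sensor_values.get(name, 0.0) > limit:
            return f"SAFE_STATE (hard limit '{name}' exceeded)"
    # 2) Adaptive layer: a learned model flags emerging risk before limits are hit.
    risk = ml_risk_model(sensor_values)
    if risk >= risk_threshold:
        return f"ALERT_OPERATOR (predicted risk {risk:.2f})"
    return "NORMAL_OPERATION"

if __name__ == "__main__":
    def toy_model(s: dict[str, float]) -> float:
        # Stand-in for a trained model: scales temperature into a 0..1 risk score.
        return min(1.0, s.get("temperature_c", 0.0) / 200.0)

    print(hybrid_risk_decision({"pressure_kpa": 700, "temperature_c": 170},
                               {"pressure_kpa": 850.0, "temperature_c": 220.0},
                               toy_model))
```

The ordering is the design choice that matters: the explainable, rule-based layer can never be overridden by the probabilistic one, preserving PCS-style transparency while adding AV-style anticipation.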
Phase 3: Prototype & Validation
Building and testing AI-integrated prototypes with simulation-based validation and human-in-the-loop testing to ensure robustness and safety in high-risk scenarios.
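As one example of the simulation-based, human-in-the-loop validation mentioned in Phase 3, the toy Monte Carlo below estimates how often a simulated operator's takeover delay exceeds the available time budget. The reaction-time distribution and the time budget are illustrative assumptions, not measured values.

```python
# Toy Monte Carlo estimate of missed human takeovers under an assumed
# reaction-time distribution and time budget.
import math
import random

def simulate_takeover(n_trials: int = 10_000,
                      median_reaction_s: float = 4.0,
                      reaction_sigma: float = 0.5,
                      time_budget_s: float = 6.0,
                      seed: int = 42) -> float:
    """Return the fraction of trials where the simulated operator misses the window."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_trials):
        # Log-normal reaction times: mostly quick, with a long slow tail.
        reaction = rng.lognormvariate(math.log(median_reaction_s), reaction_sigma)
        if reaction > time_budget_s:
            misses += 1
    return misses / n_trials

if __name__ == "__main__":
    print(f"Missed takeovers: {simulate_takeover():.1%}")
```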
Phase 4: Deployment & Continuous Improvement
Strategic deployment, operator training, and ongoing monitoring with mechanisms for continuous learning and adaptation, ensuring long-term system resilience and safety.
Ready to Enhance Your System's Safety?
Proactive risk perception and human-centered AI design are not just advantages—they are necessities. Let's discuss how our expertise can secure your intelligent systems.