
Enterprise AI Analysis

Detection of Mobile Phone Use While Driving Supported by Artificial Intelligence

This research presents an intelligent embedded system that leverages computer vision, inertial sensing, and edge computing for real-time detection of mobile phone use by drivers. Our analysis highlights its robust performance, efficiency, and scalability for advanced road safety applications.

Executive Impact at a Glance

Key performance indicators showcasing the immediate value and efficiency of the proposed AI system in real-world driving environments.

0.81 Avg. Precision
12.86 FPS Inference Rate
8.4 W Avg. Power Consumption
0.80 Avg. F1-Score
0.85 Avg. mAP

Deep Analysis & Enterprise Applications

The following modules explore the specific findings from the research in greater depth, rebuilt as enterprise-focused analyses.

Model Performance

The YOLOv8n model demonstrated robust performance, achieving stable convergence across its loss functions. Key metrics include a precision of 0.87, recall of 0.81, mAP50 of 0.85, and mAP50-95 of 0.68; the overall average F1-score was 0.80 and the average mAP approximately 0.85, indicating suitability for real-time systems. Per class, driver_phone_1 achieved a precision of 0.84, recall of 0.90, and F1-score of 0.87, while driver_phone_2, which faces more complex visual conditions, showed a precision of 0.78, recall of 0.61, and F1-score of 0.69. The model maintained an overall average precision of 0.81 despite variable lighting and motion. Error rates were 8.6% for driver_phone_1 and 24.7% for driver_phone_2, reflecting the latter's difficulty with partial occlusions and reflections.
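
As a quick consistency check, the per-class F1-scores follow directly from the reported precision and recall values. The snippet below is a minimal sketch using only the figures quoted above:

```python
# Reproduce the per-class F1-scores from the reported precision/recall.
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f"driver_phone_1 F1: {f1_score(0.84, 0.90):.2f}")  # 0.87, matching the report
print(f"driver_phone_2 F1: {f1_score(0.78, 0.61):.2f}")  # 0.68; the reported 0.69
                                                         # reflects rounding of the inputs
```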

Computational Efficiency

The system, deployed on a Jetson Xavier NX in 20 W 6-core mode, achieved an average inference rate of 12.86 FPS with an average power consumption of 8.4 W and a stabilized temperature of 58 °C. This demonstrates a balanced trade-off between performance, energy efficiency, and thermal stability crucial for continuous in-vehicle operation. Comparative analysis against other lightweight models like YOLOv5n (15.3 FPS, 9.1 W, 62 °C), YOLOv4-Tiny (18.7 FPS, 9.8 W, 66 °C), and MobileNet-SSD (20.2 FPS, 7.6 W, 55 °C) confirmed YOLOv8n's favorable balance for sustained throughput in resource-constrained environments. The Python execution environment, with controlled thread limitations and modular process isolation, prevented CPU overload and maintained SoC thermal stability, ensuring an average latency below 85 ms per inference cycle.
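
A benchmarking loop of the kind behind these figures can be sketched as follows. This is a minimal illustration, not the system's actual code: the checkpoint path, camera index, and sample count are assumptions, and the ultralytics package is used as the YOLOv8 inference interface.

```python
# Minimal FPS/latency benchmark sketch (paths and parameters are assumptions).
import time
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")       # hypothetical path to the trained checkpoint
cap = cv2.VideoCapture(0)     # camera index is an assumption

latencies = []
for _ in range(200):          # 200-frame sample window, an arbitrary choice
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    model(frame, imgsz=512, conf=0.35, verbose=False)  # settings from the study
    latencies.append(time.perf_counter() - t0)
cap.release()

steady = latencies[20:]       # discard warm-up iterations
avg = sum(steady) / len(steady)
print(f"avg latency: {avg * 1000:.1f} ms ({1 / avg:.2f} FPS)")
```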

System Architecture

The proposed architecture integrates computer vision, inertial sensing, and cloud data management. At its core, a YOLOv8n model is deployed on an NVIDIA Jetson Xavier NX (16 GB). An MPU6050 inertial sensor acts as an activation gate, ensuring image capture only when the vehicle is in motion, which optimizes power consumption and reduces false records. Detections are stored in Firebase Firestore in base64 format, enabling event traceability. A temporal persistence scheme requires a detection to persist for at least five consecutive seconds before an event is logged, followed by a five-second cooldown. The modular design ensures functional independence, fault tolerance, and scalability across the acquisition, detection, inertial gating, persistence/telemetry, and visualization modules.
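
The five-second persistence rule and cooldown can be expressed as a small state machine. The sketch below is a minimal illustration of that logic; the class and method names are hypothetical and not taken from the system's source code.

```python
import time

class EventGate:
    """Log an event only after PERSIST_S seconds of continuous detection,
    then suppress further logging for a COOLDOWN_S cooldown window."""
    PERSIST_S = 5.0
    COOLDOWN_S = 5.0

    def __init__(self):
        self._detect_start = None    # when continuous detection began
        self._cooldown_until = 0.0   # timestamp until which logging is blocked

    def update(self, detected: bool) -> bool:
        """Feed one detection result per frame; returns True when an event
        should be logged to Firestore."""
        now = time.monotonic()
        if not detected:
            self._detect_start = None            # persistence is broken
            return False
        if self._detect_start is None:
            self._detect_start = now
        persisted = (now - self._detect_start) >= self.PERSIST_S
        if persisted and now >= self._cooldown_until:
            self._cooldown_until = now + self.COOLDOWN_S
            self._detect_start = None            # require a fresh persistence window
            return True
        return False
```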

Operational Validation

Operational performance was assessed through 30 test runs across diverse scenarios: daytime driving, nighttime driving, partial occlusion, and direct glare. The system achieved its best performance during daytime driving with 83.3% correct detections (25/30 trials), attributed to stable illumination and enhanced edge discriminability. Performance decreased under partial occlusion (56.7%) and direct glare (63.3%), reflecting the influence of illumination and occlusions on visual detection. The inertial control mechanism using the MPU6050 significantly reduced false positives by activating detection only during vehicular motion. The web-based HMI provides real-time monitoring and evidence management, demonstrating dynamic transitions between 'Standby' and 'Active Detection' states, enhancing system traceability for supervisory contexts.

83.3% Correct Detections in Daylight
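
The inertial gate itself can be approximated as a threshold on accelerometer magnitude. In the sketch below, the register map is the standard MPU6050 layout; the I2C bus number and the motion threshold are assumptions rather than values from the paper.

```python
import smbus2  # I2C access library (pip install smbus2)

MPU_ADDR = 0x68        # default MPU6050 I2C address
PWR_MGMT_1 = 0x6B      # power management register
ACCEL_XOUT_H = 0x3B    # first accelerometer data register
ACCEL_SCALE = 16384.0  # LSB per g at the default +/-2 g range

bus = smbus2.SMBus(1)                          # bus 1 is typical; an assumption
bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)   # wake the sensor from sleep

def read_accel_g():
    """Return (ax, ay, az) in g from the MPU6050."""
    raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 6)
    def to_int16(hi, lo):
        v = (hi << 8) | lo
        return v - 65536 if v > 32767 else v
    return tuple(to_int16(raw[i], raw[i + 1]) / ACCEL_SCALE for i in (0, 2, 4))

def vehicle_in_motion(threshold_g=0.05):
    """Hypothetical gate: deviation from 1 g (gravity) indicates motion."""
    ax, ay, az = read_accel_g()
    magnitude = (ax**2 + ay**2 + az**2) ** 0.5
    return abs(magnitude - 1.0) > threshold_g
```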

Enterprise Process Flow

Design Phase
Development Phase
Testing and Validation Phase

Embedded Inference Benchmark on Jetson Xavier NX

Model             | Average FPS | Average Power (W) | Steady Temperature (°C)
YOLOv8n (trained) | 12.86       | 8.4               | 58
YOLOv5n           | 15.3        | 9.1               | 62
YOLOv4-Tiny       | 18.7        | 9.8               | 66
MobileNet-SSD     | 20.2        | 7.6               | 55
YOLOv8n offers a superior balance of performance, energy efficiency, and thermal stability for continuous vehicular operation.

Per-Class Performance Disparity

The system exhibited a notable difference in performance between driver_phone_1 (F1-score 0.87) and driver_phone_2 (F1-score 0.69). This is primarily attributed to scenarios involving partial occlusion and photometric complexity for driver_phone_2, where the phone is often covered by the driver's hand or affected by specular reflections on the glass screen. These conditions reduce local contrast and limit feature availability, increasing false negatives and highlighting the need for advanced data augmentation and robust detection strategies for edge cases.
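
One way to harden the detector against these failure modes is augmentation that mimics hand occlusion and screen glare during training. The sketch below uses Albumentations as an illustrative pipeline; the library choice and all parameter values are assumptions, not drawn from the study.

```python
import albumentations as A

# Augmentations targeting the driver_phone_2 failure modes: coarse dropout
# approximates hand occlusion, while brightness/contrast shifts and sun-flare
# highlights approximate specular reflections on the screen.
train_aug = A.Compose(
    [
        A.CoarseDropout(max_holes=4, max_height=48, max_width=48, p=0.3),
        A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.5),
        A.RandomSunFlare(flare_roi=(0.0, 0.0, 1.0, 0.5), p=0.2),
        A.MotionBlur(blur_limit=5, p=0.2),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
```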

Your AI Implementation Roadmap

A clear, phased approach to integrating intelligent driver monitoring into your operations, ensuring smooth deployment and maximum impact.

Phase 1: System Design & Dataset Preparation

Defined two detection categories for mobile phone use, created a 1000-image labeled dataset (70% training, 15% validation, 15% testing), configured the YOLOv8n input size (512 px) and confidence threshold (0.35), and designed Firebase Firestore storage for base64-encoded images with metadata.
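
The 70/15/15 split can be reproduced with a few lines of Python; the directory layout and random seed below are assumptions.

```python
# Split a labeled image set 70/15/15 (hypothetical paths, arbitrary seed).
import random
import shutil
from pathlib import Path

random.seed(42)
images = sorted(Path("dataset/images").glob("*.jpg"))
random.shuffle(images)

n = len(images)                           # 1000 images in the study
n_train, n_val = int(0.70 * n), int(0.15 * n)
splits = {
    "train": images[:n_train],
    "val":   images[n_train:n_train + n_val],
    "test":  images[n_train + n_val:],
}
for name, files in splits.items():
    out = Path("dataset") / name / "images"
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)
```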

Phase 2: Model Implementation & Deployment

Trained YOLOv8n in Google Colab for 100 epochs, exported the best checkpoint, deployed it on the Jetson Xavier NX (20 W 6-core mode), configured a Python 3.8.10 virtual environment, and integrated four functional modules: detection, database communication, orchestration, and visualization.
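
Training and export with the ultralytics package follows its standard pattern; the dataset configuration path below is an assumption, while the YOLOv8n base model, 100 epochs, and 512 px input size come from the description above.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")    # pretrained YOLOv8n base weights
model.train(
    data="data.yaml",         # hypothetical dataset configuration path
    epochs=100,               # as described above
    imgsz=512,                # input size from Phase 1
)
# ultralytics writes the best checkpoint to runs/detect/train*/weights/best.pt;
# that file is what gets deployed to the Jetson Xavier NX.
```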

Phase 3: Validation & Operational Testing

Conducted controlled tests to verify detector activation under motion (MPU6050 gating), persistence timing, cooldown, lossless base64 transmission to Firestore, and correct web interface display. Performed functional evaluations to confirm synchronized operation and quantitative performance metrics across diverse driving scenarios.
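
The lossless base64 round trip can be verified with a short self-check; the file name below is hypothetical.

```python
import base64

with open("detection.jpg", "rb") as f:   # hypothetical evidence image
    original = f.read()

encoded = base64.b64encode(original).decode("ascii")  # payload stored in Firestore
decoded = base64.b64decode(encoded)

assert decoded == original, "base64 round trip must be lossless"
print(f"OK: {len(original)} bytes survived the round trip")
```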

Ready to Transform Your Road Safety?

Leverage cutting-edge AI to enhance driver safety and operational efficiency. Book a personalized consultation to explore how this technology can integrate into your fleet.
