
A Decade Later: Evolution of Real-Time Embedded Virtual Presence Systems (An HCI Perspective)

OSARUIYOBO GIWA, SUNJUN KIM
Daegu Gyeongbuk Institute of Science and Technology, Daegu, South Korea

Published: 09 March 2026 | Online AM: 10 January 2026 | Accepted: 23 December 2025 | Revised: 19 October 2025 | Received: 15 July 2025

Executive Impact at a Glance

This research offers critical insights into the real-time embedded systems powering the next generation of virtual presence, directly impacting strategic technology roadmaps.


Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Defining Real-Time Embedded Virtual Presence

This paper defines real-time embedded virtual presence systems as those capable of detecting, processing, and responding to virtual entities within real-time environments using embedded computing hardware. These systems operate under strict timing constraints to maintain user immersion, agency, and safety.

The survey covers the period from 2015 to 2025, charting the evolution from early latency-constrained prototypes to ethically aware, hybrid Mixed Reality (MR) and Internet of Things (IoT) ecosystems.

Key Contributions:

  • A timeline-driven framework synthesizing technical and HCI milestones across five phases.
  • An interdisciplinary synthesis bridging embedded systems and HCI advancements.
  • Critical analysis of latency reduction and ethical embedding.
  • A structured future roadmap for predictable AI timing, synchronized multimodal fusion, and responsible embedded XR.

Systematic Review Methodology

The study employed a structured literature review covering 2015-2025, grounded in both system-level constraints and human-centered interaction paradigms. The aim was to analyze technological shifts, trends, and design considerations across major development phases, emphasizing latency, AI, and immersive feedback.


Enterprise Process Flow: Survey Structure

1. Introduction
2. Comparison with Existing Surveys
3. Foundational Works and Evolution of Virtual Presence Systems
4. Methodology
5. Timeline in Detail
6. Synthesis and Discussion
7. Open Problems and Future Directions
8. Conclusion
9. Appendix

Comparison with Existing Surveys (Table 1 Summary)

Focus Area | Years | Technical Emphasis | Limitation Compared with This Work
VR Solutions Using Artificial Intelligence | 2010-2021 | AI-driven adaptation for immersive content and user interaction | AI/HCI scope only; lacks real-time or embedded analysis
Adaptive VR-Based Training Frameworks | 2010-2020 | Pedagogical adaptation, cognitive load, and user-state modeling in VR | Focused on training; timing treated as a black box
Security and Privacy in Virtual Reality | 2015-2024 | Ethical and privacy frameworks in immersive systems | Ethical focus; omits timing and predictability
Real-Time Scheduling on Heterogeneous Accelerators | 2015-2025 | CPU-GPU scheduling for time-critical heterogeneous systems | Computation-focused; no XR or user-level link
This Survey | 2015-2025 | Latency, predictability, embedded AI, multimodal fusion, and ethics | First synthesis bridging deterministic embedded systems with HCI outcomes, organized into timelines

Timeline 1: Real-Time Rendering Bottlenecks in Early Consumer-Grade AR/VR Systems (2015-2017)

This period focused on minimizing motion-to-photon delays in consumer-grade AR/VR systems to enhance user responsiveness and presence. Innovations included frameless rendering and predictive tracking, balancing visual quality with timing constraints.

Period | Timeline Heading | Technological Milestones | HCI Focus | Key Inclusion Criteria
2015-2017 | Rendering bottlenecks in early consumer AR/VR | Oculus Rift CV1, HTC Vive, ARKit/ARCore | Immersion, usability, accessibility | Real-time rendering, latency mitigation

A key advancement was the development of a frameless renderer, implemented using FPGA-based scan-out-driven pixel rendering, which achieved ultra-low latency (1 ms) by eliminating the frame buffering paradigm (Friston et al., 2016 [19]).
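The latency advantage of abandoning whole-frame buffering can be illustrated with simple arithmetic. The sketch below compares worst-case motion-to-photon latency for a conventional double-buffered pipeline against scan-out-driven rendering that shades each scanline against the most recent pose; the refresh rate, line count, and per-pixel processing cost are illustrative assumptions, not figures from Friston et al.

```python
# Back-of-envelope motion-to-photon (MTP) latency comparison: double-buffered
# rendering vs. scan-out-driven "frameless" rendering. All numbers are
# illustrative assumptions for this sketch.

def framed_mtp_ms(refresh_hz: float, pipeline_frames: int = 2) -> float:
    """Worst case: a pose sampled just after a frame starts must wait
    through `pipeline_frames` of buffering before its pixels scan out."""
    frame_ms = 1000.0 / refresh_hz
    return frame_ms * pipeline_frames

def frameless_mtp_ms(refresh_hz: float, lines: int = 1080,
                     pixel_proc_ms: float = 0.2) -> float:
    """Scan-out-driven rendering: each scanline is shaded against the
    latest pose, so latency shrinks to one line period plus processing."""
    line_ms = 1000.0 / refresh_hz / lines
    return line_ms + pixel_proc_ms

print(f"framed   : {framed_mtp_ms(90):.2f} ms")    # ~22 ms at 90 Hz
print(f"frameless: {frameless_mtp_ms(90):.2f} ms")  # well under 1 ms
```

Under these assumptions, eliminating frame buffering moves the latency budget from tens of milliseconds to the sub-millisecond regime reported for the FPGA implementation.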

Timeline 2: Timing and Synchronization Challenges in Context-Aware Presence (2018-2019)

This phase shifted from purely rendering performance to situational awareness, with MR systems adapting to user context and environment. Challenges included managing resource constraints and distributed processing with technologies like IoT.

Period | Timeline Heading | Technological Milestones | HCI Focus | Key Inclusion Criteria
2018-2019 | Timing and synchronization challenges in context-aware XR | IoT integration, HoloLens 2 | Remote collaboration, social presence | Context-aware systems with real-time feedback

Notable developments include HoloFace, a real-time facial augmentation framework for HoloLens (Kowalski et al., 2018 [30]), which demonstrated dual processing pipelines to balance accuracy and real-time execution. The OFALL-SE hybrid object localization model (He et al., 2019 [24]) further showcased synchronized holographic content for collaborative MR environments.
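A dual-pipeline design of the kind HoloFace demonstrates can be sketched as a deadline-guarded fallback: a cheap local tracker always produces a usable result, and a more accurate (e.g. offloaded) pipeline is accepted only if it fits the remaining frame budget. The frame budget and the simulated stage costs below are assumptions for illustration, not HoloFace's actual implementation.

```python
import time

FRAME_BUDGET_S = 0.011  # ~90 Hz frame budget (assumed)

def fast_landmarks(frame):
    """Cheap on-device tracker: always finishes, lower accuracy."""
    return {"quality": "coarse", "frame": frame}

def refined_landmarks(frame, budget_s):
    """Accurate (e.g. remote) tracker: usable only if it fits the budget.
    Its cost is simulated here; a real system would measure or bound it."""
    cost_s = 0.004 if frame % 2 == 0 else 0.020  # simulated variable cost
    if cost_s > budget_s:
        return None  # would miss the frame deadline
    return {"quality": "refined", "frame": frame}

def dual_pipeline(frame):
    start = time.monotonic()
    coarse = fast_landmarks(frame)
    remaining = FRAME_BUDGET_S - (time.monotonic() - start)
    refined = refined_landmarks(frame, remaining)
    return refined or coarse  # fall back to the coarse result on overrun

# Even frames refine in time; odd frames fall back to the coarse result.
print([dual_pipeline(f)["quality"] for f in range(4)])
```

The key property is that every frame produces *some* result on time, and accuracy is opportunistic rather than guaranteed.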

Timeline 3: Adaptive and AI-Driven Real-Time Embedded Systems (2020-2021)

This period saw the convergence of AI, emotion modeling, and adaptive logic into VR/MR, moving towards personalized virtual presence. Systems became responsive to user behavior, emotions, and learning states, introducing new real-time constraints for AI inference.

Period | Timeline Heading | Technological Milestones | HCI Focus | Key Inclusion Criteria
2020-2021 | Adaptive, AI-driven embedded systems | Emotion-aware interfaces, COVID-era XR use | User modeling, affective computing | Embedded personalization and AI adaptation

The Virtual Emotion Loop (VEE-loop) framework (Andreoletti et al., 2021 [3]) allowed VR environments to adapt based on real-time emotional biofeedback, personalizing experiences. Architectures inspired by BIRNAT (Bidirectional Recurrent Neural Attention Transformer) were used for high-fidelity avatar reconstruction under latency and bandwidth constraints, leveraging temporal fusion for coherent volumetric frames.
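A closed adaptation loop in the spirit of the VEE-loop can be reduced to proportional control: the system compares a measured affect signal against a target and nudges the scene accordingly each frame. This is a minimal sketch, not the framework's published architecture; the target, gain, and [0, 1] normalization are assumptions.

```python
def adapt_intensity(intensity, arousal, target=0.5, gain=0.4,
                    lo=0.0, hi=1.0):
    """One step of a proportional biofeedback loop: if measured arousal
    exceeds the target, soften the scene; if below, intensify it.
    All signals are assumed normalized to [0, 1]."""
    error = target - arousal
    return min(hi, max(lo, intensity + gain * error))

# An over-aroused user (arousal 0.9) pulls scene intensity down over
# successive control steps.
intensity, arousal = 0.8, 0.9
for _ in range(3):
    intensity = adapt_intensity(intensity, arousal)
print(round(intensity, 2))  # 0.32
```

In an embedded deployment, the real-time constraint is that each loop iteration, including biosignal inference, must complete within a bounded control period.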

Timeline 4: Human-Virtual Agents in Real-Time (2022-2023)

Focus shifted to deepening interaction quality between users and intelligent agents through real-time, multimodal feedback. This involved synchronized visual, auditory, tactile, and behavioral cues for immersive and socially aware experiences.

Period | Timeline Heading | Technological Milestones | HCI Focus | Key Inclusion Criteria
2022-2023 | Human-agent interaction in real time | Real-time haptics, agents, sensor fusion | Multimodal feedback, copresence | Embodied interaction and sensory fusion

Innovations included the Virtual Triplets framework for shared control of avatars in VR classrooms (Zhang et al., 2022 [72]), and Wasserstein GAN-based models for synchronized non-verbal facial behaviors in virtual agents (Delbosc et al., 2023 [14]). The HapticProxy system (Zhang et al., 2023 [73]) provided positional vibrotactile feedback, increasing interaction precision in tangible MR setups.
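The cross-modal synchronization these systems depend on can be sketched as timestamped nearest-sample alignment with a skew bound: for each modality, pick the sample closest to the render time, and emit a bundle only if the modalities stay within tolerance. The stream names, timestamps, and 10 ms tolerance below are hypothetical illustrations, not values from the cited systems.

```python
from bisect import bisect_left

def align_modalities(streams: dict, t: float, max_skew: float = 0.010):
    """For each modality, pick the sample timestamp nearest render time
    `t`; return a bundle only if cross-modal skew stays within
    `max_skew` seconds, else None (hold or drop the frame)."""
    picked = {}
    for name, stamps in streams.items():
        i = bisect_left(stamps, t)
        # Candidates: the neighbors straddling t (handles list ends).
        cands = stamps[max(i - 1, 0):i] + stamps[i:i + 1]
        picked[name] = min(cands, key=lambda s: abs(s - t))
    if max(picked.values()) - min(picked.values()) <= max_skew:
        return picked
    return None

streams = {"video":  [0.000, 0.011, 0.022],
           "audio":  [0.001, 0.012, 0.023],
           "haptic": [0.005, 0.015, 0.025]}
print(align_modalities(streams, t=0.012))
```

Rejecting over-skewed bundles, rather than presenting them late, is one way to keep perceived copresence intact at the cost of occasional dropped updates.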

Timeline 5: Hybrid Reality, Ethics, and Seamless Integration (2024-2025)

The final phase integrates virtual, physical, and intelligent infrastructure into hybrid reality architectures, emphasizing low-latency synchronization, edge rendering, and responsible AI governance. This enables persistent collaboration among digital twins, physical agents, and human users.

Period | Timeline Heading | Technological Milestones | HCI Focus | Key Inclusion Criteria
2024-2025 | XR + IoT convergence, digital twins | Hybrid MR, IoT, edge computing | Hybrid presence across physical and virtual spaces | Integrated hybrid systems with real-time control

The Mixed Reality Virtual Device (MRVD) architecture (Lee et al., 2024 [32]) offloads computational logic to containerized virtual layers, enabling seamless convergence between MR, IoT, and Digital Twin components. GradualReality (Seo et al., 2024 [54]) modulates physical object presence in VR through interaction-state-aware blending, improving task accuracy. Ethical frameworks, such as formal metrology for AI-driven XR systems (Hu et al., 2024 [26]), also became a direct design concern.

Hybrid Inference Reconciliation Strategies (Table 5)

  • Model Partitioning — Description: the AI model is layer-split; lightweight feature extraction (front layers) runs on the edge, the heavy classification head (back layers) runs in the cloud. Real-time mechanism: offloads the majority of FLOPs to the cloud, leaving the embedded device with a reduced, predictable local WCET. Tradeoff: latency vs. resource use; high dependence on stable, low-latency network connectivity.
  • Early Exit (Branchy) — Description: a single large model runs entirely on the edge but has internal classification/exit points after early layers. Real-time mechanism: a dynamic scheduling policy tied to the RTOS clock triggers an early exit when a task is predicted to miss its deadline. Tradeoff: predictability vs. accuracy; low latency is guaranteed via local execution, at a significant sacrifice in final accuracy.
  • Edge Pre-processing + Cloud Refinement — Description: the edge runs a simple, ultra-fast AI task and sends the coarse result to the cloud, which runs a complex, high-fidelity refinement model. Real-time mechanism: the edge result acts as a local prediction/fallback; if the cloud result is delayed, the embedded system uses the low-fidelity result for continuous response. Tradeoff: fidelity vs. jitter; real-time performance is guaranteed via local prediction, at the cost of complex reconciliation.
  • Edge-Cloud Slice + Jitter Buffer — Description: the edge renders critical elements (e.g., UI, immediate tracking) locally, the cloud renders high-fidelity scene elements, and results are merged on the client using a jitter buffer. Real-time mechanism: the jitter buffer absorbs network variability by delaying display of non-critical cloud frames until they arrive consistently. Tradeoff: consistency vs. state-of-the-art graphics; real-time performance is ensured for critical elements, with a small but consistent latency for high-fidelity content.
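The early-exit (branchy) strategy above can be sketched in a few lines: a model with exits after successive stages, where a deadline check decides how deep inference may run. The stage costs and accuracy figures below are assumptions for illustration, not benchmarks of any real model.

```python
class BranchyModel:
    """Sketch of deadline-driven early exit: run as many stages as fit
    in the remaining budget and report the deepest exit reached.
    Stage costs (seconds) and accuracies are assumed values."""
    stages = [("exit1", 0.002, 0.70),   # (name, cost_s, assumed accuracy)
              ("exit2", 0.004, 0.85),
              ("full",  0.010, 0.95)]

    def infer(self, deadline_s: float):
        spent, result = 0.0, None
        for name, cost, acc in self.stages:
            if spent + cost > deadline_s:
                break  # the next stage would miss the deadline
            spent += cost
            result = (name, acc)
        return result  # None means not even the first exit fits

m = BranchyModel()
print(m.infer(deadline_s=0.020))  # ample budget -> deepest ("full") exit
print(m.infer(deadline_s=0.005))  # tight budget -> first early exit
```

This makes the predictability-vs-accuracy tradeoff in the table concrete: the deadline is always honored, and accuracy is whatever the budget allowed.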

Synthesis of Evolutionary Trends

The decade saw a clear trajectory: from basic rendering optimization (Timeline 1) to context-awareness (Timeline 2), then AI-driven personalization (Timeline 3), multimodal human-agent interaction (Timeline 4), and finally hybrid ethical systems integrating digital twins and edge computing (Timeline 5).

Key Technological Innovations: Mid-air haptics, AI-driven VR/AR frameworks for adaptive experiences, and hybrid twin platforms (MRVD, ICHDT-ECoTI) operationalizing synchronization between physical and digital layers.

Open Problems and Future Directions (Table 11 Summary)

Problem Area | Timeline(s) Origin | Brief Explanation
End-to-End Latency Bottlenecks | T1 (2015-2017) | Maintaining stable frame rates and low motion-to-photon latency across entire XR pipelines remains difficult, especially in mobile and embedded settings.
Predictable Scheduling for Embedded Rendering | T1 | Embedded GPUs and low-power processors struggle to guarantee frame-accurate rendering at scale. RTOS support is limited.
Real-Time Predictability in Adaptive AI Systems | T3 (2020-2021) | AI-powered personalization introduces timing unpredictability. Systems need methods to bound inference times and synchronize AI-driven feedback within real-time loops.
Latency-Safe Human-Agent Co-Presence | T4 (2022-2023) | Coordinating turn-taking, motion prediction, and feedback with virtual agents is latency-sensitive. Slight delays degrade co-presence.
Ethical Control Loops in XR Systems | T5 (2024-2025) | XR systems influence emotional and cognitive states in real time. No standard models exist to enforce ethical thresholds.
Seamless Edge-to-Cloud Coordination for Hybrid XR | T5 | Hybrid presence requires balancing computation across embedded, edge, and cloud layers. Maintaining consistent world state, real-time responsiveness, and privacy is still an open systems challenge.

Future Directions:

  • Bounded Inference Models (quantized/pruned DNNs), Time-Triggered AI Scheduling (TT-AIS) for hard real-time guarantees, and Model Certification.
  • Unified Scheduling Models for cross-layer latency coordination, Deterministic Synchronization, and Energy-Latency Co-optimization for hybrid worlds.
  • For ethical governance: Embedded Ethical Constraint Engines and Physiological Monitoring.
  • Transparent Trust Mechanisms (Explainable Agent Reasoning, Real-Time Provenance Tracking) and Self-Adaptive Orchestration for edge computing.
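Time-triggered AI scheduling, in its simplest form, amounts to a static slot table: every task, including DNN inference, runs in a pre-assigned window of a repeating major cycle, so its start time is known in advance. The slot lengths and task names below are illustrative assumptions, not a published TT-AIS design.

```python
# Toy time-triggered schedule: a repeating 12 ms major cycle with
# statically assigned slots. Values are illustrative assumptions.

MAJOR_CYCLE_MS = 12
SLOT_TABLE = [  # (start_ms, length_ms, task)
    (0, 2, "sensor_fusion"),
    (2, 5, "dnn_inference"),   # bounded (e.g. quantized/pruned) model
    (7, 3, "render_submit"),
    (10, 2, "haptic_update"),
]

def task_at(t_ms: float) -> str:
    """Return the task that owns the slot at absolute time t_ms."""
    phase = t_ms % MAJOR_CYCLE_MS
    for start, length, task in SLOT_TABLE:
        if start <= phase < start + length:
            return task
    raise ValueError("slot table does not cover the major cycle")

print(task_at(3))   # dnn_inference
print(task_at(23))  # phase 11 in the next cycle: haptic_update
```

The design choice here is determinism over flexibility: inference that cannot fit its slot must be bounded (early-exited or simplified) rather than allowed to overrun.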

Technique Mechanism Tradeoff Summary (Table 6)

Timeline | Representative Technique | Real-Time / Embedded Mechanism | Primary Tradeoff / Limitation
2015-2017 | Frameless/scan-out, predictive tracking, volumetric pipelines | Priority threads; tight ISR; FPGA/GPU pipelines; fixed perf states | Lowest latency vs. hardware specificity; predictive overshoot; volumetric bandwidth pressure
2018-2019 | Local/remote split (HoloFace), multi-user world-lock, MR-IoT orchestration | Budgeted remote calls; bounded queues; time sync; context schedulers | Power/thermal vs. responsiveness; drift requires resync; network jitter tails
2020-2021 | Affective/adaptive loops; avatar embodiment choices; AI tutoring | Quantized/early-exit models; accelerator pinning; rate limiting; DVFS floors | Inference burstiness; classifier variance; bandwidth for point clouds
2022-2023 | Multimodal haptics/olfaction; agent turn-taking; teleoperation fusion | Multi-rate scheduling; timestamped synchronizers; mixed hard/soft servers | Cross-modal skew; filtering vs. responsiveness; actuator power budget
2024-2025 | Edge/cloud MR; digital twins; runtime governance | Slicing; priority queues; state-diff streaming; auditable, bounded-cost checks | Tail latency under load; consistency at scale; policy cost vs. deadlines
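One recurring mechanism in the 2024-2025 row, the jitter buffer, can be sketched as a fixed-depth queue: cloud frames are held until a minimum backlog exists, trading a small constant delay for smooth playout, while locally rendered critical elements bypass the buffer entirely. The depth, tick model, and frame labels are illustrative assumptions.

```python
from collections import deque

class JitterBuffer:
    """Fixed-depth playout buffer: frames advance only while at least
    `depth` frames are queued; during underrun the last frame repeats,
    so bursty network delivery never stalls the display loop."""
    def __init__(self, depth: int = 2):
        self.depth = depth
        self.q = deque()
        self.last = None

    def push(self, frame):
        """Arrival from the cloud (possibly bursty)."""
        self.q.append(frame)

    def pop(self):
        """Called once per display refresh."""
        if len(self.q) >= self.depth:
            self.last = self.q.popleft()
        return self.last  # repeat the last frame during underrun

jb = JitterBuffer(depth=2)
shown = []
arrivals = [["f0", "f1"], [], ["f2"], []]  # bursty delivery per tick
for tick in arrivals:
    for f in tick:
        jb.push(f)
    shown.append(jb.pop())
print(shown)  # ['f0', 'f0', 'f1', 'f1']
```

The repeated frames make the tradeoff from the table visible: the display never blocks, but high-fidelity content lags the network by roughly the buffer depth.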

Calculate Your Potential AI Impact

Estimate the efficiency gains and cost savings your enterprise could achieve by integrating advanced real-time embedded AI and virtual presence systems.


Your AI Implementation Roadmap

Our phased approach ensures a smooth, secure, and impactful integration of real-time embedded AI into your enterprise, leveraging insights from cutting-edge research.

Phase 1: Discovery & Strategy Alignment (1-2 Weeks)

In-depth analysis of current systems, infrastructure, and business objectives related to virtual presence or real-time embedded interactions. Define clear, measurable goals for AI integration based on our analysis of latency and HCI requirements.

Phase 2: Architectural Design & Proof-of-Concept (3-6 Weeks)

Design a tailored, real-time embedded architecture (e.g., edge-cloud partitioning, adaptive scheduling). Develop a proof-of-concept for a critical use-case, focusing on latency minimization and predictability with a small-scale AI model.

Phase 3: Secure Development & Ethical Integration (6-12 Weeks)

Develop core AI models (e.g., for context-awareness, multimodal fusion) with emphasis on bounded inference times. Implement initial ethical governance frameworks, ensuring data privacy, consent management, and transparent AI decision-making. Integrate feedback loops for user-centric adaptation.

Phase 4: Pilot Deployment & Performance Optimization (8-16 Weeks)

Deploy the system in a pilot environment. Rigorous testing and optimization for real-time performance, including end-to-end latency, synchronization across modalities, and power efficiency. Refine AI models for real-world adaptability and human-agent interaction.

Phase 5: Scaled Rollout & Continuous Governance (Ongoing)

Expand deployment across the enterprise. Establish continuous monitoring for performance, ethical compliance, and user experience. Implement self-adaptive orchestration for edge computing and long-term maintenance, ensuring the system evolves with your needs and technology advancements.

Ready to Transform Your Operations with Real-Time AI?

Leverage our expertise to integrate cutting-edge real-time embedded virtual presence systems into your enterprise. Schedule a complimentary strategy session to explore your unique needs and potential solutions.
