Qianyu Zhou, Ph.D. - University of Connecticut, 2025
Efficiency-Aware Computational Intelligence for Resource-Constrained Manufacturing Toward Edge-Ready Deployment
In industrial cyber–physical systems and data-driven manufacturing environments, a fundamental dissonance persists between the idealized assumptions of current learning paradigms and the non-ideal realities of industrial data ecosystems. Heterogeneous sensing modalities, stochastic operating conditions, and dynamic process parameters often yield data that are incomplete, unlabeled, imbalanced, and domain-shifted. High-fidelity digital replicas and experimental datasets remain limited by cost, confidentiality, and time-to-acquisition, while stringent latency, bandwidth, and energy constraints at the edge further restrict the feasibility of centralized learning architectures. These conditions collectively compromise the scalability of conventional deep networks, hinder the realization of digital-twin frameworks, and exacerbate the risk of error escape in safety-critical applications. A critical gap therefore exists in developing computational intelligence that is data-lean, physics-consistent, uncertainty-calibrated, and resource-efficient, ensuring trustworthy inference and efficient deployment across multimodal and multiscale manufacturing scenarios. This dissertation advances a cohesive agenda for resource-conscious, edge-ready intelligence in manufacturing systems, demonstrating that practical constraints need not be barriers to high-fidelity diagnostics; when leveraged correctly, they become design signals that lead to compact, trustworthy, and field-viable solutions.
Key Impact Metrics
Our research delivers verifiable improvements in critical areas of manufacturing intelligence:
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
Generative Data Augmentation with Dynamic Filtering
This study formulates Generative Data Augmentation with Dynamic Filtering to alleviate sample scarcity and severe data imbalance by producing statistically diverse yet physically consistent samples, significantly improving the generalizability of learning under limited observations. Our results showed an overall classification accuracy of 91.74%, a 27.79% relative improvement over the baseline VGG16 model (0.7179). The F1-score for the 'Nick' defect class also increased to 0.8852, up from 0.4286 for ResNet18* and 0.5333 for ViT*. The error-escape (Err Escape) rate was reduced to just 0.0263, a decrease of 85.72% relative to VGG16* (0.1842) and 83.33% relative to ResNet18* (0.1578).
| Model | Accuracy | F1-Score (Nick) | Err Escape |
|---|---|---|---|
| VGG16 | 0.7179 | 0.1667 | 0.2368 |
| VGG16* | 0.7692 | 0.4286 | 0.1842 |
| ResNet18* | 0.7820 | 0.4286 | 0.1578 |
| ViT* | 0.8333 | 0.5333 | 0.1315 |
| Proposed Method | 0.9174 | 0.8852 | 0.0263 |
(*) denotes a model trained with data augmentation. The Proposed Method integrates GANs and ResNet with an adaptive weighting strategy.
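To make the dynamic-filtering idea concrete, the sketch below screens generated candidates against a confidence threshold that tightens as training progresses, so only samples a classifier already finds plausible are added to the training pool. The function name and the threshold schedule (`base_thresh`, `ramp`, `max_thresh`) are illustrative assumptions, not the dissertation's exact implementation.

```python
def dynamic_filter(candidates, confidence_fn, epoch,
                   base_thresh=0.5, ramp=0.05, max_thresh=0.9):
    """Keep generated samples whose target-class confidence clears a
    threshold that tightens as training progresses (illustrative schedule)."""
    thresh = min(base_thresh + ramp * epoch, max_thresh)
    return [s for s in candidates if confidence_fn(s) >= thresh]

# Toy stand-in: samples are floats and "confidence" is the value itself.
pool = [0.45, 0.60, 0.75, 0.95]
print(dynamic_filter(pool, lambda s: s, epoch=0))   # loose early threshold (0.50)
print(dynamic_filter(pool, lambda s: s, epoch=10))  # capped late threshold (0.90)
```

Early in training the filter is permissive so the augmented pool stays diverse; as the classifier matures, only high-confidence synthetic samples survive, limiting label noise from the generator.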
Semi-Supervised Pseudo-Labeling with Adaptive Weighting
This study introduces Semi-Supervised Pseudo-Labeling with Adaptive Weighting, a self-evolving learning mechanism that fuses limited labeled data with abundant unlabeled data while adaptively regulating pseudo-label confidence, enabling robust convergence at reduced simulation and annotation cost. The framework matched the performance of fully supervised learning while cutting labeling effort by 72%, saving approximately 487.5 hours of label-generation time.
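One minimal way to sketch the adaptive-weighting idea: each pseudo-labeled sample contributes to the loss with a weight that is zero below a confidence floor and ramps in over a warm-up period. The schedule, the `floor` and `warmup_epochs` parameters, and the function names are hypothetical stand-ins for the paper's actual rule.

```python
def pseudo_label_weight(confidence, epoch, warmup_epochs=5, floor=0.8):
    """Adaptive weight for one pseudo-labeled sample: zero below the
    confidence floor, then scaled by confidence and ramped in over a
    warm-up period (illustrative schedule)."""
    if confidence < floor:
        return 0.0
    ramp = min(epoch / warmup_epochs, 1.0)
    return ramp * (confidence - floor) / (1.0 - floor)

def total_loss(labeled_loss, unlabeled_terms, epoch):
    """Supervised loss plus confidence-weighted pseudo-label losses.
    unlabeled_terms is a list of (confidence, per-sample loss) pairs."""
    pseudo = sum(pseudo_label_weight(c, epoch) * loss
                 for c, loss in unlabeled_terms)
    return labeled_loss + pseudo
```

Low-confidence pseudo-labels contribute nothing, and even confident ones are muted in early epochs, which is one way such schemes avoid reinforcing early misclassifications.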
Parallel Physics-Informed Representation Learning
This research establishes a Parallel Physics-Informed Representation Learning architecture for vibration-based gearbox fault diagnosis, where signal morphology is encoded through the fusion of temporal and spectral structural priors. The framework integrates domain knowledge and spatially correlated features within a unified network to enhance interpretability and suppress false negatives, establishing a reliable pathway for small-data condition monitoring. This design demonstrates how physical insight can be embedded directly into the representational hierarchy of deep learning models.
This approach enhances diagnostic distinctions even in early-stage fault regimes, allowing for robust and interpretable condition monitoring with limited data. By combining physical principles with deep learning, it provides a powerful solution for improving trustworthiness in safety-critical applications.
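The parallel-branch idea above can be sketched in miniature: a time-domain branch computes statistical descriptors of the raw vibration signal, a frequency-domain branch computes magnitudes of low-order DFT bins as spectral priors, and the two are concatenated into a fused representation. The specific features, a naive DFT, and all names here are illustrative simplifications of the actual learned architecture.

```python
import math

def temporal_features(signal):
    """Time-domain branch: simple statistical descriptors (illustrative)."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    return [mean, rms, peak]

def spectral_features(signal, k=3):
    """Frequency-domain branch: magnitudes of the first k DFT bins (naive DFT)."""
    n = len(signal)
    mags = []
    for f in range(1, k + 1):
        re = sum(x * math.cos(2 * math.pi * f * i / n) for i, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * f * i / n) for i, x in enumerate(signal))
        mags.append(math.hypot(re, im) * 2 / n)
    return mags

def fused_representation(signal):
    """Parallel branches computed independently, then concatenated for the head."""
    return temporal_features(signal) + spectral_features(signal)
```

A pure tone at a known frequency lights up exactly one spectral feature while the temporal branch still reports its RMS energy, which is the kind of complementary evidence the fused network exploits for early-stage fault signatures.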
Spatially Informed Graph-Neural Surrogate Modeling
This study introduces a graph-based deformation prediction framework for surface quality monitoring in milling operations, wherein structural geometry and spatial correlation among nodes are encoded via dynamic attention modulation. The method integrates physics-derived relationships between local deformation influences and global structural responses, allowing the predictive model to adaptively emphasize salient geometric interactions across machining configurations and process conditions. This ensures efficient and accurate modeling of complex manufacturing processes under constrained computational budgets.
The framework uses a concise, dynamically evolving node graph composed exclusively of critical measurement points, ensuring an effective yet accurate surrogate model. By embedding physics-informed constraints, it accurately predicts post-milling deformation aligned with experimental protocols, supporting rapid surface quality prediction within digital twin environments.
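As a toy illustration of attention-weighted aggregation over the measurement-point graph, the sketch below scores each neighbor by dot-product similarity with the query node, normalizes with a softmax, and aggregates neighbor features accordingly. This is a generic graph-attention step, not the dissertation's dynamic attention modulation; all names and the scoring rule are assumptions.

```python
import math

def attention_weights(query, keys):
    """Softmax over dot-product scores between one node and its neighbors."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def aggregate_node(node_feat, neighbor_feats):
    """Attention-weighted aggregation of neighbor features onto one node
    (a minimal stand-in for dynamic attention modulation)."""
    w = attention_weights(node_feat, neighbor_feats)
    dim = len(node_feat)
    return [sum(wi * nf[d] for wi, nf in zip(w, neighbor_feats))
            for d in range(dim)]
```

Because the weights depend on the node features themselves, the same mechanism re-weights geometric interactions as machining configurations change, which is what lets a small graph of critical measurement points act as an adaptive surrogate.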
Edge–Cloud Collaborative Compression and Reconstruction Framework
This framework provides a scalable approach to real-time signal analytics, balancing compression efficiency, information fidelity, and system throughput in distributed industrial computing environments. It combines content-aware token selection (sampling) with structure-aware, non-uniform quantization to achieve bit-budgeted signal compression at ratios up to 64:1. A physics-guided Transformer regularizes reconstruction to preserve spectral energy, phase relations, and envelope trends. Under this scheme, fault-classification accuracy and remaining-useful-life (RUL) prediction match or exceed representative baselines.
This system delivers edge-viable representations that preserve fault-relevant content under strict compute and bandwidth budgets, crucial for real-time PHM applications. It ensures diagnostic fidelity without sacrificing efficiency, enabling robust performance even under severe compression.
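The two compression stages can be sketched with toy stand-ins: an energy-based token selector as a proxy for content-aware sampling, and mu-law companding as one possible non-uniform quantizer. The bit-budget arithmetic shows how keeping 1/8 of the samples at 4-bit codes (versus 32-bit floats) yields the 64:1 ratio; the specific selector, quantizer, and parameters are illustrative assumptions, not the framework's actual design.

```python
import math

def select_tokens(signal, keep_ratio=1 / 8):
    """Content-aware selection (toy proxy): keep indices of the
    highest-magnitude samples, in time order."""
    k = max(1, int(len(signal) * keep_ratio))
    idx = sorted(range(len(signal)), key=lambda i: abs(signal[i]), reverse=True)[:k]
    return sorted(idx)

def mu_law_quantize(x, mu=255, levels=8):
    """Non-uniform quantization via mu-law companding (one possible
    instantiation); x is assumed to lie in [-1, 1]."""
    sign = 1 if x >= 0 else -1
    comp = sign * math.log(1 + mu * abs(x)) / math.log(1 + mu)
    step = 2.0 / (levels - 1)
    return round((comp + 1) / step)   # integer code in [0, levels-1]

def compression_ratio(n, keep_ratio, bits, orig_bits=32):
    """Ratio of original bit budget to compressed bit budget."""
    return (n * orig_bits) / (n * keep_ratio * bits)
```

Mu-law spends its levels densely near zero, matching the small-amplitude content that dominates vibration signals between fault impulses, which is the intuition behind structure-aware non-uniform quantization.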
Zero-Shot Vision–Language Reasoning via Retrieval-Augmented Generation
This study extends few-shot visual intelligence to unseen manufacturing scenarios, coupling multimodal embeddings with knowledge retrieval to achieve generalizable understanding across modalities. Our novel visual-text RAG-VLM framework demonstrates the ability to reason about damage type and severity in a flexible, interpretable, and data-efficient manner on wind turbine blade inspection. It achieved up to 98.33% classification accuracy with a minimal number of training samples, significantly outperforming traditional image-based classifiers.
Unlike conventional pipelines, our system does not require task-specific fine-tuning. Instead, it performs in-context few-shot reasoning by retrieving relevant visual-textual examples from a structured knowledge base, enabling robust, interpretable, and scalable damage classification.
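The retrieval step at the heart of such a RAG pipeline can be sketched as nearest-neighbor search over embedded knowledge-base entries: the query image's embedding is compared by cosine similarity against stored visual-textual examples, and the top-k captions are returned for the VLM prompt. The embeddings, captions, and function names below are hypothetical toy data, not the actual knowledge base.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_emb, knowledge_base, k=2):
    """Return captions of the k most similar (embedding, caption) entries;
    these would be placed in the VLM prompt as in-context examples."""
    ranked = sorted(knowledge_base, key=lambda e: cosine(query_emb, e[0]),
                    reverse=True)
    return [caption for _, caption in ranked[:k]]
```

Because adaptation happens through what is retrieved rather than through gradient updates, extending the system to a new damage type amounts to adding labeled examples to the knowledge base, with no task-specific fine-tuning.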
Calculate Your Potential ROI
See how our efficiency-aware computational intelligence can transform your operations. Adjust the parameters to estimate your potential savings and reclaimed hours.
Your Implementation Roadmap
We outline a strategic four-phase roadmap to seamlessly integrate efficiency-aware computational intelligence into your enterprise, ensuring a smooth transition and measurable impact.
Phase 1: Foundation & Data Integration
Integrate existing sensor data and manufacturing process parameters into a unified data lake, establish initial physics-informed models, and configure edge devices for real-time data ingestion.
Phase 2: Semi-Supervised Learning & Model Refinement
Deploy semi-supervised learning techniques to leverage unlabeled data, refine predictive models with adaptive weighting, and conduct initial validation against historical and limited ground truth data.
Phase 3: Digital Twin & Edge-Cloud Deployment
Implement the full edge-cloud collaborative framework, integrate physics-guided compression, and establish digital twin capabilities for real-time monitoring, diagnosis, and prognostics.
Phase 4: Multi-Modal & Zero-Shot Extension
Expand to multi-modal sensing, incorporate vision-language reasoning for novel defect detection, and enable adaptive, few-shot learning for evolving operational conditions.
Ready to Transform Your Manufacturing?
Our efficiency-aware AI solutions are designed to deliver tangible results, from enhanced accuracy in defect detection to significant reductions in operational costs. Let's discuss how our cutting-edge research can be tailored to your specific enterprise needs.