A Novel Digital Twin Driven Multi-Camera Edge Computing Sensing System for Additive Manufacturing
Document Type
Conference Paper
Publication Date
1-2026
Publication Title
2025 IEEE Sensors
Abstract
Ensuring real-time quality assessment in Additive Manufacturing (AM) is challenging because of its sequential layer-by-layer deposition process. This paper presents a novel digital twin-driven multi-camera edge computing sensing system designed for real-time dimension monitoring in AM, offering 360° coverage of the build area. Four calibrated, orthogonally positioned cameras were synchronized and processed on an NVIDIA-based edge device to enable 3D reconstruction of the final AM product. Stereo calibration was performed on two camera groups, yielding reprojection errors of 1.19 pixels and 1.34 pixels, respectively. The resulting 3D measurements closely matched the expected physical dimensions, demonstrating reliable spatial consistency across both groups. The reconstructed geometry was integrated into a Digital Twin environment, enabling real-time visualization and dimensional verification. This system demonstrates the feasibility of synchronized multi-camera setups with geometry-aware quality assessment, achieving a consistently low mean error. The proposed system lays the foundation for Digital Twin-driven real-time process monitoring and feedback control in AM.
Repository Citation
Asare, Martha; Matthew, Joel; Saenz, Efren; Yu, Hongkai; and Yang, Jinghao, "A Novel Digital Twin Driven Multi-Camera Edge Computing Sensing System for Additive Manufacturing" (2026). Electrical and Computer Engineering Faculty Publications. 529.
https://engagedscholarship.csuohio.edu/enece_facpub/529
Original Citation
M. Asare, J. Mathew, E. Saenz, H. Yu and J. Yang, "A Novel Digital Twin Driven Multi-Camera Edge Computing Sensing System for Additive Manufacturing," 2025 IEEE SENSORS, Vancouver, BC, Canada, 2025, pp. 1-4, doi: 10.1109/SENSORS59705.2025.11331175.
DOI
10.1109/SENSORS59705.2025.11331175
Comments
The authors would like to acknowledge the NSF ExpandAI PARTNER: ARISE: AI Research and Innovation for Smart Environments grant (NSF Award No. 2434916) for funding this work, and the CREST Center for Multidisciplinary Research Excellence in Cyber-Physical Infrastructure Systems (MECIS) (NSF Award No. 2112650) for additional support.