Issues of improving the efficiency of embedded control systems with a computer vision module
DOI: https://doi.org/10.34185/1562-9945-4-165-2026-03

Keywords: computer vision, embedded systems, automatic control, TinyML, neural network compression, stability margin, hardware-oriented optimization

Abstract
The development of autonomous robotics, UAVs, and Industry 4.0 requires expanding the sensory capabilities of cyber-physical systems by implementing computer vision (CV) modules on edge devices [1, 2]. CV algorithms are migrated to the edge via the Edge Computing and TinyML paradigms, using optimization techniques such as quantization (transition from FP32 to INT8/INT4) and synaptic pruning [3, 4].
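As a generic illustration of the FP32-to-INT8 step, the following is a minimal sketch of symmetric per-tensor post-training quantization; the scale rule, sample tensor, and error check are assumptions for illustration, not the specific schemes used in the cited works:

```python
import numpy as np

# Symmetric per-tensor quantization: map the largest weight magnitude to 127.
def quantize_int8(w: np.ndarray):
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=1000).astype(np.float32)  # mock FP32 weights
q, s = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, s)))  # worst-case quantization noise
print(q.dtype, err <= 0.5 * s)              # error bounded by half a step
```

The rounding error bound (half a quantization step per weight) is exactly the "quantization noise" that later re-enters the control loop through the sensor path.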
Global studies indicate that any compression entails a trade-off between accuracy and latency, and also makes models less robust to external disturbances [5]. Researchers in Ukraine have significant achievements in machine learning, monitoring, and image compression; however, these topics are primarily considered in the context of offline analysis or data transmission [6, 7].
The scientific community exhibits a disconnect between approaches: Data Science specialists evaluate models in isolation using static metrics and ignore the impact of the compressed network's noise on the control system [8]. Automatic control specialists, meanwhile, treat the video sensor as a classical link with deterministic delay, which may not correspond to the stochastic nature of inference [9].
The objective is to find the optimal trade-off between the computational efficiency of the neural network (inference latency) and the control quality of the automatic control system itself, so as to maximize the reliability and efficiency of autonomous devices.
Integrating modern computer vision algorithms into resource-limited edge devices leads to a fundamental contradiction.
In classical automatic control theory, a video sensor acts as a pure-delay dynamic link that introduces phase lag and linearly reduces the system's phase stability margin [10]. Exceeding the permissible delay inevitably leads to loss of controllability of the object. To avoid this, developers apply aggressive compression to reduce inference time, which restores the phase margin but distorts the error statistics [11]. In particular, bit-depth reduction generates quantization noise (high-frequency jitter), which severely degrades the derivative component of the regulator and causes chaotic oscillations of the actuators [12].
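The linear relation between delay and phase margin can be made concrete: a pure delay τ contributes a phase lag of ω·τ radians, so at the gain-crossover frequency the margin shrinks by ω_gc·τ·180/π degrees. The crossover frequency and nominal margin below are hypothetical values chosen for illustration, not figures from the article:

```python
import math

# Phase-margin loss from a pure inference delay tau at crossover w_gc.
def phase_margin_loss_deg(w_gc_rad_s: float, tau_s: float) -> float:
    return math.degrees(w_gc_rad_s * tau_s)

w_gc = 10.0        # rad/s, assumed gain-crossover frequency of the loop
pm_nominal = 45.0  # deg, assumed margin without the vision delay
for tau_ms in (10, 40, 80):  # plausible inference latencies
    pm = pm_nominal - phase_margin_loss_deg(w_gc, tau_ms / 1000.0)
    print(f"tau={tau_ms} ms: phase margin {pm:.1f} deg")
```

With these assumed numbers, an 80 ms inference time already drives the margin below zero, illustrating why developers reach for aggressive compression in the first place.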
Since classical optimization metrics are unsuitable for such cases, the authors propose a comprehensive multi-level co-design methodology that integrates compression parameters into the dynamics equations. It comprises six sequential tasks: creating a digital twin using the MuJoCo physics engine in the Unity or NVIDIA Isaac Sim environments [13, 14]; synthesizing ideal control to obtain reference data; vision-in-the-loop simulation with an uncompressed network; optimization and emulation in QEMU/Renode for TinyML testing [15]; hardware-in-the-loop implementation on NVIDIA Jetson, Raspberry Pi, or STM32 controllers; and a final comparative analysis of the impact of compression on overshoot and stability margins.
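The final comparison step can be sketched in miniature: run the same control loop with an ideal sensor and with a coarsely quantized one, then compare overshoot and actuator jitter. The plant (a double integrator), PD gains, and quantization step below are hypothetical placeholders, not the article's experimental setup:

```python
import numpy as np

# PD loop on a double integrator (x'' = u), with an optionally quantized
# measurement standing in for a compressed vision sensor.
def simulate(q_step: float, dt: float = 0.01, steps: int = 1000,
             kp: float = 4.0, kd: float = 2.0):
    x = v = y_prev = 0.0
    u_trace, x_trace = [], []
    for _ in range(steps):
        y = x if q_step == 0 else q_step * round(x / q_step)  # sensor model
        e = 1.0 - y                                  # unit-step reference
        u = kp * e - kd * (y - y_prev) / dt          # PD law on measurement
        y_prev = y
        v += u * dt                                  # Euler-integrate plant
        x += v * dt
        u_trace.append(u); x_trace.append(x)
    return np.array(x_trace), np.array(u_trace)

x_ref, u_ref = simulate(q_step=0.0)    # ideal, uncompressed measurement
x_q, u_q = simulate(q_step=0.02)       # coarse, quantized measurement
print("overshoot (ideal):", x_ref.max() - 1.0)
print("actuator jitter ratio:",
      np.std(np.diff(u_q)) / np.std(np.diff(u_ref)))
```

The quantized run shows the mechanism described above: the discrete derivative of a staircase measurement produces spikes of magnitude kd·q_step/dt in the control signal, so actuator jitter grows sharply even while the mean trajectory stays close to the reference.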
The analysis shows that improving the efficiency of embedded ACS with computer vision is a complex interdisciplinary task. Optimizing neural networks solely by machine learning metrics, without considering the physical object's dynamics, is ineffective and can destabilize the control loop.
The proposed approach, which combines Data Science and control theory methods (from simulation to HIL testing), creates a solid foundation for researching the relationship between hardware compression parameters and control quality indicators. In the future, its implementation will help develop more accurate mathematical descriptions of the control loop, considering variable delay and quantization noise, as well as formulate clear engineering recommendations for developers of autonomous systems.
References
El Zeinaty, C., Hamidouche, W., Herrou, G., & Menard, D. (2024). Designing object detection models for TinyML: Foundations, comparative analysis, challenges, and emerging solutions. ACM Computing Surveys, 56(8), 1–46. https://doi.org/10.1145/3744339
Jiang, B., Chen, J., & Liu, Y. (2023). Single-shot pruning and quantization for hardware-friendly neural network acceleration. Engineering Applications of Artificial Intelligence, 126(Part B), Article 106816. https://doi.org/10.1016/j.engappai.2023.106816
Park, J., Kim, P., & Ko, D. (2025). Real-time open-vocabulary perception for mobile robots on edge devices: A systematic analysis of the accuracy-latency trade-off. Frontiers in Robotics and AI, 12, Article 1693988. https://doi.org/10.3389/frobt.2025.1693988
de Prado, M., Rusci, M., Capotondi, A., Donze, R., Benini, L., & Pazos, N. (2021). Robustifying the deployment of tinyML models for autonomous mini-vehicles. Sensors, 21(4), Article 1339. https://doi.org/10.3390/s21041339
Alaklabi, S., & Alharbi, S. (2026). DRL-TinyEdge: Energy- and latency-aware deep reinforcement learning for adaptive TinyML at the 6G edge. Future Internet, 18(1), Article 31. https://doi.org/10.3390/fi18010031
McKee, C. (2025). Design, embedded implementation, and performance optimization of a real-time AI-driven vision inspection system for automated industrial quality control [Technical report].
Kashtan, V., Hnatushenko, V., Udovyk, I., & Shevtsova, O. (2023). Rozpiznavannia ta monitorynh vodnykh obiektiv na optychnykh suputnykovykh zobrazhenniakh iz vykorystanniam mashynnoho navchannia [Recognition and monitoring of water objects on optical satellite images using machine learning]. Information Technology: Computer Science, Software Engineering and Cyber Security, 3, 32–42. https://doi.org/10.32782/IT/2023-3-4
Khudiakov, I. V., Gritsuk, I. V., Chernenko, V. V., Gritsuk, Y. V., Pohorletskyi, D. S., Makarova, T. V., & Manzhelei, V. S. (2021). Osoblyvosti modeliuvannia ta pobudovy informatsiinoi systemy dystantsiinoho monitorynhu tekhnichnoho stanu transportnykh zasobiv [Features of modeling and construction of the information system of remote monitoring of the technical condition of vehicles]. Visnyk mashynobuduvannia ta transportu [Herald of Mechanical Engineering and Transport], 14(2), 140–148. https://doi.org/10.31649/2413-4503-2021-14-2-140-148
Berdnikova, A. L., & Manzhos, Y. S. (2012). Informatsiina tekhnolohiia modeliuvannia skladnykh system [Information technology for modeling of complex systems]. Systemy obrobky informatsii [Information Processing Systems], 101(3), 2–7.
Kavka, O. O., Maidaniuk, V. P., Romanyuk, O. N., & Zavalniuk, Y. K. (2023). Analiz alhorytmiv stysnennia zobrazhen iz vtratamy [Analysis of the lossy image compression algorithms]. Informatsiini tekhnolohii ta kompiuterna inzheneriia [Information Technologies and Computer Engineering], 58(3), 59–64. https://doi.org/10.31649/1999-9941-2023-58-3-59-64
License
Copyright (c) 2026 System technologies

This work is licensed under a Creative Commons Attribution 4.0 International License.









