Real-Time In-Network Image Compression via Distributed Dictionary Learning

Document Type

Article


Publication Date

1 Jan. 2023

Publication Title

IEEE Transactions on Mobile Computing


Multi-camera networks are becoming pervasive in monitoring and surveillance applications and have attracted much attention for distributed systems with collaborative, real-time decision-making capabilities. In-network data compression brings significant energy savings to camera nodes, and signal representation using sparse approximations over overcomplete dictionaries has been shown to outperform traditional compression methods. In this work, an end-to-end, real-time solution is designed and implemented to enable energy-efficient and robust dictionary learning in distributed camera networks by leveraging the spatial correlation of the collected multimedia data. Traditional distributed dictionary learning relies on consensus-building algorithms, in which nodes communicate with their neighbors until convergence is achieved; existing methods, however, do not exploit the spatial correlations present in camera networks to improve energy efficiency. In contrast, this work employs low-computational-complexity metrics to quantify and exploit the spatial correlation across camera nodes in a wireless network for efficient distributed dictionary learning and in-network image compression. The performance of the proposed approach is validated through extensive simulations on public datasets as well as through real-world experiments on a testbed of Raspberry Pi nodes.
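The sparse-approximation idea underlying dictionary-based compression can be illustrated with a minimal sketch: a signal (e.g., a vectorized image patch) is encoded as a k-sparse combination of atoms from an overcomplete dictionary, so only k coefficient/index pairs need to be transmitted. The plain-NumPy orthogonal matching pursuit (OMP) below uses a random dictionary for illustration only; it is not the paper's learned dictionary or its distributed consensus algorithm.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily find a k-sparse code x with y ≈ D @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Select the dictionary atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the atoms selected so far.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 128, 5                       # patch dimension, dictionary atoms, sparsity
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms (overcomplete: m > n)
x_true = np.zeros(m)
idx = rng.choice(m, size=k, replace=False)
x_true[idx] = rng.standard_normal(k)
y = D @ x_true                             # synthetic k-sparse "patch"
x_hat = omp(D, y, k)                       # the k-sparse code is the compressed form
err = np.linalg.norm(y - D @ x_hat) / np.linalg.norm(y)
```

In an in-network setting, each camera node would encode its patches this way and transmit only the sparse codes; the paper's contribution is learning the shared dictionary efficiently across spatially correlated nodes, which this sketch does not attempt.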

Original Citation

P. Pandey, M. Rahmati, W. U. Bajwa and D. Pompili, "Real-Time In-Network Image Compression via Distributed Dictionary Learning," in IEEE Transactions on Mobile Computing, vol. 22, no. 1, pp. 472-486, 1 Jan. 2023, doi: 10.1109/TMC.2021.3072066.