Optimizing Machine Learning for IoT: Energy-Efficient AI Approaches and Architectures
No.: 189
Access: Conference participants only
Updated: 2025-12-23 13:40:02
Extension type: 2
Abstract
The rapid proliferation of Internet of Things (IoT) devices has heightened the demand for machine learning (ML) models that operate within tight energy, memory, and computation budgets. This paper presents an in-depth analysis of energy-efficient AI methods and architectural optimizations tailored to resource-constrained IoT environments. We examine lightweight machine learning and deep learning techniques, including model compression, pruning, quantization, knowledge distillation, and event-driven processing, and assess their impact on energy consumption and inference efficiency across diverse IoT platforms. A refined edge–cloud cooperative framework is proposed to reduce communication overhead, adaptively distribute computation, and prolong device lifetime while delivering real-time insights. Experimental analysis shows that the proposed energy-efficient ML pipeline yields considerable reductions in power consumption, latency, and model size while preserving prediction accuracy. The results underscore the essential role of adaptive, hardware-aware AI techniques in enabling scalable, sustainable, and efficient IoT deployments, and outline directions for future work on on-device learning, federated optimization, and neuromorphic computing.
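For concreteness, the short sketch below (illustrative only, not the pipeline evaluated in this paper) shows two of the lightweight-ML techniques named in the abstract, magnitude-based weight pruning and uniform 8-bit post-training quantization, implemented with plain NumPy; the layer shape, sparsity level, and function names are assumptions made for this example.

```python
# Minimal sketch of pruning + 8-bit quantization for a single weight tensor.
# Assumed, illustrative code; not the paper's actual pipeline or parameter values.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest absolute value."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_uint8(weights: np.ndarray):
    """Affine (asymmetric) uniform quantization of float weights to 8-bit integers."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0
    if scale == 0.0:          # degenerate case: all weights identical
        scale = 1.0
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float weights from the quantized representation."""
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(128, 64)).astype(np.float32)   # toy dense-layer weights
    w_pruned = prune_by_magnitude(w, sparsity=0.5)       # ~50% of weights set to zero
    q, scale, zp = quantize_uint8(w_pruned)              # float32 -> uint8 storage
    w_restored = dequantize(q, scale, zp)
    print("sparsity:", float((w_pruned == 0).mean()))
    print("max quantization error:", float(np.abs(w_pruned - w_restored).max()))
```

Storing the pruned tensor as uint8 values plus a per-tensor scale and zero point cuts weight memory roughly fourfold relative to float32, which is the kind of model-size and energy reduction the abstract refers to; on-device deployments would typically apply such steps with a framework such as TensorFlow Lite or a vendor toolchain rather than by hand.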
Keywords
Energy-efficient AI, Internet of Things, Edge computing, Lightweight machine learning, Model compression, Low-power architectures
Authors
Anandakumar Haldorai
Sri Eshwar College of Engineering