Resource-constrained edge devices cannot efficiently handle the explosive growth of mobile data and the increasing computational demand of modern user applications. Task offloading allows the migration of complex tasks from user devices to remote edge-cloud servers, thereby reducing their computational burden and energy consumption while also improving the efficiency of task processing. However, obtaining the optimal offloading strategy in a multi-task offloading decision-making process is an NP-hard problem, and existing deep learning techniques with slow learning rates and weak adaptability are not suitable for dynamic multi-user scenarios. In this article, we propose a novel deep meta-reinforcement learning-based approach to the multi-task offloading problem that combines first-order meta-learning with deep Q-learning. We establish meta-generalization bounds for the proposed algorithm and demonstrate that it can reduce the time and energy consumption of IoT applications by up to 15%. Through rigorous simulations, we show that our method achieves near-optimal offloading solutions while also adapting to dynamic edge-cloud environments.
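The combination described in the abstract pairs a first-order meta-learning outer loop with per-task deep Q-learning inner loops. As a minimal sketch of the first-order (Reptile-style) meta-update only, the toy below replaces each task's Q-learning loss with a quadratic surrogate whose minimum stands in for that task's adapted Q-network weights; all function names and the surrogate loss are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(params, task_optimum):
    # Gradient of a quadratic surrogate loss ||params - task_optimum||^2 / 2,
    # standing in for the gradient of one task's deep Q-learning objective.
    return params - task_optimum

def inner_adapt(params, task_optimum, lr=0.1, steps=5):
    # Inner loop: a few gradient steps on a single offloading task,
    # mimicking task-specific Q-network fine-tuning.
    p = params.copy()
    for _ in range(steps):
        p -= lr * task_loss_grad(p, task_optimum)
    return p

def reptile_meta_step(meta_params, task_optima, meta_lr=0.5):
    # First-order meta-update: move the meta-initialization toward the
    # average of the task-adapted parameters (no second-order terms).
    adapted = [inner_adapt(meta_params, t) for t in task_optima]
    direction = np.mean(adapted, axis=0) - meta_params
    return meta_params + meta_lr * direction

# Three synthetic "tasks", each defined by its own optimal parameters.
meta = rng.normal(size=4)
tasks = [rng.normal(loc=m, size=4) for m in (1.0, -1.0, 0.5)]
for _ in range(50):
    meta = reptile_meta_step(meta, tasks)
```

For this quadratic surrogate, the meta-parameters converge to the mean of the task optima, i.e. an initialization that is a few gradient steps away from every task's solution; the same principle is what lets a meta-trained Q-network adapt quickly to a new edge-cloud environment.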
- Computational modeling,
- Deep Q-learning,
- Directed Acyclic Graph,
- Edge-Cloud Computing,
- Energy consumption,
- Heuristic algorithms,
- Internet of Things,
- Meta-Learning,
- Multi-Task Offloading,
- Multitasking,
- Servers,
- Task analysis
Available at: http://works.bepress.com/sajal-das/306/