Title (dc.title) | Hybrid algorithms based on combining reinforcement learning and metaheuristic methods to solve global optimization problems |
Author (dc.contributor.author) | Mohammed Ahmed Shah |
Publication Year (dc.date.issued) | 2021 |
Type (dc.type) | Article |
Abstract (dc.description.abstract) | This paper introduces three hybrid algorithms that combine reinforcement learning with metaheuristic methods to solve global optimization problems. With the proposed algorithms, the search agents seek the global optimum while avoiding local optima traps. Compared with classical metaheuristic approaches, the proposed algorithms are more successful at discovering new regions of the search space and achieve a better balance between the exploration and exploitation phases. The algorithms employ reinforcement agents that act on the environment through predefined actions and tasks; a reward-and-penalty system lets the agents explore the environment dynamically, without following a predetermined model or method. All three algorithms, named RLI-GWO, RLEx-GWO, and RLWOA, use the Q-learning method to monitor and control exploration and exploitation through a Q-table: the Q-table values guide each search agent in selecting between the two phases, and a control mechanism assigns the reward and penalty values for each action (a minimal illustrative sketch of this mechanism follows the record below). The proposed algorithms are evaluated on 30 benchmark functions from the CEC 2014 and 2015 test suites, and the results are compared with well-known metaheuristic and hybrid algorithms (GWO, RLGWO, I-GWO, Ex-GWO, and WOA). The proposed methods are also applied to the inverse kinematics problem for robot arms. The results demonstrate that RLWOA provides better solutions for the considered problems. (C) 2021 Elsevier B.V. All rights reserved. |
Open Access Date (dc.date.available) | 2021-03-13 |
Publisher (dc.publisher) | Elsevier |
Language (dc.language.iso) | en |
Subject Headings (dc.subject) | Metaheuristic algorithm |
Subject Headings (dc.subject) | Reinforcement learning algorithm |
Subject Headings (dc.subject) | Whale optimization algorithm |
Subject Headings (dc.subject) | Q-learning |
URI (dc.identifier.uri) | https://hdl.handle.net/20.500.14081/1330 |
ISSN (dc.identifier.issn) | 0950-7051 |
Journal (dc.relation.journal) | Knowledge-Based Systems |
Contributor (dc.contributor.other) | Seyyedabbasi, A |
Contributor (dc.contributor.other) | Aliyev, R |
Contributor (dc.contributor.other) | Kiani, F |
Contributor (dc.contributor.other) | Gulle, MU |
Contributor (dc.contributor.other) | Basyildiz, H |
Contributor (dc.contributor.other) | Shah, MA |
DOI (dc.identifier.doi) | 10.1016/j.knosys.2021.107044 |
Journal Volume (dc.identifier.volume) | 223 |
WoS Quality (dc.identifier.wosquality) | Q1 |
WoS Author ID (dc.contributor.wosauthorid) | GCK-6878-2022 |
WoS Number (dc.identifier.wos) | WOS:000651271700008 |
Databases (dc.source.platform) | WoS |
Databases (dc.source.platform) | Scopus |
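The abstract describes search agents that consult a Q-table to choose between the exploration and exploitation phases, with a reward-and-penalty control mechanism updating the table after each action. The following is a minimal Python sketch of that general idea; the per-agent state design, the movement rules, the constants (learning rate, discount, reward/penalty values), and the sphere objective are all illustrative assumptions, not the paper's exact RLI-GWO, RLEx-GWO, or RLWOA formulations.

```python
# Minimal sketch: Q-learning-guided phase selection inside a simple
# population-based metaheuristic. Constants and update rules are assumed
# for illustration, not taken from the paper.
import random

ACTIONS = ("explore", "exploit")   # the two phases an agent may select
ALPHA, GAMMA = 0.1, 0.9            # assumed learning rate and discount factor
REWARD, PENALTY = 1.0, -1.0        # assumed reward/penalty signal

def sphere(x):
    """Stand-in benchmark objective (minimize)."""
    return sum(v * v for v in x)

def q_select(q_row, eps=0.1):
    """Epsilon-greedy pick of a phase from one Q-table row."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q_row[a])

def optimize(dim=10, n_agents=20, iters=200, lo=-10.0, hi=10.0):
    agents = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_agents)]
    q_table = [[0.0, 0.0] for _ in range(n_agents)]  # one row per agent
    best = min(agents, key=sphere)
    for _ in range(iters):
        for i, x in enumerate(agents):
            a = q_select(q_table[i])
            if ACTIONS[a] == "explore":
                # diversification: move toward a random peer
                peer = random.choice(agents)
                cand = [xi + random.uniform(-1, 1) * (pi - xi)
                        for xi, pi in zip(x, peer)]
            else:
                # intensification: move toward the best-so-far solution
                cand = [xi + random.uniform(0, 1) * (bi - xi)
                        for xi, bi in zip(x, best)]
            cand = [min(hi, max(lo, v)) for v in cand]  # clamp to bounds
            # control mechanism: reward improvement, penalize stagnation
            r = REWARD if sphere(cand) < sphere(x) else PENALTY
            q_table[i][a] += ALPHA * (r + GAMMA * max(q_table[i])
                                      - q_table[i][a])
            if sphere(cand) < sphere(x):
                agents[i] = cand
                if sphere(cand) < sphere(best):
                    best = cand
    return best, sphere(best)

if __name__ == "__main__":
    solution, fitness = optimize()
    print("best fitness:", fitness)
```

Keeping one Q-table row per agent mirrors the abstract's description of Q-table values guiding each search agent's phase choice; a richer state space (e.g., conditioning on the iteration stage) would be a natural extension of this sketch.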