Found: 3 results (0.001 s)
A memetic animal migration optimizer for multimodal optimization

Taymaz Rahkar Farshi

Article | 2021 | SPRINGER HEIDELBERG

Unimodal optimization algorithms can find only one global optimum, while multimodal ones are able to detect all or most of the local/global optima in the problem space. Many practical scientific and engineering optimization problems have multiple optima to be located. A considerable number of approaches in the literature address unimodal problems; although multimodal optimization methods have not been studied as extensively, they have attracted an enormous amount of attention recently. However, most of them suffer from a common niching-parameter problem: the main difficulty faced by existing approaches is determining the proper niching radius, which requires prior knowledge of the problem space. This paper proposes a novel multimodal optimization scheme that avoids this dilemma because it does not require the niching parameter to be set in advance. The scheme is an extended version of the unimodal animal migration optimization (AMO) algorithm with the capability of finding multiple solutions. Like other multimodal optimization approaches, the proposed memetic animal migration optimizer (MAMO) requires specific modifications to locate multiple optima. The local neighborhood policy is adapted to multimodal search by utilizing Coulomb's law, which is also used to decide the movement direction of individuals: instead of moving toward two randomly chosen individuals, an individual moves toward two neighbors that are both close and sufficiently fit. Additionally, a further local search step is performed to improve exploitation. To investigate the performance of MAMO, comparisons are conducted with five existing multimodal optimization algorithms on nine benchmarks from the CEC 2013 competition. The experimental results reveal that MAMO succeeds in locating all or most of the local/global optima and outperforms the compared methods. The source code of the proposed MAMO algorithm is publicly available.
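The abstract only sketches the core modification, so the following Python snippet illustrates one plausible reading of the Coulomb's-law-based neighbor selection: fitness acts as a "charge", distance enters squared, and an individual moves toward the two neighbors exerting the strongest pull rather than two random ones. The function names and the exact force formula are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of the Coulomb-inspired neighbor choice described in the
# abstract: fitness plays the role of "charge", distance enters squared, and an
# individual moves toward the two neighbors with the strongest pull instead of
# two randomly chosen ones. Names and the force formula are assumptions.

def coulomb_attraction(fitness_i, fitness_j, pos_i, pos_j, eps=1e-12):
    # F ~ q_i * q_j / d^2, with fitness used as the charge surrogate
    d2 = np.sum((pos_i - pos_j) ** 2) + eps
    return (fitness_i * fitness_j) / d2

def move_toward_best_neighbors(i, positions, fitness, step=0.5, rng=None):
    rng = rng or np.random.default_rng()
    forces = [
        (coulomb_attraction(fitness[i], fitness[j], positions[i], positions[j]), j)
        for j in range(len(positions)) if j != i
    ]
    # pick the two neighbors that are both near and fit enough (strongest pull)
    (_, a), (_, b) = sorted(forces, reverse=True)[:2]
    direction = (positions[a] - positions[i]) + (positions[b] - positions[i])
    return positions[i] + step * rng.random() * direction
```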

A multi-modal bacterial foraging optimization algorithm

Taymaz Rahkar Farshi | Mohanna Orujpour

Article | 2021 | SPRINGER HEIDELBERG

In recent years, multi-modal optimization algorithms have attracted considerable attention, largely because many real-world problems have more than one solution. Multi-modal optimization algorithms are able to find multiple local/global optima (solutions), while unimodal optimization algorithms find only a single global optimum among the set of solutions. Niche-based approaches have been widely used for solving multi-modal problems, but they require a predefined niching parameter, and estimating its proper value is challenging without prior knowledge of the problem space. In this paper, a novel multi-modal optimization algorithm is proposed by extending the unimodal bacterial foraging optimization algorithm. The proposed multi-modal bacterial foraging optimization (MBFO) scheme does not require any additional parameter, including the niching parameter, to be determined in advance. Furthermore, its complexity is lower than that of the unimodal form because the elimination-dispersal step is excluded, as is any additional phase such as clustering or a local search algorithm. The algorithm is compared with six multi-modal optimization algorithms on nine commonly used multi-modal benchmark functions. The experimental results demonstrate that MBFO is useful in solving multi-modal optimization problems and outperforms the other methods.
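The abstract specifies what MBFO removes (the elimination-dispersal phase and any niching parameter) rather than how the retained steps work, so the sketch below only shows a generic tumble-and-swim chemotaxis step of the kind bacterial foraging optimization builds on. Function and parameter names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative tumble-and-swim (chemotaxis) step of a generic bacterial foraging
# optimizer, the unimodal scheme the abstract says MBFO extends. The paper's
# multimodal modifications are not detailed in the abstract; per the abstract,
# the elimination-dispersal phase is simply omitted. Names here are assumptions.

def chemotaxis_step(position, cost, objective, step_size=0.1, max_swims=4, rng=None):
    rng = rng or np.random.default_rng()
    # tumble: pick a random unit direction
    direction = rng.standard_normal(position.shape)
    direction /= np.linalg.norm(direction)
    # swim: keep moving while the objective keeps improving (minimization)
    for _ in range(max_swims):
        candidate = position + step_size * direction
        candidate_cost = objective(candidate)
        if candidate_cost < cost:
            position, cost = candidate, candidate_cost
        else:
            break
    return position, cost
```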

Battle royale optimizer for training multi-layer perceptron

Saeid Agahian | Taymaz Akan

Article | 2021 | SPRINGER HEIDELBERG

The artificial neural network (ANN) is one of the most successful tools in machine learning, and its success mostly depends on the architecture and the learning procedure. The multi-layer perceptron (MLP) is a popular form of ANN, and backpropagation is a well-known gradient-based approach for training it. Gradient-based search approaches have a low convergence rate and may get stuck in local minima, which can lead to performance degradation. Training an MLP amounts to minimizing the total network error, which can be treated as an optimization problem, and stochastic optimization algorithms have proven effective on such problems. Battle royale optimization (BRO) is a recently proposed population-based metaheuristic algorithm that can be applied to single-objective optimization over continuous problem spaces. The proposed method has been compared with backpropagation (the generalized delta learning rule) and six well-known optimization algorithms on ten classification benchmark datasets. Experiments confirm that, in terms of error rate, accuracy, and convergence, the proposed approach yields promising results and outperforms its competitors.
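To make the "training as optimization" framing concrete, the sketch below flattens an MLP's weights into a single vector and scores it by total classification error, the objective a population-based metaheuristic such as BRO would minimize. The random-search loop is only a placeholder for the real optimizer, and the layer sizes and names are assumptions.

```python
import numpy as np

# Sketch of MLP training as continuous optimization: weights live in one flat
# vector, the objective is the total classification error, and any population-
# based metaheuristic (BRO in the paper) can search that space. The random
# search below is a stand-in for the optimizer; sizes/names are assumptions.

def mlp_forward(weights, X, sizes=(4, 8, 3)):
    # unpack the flat weight vector into one hidden layer and one output layer
    n_in, n_hid, n_out = sizes
    w1_end = n_in * n_hid
    W1 = weights[:w1_end].reshape(n_in, n_hid)
    W2 = weights[w1_end:w1_end + n_hid * n_out].reshape(n_hid, n_out)
    return np.tanh(X @ W1) @ W2

def total_error(weights, X, y):
    # objective: fraction of misclassified samples
    preds = np.argmax(mlp_forward(weights, X), axis=1)
    return np.mean(preds != y)

def metaheuristic_train(X, y, dim, iters=200, pop=30, rng=None):
    # dim = n_in*n_hid + n_hid*n_out, e.g. 4*8 + 8*3 = 56 for the default sizes
    rng = rng or np.random.default_rng(0)
    best_w, best_e = None, np.inf
    for _ in range(iters):                      # placeholder search loop
        for w in rng.standard_normal((pop, dim)):
            e = total_error(w, X, y)
            if e < best_e:
                best_w, best_e = w, e
    return best_w, best_e
```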
