Evolutionary Computation

Kai Olav Ellefsen, Joost Huizinga, Jim Torresen
The structure and performance of neural networks are intimately connected, and evolutionary algorithms can be used to explore neural network structures optimally adapted to a given task. Guiding such neuroevolution with additional objectives related to network structure has been shown to improve performance in some cases, especially when modular neural networks are beneficial. However, apart from objectives aiming to make networks more modular, such structural objectives have not been widely explored. We propose two new structural objectives and test their ability to guide evolving neural networks on two problems which can benefit from decomposition into subtasks...
February 15, 2019: Evolutionary Computation
Bo Song, Victor O K Li
Infinite population models are important tools for studying population dynamics of evolutionary algorithms. They describe how the distributions of populations change between consecutive generations. In general, infinite population models are derived from Markov chains by exploiting symmetries between individuals in the population and analyzing the limit as the population size goes to infinity. In this paper, we study the theoretical foundations of infinite population models of evolutionary algorithms on continuous optimization problems...
February 5, 2019: Evolutionary Computation
Tobias Glasmachers
We establish global convergence of the (1+1) evolution strategy, i.e., convergence to a critical point independent of the initial state. More precisely, we show the existence of a critical limit point, using a suitable extension of the notion of a critical point to measurable functions. At its core, the analysis is based on a novel progress guarantee for elitist, rank-based evolutionary algorithms. By applying it to the (1+1) evolution strategy we are able to provide an accurate characterization of whether global convergence is guaranteed with full probability, or whether premature convergence is possible...
January 31, 2019: Evolutionary Computation
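As a rough illustration of the algorithm family analyzed above, here is a minimal Python sketch of a (1+1) evolution strategy with a simple multiplicative step-size rule. It is not the exact variant or analysis from the paper; the adaptation constants are illustrative.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iterations=200, seed=0):
    """Minimal (1+1) evolution strategy minimizing f, with a simple
    success-based step-size adaptation (illustrative constants)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iterations):
        # Offspring: parent plus isotropic Gaussian mutation.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:          # elitist, rank-based selection
            x, fx = y, fy
            sigma *= 1.5      # success: enlarge the step size
        else:
            sigma *= 0.82     # failure: shrink the step size
    return x, fx

# Sphere function as a toy objective.
best, value = one_plus_one_es(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

Because selection is elitist, the best-so-far value never increases, which is the property the progress guarantee in the paper builds on.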
Sobia Saleem, Marcus Gallagher, Ian Wood
Exploratory Landscape Analysis provides sample-based methods to calculate features of black box optimization problems in a quantitative and measurable way. Many problem features have been proposed in the literature in an attempt to provide insights into the structure of problem landscapes and to use in selecting an effective algorithm for a given optimization problem. While there has been some success, evaluating the utility of problem features in practice presents some significant challenges. Machine learning models have been employed as part of the evaluation process, but they may require additional information about the problems as well as having their own hyperparameters, biases and experimental variability...
December 28, 2018: Evolutionary Computation
Jerry Swan, Steven Adriaensen, Adam D Barwell, Kevin Hammond, David R White
Metaheuristics are an effective and diverse class of optimization algorithms: a means of obtaining solutions of acceptable quality for otherwise intractable problems. The selection, construction, and configuration of a metaheuristic for a given problem has historically been a manually intensive process based on experience, experimentation, and reasoning by metaphor. More recently, there has been interest in automating the process of algorithm configuration. In this paper, we identify shared state as an inhibitor of progress for such automation...
December 17, 2018: Evolutionary Computation
Lukáš Bajer, Zbyněk Pitra, Jakub Repický, Martin Holeňa
This article deals with Gaussian process surrogate models for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES): several existing models and two recently proposed by the authors are presented. The work discusses different variants of surrogate model exploitation and focuses on the benefits of employing the Gaussian process uncertainty prediction, especially during the selection of points for the evaluation with a surrogate model. The experimental part of the paper thoroughly compares and evaluates the five presented Gaussian process surrogate variants and six other state-of-the-art optimizers on the COCO benchmarks...
December 12, 2018: Evolutionary Computation
Dunwei Gong, Yiping Liu, Gary G Yen
Pareto-based multi-objective evolutionary algorithms experience grand challenges in solving many-objective optimization problems due to their inability to maintain both convergence and diversity in a high-dimensional objective space. Existing approaches usually modify the selection criteria to overcome this issue. Different from them, we propose a novel meta-objective (MeO) approach that transforms the many-objective optimization problems into new optimization problems that are easier to solve by the Pareto-based algorithms...
November 26, 2018: Evolutionary Computation
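Pareto-based selection, which the MeO approach above builds on, rests on the dominance relation between objective vectors. A minimal Python sketch (minimization; not code from the paper):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 4), (2, 2), (3, 1), (4, 4)])
```

With many objectives, almost no pair of random vectors is comparable under this relation, which is one reason Pareto-based algorithms lose selection pressure in high-dimensional objective spaces.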
Pascal Kerschke, Holger H Hoos, Frank Neumann, Heike Trautmann
It has long been observed that for practically any computational problem that has been intensely studied, different instances are best solved using different algorithms. This is particularly pronounced for computationally hard problems, where in most cases, no single algorithm defines the state of the art; instead, there is a set of algorithms with complementary strengths. This performance complementarity can be exploited in various ways, one of which is based on the idea of selecting, from a set of given algorithms, the one expected to perform best for each problem instance to be solved...
November 26, 2018: Evolutionary Computation
Simon Wessing, Manuel López-Ibáñez
The configuration of algorithms is a laborious and difficult process. Thus, it is advisable to automate this task by using appropriate automatic configuration methods. The irace method is among the most widely used in the literature. By default, irace initializes its search process via uniform sampling of algorithm configurations. Although better initialization methods exist in the literature, the mixed-variable (numerical and categorical) nature of typical parameter spaces and the presence of conditional parameters make most of the methods not applicable in practice...
November 26, 2018: Evolutionary Computation
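The uniform initialization that irace performs by default can be sketched for a mixed-variable parameter space as follows. This is an illustrative toy, not irace's implementation, and conditional parameters, which the abstract highlights as a complication, are deliberately omitted.

```python
import random

def sample_configuration(space, rng):
    """Uniformly sample one configuration from a mixed-variable space:
    numerical parameters are drawn from their (low, high) range,
    categorical parameters from their list of values."""
    config = {}
    for name, spec in space.items():
        if isinstance(spec, tuple):            # numerical: (low, high)
            config[name] = rng.uniform(*spec)
        else:                                  # categorical: list of values
            config[name] = rng.choice(spec)
    return config

# Hypothetical parameter space for an evolutionary algorithm.
space = {"population_size": (10, 200), "crossover": ["one_point", "uniform"]}
conf = sample_configuration(space, random.Random(1))
```

The mix of types is what makes most classical space-filling initialization designs (built for purely numerical spaces) inapplicable here.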
Shauharda Khadka, Jen Jen Chung, Kagan Tumer
We present Modular Memory Units (MMUs), a new class of memory-augmented neural network. MMU builds on the gated neural architecture of Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTM) units, to incorporate an external memory block, similar to a Neural Turing Machine (NTM). MMU interacts with the memory block using independent read and write gates that serve to decouple the memory from the central feedforward operation. This allows for regimented memory access and update, giving our network the ability to choose when to read from memory, update it, or simply ignore it...
November 8, 2018: Evolutionary Computation
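The independent read/write gating described above can be illustrated with a toy scalar example. This is only in the spirit of the MMU: gate values are supplied directly rather than produced by a trained network, and the real architecture operates on vectors.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gated_memory_step(memory, candidate, read_gate, write_gate):
    """Toy illustration of decoupled read/write gating:
    - read_gate far below 0: the memory is ignored by the computation;
    - write_gate far below 0: the memory is left unchanged."""
    read_out = sigmoid(read_gate) * memory                       # what the network sees
    new_memory = memory + sigmoid(write_gate) * (candidate - memory)  # gated update
    return read_out, new_memory

# Ignore the stored value while fully overwriting it with the candidate.
read_out, new_mem = gated_memory_step(5.0, 9.0, read_gate=-100.0, write_gate=100.0)
```

Because the two gates are independent, the network can read without writing, write without reading, or do neither, which is the decoupling the abstract emphasizes.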
Aymeric Blot, Marie-Éléonore Kessaci, Laetitia Jourdan, Holger H Hoos
Automatic algorithm configuration (AAC) is becoming a key ingredient in the design of high-performance solvers for challenging optimisation problems. However, most existing work on AAC deals with configuration procedures that optimise a single performance metric of a given, single-objective algorithm. Of course, these configurators can also be used to optimise the performance of multi-objective algorithms, as measured by a single performance indicator. In this work, we demonstrate that better results can be obtained by using a native, multi-objective algorithm configuration procedure...
November 8, 2018: Evolutionary Computation
Sobia Saleem, Marcus Gallagher, Ian Wood
An important challenge in black-box optimization is to be able to understand the relative performance of different algorithms on problem instances. This challenge has motivated research in exploratory landscape analysis and algorithm selection, leading to a number of frameworks for analysis. However, these procedures often involve significant assumptions, or rely on information not typically available. In this paper we propose a new, model-based framework for the characterization of black-box optimization problems using Gaussian Process regression...
October 26, 2018: Evolutionary Computation
Khulood Alyahya, Jonathan E Rowe
This paper presents an exploratory landscape analysis of three NP-hard combinatorial optimisation problems: the number partitioning problem, the binary knapsack problem, and the quadratic binary knapsack problem. In the paper, we examine empirically a number of fitness landscape properties of randomly generated instances of these problems. We believe that the studied properties give insight into the structure of the problem landscape and can be representative of the problem difficulty, in particular with respect to local search algorithms...
October 26, 2018: Evolutionary Computation
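For readers unfamiliar with the first of the three problems above, a minimal sketch of the number partitioning objective and the one-bit-flip neighbourhood typically used by local search on its landscape (illustrative, not code from the paper):

```python
def partition_discrepancy(weights, assignment):
    """Objective of the number partitioning problem: absolute difference
    between the sums of the two subsets (0 means a perfect partition).
    `assignment[i]` is 1 or 0 depending on the subset of item i."""
    signed = sum(w if a else -w for w, a in zip(weights, assignment))
    return abs(signed)

def flip_neighbors(assignment):
    """One-bit-flip neighbourhood: move one item to the other subset."""
    for i in range(len(assignment)):
        n = list(assignment)
        n[i] = 1 - n[i]
        yield tuple(n)

weights = [4, 5, 6, 7, 8]
perfect = (1, 1, 1, 0, 0)          # 4+5+6 = 7+8, a perfect partition
d = partition_discrepancy(weights, perfect)
```

Landscape properties such as the number of local optima are then defined with respect to this neighbourhood.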
Pascal Kerschke, Heike Trautmann
In this paper, we build upon previous work on designing informative and efficient Exploratory Landscape Analysis features for characterizing problems' landscapes and show their effectiveness in automatically constructing algorithm selection models in continuous black-box optimization problems. Focussing on algorithm performance results of the COCO platform of several years, we construct a representative set of high-performing complementary solvers and present an algorithm selection model that, compared to the portfolio's single best solver, on average requires less than half of the resources for solving a given problem...
October 26, 2018: Evolutionary Computation
Benoît Groz, Silviu Maniu
The hypervolume subset selection problem (HSSP) aims at approximating a set of n multidimensional points in R^d with an optimal subset of a given size. The size k of the subset is a parameter of the problem, and an approximation is considered best when it maximizes the hypervolume indicator. This problem has proved popular in recent years as a procedure for multiobjective evolutionary algorithms. Efficient algorithms are known for planar points (d = 2), but there are hardly any results on HSSP in larger dimensions (d ≥ 3)...
October 26, 2018: Evolutionary Computation
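The planar (d = 2) case mentioned above admits a simple sweep: sorting the points by one objective lets the dominated area be accumulated strip by strip. A hedged Python sketch of that computation, together with a greedy HSSP heuristic (illustrative only; not the algorithms from the paper):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-d points under minimization,
    relative to reference point `ref` (all points must dominate `ref`)."""
    pts = sorted(set(points))           # sweep by the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                  # only non-dominated points contribute
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def greedy_hssp(points, k, ref):
    """Greedy HSSP heuristic: repeatedly add the point whose inclusion
    yields the largest hypervolume."""
    chosen = []
    for _ in range(k):
        best = max((p for p in points if p not in chosen),
                   key=lambda p: hypervolume_2d(chosen + [p], ref))
        chosen.append(best)
    return chosen

chosen = greedy_hssp([(1, 3), (2, 2), (3, 1), (4, 4)], k=2, ref=(5, 5))
```

The sweep makes each 2-d hypervolume evaluation O(n log n); the hardness the abstract refers to kicks in for d ≥ 3, where no comparably efficient exact structure is known.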
P Kerschke, H Wang, M Preuss, C Grimme, A H Deutz, H Trautmann, M T M Emmerich
We continue recent work on the definition of multimodality in multi-objective optimization (MO) and the introduction of a test-bed for multimodal MO problems. This goes beyond well-known diversity maintenance approaches but instead focuses on the landscape topology induced by the objective functions. More general multimodal MO problems are considered by allowing ellipsoid contours for single-objective subproblems. An experimental analysis compares two MO algorithms, one that explicitly relies on hypervolume gradient approximation, and one that is based on local search, both on a selection of generated example problems...
September 28, 2018: Evolutionary Computation
Mojgan Pourhassan, Frank Neumann
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which metaheuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a cluster-based approach and a node-based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this article, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective...
June 22, 2018: Evolutionary Computation
Michaela Drahosova, Lukas Sekanina, Michal Wiglasz
In genetic programming (GP), computer programs are often coevolved with training data subsets that are known as fitness predictors. In order to maximize performance of GP, it is important to find the most suitable parameters of coevolution, particularly the fitness predictor size. This is a very time-consuming process as the predictor size depends on a given application, and many experiments have to be performed to find its suitable size. A new method is proposed which enables us to automatically adapt the predictor and its size for a given problem and thus to reduce not only the time of evolution, but also the time needed to tune the evolutionary algorithm...
June 4, 2018: Evolutionary Computation
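The fitness predictor idea above, evaluating a program on a small subset of the training data instead of the full set, can be sketched in a few lines of Python. All names here are illustrative, and the adaptive sizing that the paper proposes is not shown.

```python
import random

def predicted_fitness(program, training_set, predictor_indices):
    """Estimate a program's error using only the training cases selected
    by the fitness predictor (a list of indices into the training set)."""
    errors = []
    for i in predictor_indices:
        x, y = training_set[i]
        errors.append(abs(program(x) - y))
    return sum(errors) / len(errors)

# Toy symbolic-regression data for the target 2x + 1.
data = [(x, 2 * x + 1) for x in range(100)]
predictor = random.Random(0).sample(range(100), 8)   # 8 of 100 cases
estimate = predicted_fitness(lambda x: 2 * x + 1, data, predictor)
```

In coevolution, the predictor (here a fixed random sample) would itself evolve to remain a good proxy for the full training set, and its size is the parameter the paper adapts automatically.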
Su Nguyen, Yi Mei, Bing Xue, Mengjie Zhang
Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible and automatically discovered rules are competitive or outperform existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems...
June 4, 2018: Evolutionary Computation
Leticia Hernando, Alexander Mendiburu, Jose A Lozano
Solving combinatorial optimization problems efficiently requires the development of algorithms that consider the specific properties of the problems. In this sense, local search algorithms are designed over a neighborhood structure that partially accounts for these properties. Considering a neighborhood, the space is usually interpreted as a natural landscape, with valleys and mountains. Under this perception, it is commonly believed that, if maximizing, the solutions located in the slopes of the same mountain belong to the same attraction basin, with the peaks of the mountains being the local optima...
May 22, 2018: Evolutionary Computation
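The notion of an attraction basin used above can be made concrete: run a local search from every solution and record which local optimum it reaches. A minimal sketch on a toy one-dimensional landscape (illustrative; the paper works on combinatorial spaces):

```python
def hill_climb(x, fitness, neighbors):
    """Best-improvement local search (maximization): move to the best
    neighbor while it strictly improves fitness. The local optimum
    reached identifies the attraction basin of the starting point."""
    while True:
        best = max(neighbors(x), key=fitness, default=x)
        if fitness(best) <= fitness(x):
            return x          # x is a local optimum
        x = best

# Toy landscape on {0,...,10} with peaks at x=2 and x=8.
fitness = lambda x: -(x - 2) ** 2 if x < 5 else -(x - 8) ** 2 + 1
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
basins = {x: hill_climb(x, fitness, neighbors) for x in range(11)}
```

Here every starting point maps to one of the two peaks, which matches the "slopes of the same mountain" intuition the abstract questions for combinatorial landscapes.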