
Simplified Swarm Optimization for Bi-Objective Active Reliability Redundancy Allocation Problems

The reliability redundancy allocation problem (RRAP) is a well-known tool in system design, development, and management. The RRAP is typically modeled as a nonlinear mixed-integer NP-hard problem. To maximize the system reliability, the integer variables (component active redundancy levels) and real variables (component reliabilities) must be determined such that the cost limit and several nonlinear constraints are satisfied. In this study, a bi-objective RRAP is formulated by converting the cost constraint into a second objective, because practical applications must balance reliability against cost for the entire system. To solve the proposed problem, a new simplified swarm optimization (SSO) is developed that combines a penalty function, a real one-type solution structure, a number-based self-adaptive update mechanism, a constrained nondominated-solution selection, and a new pBest replacement policy, with these structures selected via a full-factorial design, to find Pareto solutions efficiently and effectively. Experimental results on four benchmark problems involving the bi-objective active RRAP show that the proposed SSO outperforms several state-of-the-art metaheuristics, e.g., the nondominated sorting genetic algorithm II (NSGA-II) and multi-objective particle swarm optimization (MOPSO).
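The Pareto machinery underlying both the constrained nondominated-solution selection and the NSGA-II/MOPSO baselines can be sketched minimally; the candidate values below are illustrative, and reliability is negated so that both objectives are minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Return indices of the non-dominated (Pareto) points."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Illustrative candidates: objectives are (-reliability, cost), both minimized.
candidates = [(-0.90, 120.0), (-0.95, 150.0), (-0.85, 200.0), (-0.95, 140.0)]
front = nondominated_front(candidates)
```

A quadratic scan like this suffices for small populations; NSGA-II uses a faster sorting variant of the same dominance test.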

Research and development of MolAICal for drug design via deep learning and classical programming

Deep learning methods have permeated the field of computer-aided drug design. A deep learning generative model and classical algorithms can be used together for three-dimensional (3D) drug design in the 3D pocket of a receptor. Here, three aspects of MolAICal are illustrated for drug design. In the first part, MolAICal uses a genetic algorithm, the Vinardo score, and a deep learning generative model trained with a generative adversarial network (GAN) for drug design. In the second part, the deep learning generative model is trained on drug-like molecules from drug databases such as the ZINC database; MolAICal then invokes the generative model and molecular docking for automated virtual drug screening. In the third part, useful drug tools are included for calculating relevant properties such as pan-assay interference compounds (PAINS), Lipinski's rule of five, and synthetic accessibility (SA). In addition, structural similarity search, quantitative structure-activity relationship (QSAR) modeling, and related methods are embedded for calculating drug properties in MolAICal. MolAICal will continue to optimize and develop its current and new modules for drug design. MolAICal can help scientists, pharmacists, and biologists design rational 3D drugs in the receptor pocket through the deep learning model and classical programming. MolAICal is free of charge for any academic and educational purposes, and it can be downloaded from the website this https URL.

Fairness-Oriented Semi-Chaotic Genetic Algorithm-Based Channel Assignment Technique for Nodes Starvation Problem in Wireless Mesh Network

Multi-radio multi-channel Wireless Mesh Networks (WMNs) have emerged as scalable, reliable, and agile wireless networks that support many innovative technologies such as the Internet of Things (IoT) and vehicular networks. Due to the limited number of orthogonal channels, interference between channels adversely affects the fair distribution of bandwidth among mesh clients, causing node starvation in the form of insufficient bandwidth, which impedes the adoption of WMNs as an efficient access technology. A fair channel assignment is therefore crucial for mesh clients to utilize the available resources. However, the node starvation problem caused by unfair channel distribution has been largely overlooked during channel assignment in the extant research. Instead, existing channel assignment algorithms either reduce the total network interference or maximize the total network throughput, which neither guarantees a fair distribution of the channels nor eliminates node starvation. To this end, this paper proposes the Fairness-Oriented Semi-Chaotic Genetic Algorithm-Based Channel Assignment Technique (FA-SCGA-CAA) for the node starvation problem in WMNs. FA-SCGA-CAA optimizes fairness based on multiple criteria using a modified version of the Genetic Algorithm (GA). The modification includes a semi-chaotic technique for creating the primary chromosome with powerful genes. Such a chromosome is used to create a strong population that directs the search towards the global optimum in an effective and efficient way. The outcome is a nonlinear fairness-oriented fitness function that aims to maximize link fairness while minimizing link interference. Comparison with related work shows that the proposed FA-SCGA-CAA reduced potential node starvation by 22% and improved network capacity utilization by 23%.

Nonlinear dynamic analysis of an epidemiological model for COVID-19 including public behavior and government action

This paper is concerned with nonlinear modeling and analysis of the COVID-19 pandemic currently ravaging the planet. There are two objectives: to arrive at an appropriate model that captures the collected data faithfully, and to use that model as a basis to explore the nonlinear behavior. We use a nonlinear SEIR (Susceptible, Exposed, Infectious & Removed) transmission model with added behavioral and government policy dynamics. We develop a genetic algorithm technique to identify key model parameters employing COVID-19 data from South Korea. Stability, bifurcations, and dynamic behavior are analyzed. Parametric analysis reveals conditions for sustained epidemic equilibria to occur. This work points to the value of nonlinear dynamic analysis in pandemic modeling and demonstrates the dramatic influence of social and government behavior on disease dynamics.
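The parameter-identification step can be sketched as follows: a seeded real-coded GA fits the transmission parameters of a plain Euler-integrated SEIR model to an observed infectious curve. This is a minimal illustration, not the authors' model (which adds behavioral and government-policy dynamics); the parameter ranges, population size, and GA settings are illustrative assumptions.

```python
import random

def seir(beta, sigma, gamma, days=60, n=51_000_000, e0=100):
    """Euler-integrated SEIR model; returns the daily infectious counts.
    n and e0 are illustrative (population and initial exposed)."""
    s, e, i, r = n - e0, float(e0), 0.0, 0.0
    out = []
    for _ in range(days):
        new_e = beta * s * i / n
        s, e, i, r = (s - new_e,
                      e + new_e - sigma * e,
                      i + sigma * e - gamma * i,
                      r + gamma * i)
        out.append(i)
    return out

def sse(params, data):
    """Sum of squared errors between the model curve and the data."""
    return sum((m - d) ** 2 for m, d in zip(seir(*params), data))

def fit_ga(data, rng, pop_size=40, gens=60):
    """Real-coded GA: elitist truncation, uniform crossover, Gaussian mutation."""
    pop = [[rng.uniform(0.05, 1.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=lambda p: sse(p, data))[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 0.3:                     # mutate one parameter
                k = rng.randrange(3)
                child[k] = min(1.0, max(0.01, child[k] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: sse(p, data))

rng = random.Random(0)
true_params = (0.5, 0.2, 0.1)      # hypothetical beta, sigma, gamma
observed = seir(*true_params)       # synthetic "data" for the demo
best = fit_ga(observed, rng)
```

In practice `observed` would be the reported case curve, and the fitted parameters would then feed the stability and bifurcation analysis.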

Vulnerability Coverage as an Adequacy Testing Criterion

Mainstream software applications and tools are configurable platforms with an enormous number of parameters and associated values. Certain settings, and possible interactions between these parameters, may harden (or soften) the security and robustness of these applications against known vulnerabilities. However, the large number of vulnerabilities reported for these tools makes exhaustively testing them against those vulnerabilities infeasible. As an instance of the general software testing problem, the research question to address is whether the system under test is robust and secure against these vulnerabilities. This paper introduces the idea of ``vulnerability coverage,'' a concept for adequately testing a given application against certain classes of vulnerabilities, as reported by the National Vulnerability Database (NVD). The driving idea is to utilize the Common Vulnerability Scoring System (CVSS) as a means to measure the fitness of test inputs generated by evolutionary algorithms, then to identify, through pattern matching, vulnerabilities that match the generated vulnerability vectors, and finally to test the system under test for those identified vulnerabilities. We report the performance of two evolutionary algorithms (i.e., Genetic Algorithms and Particle Swarm Optimization) in generating the vulnerability pattern vectors.
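The driving idea of a CVSS-guided fitness can be sketched as follows, using the published CVSS v2 base-score equations as the fitness of a candidate vulnerability vector. The GA configuration (population size, operators) is illustrative, not the paper's setup:

```python
import random

# CVSS v2 base-metric values (AV, AC, Au, C, I, A), per the v2 specification.
METRICS = [
    {"L": 0.395, "A": 0.646, "N": 1.0},    # Access Vector
    {"H": 0.35, "M": 0.61, "L": 0.71},     # Access Complexity
    {"M": 0.45, "S": 0.56, "N": 0.704},    # Authentication
    {"N": 0.0, "P": 0.275, "C": 0.660},    # Confidentiality impact
    {"N": 0.0, "P": 0.275, "C": 0.660},    # Integrity impact
    {"N": 0.0, "P": 0.275, "C": 0.660},    # Availability impact
]

def base_score(genes):
    """CVSS v2 base score of a metric vector such as ('N','L','N','C','C','C')."""
    av, ac, au, c, i, a = (table[g] for table, g in zip(METRICS, genes))
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

def evolve(rng, pop_size=30, gens=60):
    """Elitist GA maximizing severity: uniform crossover, single-metric mutation."""
    keys = [list(table) for table in METRICS]
    pop = [[rng.choice(k) for k in keys] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=base_score, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 0.5:
                g = rng.randrange(len(keys))
                child[g] = rng.choice(keys[g])
            children.append(child)
        pop = elite + children
    return max(pop, key=base_score)

best = evolve(random.Random(1))
```

With elitist selection the search quickly concentrates on high-severity vectors (network access, low complexity, no authentication, complete impacts), which can then be pattern-matched against NVD entries.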

Vulnerability Coverage for Secure Configuration

We present a novel idea on adequacy testing called ``vulnerability coverage.'' The introduced coverage measure examines the underlying software for the presence of certain classes of vulnerabilities often found on the National Vulnerability Database (NVD) website. The thoroughness of the test input generation procedure is achieved through the adaptation of evolutionary algorithms, namely Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). The methodology utilizes the Common Vulnerability Scoring System (CVSS), a free and open industry standard for assessing the severity of computer system security vulnerabilities, as a fitness measure for test input generation. The outcomes of these evolutionary algorithms are then evaluated in order to identify the vulnerabilities that match a class of vulnerability patterns for testing purposes.

Calibration of the von Wolffersdorff model using Genetic Algorithms

This article proposes an optimization framework, based on Genetic Algorithms (GA), to calibrate the constitutive law of von Wolffersdorff. This constitutive law, known as Sand Hypoplasticity (SH), allows for robust and accurate modeling of soil behavior but requires a complex calibration involving eight parameters. The proposed optimization can automatically fit these parameters from the results of an oedometric and a triaxial drained compression test, by combining the GA with a numerical solver that integrates the SH law under the test conditions. Repeating the same calibration several times exploits the stochastic nature of the optimizer to quantify the uncertainty of the calibration parameters and to study their relative importance in the model prediction. After validating the numerical solver against the ExCaliber-Laboratory software from the SoilModels website, the GA calibration is tested on a synthetic dataset to analyze the convergence and the statistics of the results. In particular, a correlation analysis reveals that two pairs of the eight model parameters are strongly correlated. Finally, the calibration procedure is tested on the results from von Wolffersdorff, 1996, and Herle & Gudehus, 1999, on the Hochstetten sand. The model parameters identified by the Genetic Algorithm optimization improve the matching with the experimental data and hence lead to a better calibration.

Deep learning mediated single time-point image-based prediction of embryo developmental outcome at the cleavage stage

In conventional clinical in-vitro fertilization practices, embryos are transferred either at the cleavage or the blastocyst stage of development. Cleavage stage transfers are particularly beneficial for patients with relatively poor prognosis and at fertility centers in resource-limited settings, where there is a higher chance of developmental failure in embryos in-vitro. However, one of the major limitations of embryo selection at the cleavage stage is the very low number of manually discernible features available to predict developmental outcomes. Although time-lapse imaging systems have been proposed as possible solutions, they are cost-prohibitive, require bulky and expensive hardware, and are labor-intensive. Advances in convolutional neural networks (CNNs) have been utilized to provide accurate classifications across many medical and non-medical object categories. Here, we report an automated system for classification and selection of human embryos at the cleavage stage using a trained CNN combined with a genetic algorithm. The system selected the cleavage stage embryos at 70 hours post-insemination (hpi) that ultimately developed into top-quality blastocysts with 64% accuracy, outperforming embryologists in identifying embryos with the highest developmental potential. Such systems can have a significant impact on IVF procedures by empowering embryologists for accurate and consistent embryo assessment in both resource-poor and resource-rich settings.

Metastability Triggered Reactivity in Clusters at Realistic Conditions: A Case Study of N-doped (TiO$_2$)$_n$ for Photocatalysis

Here we report a strategy, taking a prototypical model system for photocatalysis (viz. N-doped (TiO$_2$)$_n$ clusters), to accurately determine low-energy metastable structures that can play a major role through enhanced catalytic reactivity. Computational design of specific metastable photocatalysts with enhanced activity has never been easy due to the abundance of isomers on the potential energy surface. It requires fixing various parameters, viz. (i) favorable formation energy, (ii) low fundamental gap, (iii) low excitation energy, and (iv) high vertical electron affinity (VEA) and low vertical ionization potential (VIP). By integrating several first-principles-based methodologies, we validate that considering the global minimum structure alone can severely underestimate the activity. As a first step, we have used a suite of genetic algorithms [viz. searching for clusters with the conventional minimum total energy ((GA)$_\textrm{E}$); searching for clusters with a specific property, i.e., high VEA ((GA)$_\textrm{P}^{\textrm{EA}}$) or low VIP ((GA)$_\textrm{P}^{\textrm{IP}}$)] to model the N-doped (TiO$_2$)$_n$ clusters. Following this, we have determined their free energies using ab initio thermodynamics to confirm that the metastable structures are not too far from the global minima. By analyzing a large dataset, we find that substitutional N ((N)$_\textrm{O}$) prefers to reside at a highly coordinated oxygen site to maximize its coordination, whereas N-interstitial ((NO)$_\textrm{O}$) and split-interstitial ((N$_2)_\textrm{O}$) defects favor dangling oxygen sites. Interestingly, we notice that each type of defect (viz. substitution, interstitials) substantially reduces the fundamental gap and excitation energy. However, (NO)$_\textrm{O}$- and (N$_2)_\textrm{O}$-doped clusters are potential candidates for overall water splitting, whereas N$_\textrm{O}$ is congenial only for the oxygen evolution reaction.

Testing Swampland Conjectures with Machine Learning

We consider Type IIB compactifications on an isotropic torus $T^6$ threaded by geometric and non-geometric fluxes. For this particular setup we apply supervised machine learning techniques, namely an artificial neural network coupled to a genetic algorithm, to obtain more than sixty thousand flux configurations yielding a scalar potential with at least one critical point. We observe that stable AdS vacua with large moduli masses and small vacuum energy, as well as unstable dS vacua with small tachyonic mass and large energy, are absent, in accordance with the Refined de Sitter Conjecture. Moreover, by considering a hierarchy among fluxes, we observe that perturbative solutions with small values of the vacuum energy and moduli masses are favored, as well as scenarios in which the lightest modulus mass is much greater than the corresponding AdS vacuum scale. Finally, we apply some results from Random Matrix Theory to conclude that the most probable mass spectrum derived from this string setup is one satisfying the Refined de Sitter and AdS scale conjectures.

A supervised learning approach involving active subspaces for an efficient genetic algorithm in high-dimensional optimization problems

In this work, we present an extension of the genetic algorithm (GA) which exploits the active subspaces (AS) property to evolve the individuals in a lower-dimensional space. In many cases, GA in fact requires more function evaluations than other optimization methods to converge to the optimum. Thus, complex and high-dimensional functions may be intractable for the standard algorithm. To address this issue, we propose to linearly map the input parameter space of the original function onto its AS before the evolution, performing the mutation and mating processes in a lower-dimensional space. In this contribution, we describe the novel method, called ASGA, presenting differences and similarities with the standard GA. We test the proposed method on n-dimensional benchmark functions -- Rosenbrock, Ackley, Bohachevsky, Rastrigin, Schaffer N. 7, and Zakharov -- and finally apply it to an aeronautical shape optimization problem.
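The central idea can be sketched under simplifying assumptions: the active subspace is estimated from sampled gradients, the GA evolves individuals in the reduced coordinates, and candidates are mapped back to the full space for evaluation. Here the back-mapping simply re-embeds the reduced point in the estimated subspace (a simplification of ASGA's treatment), and the test function, with variability confined to two directions, is an illustrative stand-in:

```python
import numpy as np

def f(x):
    """10-D test function whose variability lies in a 2-D subspace."""
    return (A @ x) ** 2 + (B @ x - 1.0) ** 2

def grad_f(x):
    return 2 * (A @ x) * A + 2 * (B @ x - 1.0) * B

rng = np.random.default_rng(0)
dim, k = 10, 2
A, B = rng.standard_normal(dim), rng.standard_normal(dim)

# 1) Estimate the active subspace from the gradient covariance matrix.
X = rng.uniform(-1, 1, size=(200, dim))
grads = np.array([grad_f(x) for x in X])
C = grads.T @ grads / len(grads)
eigvals, eigvecs = np.linalg.eigh(C)
W1 = eigvecs[:, -k:]                       # top-k eigenvectors (columns)

# 2) Evolve individuals in the k-D reduced space, evaluating f via back-mapping.
def ga_in_subspace(gens=80, pop_size=30):
    pop = rng.uniform(-2, 2, size=(pop_size, k))
    for _ in range(gens):
        fitness = np.array([f(W1 @ y) for y in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]
        pairs = elite[rng.integers(0, len(elite), size=(pop_size - len(elite), 2))]
        mask = rng.random((len(pairs), k)) < 0.5       # uniform crossover
        children = np.where(mask, pairs[:, 0], pairs[:, 1])
        children = children + rng.normal(0, 0.1, size=children.shape)  # mutation
        pop = np.vstack([elite, children])
    best = min(pop, key=lambda y: f(W1 @ y))
    return W1 @ best, f(W1 @ best)

x_best, f_best = ga_in_subspace()
```

Crossover and mutation act on 2-D vectors instead of 10-D ones, which is exactly the dimensionality saving the method is after.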

Data-driven Simulation and Optimization for Covid-19 Exit Strategies

The rapid spread of the coronavirus SARS-CoV-2 is a major challenge that has led almost all governments worldwide to take drastic measures in response to the tragedy. Chief among those measures is the massive lockdown of entire countries and cities, which beyond its global economic impact has created deep social and psychological tensions within populations. While the adopted mitigation measures (including the lockdown) have generally proven useful, policymakers now face a critical question: how and when to lift the mitigation measures? A carefully planned exit strategy is indeed necessary to recover from the pandemic without risking a new outbreak. Classically, exit strategies rely on mathematical modeling to predict the effect of public health interventions. Such models are unfortunately known to be sensitive to some key parameters, which are usually set based on this http URL. In this paper, we propose to augment epidemiological forecasting with actual data-driven models that learn to fine-tune predictions for different contexts (e.g., per country). We have therefore built a pandemic simulation and forecasting toolkit that combines a deep learning estimation of the epidemiological parameters of the disease, in order to predict cases and deaths, with a genetic algorithm component searching for optimal trade-offs/policies between the constraints and objectives set by decision-makers. Replaying pandemic evolution in various countries, we show experimentally that our approach yields predictions with much lower error rates than pure epidemiological models in 75% of the cases, and achieves a 95% R2 score when the learning is transferred and tested on unseen countries. When used for forecasting, this approach provides actionable insights into the impact of individual measures and strategies.

GA-based mD-VcMD: A genetic-algorithm-based method for multi-dimensional virtual-system coupled molecular dynamics

We previously introduced a conformational sampling method, multi-dimensional virtual-system coupled molecular dynamics (mD-VcMD), to enhance the conformational sampling of a biomolecular system by computer simulations. Here, we present a new sampling method, subzone-based mD-VcMD, as an extension of mD-VcMD. We then further extend the subzone-based method using a genetic algorithm (GA), naming it GA-based mD-VcMD. Because the conformational space of a biomolecular system is vast, a single simulation cannot sample the conformational space thoroughly; iterative simulations are therefore performed to increase the sampled region gradually. The new methods have the following advantages: (1) they are free from a production run, i.e., all snapshots from all iterations can be used for analyses; (2) they are free from fine-tuning of a weight function (probability distribution function or potential of mean force); (3) a simple procedure is available to assign a thermodynamic weight to the sampled snapshots even though the weight function is not used to drive the iterative simulations, so a canonical ensemble (i.e., a thermally equilibrated ensemble) is generated from the resultant snapshots; and (4) if a poorly sampled region emerges during sampling, selective sampling can be performed focusing on that region without distorting the proportions of the canonical ensemble. A free-energy landscape of the biomolecular system is obtainable from the canonical ensemble.

On the comparison of optimization algorithms for the random-field Potts model

For many systems with quenched disorder the study of ground states can crucially contribute to a thorough understanding of the physics at play, be it for the critical behavior if that is governed by a zero-temperature fixed point or for uncovering properties of the ordered phase. While ground states can in principle be computed using general-purpose optimization algorithms such as simulated annealing or genetic algorithms, it is often much more efficient to use exact or approximate techniques specifically tailored to the problem at hand. For certain systems with discrete degrees of freedom, such as the random-field Ising model, there are polynomial-time methods to compute exact ground states. But as soon as the number of states increases beyond two, as in the random-field Potts model, the problem becomes NP-hard and one cannot hope to find exact ground states for relevant system sizes. Here, we compare a number of approximate techniques for this problem and evaluate their performance.

Benchmarking a $(μ+λ)$ Genetic Algorithm with Configurable Crossover Probability

We investigate a family of $(\mu+\lambda)$ Genetic Algorithms (GAs) which creates offspring either by mutation or by recombining two randomly chosen parents. By scaling the crossover probability, we can thus interpolate from a fully mutation-only algorithm towards a fully crossover-based GA. We analyze, by empirical means, how the performance depends on the interplay of population size and crossover probability. Our comparison on 25 pseudo-Boolean optimization problems reveals an advantage of crossover-based configurations on several easy optimization tasks, whereas the picture for more complex optimization problems is rather mixed. Moreover, we observe that the ``fast'' mutation scheme with its power-law distributed mutation strengths outperforms standard bit mutation on complex optimization tasks when it is combined with crossover, but performs worse in the absence of crossover. We then take a closer look at the surprisingly good performance of the crossover-based $(\mu+\lambda)$ GAs on the well-known LeadingOnes benchmark problem. We observe that the optimal crossover probability increases with increasing population size $\mu$. At the same time, it decreases with increasing problem dimension, indicating that the advantages of crossover are not visible in the asymptotic view classically applied in runtime analysis. We therefore argue that a mathematical investigation for fixed dimensions might help us observe effects which are not visible when focusing exclusively on asymptotic performance bounds.
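The interpolation between a mutation-only algorithm and a fully crossover-based GA can be sketched as follows; the configuration (OneMax as benchmark, `pc` as the crossover probability) is illustrative:

```python
import random

def one_max(x):
    return sum(x)

def mu_plus_lambda_ga(n, mu, lam, pc, rng, max_evals=20000):
    """(mu+lambda) GA: each offspring is created by uniform crossover of two
    random parents with probability pc, otherwise by standard bit mutation."""
    pop = [[rng.randrange(2) for _ in range(n)] for _ in range(mu)]
    evals = mu
    while evals < max_evals:
        offspring = []
        for _ in range(lam):
            if rng.random() < pc and mu > 1:           # crossover
                a, b = rng.sample(pop, 2)
                child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            else:                                      # standard bit mutation
                parent = rng.choice(pop)
                child = [bit ^ (rng.random() < 1 / n) for bit in parent]
            offspring.append(child)
            evals += 1
        # Plus-selection: keep the mu best of parents and offspring.
        pop = sorted(pop + offspring, key=one_max, reverse=True)[:mu]
        if one_max(pop[0]) == n:
            break
    return pop[0], evals

best, used = mu_plus_lambda_ga(n=40, mu=5, lam=5, pc=0.5, rng=random.Random(2))
```

Setting `pc=0` recovers the mutation-only $(\mu+\lambda)$ EA and `pc=1` the fully crossover-based GA, which is exactly the axis the paper scans.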

Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers

As an emerging class of in-memory computing architecture, brain-inspired hyperdimensional computing (HDC) mimics brain cognition and leverages random hypervectors (i.e., vectors with a dimensionality of thousands or more) to represent features and perform classification tasks. The unique hypervector representation enables HDC classifiers to exhibit high energy efficiency, low inference latency, and strong robustness against hardware-induced bit errors. Consequently, they have been increasingly recognized as an appealing alternative to, or even replacement of, traditional deep neural networks (DNNs) for local on-device classification, especially on low-power Internet of Things devices. Nonetheless, unlike their DNN counterparts, state-of-the-art designs for HDC classifiers are mostly security-oblivious, casting doubt on their safety and immunity to adversarial inputs. In this paper, we study for the first time adversarial attacks on HDC classifiers and highlight that HDC classifiers can be vulnerable to even minimally-perturbed adversarial samples. Concretely, using handwritten digit classification as an example, we construct an HDC classifier and formulate a grey-box attack problem, where the attacker's goal is to mislead the target HDC classifier into producing erroneous prediction labels while keeping the amount of added perturbation noise as small as possible. We then propose a modified genetic algorithm to generate adversarial samples within a reasonably small number of queries. Our results show that adversarial images generated by our algorithm can successfully mislead the HDC classifier into producing wrong prediction labels with high probability (i.e., 78% when the HDC classifier uses a fixed majority rule for decision). Finally, we also present two defense strategies -- adversarial training and retraining -- to strengthen the security of HDC classifiers.

A Heuristically Self-Organised Linguistic Attribute Deep Learning in Edge Computing For IoT Intelligence

With the development of the Internet of Things (IoT), IoT intelligence is becoming an emerging technology, and the "curse of dimensionality" is a barrier to the data fusion in edge devices on which it depends. A Linguistic Attribute Hierarchy (LAH), embedded with Linguistic Decision Trees (LDTs), can represent a new form of attribute deep learning. In contrast to conventional deep learning, an LAH can overcome the lack of interpretability by providing transparent information propagation through the rules produced by the LDTs in the LAH. As with conventional deep learning, however, the computational complexity of optimising LAHs blocks their application. In this paper, we propose a heuristic approach to constructing an LAH, embedded with LDTs, for decision making or classification, by utilising the distance correlations between attributes and between attributes and the goal variable. The set of attributes is divided into attribute clusters, which are then heuristically organised to form a linguistic attribute hierarchy. The proposed approach was validated on benchmark decision-making and classification problems from the UCI machine learning repository. The experimental results show that the proposed self-organisation algorithm can construct an effective and efficient linguistic attribute hierarchy. Such a self-organised LAH embedded with LDTs can not only efficiently tackle the "curse of dimensionality" in a single LDT for data fusion with massive attributes, but also achieve better or comparable performance on decision making or classification, compared to a single LDT for the problem to be solved. The self-organisation algorithm is also much more efficient than the Genetic Algorithm in Wrapper for the optimisation of LAHs, which makes it feasible to embed the self-organisation algorithm in edge devices for IoT intelligence.

Runtime Analysis of Evolutionary Algorithms via Symmetry Arguments

We use an elementary argument building on group actions to prove that the selection-free steady state genetic algorithm analyzed by Sutton and Witt (GECCO 2019) takes an expected number of $\Omega(2^n / \sqrt n)$ iterations to find any particular target search point, regardless of the population size $\mu$. This improves over the previous best lower bound of $\Omega(\exp(n^{\delta/2}))$ valid for population sizes $\mu = O(n^{1/2 - \delta})$, $0 < \delta < 1/2$.

Runtime Analysis of a Heavy-Tailed $(1+(λ,λ))$ Genetic Algorithm on Jump Functions

It was recently observed that the $(1+(\lambda,\lambda))$ genetic algorithm can escape the local optimum of the jump functions benchmark comparatively easily. Consequently, this algorithm can optimize the jump function with jump size $k$ in an expected runtime of only $n^{(k + 1)/2}k^{-k/2}e^{O(k)}$ fitness evaluations (Antipov, Doerr, Karavaev (GECCO 2020)). To obtain this performance, however, a non-standard parameter setting depending on the jump size $k$ was used. To overcome this difficulty, we propose to choose two parameters of the $(1+(\lambda,\lambda))$ genetic algorithm randomly from a power-law distribution. Via a mathematical runtime analysis, we show that, with natural instance-independent choices of the distribution parameters, this algorithm achieves on all jump functions with jump size at most $n/4$ a performance close to what the best instance-specific parameters in the previous work obtained. The price for instance-independence can be made as small as an $O(n\log(n))$ factor. Given the difficulty of the jump problem and the runtime losses from using mildly suboptimal fixed parameters (also discussed in this work), this appears to be a fair price.
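The mechanics of the heavy-tailed variant can be sketched with the standard parameter coupling $p = \lambda/n$ (mutation rate) and $c = 1/\lambda$ (crossover bias), $\lambda$ drawn anew each iteration from a power law. For brevity the sketch is run on OneMax rather than a jump function, and the power-law support and exponent are illustrative choices:

```python
import random

def power_law_lambda(n, beta, rng):
    """Sample lambda from a power law on {1,...,n//2} with exponent beta."""
    values = range(1, n // 2 + 1)
    weights = [v ** -beta for v in values]
    return rng.choices(list(values), weights=weights)[0]

def one_ll_ga(f, n, rng, beta=2.5, max_iters=5000):
    """(1+(lambda,lambda)) GA with lambda resampled every iteration."""
    x = [rng.randrange(2) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        lam = power_law_lambda(n, beta, rng)
        p, c = lam / n, 1 / lam                  # standard parameter coupling
        # Mutation phase: flip ell bits, ell ~ Bin(n, p), same ell per offspring.
        ell = sum(rng.random() < p for _ in range(n))
        mutants = []
        for _ in range(lam):
            y = list(x)
            for i in rng.sample(range(n), ell):
                y[i] ^= 1
            mutants.append(y)
        x_prime = max(mutants, key=f)
        # Crossover phase: take each bit from the mutation winner with prob c.
        crossed = [[xp if rng.random() < c else xb for xp, xb in zip(x_prime, x)]
                   for _ in range(lam)]
        y_best = max(crossed, key=f)
        if f(y_best) >= fx:                      # elitist acceptance
            x, fx = y_best, f(y_best)
        if fx == n:
            break
    return x, fx

one_max = sum
x_best, f_best = one_ll_ga(one_max, n=30, rng=random.Random(3))
```

The occasional large $\lambda$ values produced by the heavy tail are what allow the algorithm to attempt the multi-bit jumps needed to leave the local optimum of a jump function.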

Optimizing carbon tax for decentralized electricity markets using an agent-based model

Averting the effects of anthropogenic climate change requires a transition from fossil fuels to low-carbon technology. A way to achieve this is to decarbonize the electricity grid, although further efforts must be made in other fields such as transport and heating for full decarbonization. Decarbonizing the grid would reduce carbon emissions due to electricity generation and also help to decarbonize other sources, such as automotive and heating, by enabling a low-carbon alternative. Carbon taxes have been shown to be an efficient way to aid in this transition. In this paper, we demonstrate how to find optimal carbon tax policies through a genetic algorithm approach, using the electricity market agent-based model ElecSim. To achieve this, we use the NSGA-II genetic algorithm to minimize both the average electricity price and the relative carbon intensity of the electricity mix. We demonstrate that it is possible to find a range of carbon taxes to suit differing objectives. Our results show that we are able to minimize the electricity cost to below \textsterling10/MWh, as well as the carbon intensity to zero, in every case. In terms of the optimal carbon tax strategy, we found that an increasing strategy between 2020 and 2035 was preferable. Each of the Pareto-front optimal tax strategies is at least \textsterling81/tCO2 in every year, and the mean carbon tax strategy was \textsterling240/tCO2.

The First Light Curve Solutions and Period Study of BQ Ari

The first complete analysis of photometric observations of the W UMa-type binary system BQ Ari was performed using the Wilson-Devinney code to determine its photometric and geometric elements. The results show that BQ Ari is a contact W UMa binary system with a photometric mass ratio q = 0.91 and a fillout factor of 34 percent. We calculated the distance of BQ Ari to be 146.24 \pm 15 parsec. In this study, we suggest a new linear ephemeris for BQ Ari, combining our new mid-eclipse times with previous observations. We present the first complete analysis of the system's orbital period behavior and an analysis of the O-C diagram using the Genetic Algorithm (GA) and Markov Chain Monte Carlo (MCMC) approaches in the OCFit code. We applied two approaches to analyze the sinusoidal trend in the O-C diagram. This trend may suggest a periodic change caused by the light-time effect (LiTE) with a period of 4.60 years and an amplitude of 6.212 minutes. On the other hand, the sinusoidal trend may indicate magnetic activity in this system. The period of magnetic activity was calculated to be 4.16 years, and the period change induced by magnetic activity was obtained as DeltaP/P = 4.36 x 10^(-7). It was concluded that magnetic activity is the more likely causal agent, as opposed to LiTE.

Reconstruction and Optimization of Coherent Synthesis by Fourier Optics Based Genetic Algorithm

We present a numerical method for the reconstruction and optimization of complex field synthesis using coherent pulse combination systems. A genetic algorithm utilizing a Fourier optics based propagation method is developed for accurate convergence and modeling of near and far field distributions, achieving better than $\pi/10$ phase accuracy in reconstructed input parameters.

Evolutionary NAS with Gene Expression Programming of Cellular Encoding

The renaissance of neural architecture search (NAS) has seen classical methods such as genetic algorithms (GA) and genetic programming (GP) being exploited for convolutional neural network (CNN) architectures. While recent work has achieved promising performance on visual perception tasks, the direct encoding schemes of both GA and GP suffer from functional complexity deficiencies and do not scale well to large architectures such as CNNs. To address this, we present a new generative encoding scheme -- $symbolic\ linear\ generative\ encoding$ (SLGE) -- a simple yet powerful scheme which embeds local graph transformations in chromosomes of linear fixed-length strings to develop CNN architectures of varying shapes and sizes via the evolutionary process of gene expression programming. In experiments, we show the effectiveness of SLGE in discovering architectures that improve on the performance of state-of-the-art handcrafted CNN architectures on the CIFAR-10 and CIFAR-100 image classification tasks, achieving a classification error rate competitive with existing NAS methods while using fewer GPU resources.

Genetic optimization algorithms applied toward mission computability models

Genetic algorithms are modeled after biological evolutionary processes, which use natural selection to determine the fittest individuals to survive. They are heuristic-based and computationally inexpensive. Genetic algorithms use selection, crossover, and mutation to obtain feasible solutions to computational problems. In this paper, we describe the application of our genetic optimization algorithms to a mission-critical, constraints-aware computation problem.
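The selection/crossover/mutation loop named above can be sketched in a few lines. This toy example maximizes the number of 1-bits (the standard OneMax test problem, not the paper's mission-computability fitness), but shows the canonical structure:

```python
import random

random.seed(0)

def evolve(pop_size=30, length=20, generations=60, p_mut=0.02):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    fitness = sum                                     # OneMax: count the 1-bits
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a = max(random.sample(pop, 3), key=fitness)    # selection (tournament of 3)
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, length)              # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with per-bit probability p_mut
            nxt.append([1 - g if random.random() < p_mut else g for g in child])
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Swapping in a domain-specific fitness function (and an encoding of the constraints) is all that separates this skeleton from a real application.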

A Cyclical Deep Learning Based Framework For Simultaneous Inverse and Forward design of Nanophotonic Metasurfaces

The conventional approach to nanophotonic metasurface design and optimization for a targeted electromagnetic response involves exploring large geometry and material spaces, a computationally costly, time-consuming and highly iterative process based on trial and error. Moreover, the non-uniqueness of structural designs and the high non-linearity between electromagnetic response and design make this problem challenging. To model this non-intuitive relationship between electromagnetic response and metasurface structural design as a probability distribution in the design space, we introduce a cyclical deep learning (DL) based framework for the inverse design of nanophotonic metasurfaces. The proposed framework performs an inverse design and optimization mechanism for the generation of meta-atoms and meta-molecules as metasurface units, based on DL models and a genetic algorithm. The framework includes consecutive DL models that emulate both the numerical electromagnetic simulation and the iterative optimization process, and that generate optimized structural designs while simultaneously performing forward and inverse design tasks. The genetic algorithm selects and evaluates the generated structural designs to construct a desired optical response and a design space that mimics real-world responses. Importantly, our cyclical generation framework also explores the space of new metasurface topologies. As an example application of our proposed architecture, we demonstrate the inverse design of a gap-plasmon based half-wave plate metasurface for a user-defined optical response. Our proposed technique can easily be generalized for designing nanophotonic metasurfaces for a wide range of targeted optical responses.

Forecasting the Spread of Covid-19 Under Control Scenarios Using LSTM and Dynamic Behavioral Models

To accurately predict the regional spread of Covid-19 infection, this study proposes a novel hybrid model which combines a Long Short-Term Memory (LSTM) artificial recurrent neural network with dynamic behavioral models. Several factors and control strategies affect the virus spread, and the uncertainty arising from confounding variables underlying the spread of the Covid-19 infection is substantial. The proposed model considers the effect of multiple factors to enhance the accuracy of predictions of the number of cases and deaths across the top ten most-affected countries and Australia. The results show that the proposed model closely replicates the test data; it not only provides accurate predictions but also estimates the daily behavior of the system under uncertainty. The hybrid model outperforms the LSTM model given the limited available data. The parameters of the hybrid models were optimized using a genetic algorithm for each country to improve prediction power while accounting for regional properties. Since the proposed model can accurately predict the spread of Covid-19 under different containment policies, it can be used for policy assessment, planning and decision-making.

Parsimonious neural networks learn classical mechanics and can teach it

We combine neural networks with genetic algorithms to find parsimonious models that describe the time evolution of a point particle subjected to an external potential. The genetic algorithm is designed to find the simplest, most interpretable network compatible with the training data. The parsimonious neural network (PNN) can numerically integrate classical equations of motion with negligible energy drifts and good time reversibility, significantly outperforming a generic feed-forward neural network. Our PNN is immediately interpretable as the position Verlet algorithm, a non-trivial integrator whose justification originates from Trotter's theorem.
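For reference, the position Verlet integrator that the PNN turns out to implement is a half-drift/kick/half-drift scheme. A quick sketch on a harmonic oscillator (an illustrative potential, not the paper's training setup) shows the negligible energy drift the abstract refers to:

```python
def position_verlet(x, v, accel, dt, steps):
    """Position Verlet: half position drift, full velocity kick, half drift."""
    for _ in range(steps):
        x += 0.5 * dt * v
        v += dt * accel(x)
        x += 0.5 * dt * v
    return x, v

# Harmonic oscillator a(x) = -x; the exact period is 2*pi.
x, v = position_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=628)  # ~one period
energy = 0.5 * v * v + 0.5 * x * x   # should stay near the initial value of 0.5
```

Being symplectic, the scheme keeps the energy error bounded at O(dt^2) instead of letting it drift, which is why it makes a good target for an interpretable learned integrator.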

Does an artificial intelligence perform market manipulation with its own discretion? -- A genetic algorithm learns in an artificial market simulation

Who should be held responsible for an artificial intelligence that performs market manipulation has been a subject of discussion. In this study, I constructed an artificial intelligence using a genetic algorithm that learns in an artificial market simulation, and investigated whether the artificial intelligence discovers market manipulation through learning, even though its builder has no intention of market manipulation. As a result, the artificial intelligence discovered market manipulation as an optimal investment strategy. This result suggests the necessity of regulation, such as obligating builders of artificial intelligence to prevent it from performing market manipulation.

Long-term electricity market agent based model validation using genetic algorithm based optimization

Electricity market modelling is often used by governments, industry and agencies to explore the development of scenarios over differing timeframes. For example, how would a reduction in the cost of renewable energy impact investments in gas power plants, or what would be an optimum strategy for carbon taxes or subsidies? Cost-optimization-based solutions are the dominant approach for understanding different long-term energy scenarios. However, these types of models have certain limitations, such as the need to be interpreted in a normative manner and the assumption that the electricity market remains in equilibrium throughout. Through this work, we show that agent-based models are a viable technique for simulating decentralised electricity markets. The aim of this paper is to validate an agent-based modelling framework to increase confidence in its ability to be used in policy and decision making. Our framework can model heterogeneous agents with imperfect information. The model uses a rules-based approach to approximate the underlying dynamics of a real-world, decentralised electricity market. We use the UK as a case study; however, our framework is generalisable to other countries. We increase the temporal granularity of the model by selecting representative days of electricity demand and weather using a $k$-means clustering approach. We show that our framework can model the transition from coal to gas observed in the UK between 2013 and 2018. We are also able to simulate a future scenario to 2035 which is similar to the projections of the UK Government's Department for Business, Energy and Industrial Strategy (BEIS). We show a more realistic increase in nuclear power over this time period. This is due to the fact that, with current nuclear technology, electricity is generated almost instantaneously and has a low short-run marginal cost \cite{Department2016}.
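A rules-based decentralised market of the kind described above ultimately rests on merit-order dispatch: generators are dispatched cheapest-first and the marginal unit sets the uniform price. A stripped-down sketch, with a hypothetical plant list that is far simpler than ElecSim's actual bidding logic:

```python
def clear_market(bids, demand):
    """Uniform-price merit-order clearing: cheapest capacity dispatched first.
    bids: list of (marginal_cost_per_MWh, capacity_MW)."""
    price, remaining = 0.0, demand
    for cost, capacity in sorted(bids):
        if remaining <= 0:
            break
        price = cost                       # price set by the marginal (last) unit
        remaining -= min(capacity, remaining)
    return price

# Hypothetical plants, loosely in the spirit of a UK-style merit order.
bids = [(0, 8000),     # wind/solar: zero short-run marginal cost
        (5, 3000),     # nuclear: near-zero short-run marginal cost
        (40, 10000),   # CCGT gas
        (60, 6000)]    # coal
price = clear_market(bids, demand=18000)   # gas is marginal at this demand level
```

The near-zero short-run marginal cost of nuclear mentioned in the abstract is exactly what pushes it early into such a merit order.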

Mathematical model of COVID-19 intervention scenarios for Sao Paulo - Brazil

An epidemiological compartmental model was used to simulate social distancing strategies to contain the COVID-19 pandemic and prevent a second wave in Sao Paulo, Brazil. A genetic algorithm was used to determine the optimal solutions. Our results suggest that the best-case strategy for Sao Paulo is to maintain or increase the current magnitude of social distancing for at least 60 more days and increase current levels of personal protective behavior by a minimum of 10% (e.g., wearing facemasks, practicing proper hand hygiene and avoiding agglomerations), followed by a long-term oscillatory level of social distancing with a stepping-down approach every 80 days over a period of two years, with continued protective behavior.

Joint Optimization of Transfer Location and Capacity in a Multimodal Transport Network: Bilevel Modeling and Paradoxes

With growing attention towards developing multimodal transport systems to enhance urban mobility, there is an increasing need to construct new infrastructure, and to rebuild or expand existing infrastructure, to serve existing and newly generated travel demand. Therefore, this paper develops a bilevel model to simultaneously determine the location and capacity of the transfer infrastructure to be built, considering elastic demand in a multimodal transport network. The upper-level problem is formulated as a mixed integer linear programming problem, while the lower-level problem is the capacitated combined trip distribution-assignment model that captures both the destination and route choices of travelers via the multinomial logit formula. To solve the model, the paper develops a matheuristic algorithm that integrates a genetic algorithm and a successive linear programming solution approach. Numerical studies are conducted to demonstrate the existence of, and to examine, two Braess-like paradox phenomena in a multimodal transport network. The first states that, under fixed demand, constructing parking spaces to stimulate the usage of Park-and-Ride services could deteriorate system performance, as measured by total passenger travel time; the second reveals that, under variable demand, increasing the parking capacity for Park-and-Ride services to promote their usage may fail, as reflected by a decline in their modal share. Meanwhile, the last experiment suggests that constructing transfer infrastructure at distributed stations outperforms building a large transfer center in terms of attracting travelers to sustainable transit modes.

Optimal measurement budget allocation for particle filtering

Particle filtering is a powerful tool for target tracking. When the budget for observations is restricted, the measurements must be reduced to a limited number of carefully selected samples. We study a discrete stochastic nonlinear dynamical system over a finite time horizon and formalize the selection of optimal measurement times for particle filtering as a combinatorial optimization problem. We propose an approximate solution based on nesting a genetic algorithm, a Monte Carlo algorithm and a particle filter. First, an example demonstrates that the genetic algorithm outperforms a random-trial optimization. Then, the benefit of non-regular measurements over measurements performed at regular time intervals is illustrated, and the efficiency of our proposed solution is quantified: better filtering performance is obtained in 87.5% of the cases, with an average relative improvement of 27.7%.
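The combinatorial core (pick K of T possible measurement times) can be attacked with a GA as sketched below. The quadratic-gap cost here is only a toy surrogate for filtering error; the paper's actual fitness is a Monte Carlo estimate produced by the nested particle filter:

```python
import random

random.seed(2)

T, K = 50, 8   # horizon of 50 time steps, budget of 8 measurements

def cost(times):
    """Toy surrogate: sum of squared gaps between consecutive measurements."""
    ts = sorted(times)
    gaps = [ts[0]] + [b - a for a, b in zip(ts, ts[1:])] + [T - ts[-1]]
    return sum(g * g for g in gaps)

def random_schedule():
    return random.sample(range(1, T), K)   # K distinct measurement times

pop = [random_schedule() for _ in range(20)]
for _ in range(100):
    pop.sort(key=cost)
    pop = pop[:10]                         # elitist truncation selection
    for parent in list(pop):
        child = sorted(parent)
        i = random.randrange(K)            # mutation: relocate one measurement
        child[i] = random.choice([t for t in range(1, T) if t not in child])
        pop.append(child)
best_ga = min(pop, key=cost)
```

A crossover operator and a particle-filter-based fitness would turn this skeleton into something closer to the paper's nested scheme.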

Deep-learning of Parametric Partial Differential Equations from Sparse and Noisy Data

Data-driven methods have recently made great progress in the discovery of partial differential equations (PDEs) from spatio-temporal data. However, several challenges remain, including sparse and noisy data, incomplete candidate libraries, and spatially or temporally varying coefficients. In this work, a new framework combining neural networks, genetic algorithms and adaptive methods is put forward to address all of these challenges simultaneously. First, a trained neural network is utilized to calculate derivatives and generate a large amount of meta-data, which addresses the problem of sparse and noisy data. Next, a genetic algorithm is utilized to discover the form of the PDEs and the corresponding coefficients with an incomplete candidate library. Finally, a two-step adaptive method is introduced to discover parametric PDEs with spatially or temporally varying coefficients: the structure of the parametric PDE is discovered first, and then the general form of the varying coefficients is identified. The proposed algorithm is tested on the Burgers equation, the convection-diffusion equation, the wave equation, and the KdV equation. The results demonstrate that this method is robust to sparse and noisy data, and is able to discover parametric PDEs with an incomplete candidate library.

Evolving Antennas for Ultra-High Energy Neutrino Detection

Evolutionary algorithms borrow from biology the concepts of mutation and selection in order to evolve optimized solutions to known problems. The GENETIS collaboration is developing genetic algorithms for designing antennas that are more sensitive to ultra-high energy neutrino induced radio pulses than current designs. There are three aspects of this investigation. The first is to evolve simple wire antennas to test the concept and different algorithms. Second, optimized antenna response patterns are evolved for a given array geometry. Finally, antennas themselves are evolved using neutrino sensitivity as a measure of fitness. This is achieved by integrating the XFdtd finite-difference time-domain modeling program with simulations of neutrino experiments.

A Deep Q-learning/genetic Algorithms Based Novel Methodology For Optimizing Covid-19 Pandemic Government Actions

Whenever countries are threatened by a pandemic, as is the case with the COVID-19 virus, governments should take the right actions to safeguard public health as well as to mitigate the negative effects on the economy. In this regard, there are two completely different approaches governments can take: a restrictive one, in which drastic measures such as self-isolation can seriously damage the economy, and a more liberal one, where more relaxed restrictions may put a high percentage of the population at risk. The optimal approach lies somewhere in between, and, in order to make the right decisions, it is necessary to accurately estimate the future effects of taking one or another measure. In this paper, we use the SEIR epidemiological model (Susceptible - Exposed - Infected - Recovered) for infectious diseases to represent the evolution of the COVID-19 virus in the population over time. To optimize the sequences of actions governments can take, we propose a methodology with two approaches, one based on Deep Q-Learning and another based on Genetic Algorithms. The sequences of actions (confinement, self-isolation, two-meter distancing or no restrictions) are evaluated according to a reward system focused on meeting two objectives: first, keeping the number of infected people low so that hospitals are not overwhelmed with critical patients, and second, avoiding drastic measures that last so long that they cause serious damage to the economy. The conducted experiments show that our methodology is a valid tool for discovering actions governments can take to reduce the negative effects of a pandemic in both senses. We also show that the approach based on Deep Q-Learning outperforms the one based on Genetic Algorithms for optimizing the sequences of actions.
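The SEIR dynamics both optimizers evaluate against can be integrated directly. A minimal forward-Euler sketch with illustrative rates (not calibrated to COVID-19 data):

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the normalized SEIR model."""
    s, e, i, r = s0, e0, i0, r0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i          # S -> E: new exposures
        new_sym = sigma * e             # E -> I: end of latency
        new_rec = gamma * i             # I -> R: recoveries
        s -= dt * new_inf
        e += dt * (new_inf - new_sym)
        i += dt * (new_sym - new_rec)
        r += dt * new_rec
    return s, e, i, r

# Illustrative parameters only: 5-day latency, 10-day infectious period.
s, e, i, r = seir(beta=0.5, sigma=0.2, gamma=0.1,
                  s0=0.99, e0=0.01, i0=0.0, r0=0.0, days=200)
```

Government actions would enter such a model as time-dependent reductions of the contact rate beta, which is what both the Deep Q-Learning and GA approaches select over time.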

Evolved Explainable Classifications for Lymph Node Metastases

A novel evolutionary approach for Explainable Artificial Intelligence is presented: the "Evolved Explanations" model (EvEx). This methodology combines Local Interpretable Model-Agnostic Explanations (LIME) with Multi-Objective Genetic Algorithms to allow automated tuning of segmentation parameters in image classification tasks. The dataset studied is Patch-Camelyon, comprised of patches from pathology whole-slide images. A publicly available Convolutional Neural Network (CNN) was trained on this dataset to provide a binary classification for the presence or absence of lymph node metastatic tissue. In turn, the classifications are explained by means of evolving segmentations, seeking to optimize three evaluation goals simultaneously. The final explanation is computed as the mean of all explanations generated by the Pareto-front individuals evolved by the developed genetic algorithm. To enhance reproducibility and traceability, each explanation was generated from several different, randomly chosen seeds. The observed results show remarkable agreement between different seeds: despite the stochastic nature of LIME explanations, regions of high explanation weight show good agreement across heat maps, as measured by pixel-wise relative standard deviations. The resulting heat maps coincide with expert medical segmentations, which demonstrates that this methodology can find high-quality explanations (according to the evaluation metrics), with the novel advantage of automated parameter fine-tuning. These results give additional insight into the inner workings of neural network black-box decision making for medical data.

Crackovid: Optimizing Group Testing

We study the problem usually referred to as group testing in the context of COVID-19. Given $n$ samples taken from patients, how should we select mixtures of samples to be tested, so as to maximize information and minimize the number of tests? We consider both adaptive and non-adaptive strategies, and take a Bayesian approach with priors for both patient infection and test errors. We start by proposing a mathematically principled objective, grounded in information theory. We then optimize non-adaptive strategies using genetic algorithms, and leverage the mathematical framework of adaptive submodularity to obtain theoretical guarantees for the greedy-adaptive method.

Using Genetic Algorithm To Evolve Cellular Automata In Performing Edge Detection

Cellular automata are discrete computational models that can serve as general models of complexity. They are used in varied applications to derive the generalized behavior of the modeled system. In this paper we consider one such application: performing edge detection on an image using a genetic algorithm. The purpose is to analyze the capability and performance of the suggested genetic algorithm. Genetic algorithms are used to obtain general solutions to a given problem. Using this feature of GAs, we evolve cellular automata and show how, over time, they converge to the desired results.
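To give a flavor of how a CA rule can act as an edge detector, here is a hand-written (not evolved) local rule: a live cell survives one step only if at least one 4-neighbour is dead, so only the boundary of a filled region remains. The GA in the paper searches for such rules rather than specifying them by hand:

```python
def ca_edge_step(grid):
    """One CA step: a live cell survives only if some 4-neighbour is dead."""
    h, w = len(grid), len(grid[0])
    def cell(y, x):
        if grid[y][x] == 0:
            return 0
        nbrs = [grid[y + dy][x + dx]
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= y + dy < h and 0 <= x + dx < w]
        return 1 if any(n == 0 for n in nbrs) else 0
    return [[cell(y, x) for x in range(w)] for y in range(h)]

# A filled 5x5 square inside a 7x7 image: only its 16-cell boundary ring survives.
img = [[1 if 1 <= y <= 5 and 1 <= x <= 5 else 0 for x in range(7)] for y in range(7)]
edges = ca_edge_step(img)
```

A GA would instead encode the rule table itself as the chromosome and score candidates against ground-truth edge maps.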

Optimizing Vessel Trajectory Compression

In previous work we introduced a trajectory detection module that can provide summarized representations of vessel trajectories by consuming AIS positional messages online. This methodology can provide reliable trajectory synopses with small deviations from the original course by discarding at least 70% of the raw data as redundant. However, such trajectory compression is very sensitive to parametrization. In this paper, our goal is to fine-tune the selection of these parameter values. We take into account the type of each vessel in order to provide a suitable configuration that can yield improved trajectory synopses, both in terms of approximation error and compression ratio. Furthermore, we employ a genetic algorithm that converges to a suitable configuration per vessel type. Our tests against a publicly available AIS dataset have shown that compression efficiency is comparable to, or even better than, that obtained with the default parametrization, without resorting to laborious data inspection.

Feature Selection with Evolving, Fast and Slow Using Two Parallel Genetic Algorithms

Feature selection is one of the most challenging issues in machine learning, especially when working with high-dimensional data. In this paper, we address the problem of feature selection and propose a new approach called Evolving Fast and Slow. This new approach is based on using two parallel genetic algorithms with high and low mutation rates, respectively. Evolving Fast and Slow requires a new parallel architecture combining an automatic system that evolves fast and an effortful system that evolves slow. With this architecture, exploration and exploitation can be done simultaneously and in unison. Evolving Fast, with its high mutation rate, is useful for exploring new, unknown places in the search space with long jumps, while Evolving Slow, with its low mutation rate, is useful for exploiting previously known places in the search space with short movements. Our experiments show that Evolving Fast and Slow achieves very good results in terms of both accuracy and feature elimination.
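A minimal sketch of the two-population idea: the same GA loop run twice with high and low mutation rates, with the explorer's best individual periodically migrated into the exploiter. The 16-bit target mask and agreement fitness are toy stand-ins for a real feature-selection objective:

```python
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1]  # hypothetical useful-feature mask

def fitness(mask):
    """Toy score: agreement with the (normally unknown) target mask."""
    return sum(m == t for m, t in zip(mask, TARGET))

def step(pop, p_mut):
    """One generation: tournament selection, uniform crossover, bit-flip mutation."""
    nxt = []
    for _ in range(len(pop)):
        a = max(random.sample(pop, 3), key=fitness)
        b = max(random.sample(pop, 3), key=fitness)
        child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
        nxt.append([1 - g if random.random() < p_mut else g for g in child])
    return nxt

n = len(TARGET)
fast = [[random.randint(0, 1) for _ in range(n)] for _ in range(20)]
slow = [[random.randint(0, 1) for _ in range(n)] for _ in range(20)]
for gen in range(40):
    fast = step(fast, p_mut=0.20)      # explorer: high mutation, long jumps
    slow = step(slow, p_mut=0.02)      # exploiter: low mutation, short movements
    if gen % 5 == 4:                   # periodically migrate the best explorer
        slow[0] = max(fast, key=fitness)

best = max(fast + slow, key=fitness)
```

In a real setting the fitness would be cross-validated model accuracy on the selected feature subset, possibly penalized by subset size.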

A New Computer-Aided Diagnosis System with Modified Genetic Feature Selection for BI-RADS Classification of Breast Masses in Mammograms

Mammography remains the most prevalent imaging tool for early breast cancer screening. The language used to describe abnormalities in mammographic reports is based on the Breast Imaging Reporting and Data System (BI-RADS). Assigning a correct BI-RADS category to each examined mammogram is a strenuous and challenging task, even for experts. This paper proposes a new and effective computer-aided diagnosis (CAD) system to classify mammographic masses into the four BI-RADS assessment categories. The mass regions are first enhanced by means of histogram equalization and then semiautomatically segmented using the region growing technique. A total of 130 handcrafted BI-RADS features are then extracted from the shape, margin, and density of each mass, together with the mass size and the patient's age, as mentioned in BI-RADS mammography. A modified feature selection method based on the genetic algorithm (GA) is then proposed to select the most clinically significant BI-RADS features. Finally, a back-propagation neural network (BPN) is employed for classification, with its accuracy used as the fitness function in the GA. A set of 500 mammogram images from the Digital Database for Screening Mammography (DDSM) is used for evaluation. Our system achieves a classification accuracy, positive predictive value, negative predictive value, and Matthews correlation coefficient of 84.5%, 84.4%, 94.8%, and 79.3%, respectively. To the best of our knowledge, this is the best current result for BI-RADS classification of breast masses in mammography, which makes the proposed system promising for supporting radiologists in deciding on proper patient management based on the automatically assigned BI-RADS categories.

On the Transferability of Knowledge among Vehicle Routing Problems by using Cellular Evolutionary Multitasking

Multitasking optimization is a recently introduced paradigm focused on solving multiple optimization problem instances (tasks) simultaneously. The goal of multitasking environments is to dynamically exploit existing complementarities and synergies among tasks, helping each other through the transfer of genetic material. More concretely, Evolutionary Multitasking (EM) refers to the resolution of multitasking scenarios using concepts inherited from Evolutionary Computation. EM approaches such as the well-known Multifactorial Evolutionary Algorithm (MFEA) have lately been gaining notable research momentum for tackling multiple optimization problems. This work focuses on the application of the recently proposed Multifactorial Cellular Genetic Algorithm (MFCGA) to the well-known Capacitated Vehicle Routing Problem (CVRP). Overall, 11 different multitasking setups have been built using 12 datasets. The contribution of this research is twofold. On the one hand, it is the first application of the MFCGA to the Vehicle Routing Problem family. On the other hand, equally interesting is the second contribution, which focuses on the quantitative analysis of the positive genetic transferability among the problem instances. To that end, we provide an empirical demonstration of the synergies arising between the different optimization tasks.

Proactive Defense for Internet-of-Things: Integrating Moving Target Defense with Cyberdeception

Resource-constrained Internet-of-Things (IoT) devices are highly likely to be compromised by attackers, because strong security protections may not be suitable for deployment on them. This requires an alternative approach to protect vulnerable components in IoT networks. In this paper, we propose an integrated defense technique to achieve intrusion prevention by leveraging cyberdeception (i.e., a decoy system) and moving target defense (i.e., network topology shuffling). We verify the effectiveness and efficiency of our proposed technique analytically, based on a graphical security model, in a software defined networking (SDN)-based IoT network. We develop four strategies (i.e., fixed/random and adaptive/hybrid) to address "when" to perform network topology shuffling and three strategies (i.e., genetic algorithm/decoy attack path-based optimization/random) to address "how" to perform network topology shuffling on a decoy-populated IoT network, and analyze which strategy can best achieve a system goal such as prolonging the system lifetime, maximizing deception effectiveness, maximizing service availability, or minimizing defense cost. Our results demonstrate that a software defined IoT network running our intrusion prevention technique at the optimal parameter setting prolongs system lifetime, increases the attack complexity of compromising critical nodes, and maintains superior service availability compared with a counterpart IoT network not running our intrusion prevention technique. Further, when given a single goal or a multi-objective goal (e.g., maximizing the system lifetime and service availability while minimizing the defense cost) as input, the best combination of "when" and "how" strategies is identified for executing our proposed technique such that the specified goal can be best achieved.

Adaptive Motion Planning with Artificial Potential Fields Using a Prior Path

Motion planning in an autonomous agent is responsible for providing smooth, safe and efficient navigation. Many solutions for dealing with this problem have been offered, one of which is Artificial Potential Fields (APF). APF is a simple, computationally cheap method that keeps the robot away from obstacles in the environment. However, this approach suffers from becoming trapped in local minima of the potential function, where it fails to produce motion plans. Furthermore, oscillation in the presence of obstacles or in narrow passages is another disadvantage, which makes it unsuitable for many planning problems. In this paper we aim to resolve these deficiencies with a novel approach that employs a prior path between the origin and goal configurations of the robot. The planner thus guarantees to lead the robot to the goal area while the inherent advantages of potential fields remain. For the path planning stage, we use randomized sampling methods such as Rapidly-exploring Random Trees (RRT) or its derivatives; however, any path planning approach can be utilized. We have also designed an optimization procedure that evolves the motion plans towards the optimal solution, applying a genetic algorithm to find smoother, safer and shorter plans. In our experiments, we use a simulated vehicle in the Webots simulator to test and evaluate the motion planner. Our experiments show that our method improves performance and speed in comparison to basic approaches.

Lenia and Expanded Universe

We report experimental extensions of Lenia, a continuous cellular automata family capable of producing lifelike self-organizing autonomous patterns. The rule of Lenia was generalized into higher dimensions, multiple kernels, and multiple channels. The final architecture approaches what can be seen as a recurrent convolutional neural network. Using semi-automatic search, e.g. genetic algorithms, we discovered new phenomena such as polyhedral symmetries, individuality, self-replication, emission, and growth by ingestion, and saw the emergence of "virtual eukaryotes" that possess internal division of labor and type differentiation. We discuss the results in the contexts of biology, artificial life, and artificial intelligence.

Evolutionary Multi Objective Optimization Algorithm for Community Detection in Complex Social Networks

Most optimization-based community detection approaches formulate the problem in a single- or bi-objective framework. In this paper, we propose two variants of a three-objective formulation using a customized non-dominated sorting genetic algorithm III (NSGA-III) to find community structures in a network. The first variant, named NSGA-III-KRM, considers kernel k-means, ratio cut, and modularity as the three objectives, whereas the second variant, named NSGA-III-CCM, considers community score, community fitness and modularity as the three objective functions. Experiments are conducted on four benchmark network datasets. Comparison with state-of-the-art approaches, along with decomposition-based multi-objective evolutionary algorithm variants (MOEA/D-KRM and MOEA/D-CCM), indicates that the proposed variants yield comparable or better results. This is particularly significant because the addition of the third objective does not worsen the results of the other two objectives. We also propose a simple method to rank the obtained Pareto solutions by introducing a new measure, namely the ratio of the hypervolume to the inverted generational distance (IGD): the higher the ratio, the better the Pareto set. This strategy is particularly useful in the absence of an empirical attainment function in multi-objective frameworks with more than two objectives.
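The proposed ranking measure, the ratio of hypervolume to IGD, is straightforward to compute in two dimensions. A sketch with a toy minimization front and a slightly offset reference front, both made up for illustration:

```python
import math

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. a reference point.
    Assumes the front is non-dominated, so sorting by f1 makes f2 decrease."""
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # horizontal slice dominated by this point
        prev_f2 = f2
    return hv

def igd(front, reference_front):
    """Inverted generational distance: mean distance from reference points to the front."""
    return sum(min(math.dist(r, p) for p in front) for r in reference_front) / len(reference_front)

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
ref_front = [(0.9, 4.0), (2.0, 1.9), (4.0, 0.9)]       # hypothetical true front
score = hypervolume_2d(front, (5.0, 5.0)) / igd(front, ref_front)   # higher is better
```

Higher ratios reward a front that both dominates a large region (high HV) and lies close to the reference front (low IGD).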

A Multifactorial Optimization Paradigm for Linkage Tree Genetic Algorithm

Linkage Tree Genetic Algorithm (LTGA) is an effective Evolutionary Algorithm (EA) for solving complex problems using the linkage information between problem variables. LTGA performs well in various kinds of single-task optimization and yields promising results in comparison with the canonical genetic algorithm. However, LTGA is unsuitable for multi-task optimization problems. Multifactorial Optimization (MFO), on the other hand, can simultaneously solve independent optimization problems, which are encoded in a unified representation to take advantage of the process of knowledge transfer. In this paper, we introduce the Multifactorial Linkage Tree Genetic Algorithm (MF-LTGA), combining the main features of both LTGA and MFO. MF-LTGA is able to tackle multiple optimization tasks at the same time; each task learns the dependencies between problem variables from the shared representation. This knowledge serves to determine high-quality partial solutions that support the other tasks in exploring the search space. Moreover, MF-LTGA speeds up convergence through knowledge transfer between related problems. We demonstrate the effectiveness of the proposed algorithm on two benchmark problems: the Clustered Shortest-Path Tree Problem and the Deceptive Trap Function. In comparison to LTGA and existing methods, MF-LTGA is superior in solution quality or in computation time.

Artificial intelligence real-time prediction and physical interpretation of atomic binding energies in nano-scale metal clusters

Single atomic sites often determine the functionality and performance of materials such as catalysts, semiconductors, or enzymes. Computing and understanding the properties of such sites is therefore a crucial component of the rational materials design process. Because of complex electronic effects at the atomic level, atomic site properties are conventionally derived from computationally expensive first-principles calculations, as this level of theory is required to achieve the relevant accuracy. In this study, we present a widely applicable machine learning (ML) approach to compute atomic site properties with high accuracy in real time. The approach works well for complex non-crystalline atomic structures and therefore opens up the possibility of high-throughput screenings of nano-materials, amorphous systems, and materials interfaces. Our approach includes a robust featurization scheme that transforms atomic structures into features usable by common machine learning models. Performing a genetic algorithm (GA) based feature selection, we show how to establish an intuitive physical interpretation of the structure-property relations implied by the ML models. With this approach, we compute atomic site stabilities of metal nanoparticles ranging from 3 to 55 atoms with mean absolute errors in the range of 0.11-0.14 eV in real time. We also establish the chemical identity of the site as the most important factor in determining atomic site stability, followed by structural features such as bond distances and angles. Both the featurization and the GA feature selection functionality are published as open-source Python modules. With this method, we enable the efficient rational design of highly specialized real-world nano-catalysts through data-driven materials screening.

Revisiting Regex Generation for Modeling Industrial Applications by Incorporating Byte Pair Encoder

Regular expressions are important for many natural language processing tasks, especially when dealing with unstructured and semi-structured data. This work focuses on automatically generating regular expressions and proposes a novel genetic algorithm for this problem. Unlike methods that generate regular expressions at the character level, we first use a byte pair encoder (BPE) to extract frequent items, which are then used to construct the regular expressions. The fitness function of our genetic algorithm contains multiple objectives and is optimized through an evolutionary procedure including crossover and mutation operations. The fitness function takes into account the length of the generated regular expression, the maximum number of matched characters and samples for positive training samples, and the minimum number of matched characters and samples for negative training samples. In addition, to accelerate training, we apply exponential decay to the population size of the genetic algorithm. Our method, together with a strong baseline, is tested on 13 challenging datasets. The results demonstrate the effectiveness of our method, which outperforms the baseline on 10 of the datasets and achieves nearly 50 percent improvement on average. With exponential decay, training is approximately 100 times faster than without it. In summary, our method is both effective and efficient and can be deployed in industrial applications.
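As a toy illustration of the exponential population-size decay described above, the sketch below evolves regexes assembled from a hypothetical list of BPE-derived tokens; `TOKENS`, the samples, and all weights are invented placeholders, not the paper's setup:

```python
import random
import re

random.seed(0)

TOKENS = ["ab", "cd", "[0-9]+", "-", "x"]   # hypothetical BPE frequent items
POS = ["ab-12", "cd-7", "ab-903"]           # toy positive samples
NEG = ["xy", "abcd", "12-ab"]               # toy negative samples

def fitness(ind):
    """Weighted multi-term objective: reward positive matches, penalize
    negative matches and pattern length."""
    pattern = "".join(ind)
    try:
        rx = re.compile(pattern)
    except re.error:
        return -1e9
    pos = sum(bool(rx.fullmatch(s)) for s in POS)
    neg = sum(bool(rx.fullmatch(s)) for s in NEG)
    return pos - neg - 0.01 * len(pattern)

def evolve(init_size=40, min_size=8, decay=0.85, generations=30):
    pop = [[random.choice(TOKENS) for _ in range(3)] for _ in range(init_size)]
    sizes = []
    for t in range(generations):
        size = max(min_size, int(init_size * decay ** t))  # exponential decay
        sizes.append(size)
        survivors = sorted(pop, key=fitness, reverse=True)[: max(2, size // 2)]
        pop = list(survivors)                              # elitist truncation
        while len(pop) < size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < 0.3:                      # token-level mutation
                child[random.randrange(len(child))] = random.choice(TOKENS)
            pop.append(child)
    best = max(pop, key=fitness)
    return best, sizes
```

Joining the best individual's tokens (`"".join(best)`) yields the generated regex; the shrinking `sizes` schedule is what buys the claimed speed-up, since later generations evaluate far fewer candidates.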

A Machine Learning based Framework for KPI Maximization in Emerging Networks using Mobility Parameters

Current LTE networks are faced with a plethora of Configuration and Optimization Parameters (COPs), both hard and soft, that are adjusted manually to manage the network and provide better Quality of Experience (QoE). With 5G in view, the number of these COPs is expected to reach 2000 per site, making manual tuning to find their optimal combination an impossible feat. Alongside these thousands of COPs, the anticipated network densification in emerging networks further exacerbates the burden on network operators of managing and optimizing the network. Hence, we propose a machine learning-based framework combined with a heuristic technique to discover the optimal combination of two pertinent mobility COPs, Cell Individual Offset (CIO) and Handover Margin (HOM), that maximizes a specific Key Performance Indicator (KPI) such as the mean Signal to Interference and Noise Ratio (SINR) of all connected users. The first part of the framework leverages machine learning to predict the KPI of interest for several different combinations of CIO and HOM. The resulting predictions are then fed into a Genetic Algorithm (GA), which searches for the combination of the two parameters that yields the maximum mean SINR for all users. The framework is evaluated using several machine learning techniques, with the CatBoost algorithm yielding the best prediction performance. Meanwhile, the GA reveals the optimal parameter setting more efficiently, with three orders of magnitude faster convergence than a brute-force approach.
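The two-stage pipeline (learn a KPI predictor, then let a GA search the COP space) can be sketched as follows. The quadratic ground-truth SINR function, parameter ranges, and GA settings are invented placeholders, and a plain least-squares surrogate stands in for the paper's CatBoost model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth: mean SINR as an (unknown) function of CIO and HOM.
def measured_sinr(cio, hom):
    return 20.0 - (cio - 3.0) ** 2 - (hom - 5.0) ** 2

# Stage 1: fit a surrogate KPI predictor from logged (CIO, HOM, SINR) samples.
cio = rng.uniform(0, 6, 200)
hom = rng.uniform(0, 10, 200)
y = measured_sinr(cio, hom) + rng.normal(0, 0.1, 200)
X = np.column_stack([np.ones_like(cio), cio, hom, cio**2, hom**2, cio * hom])
w, *_ = np.linalg.lstsq(X, y, rcond=None)       # quadratic surrogate

def predict(c, h):
    return w @ np.array([1.0, c, h, c * c, h * h, c * h])

# Stage 2: GA searches (CIO, HOM) for the surrogate's maximum.
def ga(pop_size=40, generations=60):
    pop = np.column_stack([rng.uniform(0, 6, pop_size), rng.uniform(0, 10, pop_size)])
    for _ in range(generations):
        fit = np.array([predict(c, h) for c, h in pop])
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]            # elitist truncation
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = (a + b) / 2 + rng.normal(0, 0.2, 2)  # blend crossover + mutation
            kids.append(np.clip(child, [0, 0], [6, 10]))
        pop = np.vstack([parents, kids])
    fit = np.array([predict(c, h) for c, h in pop])
    return pop[fit.argmax()], fit.max()
```

Since the surrogate is cheap to evaluate, the GA explores thousands of (CIO, HOM) combinations without touching the live network; here it should home in on the synthetic optimum near (3, 5).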

Type-2 fuzzy reliability redundancy allocation problem and its solution using particle swarm optimization algorithm

In this paper, the fuzzy multi-objective reliability redundancy allocation problem (FMORRAP) is proposed, which maximizes the system reliability while simultaneously minimizing the system cost under type-2 fuzzy uncertainty. In the proposed formulation, the higher-order uncertainties (such as parametric, manufacturing, environmental, and designer uncertainty) associated with the system are modeled with interval type-2 fuzzy sets (IT2 FS). The footprint of uncertainty of the interval type-2 membership functions (IT2 MFs) accommodates these uncertainties by capturing multiple opinions from several system experts. We use IT2 MFs to represent the subsystem reliability and cost, which are then aggregated using the extension principle to evaluate the total system reliability and cost according to the system configuration, i.e., series-parallel or parallel-series. We propose a novel particle swarm optimization (PSO) based approach to solve the FMORRAP. To demonstrate the applicability of the two formulations, namely series-parallel FMORRAP and parallel-series FMORRAP, we performed experimental simulations on various numerical datasets. The decision makers/system experts assign different importance to the objectives (system reliability and cost), and these preferences are represented by sets of weights. The optimal results are obtained with our solution approach, and the Pareto optimal front is established using these weight sets. A genetic algorithm (GA) was implemented to compare against our proposed approach; a statistical analysis between PSO and GA found that the PSO-based Pareto solutions outperform those of the GA.

Lower Bounds for Non-Elitist Evolutionary Algorithms via Negative Multiplicative Drift

A decent number of lower bounds for non-elitist population-based evolutionary algorithms have been shown by now. Most of them are technically demanding due to the (hard to avoid) use of negative drift theorems -- general results which translate an expected progress away from the target into a high hitting time. We propose a simple negative drift theorem for multiplicative drift scenarios and show that it can simplify existing analyses. We discuss in more detail Lehre's (PPSN 2010) \emph{negative drift in populations} method, one of the most general tools for proving lower bounds on the runtime of non-elitist mutation-based evolutionary algorithms for discrete search spaces. Together with other arguments, we obtain an alternative and simpler proof, which also strengthens and simplifies this method. In particular, now only three of the five technical conditions of the previous result have to be verified. The lower bounds we obtain are explicit instead of only asymptotic. This allows us to compute concrete lower bounds for concrete algorithms, and it also enables us to show that super-polynomial runtimes appear already when the reproduction rate is only a $(1 - \omega(n^{-1/2}))$ factor below the threshold. For the special case of algorithms using standard bit mutation with a random mutation rate (called uniform mixing in the language of hyper-heuristics), we prove the result stated by Dang and Lehre (PPSN 2016) and extend it to mutation rates other than $\Theta(1/n)$, which includes the heavy-tailed mutation operator proposed by Doerr, Le, Makhmara, and Nguyen (GECCO 2017). We finally apply our method and a novel domination argument to show an exponential lower bound for the runtime of the mutation-only simple genetic algorithm on \onemax for arbitrary population size.

Uncertainty Modelling in Risk-averse Supply Chain Systems Using Multi-objective Pareto Optimization

One of the arduous tasks in supply chain modelling is building models that are robust against irregular variations. With the proliferation of time-series analyses and machine learning models, several modifications have been proposed, such as acceleration of the classical Levenberg-Marquardt algorithm, weight decay, and normalization, which introduced an algorithmic optimization approach to this problem. In this paper, we introduce a novel methodology, Pareto Optimization, to handle uncertainties and bound the entropy of such uncertainties by explicitly modelling them under some a priori assumptions. We implement Pareto Optimization using a genetic approach and compare the results with classical genetic algorithms and Mixed-Integer Linear Programming (MILP) models. Our results yield empirical evidence suggesting that Pareto Optimization can elude such non-deterministic errors and is a formal approach towards producing robust and reactive supply chain models.

Adversarial Machine Learning in Network Intrusion Detection Systems

Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. These examples have achieved a great deal of success in several domains such as image recognition, speech recognition, and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques to generate adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and genetic algorithms) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess the performance of these algorithms in evading a NIDS, we apply them to two publicly available datasets, namely NSL-KDD and UNSW-NB15, and we contrast them with a baseline perturbation method, Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, along with a voting classifier. Our work highlights the vulnerability of machine learning based NIDS in the face of adversarial perturbation.
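The GA side of this attack can be sketched against a stand-in linear detector: a population of bounded feature perturbations is evolved to push the detector's score toward the benign side. The weights, epsilon budget, and GA settings below are illustrative assumptions, not the paper's NSL-KDD/UNSW-NB15 setup:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical linear NIDS scoring function: score > 0 means "attack".
w = rng.normal(size=10)

def score(x):
    return float(w @ x)

def evolve_perturbation(x0, eps=0.5, pop_size=30, generations=80):
    """GA searching for a perturbation delta with |delta_i| <= eps that
    minimizes the detector's score on x0 + delta (lower = better evasion)."""
    pop = rng.uniform(-eps, eps, (pop_size, x0.size))
    for _ in range(generations):
        fit = np.array([score(x0 + d) for d in pop])
        order = np.argsort(fit)
        parents = pop[order[: pop_size // 2]]          # elitist truncation
        kids = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        kids = kids + rng.normal(0.0, 0.1 * eps, kids.shape)   # Gaussian mutation
        pop = np.clip(np.vstack([parents, kids]), -eps, eps)   # stay in budget
    fit = np.array([score(x0 + d) for d in pop])
    return pop[fit.argmin()]
```

Because the elite parents are carried over unchanged, the best evasion score never worsens across generations; the clipping keeps every candidate inside the perturbation budget, which is what makes the example "adversarial" rather than an arbitrary input.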

Automatic low-bit hybrid quantization of neural networks through meta learning

Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference, especially when deploying to edge or IoT devices with limited computation capacity and power budget. Uniform bit-width quantization across all layers is usually sub-optimal, so exploring hybrid quantization for different layers is vital for efficient deep compression. In this paper, we employ meta learning to automatically realize low-bit hybrid quantization of neural networks. A MetaQuantNet, together with a quantization function, is trained to generate the quantized weights for the target DNN. Then, we apply a genetic algorithm to search for the best hybrid quantization policy that meets the compression constraints. With the best searched quantization policy, we subsequently retrain or finetune to further improve the performance of the quantized target network. Extensive experiments demonstrate that the performance of the searched hybrid quantization scheme surpasses that of the uniform bit-width counterpart. Compared to existing reinforcement learning (RL) based hybrid quantization search approaches, which rely on tedious exploration, our meta learning approach is more efficient and effective for any compression requirement, since the MetaQuantNet only needs to be trained once.

Tip the Balance: Improving Exploration of Balanced Crossover Operators by Adaptive Bias

The use of balanced crossover operators in Genetic Algorithms (GA) ensures that the binary strings generated as offspring have the same Hamming weight as the parents, a constraint sought in certain discrete optimization problems. Although this method reduces the size of the search space, the resulting fitness landscape often becomes more difficult for the GA to explore and to discover optimal solutions in. This issue is studied in this paper by applying an adaptive bias strategy to a counter-based crossover operator that introduces unbalancedness in the offspring with a certain probability, which is decreased throughout the evolutionary process. Experiments show that improving the exploration of the search space with this adaptive bias strategy is beneficial to GA performance, in terms of the number of optimal solutions found, on the balanced nonlinear Boolean functions problem.
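The counter-based operator with adaptive bias can be sketched as follows; the parent encoding and the linear decay schedule are illustrative assumptions:

```python
import random

def counter_crossover(p1, p2, bias_prob=0.0):
    """Counter-based crossover. With bias_prob == 0 the child keeps the
    parents' Hamming weight exactly; with probability bias_prob per position
    the counters are ignored, allowing a slightly unbalanced child."""
    n = len(p1)
    target_ones = sum(p1)            # parents assumed to share the same weight
    ones_left = target_ones
    zeros_left = n - target_ones
    child = []
    for i in range(n):
        bit = random.choice((p1[i], p2[i]))
        if random.random() >= bias_prob:       # enforce the balance counters
            if bit == 1 and ones_left <= 0:
                bit = 0
            elif bit == 0 and zeros_left <= 0:
                bit = 1
        if bit == 1:
            ones_left -= 1
        else:
            zeros_left -= 1
        child.append(bit)
    return child

def adaptive_bias(gen, max_gen, p0=0.2):
    """Unbalancedness probability, decreased throughout the run."""
    return p0 * (1 - gen / max_gen)
```

With `bias_prob = 0` the counters force every child back to the target weight, recovering the strictly balanced operator; early in the run `adaptive_bias` permits occasional unbalanced offspring to improve exploration, and the bias vanishes by the final generation.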

Multi-Objective Counterfactual Explanations

Counterfactual explanations are among the most popular methods for making the predictions of black-box machine learning models interpretable by providing explanations in the form of 'what-if scenarios'. Current approaches can compute counterfactuals only for certain model classes or feature types, or they generate counterfactuals that are not consistent with the observed data distribution. To overcome these limitations, we propose the Multi-Objective Counterfactuals (MOC) method, which translates the counterfactual search into a multi-objective optimization problem and solves it with a genetic algorithm based on NSGA-II. It returns a diverse set of counterfactuals with different trade-offs between the proposed objectives, enabling either a more detailed post-hoc analysis to facilitate better understanding or more options for actionable user responses to change the predicted outcome. We show the usefulness of MOC in concrete cases and compare our approach with state-of-the-art methods for counterfactual explanations.

Evolving Dyadic Strategies for a Cooperative Physical Task

Many cooperative physical tasks require that individuals play specialized roles (e.g., leader-follower). Humans are adept cooperators, negotiating these roles and the transitions between them innately. Yet how roles are delegated and reassigned is not well understood. Using a genetic algorithm, we evolve simulated agents to explore a space of feasible role-switching policies. Applying these switching policies in a cooperative manual task, agents process visual and haptic cues to decide when to switch roles. We then analyze the evolved virtual population for attributes typically associated with cooperation: load sharing and temporal coordination. We find that the best performing dyads exhibit high temporal coordination (anti-synchrony), and that, in turn, anti-synchrony is correlated with symmetry between the parameters of the cooperating agents. These simulations furnish hypotheses as to how human cooperators might mediate roles in dyadic tasks.

Devolutionary genetic algorithms with application to the minimum labeling Steiner tree problem

This paper characterizes and discusses devolutionary genetic algorithms and evaluates their performance in solving the minimum labeling Steiner tree (MLST) problem. We define devolutionary algorithms as processes that reach a feasible solution by devolving a population of super-optimal, infeasible solutions over time. We argue that distinguishing them from the widely used evolutionary algorithms is relevant. The most important distinction lies in the fact that in the former type of process, the value function decreases over successive generations of solutions, thus providing a natural stopping condition for the computation. We show how classical evolutionary concepts, such as crossover, mutation, and fitness, can be adapted to aim at reaching an optimal or close-to-optimal solution among the first generations of feasible solutions. We additionally introduce a novel integer linear programming formulation of the MLST problem and a valid constraint used for speeding up the devolutionary process. Finally, we conduct an experiment comparing the performance of devolutionary algorithms to that of state-of-the-art approaches on randomly generated instances of the MLST problem. The results of this experiment support the use of devolutionary algorithms for the MLST problem and their development for other NP-hard combinatorial optimization problems.

Leveraging Cognitive Search Patterns to Enhance Automated Natural Language Retrieval Performance

The search for information in large text repositories has been plagued by the so-called document-query vocabulary gap, i.e., the semantic discordance between the contents of the stored documents on the one hand and the human query on the other. Over the past two decades, a significant body of work has advanced technical retrieval prowess, while several studies have shed light on issues pertaining to human search behavior. We believe these efforts should be conjoined, in the sense that automated retrieval systems have to fully emulate human search behavior and thus consider the procedure by which users incrementally enhance their initial query. To this end, cognitive reformulation patterns that mimic user search behaviour are highlighted, and enhancement terms that are statistically collocated with or lexical-semantically related to the original terms are adopted in the retrieval process. We formalize the application of these patterns by considering a conceptual representation of the query and introducing a set of operations that modify the initial query. A genetic algorithm-based weighting process places emphasis on terms according to their conceptual role-type. An experimental evaluation on real-world datasets against relevance, language, conceptual, and knowledge-based models is conducted. We also show better performance in terms of mean average precision than language models, relevance models, and a word embedding-based model instantiation.

Toward Terabits-per-second Communications: Low-Complexity Parallel Decoding of $G_N$-Coset Codes

Recently, a parallel decoding framework for $G_N$-coset codes was proposed, achieving high throughput by decoding the independent component polar codes in parallel. Various algorithms can be employed to decode these component codes, enabling a flexible throughput-performance tradeoff. In this work, we adopt successive cancellation (SC) component decoders to reach the highest-throughput end of the tradeoff. The benefits over soft-output component decoders are reduced complexity and simpler (binary) interconnections among component decoders. To limit the performance degradation, we integrate an error detector and a log-likelihood ratio (LLR) generator into each component decoder. The LLR generator, specifically the damping factors therein, is designed with a genetic algorithm. This low-complexity design achieves an area efficiency of $533\,Gbps/mm^2$ in 7 nm technology.

R Package SamplingStrata: New Developments and Extension to Spatial Sampling

The R package SamplingStrata was developed in 2011 as an instrument to optimize the design of stratified samples. The optimization is performed by considering the stratification variables available in the sampling frame and the precision constraints on the target estimates of the survey (Ballin & Barcaroli, 2014). The genetic algorithm at the core of the optimization step explores the universe of possible alternative stratifications, determining for each of them the best allocation, that is, the one of minimum total size that satisfies the precision constraints; the final optimal solution is the one that ensures the global minimum sample size. One fundamental requirement for making this approach feasible is the ability to estimate the variability of target variables in the generated strata. In general, since target variable values are not available in the frame but only proxies are, the anticipated variance is calculated by modelling the relations between target and proxy variables. In the case of spatial sampling, it is important to consider not only the total model variance but also the covariance derived from spatial auto-correlation. The latest release of SamplingStrata takes both variance components into account, thus allowing spatial auto-correlation to be harnessed in order to obtain more efficient samples.

The $(1+(λ,λ))$ Genetic Algorithm for Permutations

The $(1+(\lambda,\lambda))$ genetic algorithm is a prime example of an evolutionary algorithm developed on the basis of insights from theoretical findings. This algorithm uses crossover, and it was shown to asymptotically outperform all mutation-based evolutionary algorithms even on simple problems like OneMax. Subsequently it was studied on a number of other problems, but all of these were pseudo-Boolean. We aim to improve this situation by proposing an adaptation of the $(1+(\lambda,\lambda))$ genetic algorithm to permutation-based problems. Such an adaptation is required because permutations differ noticeably from bit strings in some key aspects, such as the number of possible mutations and their mutual dependence. We also present the first runtime analysis of this algorithm on a permutation-based problem called Ham, whose properties resemble those of OneMax. On this problem, where simple mutation-based algorithms have a running time of $\Theta(n^2 \log n)$ for problem size $n$, the $(1+(\lambda,\lambda))$ genetic algorithm finds the optimum in $O(n^2)$ fitness queries. We augment this analysis with experiments, which show that this algorithm is also fast in practice.
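For reference, a minimal sketch of the original pseudo-Boolean $(1+(\lambda,\lambda))$ GA on OneMax (not the permutation adaptation proposed here), with the standard parameters of mutation rate $\lambda/n$ and crossover bias $1/\lambda$:

```python
import random

def one_max(x):
    return sum(x)

def opll_ga(n=30, lam=8, max_iters=20000, seed=5):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    for _ in range(max_iters):
        if fx == n:
            break
        # Mutation phase: sample ell ~ Bin(n, lam/n); each of the lam mutants
        # flips exactly ell (freshly chosen) bits; keep the best mutant.
        ell = sum(rng.random() < lam / n for _ in range(n))
        best_mut, best_mut_f = x, -1
        for _ in range(lam):
            y = x[:]
            for pos in rng.sample(range(n), ell):
                y[pos] ^= 1
            fy = one_max(y)
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy
        # Crossover phase: lam biased uniform crossovers, taking each bit from
        # the best mutant with probability 1/lam; elitist replacement.
        best_cross, best_cross_f = x, fx
        for _ in range(lam):
            z = [best_mut[i] if rng.random() < 1 / lam else x[i] for i in range(n)]
            fz = one_max(z)
            if fz > best_cross_f:
                best_cross, best_cross_f = z, fz
        x, fx = best_cross, best_cross_f
    return x, fx
```

The two-phase structure is the point: an aggressive mutation rate explores far from the parent, and the biased crossover then repairs the collateral damage by inheriting only a $1/\lambda$ fraction of the mutant's bits.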

Color Image Segmentation using Adaptive Particle Swarm Optimization and Fuzzy C-means

Segmentation partitions an image into regions containing pixels with similar attributes. A standard non-contextual variant of the Fuzzy C-means clustering algorithm (FCM) is commonly used in image segmentation owing to its simplicity. However, FCM has disadvantages: it depends on the initial guess of the number of clusters and is highly sensitive to noise, so satisfactory visual segments cannot always be obtained. Particle Swarm Optimization (PSO) belongs to the class of evolutionary algorithms and has good convergence speed and fewer parameters compared with Genetic Algorithms (GAs). An optimized version of PSO can be combined with FCM to act as a proper initializer, thereby reducing FCM's sensitivity to the initial guess. A hybrid PSO algorithm named Adaptive Particle Swarm Optimization (APSO), which improves the calculation of various hyperparameters such as the inertia weight and learning factors over standard PSO using insights from swarm behaviour, can be used to improve cluster quality. This paper presents a new image segmentation algorithm called Adaptive Particle Swarm Optimization and Fuzzy C-means Clustering Algorithm (APSOF), based on APSO and Fuzzy C-means clustering. Experimental results show that APSOF has an edge over FCM in correctly identifying optimal cluster centers, thereby leading to more accurate classification of image pixels. Hence, APSOF outperforms classic Particle Swarm Optimization (PSO) and the Fuzzy C-means clustering algorithm (FCM) for image segmentation.

Wireless Power Transmitter Deployment for Balancing Fairness and Charging Service Quality

Wireless Energy Transfer (WET) has recently emerged as an appealing solution for powering mobile and Internet of Things (IoT) devices. As an enabling WET technology, Resonant Beam Charging (RBC) is well-documented for its long-range, high-power, safe, "WiFi-like" mobile power supply. To provide high-quality wireless charging services to multiple users in a given region, we formulate a deployment problem for multiple RBC transmitters that balances charging fairness and quality of charging service. Based on the RBC transmitter's coverage model and the receiver's charging/discharging model, a Genetic Algorithm (GA)-based and a Particle Swarm Optimization (PSO)-based scheme are put forth to solve this problem. Moreover, we present a scheduling method to evaluate the performance of the proposed algorithms. Numerical results corroborate that the optimized deployment schemes outperform uniform and random deployment with a 10%-20% improvement in charging efficiency.

A Heuristics-based Home Energy Management System for Demand Response

The so-called Internet of Things (IoT) and advanced communication technologies have already demonstrated great potential for managing residential energy resources via demand-side management. This work presents a home energy management system focused on the energy reallocation problem, in which consumers shift their energy consumption patterns away from peak periods and/or high electricity prices. Our solution divides residential loads into two categories: (i) fixed-power appliances and (ii) flexible ones. We then formulate the problem as a constrained optimization problem, which is non-linear and cannot be solved mathematically in closed form. We employ and compare two well-known heuristics, the genetic algorithm (GA) and the harmony search algorithm (HSA), to minimize the electricity expense and the peak-to-average ratio (PAR). The two approaches are compared to the case where no reallocation happens. Our numerical results show that both methods, GA and HSA, can effectively reduce the electricity cost, by 0.9% and 3.98%, and the PAR, by 15% and 5.8%, respectively.

Multi-Objective Evolutionary Approach for the Performance Improvement of Learners Using Ensembling Feature Selection and Discretization Technique on Medical Data

Biomedical data is filled with continuous real values; these values in the feature set tend to create problems like underfitting, the curse of dimensionality, and an increase in misclassification rate due to higher variance. In response, pre-processing techniques applied to the dataset minimize these side effects and have shown success in maintaining adequate accuracy. Feature selection and discretization are two necessary pre-processing steps that have been effectively employed to handle the redundancies in biomedical data. However, previous works lack a unified effort integrating feature selection and discretization to solve the data redundancy problem, leaving the field disjoint and fragmented. This paper proposes a novel multi-objective dimensionality reduction framework that incorporates both discretization and feature reduction in an ensemble model performing feature selection and discretization. The selection of optimal features and the categorization of discretized and non-discretized features from the feature subset are governed by the multi-objective genetic algorithm NSGA-II. The two objectives, minimizing the error rate during feature selection and maximizing the information gain during discretization, are considered as the fitness criteria.

Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs

In computer science, there exists a large number of optimization problems defined on graphs, in which the goal is to find a best node-state configuration or a network structure such that a designed objective function is optimized under some constraints. These problems are notorious for their hardness, because most of them are NP-hard or NP-complete. Although traditional general methods such as simulated annealing (SA) and genetic algorithms (GA) have been devised for these hard problems, their accuracy and time consumption are often unsatisfactory in practice. In this work, we propose a simple, fast, and general algorithmic framework based on advanced automatic differentiation techniques empowered by deep learning frameworks. By introducing the Gumbel-softmax technique, we can optimize the objective function directly by gradient descent regardless of the discrete nature of the variables. We also introduce an evolution strategy in a parallel version of our algorithm. We test our algorithm on representative optimization problems on graphs, including modularity optimization from network science, the Sherrington-Kirkpatrick (SK) model from statistical physics, and the maximum independent set (MIS) and minimum vertex cover (MVC) problems from combinatorial optimization. High-quality solutions are obtained with much less time consumption compared with traditional approaches.
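A minimal sketch of the idea for MVC follows, with a hand-derived gradient standing in for a deep learning framework's autodiff. The binary Gumbel-softmax is realized via logistic (Gumbel-difference) noise through a tempered sigmoid; `lam`, `tau`, and the greedy repair step are illustrative choices, not the paper's configuration:

```python
import numpy as np

def relaxed_mvc(edges, n, lam=2.0, lr=0.1, steps=300, tau=1.0, seed=0):
    """Relaxed minimum vertex cover: minimize
    sum_i p_i + lam * sum_{(i,j)} (1 - p_i)(1 - p_j)
    where p_i is a Gumbel-softmax-relaxed binary node state."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n)                                   # one logit per node
    for _ in range(steps):
        noise = rng.gumbel(size=n) - rng.gumbel(size=n)   # binary Gumbel-softmax
        p = 1.0 / (1.0 + np.exp(-(theta + noise) / tau))  # relaxed node states
        grad_p = np.ones(n)                               # d/dp_i of sum_i p_i
        for i, j in edges:                                # d/dp of the edge penalty
            grad_p[i] -= lam * (1.0 - p[j])
            grad_p[j] -= lam * (1.0 - p[i])
        theta -= lr * grad_p * p * (1.0 - p) / tau        # chain rule through sigmoid
    cover = {i for i in range(n) if theta[i] > 0.0}       # round logits to a cover
    for i, j in edges:                                    # repair any uncovered edge
        if i not in cover and j not in cover:
            cover.add(i)
    return cover
```

The noise makes the discrete sampling differentiable in expectation, so plain gradient descent on the logits replaces combinatorial search; in the paper the same pattern is applied with autodiff and, optionally, an evolution strategy over parallel replicas.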

From Understanding Genetic Drift to a Smart-Restart Parameter-less Compact Genetic Algorithm

One of the key difficulties in using estimation-of-distribution algorithms is choosing the population size appropriately. Too small a value leads to genetic drift, which can cause enormous difficulties. In the regime with no genetic drift, however, the runtime is often roughly proportional to the population size, which renders large population sizes inefficient. Based on a recent quantitative analysis of which population sizes lead to genetic drift, we propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size without spending too much time in situations rendered unfavorable by genetic drift. We prove an easy mathematical runtime guarantee for this algorithm and conduct an extensive experimental analysis on four classic benchmark problems. The former shows that, under a natural assumption, our algorithm has performance similar to what is obtainable with the best population size. The latter confirms that missing the right population size can be highly detrimental, and shows that our algorithm, as well as a previously proposed parameter-less one based on parallel runs, avoids such pitfalls. Comparing the two approaches, ours profits from its ability to abort runs that are likely stuck in a genetic drift situation.
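A rough sketch of the restart scheme on OneMax, with a crude frequency-collapse test standing in for the paper's drift criterion (all constants here are illustrative placeholders):

```python
import random

def cga_run(n, K, budget, rng):
    """One compact-GA run with virtual population size K on OneMax.
    Aborts when a frequency collapses to its lower border early in the run
    (a crude stand-in for a genetic-drift indicator)."""
    freq = [0.5] * n
    best_f = -1
    for t in range(budget):
        x = [int(rng.random() < f) for f in freq]
        y = [int(rng.random() < f) for f in freq]
        if sum(y) > sum(x):
            x, y = y, x                                  # x is the winner
        for i in range(n):
            if x[i] != y[i]:                             # shift freq toward winner
                freq[i] += 1 / K if x[i] else -1 / K
                freq[i] = min(1 - 1 / n, max(1 / n, freq[i]))
        best_f = max(best_f, sum(x))
        if t < budget // 4 and any(f <= 1 / n for f in freq):
            return best_f, True                          # likely drift: abort run
    return best_f, False

def smart_restart_cga(n=20, seed=11):
    """Parameter-less scheme: restart with doubled K until a run finishes
    without signs of drift (or a size cap is reached)."""
    rng = random.Random(seed)
    K, best = 4, -1
    while K <= 256:
        f, aborted = cga_run(n, K, budget=40 * K, rng=rng)
        best = max(best, f)
        if not aborted and best == n:
            break
        K *= 2
    return best, K
```

Small K makes the frequencies jump in large steps and quickly hit a border, triggering the abort; doubling K on each restart is what keeps the scheme from wasting time in drift-dominated runs while still finding a workable population size automatically.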

Transfer-Learning-Aware Neuro-Evolution for Diseases Detection in Chest X-Ray Images

Training a neural network on images can incur excessive time costs because of the complexity of the architecture. Transfer learning and fine-tuning can help improve the time and cost efficiency of training, yet they require a lot of experimentation. Therefore, a method to find the best architecture for transfer learning and fine-tuning is needed. To address this problem, neuro-evolution using a genetic algorithm can be used to find the best architecture for transfer learning. To assess the performance of this approach, the ChestX-Ray14 dataset and DenseNet-121 as a base neural network model are used. This study uses the AUC score, differences in training execution time, and McNemar's test for significance testing. In terms of results, this study achieves a 5% difference in AUC score, 3% faster execution time, and statistical significance for most of the detected diseases. Finally, this study gives a concrete summary of how neuro-evolution can help with transfer learning and fine-tuning.

Modular-topology optimization with Wang tilings: An application to truss structures

Modularity is appealing for solving many problems in optimization. It brings the benefits of manufacturability and reconfigurability to structural optimization, and enables a trade-off between the computational performance of a Periodic Unit Cell (PUC) and the efficacy of non-uniform designs in multi-scale material optimization. Here, we introduce a novel strategy for concurrent minimum-compliance design of truss module topologies and their macroscopic assembly encoded using Wang tiling, a formalism providing independent control over the number of modules and their interfaces. We tackle the emerging bilevel optimization problem with a combination of meta-heuristics and mathematical programming. At the upper level, we employ a genetic algorithm to optimize module assemblies. For each assembly, we obtain optimal module topologies as a solution to a convex second-order conic program that exploits the underlying modularity, incorporating stress constraints, multiple load cases, and reuse of module(s) for various structures. Merits of the proposed strategy are illustrated with three representative examples, clearly demonstrating that the best designs obtained by our method reduce compliance by 56% to 69% compared to the PUC designs.

Understanding Aesthetic Evaluation using Deep Learning

A bottleneck in any evolutionary art system is aesthetic evaluation. Many different methods have been proposed to automate the evaluation of aesthetics, including measures of symmetry, coherence, complexity, contrast and grouping. The interactive genetic algorithm (IGA) relies on human-in-the-loop, subjective evaluation of aesthetics, but user fatigue and small population sizes limit the possibilities for large-scale search. In this paper we look at how recent advances in deep learning can assist in automating personal aesthetic judgement. Using a leading artist's computer art dataset, we use dimensionality reduction methods to visualise both genotype and phenotype space in order to support the exploration of new territory in any generative system. Convolutional neural networks trained on the user's prior aesthetic evaluations are used to suggest new possibilities similar to, or interpolating between, known high-quality genotype-phenotype mappings.

The $(1 + (\lambda,\lambda))$ GA Is Even Faster on Multimodal Problems

The $(1 + (\lambda,\lambda))$ genetic algorithm is a recent invention of the theory community. Rigorous runtime analyses on unimodal fitness functions showed that it can indeed be faster than classical evolutionary algorithms, though on these simple problems the gains were only moderate. In this work, we conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark. We show that with the right parameters, the $(1 + (\lambda,\lambda))$ GA optimizes any jump function with jump size $2 \le k \le n/16$ in expected time $O(n^{(k+1)/2} e^{O(k)} k^{-k/2})$, which, already for constant $k$, significantly outperforms standard mutation-based algorithms with their $\Theta(n^k)$ runtime and standard crossover-based algorithms with their $O(n^{k-1})$ runtime. Our work suggests some general advice on how to set the parameters of the $(1 + (\lambda,\lambda))$ GA, which might ease the further use of this algorithm.
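For readers unfamiliar with the algorithm, its two phases can be sketched as follows; this minimal version runs on OneMax with a static $\lambda$ and standard parameter coupling, not the jump-function parameters analyzed in the paper:

```python
import random

def onemax(x):
    return sum(x)

def opll_ga(n, lam, max_iters, seed=0):
    """Sketch of the (1 + (lambda, lambda)) GA on OneMax, static lambda.

    Mutation phase: draw one strength ell ~ Bin(n, lam/n), create lam
    offspring each flipping exactly ell random bits, keep the best.
    Crossover phase: create lam offspring taking each bit from the
    mutation winner with probability 1/lam (else from the parent),
    keep the best. Selection is elitist.
    """
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        if onemax(x) == n:
            break
        # mutation phase: common strength ell for all lam offspring
        ell = sum(random.random() < lam / n for _ in range(n))
        mut_winner, mut_fit = x, -1
        for _ in range(lam):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] = 1 - y[i]
            if onemax(y) > mut_fit:
                mut_winner, mut_fit = y, onemax(y)
        # crossover phase: biased crossover with the parent
        cross_winner, cross_fit = x, onemax(x)
        for _ in range(lam):
            z = [mut_winner[i] if random.random() < 1 / lam else x[i]
                 for i in range(n)]
            if onemax(z) > cross_fit:
                cross_winner, cross_fit = z, onemax(z)
        x = cross_winner  # elitist: never worse than the parent
    return x, onemax(x)
```

The point of the two phases is that an aggressive mutation rate $\lambda/n$ generates progress even far from the optimum, while the biased crossover repairs the collateral damage of the many flipped bits.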

Genetic Algorithm for the Weight Maximization Problem on Weighted Automata

The weight maximization problem (WMP) is the problem of finding the word of highest weight on a weighted finite state automaton (WFA). It is an essential question that emerges in many optimization problems in automata theory. Unfortunately, the general problem can be shown to be undecidable, whereas its bounded decisional version is NP-complete. Designing efficient algorithms that produce approximate solutions to the WMP in reasonable time is an appealing research direction that can lead to several new applications including formal verification of systems abstracted as WFAs. In particular, in combination with a recent procedure that translates a recurrent neural network into a weighted automaton, an algorithm for the WMP can be used to analyze and verify the network by exploiting the simpler and more compact automata model. In this work, we propose, implement and evaluate a metaheuristic based on genetic algorithms to approximate solutions to the WMP. We experimentally evaluate its performance on examples from the literature and show its potential on different applications.

Fast Mutation in Crossover-based Algorithms

The heavy-tailed mutation operator proposed in Doerr et al. (GECCO 2017), called fast mutation to agree with the previously used terminology, has so far been used successfully only in purely mutation-based algorithms. There, it can relieve the algorithm designer from finding the optimal mutation rate while still obtaining a performance close to the one the optimal mutation rate gives. In this first runtime analysis of a crossover-based algorithm using a heavy-tailed choice of the mutation rate, we show an even stronger impact. With a heavy-tailed mutation rate, the runtime of the $(1+(\lambda,\lambda))$ genetic algorithm on the OneMax benchmark function becomes linear in the problem size. This is asymptotically faster than with any static mutation rate and is the same asymptotic runtime that can be obtained with a self-adjusting choice of the mutation rate. This result is complemented by an empirical study which shows the effectiveness of fast mutation also on random MAX-3SAT instances.
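The heavy-tailed choice of the mutation rate can be sketched as follows: a mutation strength alpha is drawn from a power-law distribution on {1, ..., n/2}, and standard bit-flip mutation is applied with rate alpha/n. The exponent beta = 1.5 and the range are illustrative assumptions, not necessarily the paper's exact setting:

```python
import random

def power_law_alpha(n, beta=1.5):
    """Sample mutation strength alpha from {1, ..., n//2} with
    P(alpha = i) proportional to i**(-beta), a heavy-tailed law
    that strongly favors small strengths but occasionally
    produces large ones."""
    weights = [i ** -beta for i in range((n // 2) and 1, n // 2 + 1)]
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights, start=1):
        r -= w
        if r <= 0:
            return i
    return n // 2  # numerical safety net

def heavy_tailed_mutation(x, beta=1.5):
    """Standard bit-flip mutation with rate alpha/n,
    alpha freshly drawn from the power law each call."""
    n = len(x)
    rate = power_law_alpha(n, beta) / n
    return [1 - b if random.random() < rate else b for b in x]
```

The practical appeal is that no single mutation rate has to be chosen in advance: the occasional large alpha covers situations where many bits must flip at once, at only a polynomially bounded overhead elsewhere.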

Service Level Driven Job Scheduling in Multi-Tier Cloud Computing: A Biologically Inspired Approach

Cloud computing environments often have to deal with random-arrival computational workloads that vary in resource requirements and demand high Quality of Service (QoS) obligations. It is typical that a Service-Level-Agreement (SLA) is employed to govern the QoS obligations of the cloud computing service provider to the client. A typical challenge service providers face every day is maintaining a balance between the limited resources available for computing and the high QoS requirements of varying random demands. Any imbalance in managing these conflicting objectives may result in either dissatisfied clients and potentially significant commercial penalties, or an over-resourced cloud computing environment that can be significantly costly to acquire and operate. Thus, scheduling the clients' workloads as they arrive at the environment to ensure their timely execution has been a central issue in cloud computing. Various approaches have been reported in the literature to address this problem: Shortest-Queue, Join-Idle-Queue, Round Robin, MinMin, MaxMin, and Least Connection, to name a few. However, the optimization strategies of such approaches fail to capture QoS obligations and their associated commercial penalties. This paper presents an approach for service-level driven load scheduling and balancing in multi-tier environments. Joint scheduling and balancing operations are employed to distribute and schedule jobs among the resources, such that the total waiting time of client jobs is minimized, and thus the potential of a penalty to be incurred by the service provider is mitigated. A penalty model is used to quantify the penalty the service provider incurs as a function of the jobs' total waiting time. A Virtual-Queue abstraction is proposed to facilitate optimal job scheduling at the tier level. Since this problem is NP-complete, a genetic algorithm is proposed for computing job schedules.

QoS-Driven Job Scheduling: Multi-Tier Dependency Considerations

For a cloud service provider, delivering optimal system performance while fulfilling Quality of Service (QoS) obligations is critical for maintaining a viably profitable business. This goal is often hard to attain given the irregular nature of cloud computing jobs. These jobs expect high QoS in an on-demand fashion, that is, on random arrival. To optimize the response to such client demands, cloud service providers organize the cloud computing environment as a multi-tier architecture. Each tier executes its designated tasks and passes the job to the next tier, in a fashion similar, but not identical, to traditional job-shop environments. An optimization process must take place to schedule the appropriate tasks of the job on the resources of the tier, so as to meet the QoS expectations of the job. Existing approaches employ scheduling strategies that consider performance optimization at the individual resource level and produce optimal single-tier-driven schedules. Due to the sequential nature of the multi-tier environment, the impact of such schedules on the performance of other resources and tiers tends to be ignored, resulting in a less than optimal performance when measured at the multi-tier level. In this paper, we propose a multi-tier-oriented job scheduling and allocation technique. The scheduling and allocation process is formulated as a problem of assigning jobs to the resource queues of the cloud computing environment, where each resource of the environment employs a queue to hold the jobs assigned to it. The scheduling problem is NP-hard; as such, a biologically inspired genetic algorithm is proposed. The computing resources across all tiers of the environment are virtualized as one resource by means of a single-queue virtualization. A chromosome that mimics the sequencing and allocation of the tasks in the proposed virtual queue is devised.
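A common way to realize such a chromosome in scheduling GAs is a permutation of job IDs giving their order in the virtual queue, combined with an order-preserving crossover so offspring remain valid schedules. The sketch below is a hypothetical illustration: the order crossover (OX) operator and the waiting-time fitness are our assumptions, not the paper's exact operators:

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX): copy a random slice from parent p1,
    fill the remaining positions with the missing jobs in p2's order.

    This keeps the child a valid permutation, which a plain
    one-point crossover on job sequences would not.
    """
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [j for j in p2 if j not in child[a:b + 1]]
    holes = [i for i in range(n) if child[i] is None]
    for i, j in zip(holes, fill):
        child[i] = j
    return child

def total_waiting_time(order, durations):
    """Fitness to minimize: each job waits for the total duration
    of the jobs queued before it."""
    wait, elapsed = 0, 0
    for job in order:
        wait += elapsed
        elapsed += durations[job]
    return wait
```

Under this encoding, shorter jobs drifting toward the front of the queue directly reduces the total waiting time, which is the quantity tied to the provider's penalty in these abstracts.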

Minimizing Energy Use of Mixed-Fleet Public Transit for Fixed-Route Service

Public transit can have significantly lower environmental impact than personal vehicles; however, it still uses a substantial amount of energy, causing air pollution and greenhouse gas emissions. While electric vehicles (EVs) can reduce energy use, most public transit agencies have to employ them in combination with conventional, internal-combustion engine vehicles due to the high upfront costs of EVs. To make the best use of such a mixed fleet of vehicles, transit agencies need to optimize route assignments and charging schedules, which presents a challenging problem for large public transit networks. We introduce a novel problem formulation to minimize fuel and electricity use by assigning vehicles to transit trips and scheduling them for charging while serving an existing fixed-route transit schedule. We present an integer program for optimal discrete-time scheduling, and we propose polynomial-time heuristic algorithms and a genetic algorithm for finding solutions for larger networks. We evaluate our algorithms on the transit service of a mid-size U.S. city using operational data collected from public transit vehicles. Our results show that the proposed algorithms are scalable and achieve near-minimum energy use.

GGA-MG: Generative Genetic Algorithm for Music Generation

Music Generation (MG) is an interesting research topic that links the art of music and Artificial Intelligence (AI). The goal is to train an artificial composer to generate infinite, fresh, and pleasurable musical pieces. Music has different parts such as melody, harmony, and rhythm. In this paper, we propose a Generative Genetic Algorithm (GGA) to produce a melody automatically. The main GGA uses a Long Short-Term Memory (LSTM) recurrent neural network as its objective function, which must be trained on a spectrum of bad-to-good melodies. These melodies are provided by another GGA with a different objective function. Good melodies have been provided by the CAMPINs collection. We also consider rhythm in this work. The experimental results clearly show that the proposed GGA method is able to generate eligible melodies with natural transitions and without rhythm errors.

Continuously tuning the refractive indices by restructuring materials on the 20-1000 atom scale: improving anti-reflection coating designs

We demonstrate the capability of block-copolymer templating to tune the refractive indices of functional oxides over a broad range by structuring materials on the 20-1000 atom scale, with simple one-pot synthesis. The presented method is then combined with genetic-algorithm-based optimization to explore its application to anti-reflection coating design. Merging these techniques allows for the realization of a minimal two-layer anti-reflection stack for silicon, reducing broadband reflectivity to just 3% from the nominal value of 38% over a 120° angular span, validated by fabrication and subsequent optical measurements.
