Exact and Metaheuristic Approaches for the Production Leveling Problem

In this paper we introduce a new problem in the field of production planning which we call the Production Leveling Problem. The task is to assign orders to production periods such that the load in each period and on each production resource is balanced, capacity limits are not exceeded and the orders' priorities are taken into account. Production Leveling is an important intermediate step between long-term planning and the final scheduling of orders within a production period, as it is responsible for selecting good subsets of orders to be scheduled within each period. A formal model of the problem is proposed and NP-hardness is shown by reduction from Bin Packing. As an exact method for solving moderately sized instances we introduce a mixed-integer programming (MIP) formulation. For solving large problem instances, metaheuristic local search is investigated. A greedy heuristic and two neighborhood structures for local search are proposed, which we apply within Variable Neighborhood Descent and Simulated Annealing. Regarding exact techniques, the main research question is up to which instance size the problem is solvable within a fixed amount of time. For the metaheuristic approaches the aim is to show that they produce near-optimal solutions for smaller instances, but also scale well to very large instances. A set of realistic problem instances from an industrial partner is contributed to the literature, as well as random instance generators. The experimental evaluation shows that the proposed MIP model works well for instances with up to 250 orders. Of the investigated metaheuristic approaches, Simulated Annealing achieves the best results. It is shown to produce solutions with less than 3% average optimality gap on small instances and to scale well up to thousands of orders and dozens of periods and products. The presented metaheuristic methods are already being used in industry.
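
The move-based metaheuristics described above can be sketched generically. The following is a minimal, hypothetical rendering of Simulated Annealing for balancing order loads across periods; the objective, the reassignment move, and all parameter names are illustrative and not the paper's exact model (which also handles multiple resources, capacities and priorities):

```python
import math
import random

def period_loads(assignment, demands, n_periods):
    """Sum of order demands assigned to each period."""
    loads = [0.0] * n_periods
    for order, period in enumerate(assignment):
        loads[period] += demands[order]
    return loads

def imbalance(assignment, demands, n_periods):
    """Toy objective: squared deviation of period loads from the mean (lower is better)."""
    loads = period_loads(assignment, demands, n_periods)
    mean = sum(loads) / n_periods
    return sum((l - mean) ** 2 for l in loads)

def anneal(demands, n_periods, steps=20000, t0=10.0, alpha=0.9995, seed=0):
    rng = random.Random(seed)
    assignment = [rng.randrange(n_periods) for _ in demands]
    cost = imbalance(assignment, demands, n_periods)
    t = t0
    for _ in range(steps):
        order = rng.randrange(len(demands))
        old = assignment[order]
        new = rng.randrange(n_periods)
        if new == old:
            continue
        assignment[order] = new  # move: reassign one order to another period
        new_cost = imbalance(assignment, demands, n_periods)
        # Metropolis criterion: always accept improvements, accept a
        # worsening move with probability exp(-delta / t).
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / t):
            assignment[order] = old  # reject and undo
        else:
            cost = new_cost
        t *= alpha  # geometric cooling schedule
    return assignment, cost
```

The same neighborhood could equally be explored by Variable Neighborhood Descent, which simply accepts only improving moves and switches neighborhoods when none remain.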

Classification and Recognition of Encrypted EEG Data Based on Neural Network

With the rapid development of machine learning applied to electroencephalography (EEG) signals, the Brain-Computer Interface (BCI) has emerged as a novel and convenient means of human-computer interaction for smart homes, intelligent healthcare and other Internet of Things (IoT) scenarios. However, security issues such as sensitive information disclosure and unauthorized operations have not received sufficient attention. Existing solutions for encrypted EEG data still have defects such as low accuracy, high time complexity or slow processing speed. For this reason, a classification and recognition method for encrypted EEG data based on a neural network is proposed, which adopts the Paillier encryption algorithm to encrypt EEG data and meanwhile resolves the problem of floating-point operations. In addition, it improves the traditional feed-forward neural network (FNN) by using an approximate function in place of the activation function, and realizes multi-classification of encrypted EEG data. Extensive experiments are conducted to explore the effect of several factors (such as the hidden neuron size and the learning rate updated by an improved simulated annealing algorithm) on the recognition results. Following a security and time cost analysis, the proposed model and approach are validated and evaluated on public EEG datasets provided by PhysioNet, BCI Competition IV and EPILEPSIAE. The experimental results show that our proposal achieves satisfactory accuracy, efficiency and feasibility compared with other solutions.

On the comparison of optimization algorithms for the random-field Potts model

For many systems with quenched disorder the study of ground states can crucially contribute to a thorough understanding of the physics at play, be it for the critical behavior if that is governed by a zero-temperature fixed point or for uncovering properties of the ordered phase. While ground states can in principle be computed using general-purpose optimization algorithms such as simulated annealing or genetic algorithms, it is often much more efficient to use exact or approximate techniques specifically tailored to the problem at hand. For certain systems with discrete degrees of freedom such as the random-field Ising model, there are polynomial-time methods to compute exact ground states. But even as the number of states increases beyond two as in the random-field Potts model, the problem becomes NP hard and one cannot hope to find exact ground states for relevant system sizes. Here, we compare a number of approximate techniques for this problem and evaluate their performance.

SANOM Results for OAEI 2019

Simulated annealing-based ontology matching (SANOM) participates for the second time in the Ontology Alignment Evaluation Initiative (OAEI) 2019. This paper describes the configuration of SANOM and its results on the anatomy and conference tracks. In comparison to OAEI 2017, SANOM has improved significantly, and its results are competitive with state-of-the-art systems. In particular, SANOM has the highest recall among the participating systems in the conference track, and is competitive with AML, the best performing system, in terms of F-measure. SANOM is also competitive with LogMap on the anatomy track, the best performing system in this track that uses no particular biomedical background knowledge. SANOM has been adapted to the HOBBIT platform and is now available to registered users.

Combinatorial Black-Box Optimization with Expert Advice

We consider the problem of black-box function optimization over the boolean hypercube. Despite the vast literature on black-box function optimization over continuous domains, not much attention has been paid to learning models for optimization over combinatorial domains until recently. However, the computational complexity of the recently devised algorithms is prohibitive even for moderate numbers of variables; drawing one sample using the existing algorithms is more expensive than a function evaluation for many black-box functions of interest. To address this problem, we propose a computationally efficient model learning algorithm based on multilinear polynomials and exponential weight updates. In the proposed algorithm, we alternate between simulated annealing with respect to the current polynomial representation and updating the weights using monomial experts' advice. Numerical experiments on various datasets in both unconstrained and sum-constrained boolean optimization indicate the competitive performance of the proposed algorithm, while improving the computational time by up to several orders of magnitude compared to state-of-the-art algorithms in the literature.
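
As a rough illustration of two of the ingredients named above, the sketch below evaluates a multilinear polynomial surrogate over the boolean hypercube and minimizes it with single-bit-flip simulated annealing. The exponential-weight update of the monomial experts is omitted, and all names and parameters are our own, not the paper's:

```python
import math
import random

def eval_multilinear(coeffs, x):
    """Evaluate a multilinear polynomial at a 0/1 point.
    coeffs maps a frozenset of variable indices (a monomial) to its weight;
    the monomial's value is the product of the selected bits."""
    return sum(w for mono, w in coeffs.items() if all(x[i] == 1 for i in mono))

def anneal_hypercube(coeffs, n, steps=5000, t0=2.0, alpha=0.999, seed=1):
    """Minimize the surrogate over {0,1}^n with a single-bit-flip neighborhood."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = cur = eval_multilinear(coeffs, x)
    best_x = x[:]
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] ^= 1  # flip one bit
        cand = eval_multilinear(coeffs, x)
        # accept improvements always, worsening moves with Metropolis probability
        if cand <= cur or rng.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best:
                best, best_x = cur, x[:]
        else:
            x[i] ^= 1  # undo the flip
        t *= alpha
    return best_x, best
```

In the full algorithm, the surrogate's monomial weights would themselves be updated from function evaluations between annealing rounds.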

Planning of School Teaching during COVID-19

More than one billion students are out of school because of COVID-19, forced into remote learning that has several drawbacks and has been hurriedly arranged; in addition, most countries are currently uncertain about how to plan school activities for the 2020-2021 school year. All of this makes learning and education some of the biggest worldwide issues of the current pandemic. Unfortunately, due to the length of the incubation period of COVID-19, fully opening schools seems impractical until a vaccine is available. In order to support the possibility of some in-person learning, we study a mathematical model of the diffusion of the epidemic due to school opening, and evaluate plans aimed at containing the extra COVID-19 cases due to school activities while ensuring an adequate number of in-class learning periods. We consider a SEAIR model with an external source of infection and a suitable loss function; after a realistic parameter selection, we numerically determine optimal school opening strategies by simulated annealing. It turns out that blended models, with almost periodic alternations of in-class and remote teaching days or weeks, are generally optimal. Besides containing COVID-19 diffusion, these solutions could be pedagogically acceptable, and could also become a driving model for society at large. In a prototypical example, the optimal strategy results in the school opening 90 days out of 200, with the number of COVID-19 cases among individuals related to the school increasing by about 66%, instead of the roughly 250% increase that would have been a consequence of full opening.

Sample Efficient Graph-Based Optimization with Noisy Observations

We study the sample complexity of optimizing "hill-climbing friendly" functions defined on a graph under noisy observations. We define a notion of convexity, and we show that a variant of best-arm identification can find a near-optimal solution after a small number of queries that is independent of the size of the graph. For functions that have local minima and are nearly convex, we establish a sample complexity bound for classical simulated annealing under noisy observations. We show the effectiveness of the greedy algorithm with restarts and of simulated annealing on problems of graph-based nearest neighbor classification as well as a web document re-ranking application.

Quantum Zeno approach with maximum stabilizer Hamiltonians for molecular energies

We consider the adiabatic and the simulated-annealing framework for the quantum chemistry of small molecules. The initial Hamiltonian is taken to be the maximum stabilizer Hamiltonian constructed from the final Hamiltonian for molecules in the Pauli basis. We compare two variants, with the second variant being the main contribution of this work. The first method is simply to use the adiabatic evolution on the obtained time- or path-dependent Hamiltonian with the initial state as the ground state of the maximum stabilizer Hamiltonian. However, this does suffer from the usual problems of adiabatic quantum computation due to the degeneracy and energy-level crossing in the path-dependent Hamiltonian. This problem is solved by a Zeno method, i.e., via the eigen-state projection used in the quantum simulated annealing, with the path-dependent Hamiltonian augmented by the sum of Pauli X terms whose contribution vanishes at the beginning and the end. In addition to the ground state, the low lying excited states can also be obtained using this quantum Zeno approach by varying initial states, whose accuracy is independent of the ground state.

CoolMomentum: A Method for Stochastic Optimization by Langevin Dynamics with Simulated Annealing

Deep learning applications require optimization of nonconvex objective functions. These functions have multiple local minima and their optimization is a challenging problem. Simulated Annealing is a well-established method for optimization of such functions, but its efficiency depends on the efficiency of the adopted sampling methods. We explore relations between Langevin dynamics and stochastic optimization. By combining the Momentum optimizer with Simulated Annealing, we propose CoolMomentum, a prospective stochastic optimization method. Empirical results confirm the efficiency of the proposed theoretical approach.
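
The idea of gradually "cooling" the momentum coefficient can be illustrated on a toy quadratic. This is our own minimal sketch, not the paper's exact algorithm (which derives the schedule from Langevin dynamics); the parameter names are illustrative:

```python
def cool_momentum(grad, w0, lr=0.05, rho0=0.99, cooling=0.999, steps=500):
    """Toy momentum optimizer whose momentum coefficient rho is gradually
    decayed, loosely mimicking a simulated-annealing temperature schedule:
    large rho early on lets the iterate explore, small rho late lets it settle."""
    w, v, rho = w0, 0.0, rho0
    for _ in range(steps):
        v = rho * v - lr * grad(w)  # momentum (heavy-ball) update
        w += v
        rho *= cooling              # cooling: effective temperature decreases
    return w
```

On a convex quadratic the iterate settles into the minimum once the momentum has cooled; in a nonconvex landscape the early high-momentum phase plays the role of the high-temperature exploration phase in annealing.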

Selective Inference for Latent Block Models

Model selection in latent block models has been a challenging but important task in the field of statistics. Specifically, a major challenge is encountered when constructing a test on a block structure obtained by applying a specific clustering algorithm to a finite size matrix. In this case, it becomes crucial to consider the selective bias in the block structure, that is, the block structure is selected from all the possible cluster memberships based on some criterion by the clustering algorithm. To cope with this problem, this study provides a selective inference method for latent block models. Specifically, we construct a statistical test on a set of row and column cluster memberships of a latent block model, which is given by a squared residue minimization algorithm. The proposed test, by its nature, includes and thus can also be used as the test on the set of row and column cluster numbers. We also propose an approximated version of the test based on simulated annealing to avoid combinatorial explosion in searching the optimal block structure. The results show that the proposed exact and approximated tests work effectively, compared to the naive test that did not take the selective bias into account.

Resource Allocation for mmWave-NOMA Communication through Multiple Access Points Considering Human Blockages

In this paper, a novel framework for optimizing the resource allocation in a millimeter-wave-non-orthogonal multiple access (mmWave-NOMA) communication for crowded venues is proposed. MmWave communications suffer from severe blockage caused by obstacles such as the human body, especially in a dense region. Thus, a detailed method for modeling the blockage events in in-venue scenarios is introduced. Also, several mmWave access points are considered in different locations. The resource allocation problem in this network is formulated as an optimization problem to maximize the network sum rate, which is NP-hard. Hence, a three-stage low-complexity solution is proposed to solve the problem. At first, a user scheduling algorithm, i.e., modified worst connection swapping (MWCS), is proposed. Secondly, the antenna allocation problem is solved using the simulated annealing algorithm. Afterward, to maximize the network sum rate and guarantee the quality of service constraints, a non-convex power allocation optimization problem is solved by adopting the difference of convex programming approach. The simulation results show that, under the blockage effect, the proposed mmWave-NOMA scheme performs, on average, $12\%$ better than the conventional mmWave-orthogonal multiple access (OMA) scheme. In addition, the proposed scheme considering blockage even outperforms the corresponding OMA system without blockage.

A hierarchical field-level inference approach to reconstruction from sparse Lyman-$α$ forest data

We address the problem of inferring the three-dimensional matter distribution from a sparse set of one-dimensional quasar absorption spectra of the Lyman-$\alpha$ forest. Using a Bayesian forward modelling approach, we focus on extending the dynamical model to a fully self-consistent hierarchical field-level prediction of redshift-space quasar absorption sightlines. Our field-level approach rests on a recently developed semiclassical analogue to Lagrangian perturbation theory (LPT), which improves over noise problems and interpolation requirements of LPT. It furthermore allows for a manifestly conservative mapping of the optical depth to redshift space. In addition, this new dynamical model naturally introduces a coarse-graining scale, which we exploit to accelerate the MCMC sampler using simulated annealing. By gradually reducing the effective temperature of the forward model, we can allow it to converge first on large spatial scales before the sampler becomes sensitive to the increasingly larger space of smaller scales. We demonstrate the advantages -- in terms of speed and noise properties -- of this field-level approach over using LPT as a forward model, and, using mock data, validate its performance to reconstruct three-dimensional primordial perturbations and matter distribution from sparse quasar sightlines.

Fast differentiable DNA and protein sequence optimization for molecular design

Designing DNA and protein sequences with improved or novel function has the potential to greatly accelerate synthetic biology. Machine learning models that accurately predict biological fitness from sequence are becoming a powerful tool for molecular design. Activation maximization offers a simple design strategy for differentiable models: one-hot coded sequences are first approximated by a continuous representation, which is then iteratively optimized with respect to the predictor oracle by gradient ascent. While elegant, this method is limited by technical challenges, as it suffers from vanishing gradients and may cause predictor pathologies leading to poor convergence. Here, we build on a previously proposed straight-through approximation method to optimize through discrete sequence samples. By normalizing nucleotide logits across positions and introducing an adaptive entropy variable, we remove bottlenecks arising from overly large or skewed sampling parameters. This results in a markedly improved algorithm with up to 100-fold faster convergence. Moreover, our method finds improved fitness optima compared to existing methods, including the original algorithm without normalization and global optimization heuristics such as Simulated Annealing. We demonstrate our improved method by designing DNA and enzyme sequences for six deep learning predictors, including a protein structure predictor (trRosetta).

Local semi-supervised approach to brain tissue classification in child brain MRI

Most segmentation methods in child brain MRI are supervised and are based on global intensity distributions of major brain structures. The successful implementation of a supervised approach depends on availability of an age-appropriate probabilistic brain atlas. For the study of early normal brain development, the construction of such a brain atlas remains a significant challenge. Moreover, using global intensity statistics leads to inaccurate detection of major brain tissue classes due to substantial intensity variations of MR signal within the constituent parts of early developing brain. In order to overcome these methodological limitations we develop a local, semi-supervised framework. It is based on Kernel Fisher Discriminant Analysis (KFDA) for pattern recognition, combined with an objective structural similarity index (SSIM) for perceptual image quality assessment. The proposed method performs optimal brain partitioning into subdomains having different average intensity values followed by SSIM-guided computation of separating surfaces between the constituent brain parts. The classified image subdomains are then stitched slice by slice via simulated annealing to form a global image of the classified brain. In this paper, we consider classification into major tissue classes (white matter and grey matter) and the cerebrospinal fluid and illustrate the proposed framework on examples of brain templates for ages 8 to 11 months and ages 44 to 60 months. We show that our method improves detection of the tissue classes by its comparison to state-of-the-art classification techniques known as Partial Volume Estimation.

Sum of Three Cubes via Optimisation

By first solving the equation $x^3+y^3+z^3=k$ with fixed $k$ for $z$ and then considering the distance to the nearest integer function of the result, we turn the sum of three cubes problem into an optimisation one. We then apply three stochastic optimisation algorithms to this function in the case with $k=2$, where there are many known solutions. The goal is to test the effectiveness of the method in searching for integer solutions. The algorithms are a modification of particle swarm optimisation and two implementations of simulated annealing. We want to compare their effectiveness as measured by the running times of the algorithms. To this end, we model the time data by assuming two underlying probability distributions -- exponential and log-normal, and calculate some numerical characteristics for them. Finally, we evaluate the statistical distinguishability of our models with respect to the geodesic distance in the manifold with the corresponding Fisher information metric.
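
The objective described in the first sentence can be written down directly; the function below is our own rendering of it (the stochastic search wrapped around it is left out):

```python
import math

def cube_distance(x, y, k):
    """Distance to the nearest integer of the real z solving x^3 + y^3 + z^3 = k.
    A value of (near) zero at integer (x, y) signals an integer solution."""
    r = k - x**3 - y**3
    # real cube root, valid for negative r as well
    z = math.copysign(abs(r) ** (1.0 / 3.0), r)
    return abs(z - round(z))
```

For $k=2$ the known parametric family $(1+6t^3,\,1-6t^3,\,-6t^2)$ gives, e.g., $(7,-5,-6)$, at which this objective vanishes; any of the three stochastic algorithms can then be run to drive `cube_distance` toward zero over integer pairs $(x, y)$.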

Atom Search Optimization with Simulated Annealing -- a Hybrid Metaheuristic Approach for Feature Selection

Hybrid metaheuristics are one of the most interesting recent trends in the field of optimization and feature selection (FS). In this paper, we propose a binary variant of Atom Search Optimization (ASO) and its hybrid with Simulated Annealing, called ASO-SA, for FS. In order to map the real values used by ASO to the binary domain of FS, we use two different transfer functions: S-shaped and V-shaped. We hybridize this technique with a local search technique, SA. We apply the proposed feature selection methods to 25 datasets from 4 different categories: UCI, handwritten digit recognition, text/non-text separation, and facial emotion recognition. We use 3 different classifiers (K-Nearest Neighbor, Multi-Layer Perceptron and Random Forest) to evaluate the strength of the features selected by binary ASO and ASO-SA, and compare the results with some recent wrapper-based algorithms. The experimental results confirm the superiority of the proposed method in terms of both classification accuracy and number of selected features.

Simulated Annealing for Volcano Muography

We discuss a geophysical inversion methodology for volcano muography based on the Simulated Annealing algorithm, using a semi-empirical model of the muon flux reaching the volcano, its surrounding topography and a framework for the energy loss of muons in rock. We determine the minimum muon energy -- as a function of the arrival direction -- needed to cross the volcanic edifice, the emerging integrated flux of muons and the density profile inside a model of Cerro Machín volcano (Tolima, Colombia) within a maximum error of $1\%$ with respect to the theoretical model.

Sparsifying Parity-Check Matrices

Parity-check matrices (PCMs) are used to define linear error-correcting codes and ensure reliable information transmission over noisy channels. The set of codewords of such a code is the null space of this binary matrix. We consider the problem of minimizing the number of one-entries in parity-check matrices. In the maximum-likelihood (ML) decoding method, the number of ones in PCMs is directly related to the time required to decode messages. We propose a simple matrix row manipulation heuristic which alters the PCM, but not the code itself. We apply simulated annealing and greedy local search to obtain PCMs with a small number of one-entries quickly, i.e., within a couple of minutes or hours on mainstream hardware. The resulting matrices provide faster ML decoding procedures, especially for large codes.
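
A minimal version of such a row-manipulation heuristic is sketched below. Replacing a row by its XOR with another row is an elementary row operation over GF(2): it changes the matrix but not its null space, i.e., not the code. The greedy-descent loop and all names are illustrative, not the paper's exact procedure:

```python
import random

def ones(matrix):
    """Total number of one-entries in a 0/1 matrix."""
    return sum(sum(row) for row in matrix)

def greedy_sparsify(matrix, rounds=100, seed=0):
    """Greedy local search: repeatedly pick two rows i, j and replace row i by
    (row i XOR row j) whenever that strictly reduces its number of ones.
    The row space, and hence the null space (the code), is preserved."""
    rng = random.Random(seed)
    for _ in range(rounds):
        i, j = rng.sample(range(len(matrix)), 2)
        candidate = [a ^ b for a, b in zip(matrix[i], matrix[j])]
        # reject zero rows (identical rows), which would drop the matrix rank
        if 0 < sum(candidate) < sum(matrix[i]):
            matrix[i] = candidate
    return matrix
```

Simulated annealing would use the same XOR move but also accept occasional density-increasing steps to escape local optima.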

Energy-efficient topology to enhance the wireless sensor network lifetime using connectivity control

Wireless sensor networks have attracted much attention because of their many applications in industry, the military, medicine, agriculture, and education, and a vast amount of research has been done to expand their applications and improve their efficiency. However, many challenges remain in different parts of such networks, one of the most important being the improvement of network lifetime. Since sensor nodes are generally battery-powered, the key issue in these networks is to reduce the power consumption of the nodes in such a way that the network lifetime increases to an acceptable level. The contribution of this paper is the use of topology control, a threshold on the remaining energy in nodes, and two metaheuristics, SA (Simulated Annealing) and VNS (Variable Neighbourhood Search), to increase the energy remaining in the sensors. Moreover, using a low-cost spanning tree, appropriate connectivity control among nodes is established in the network in order to increase the network lifetime. Simulation results show that the proposed method improves sensor lifetime and reduces the energy consumed.

Setting up experimental Bell test with reinforcement learning

Finding optical setups producing measurement results with a targeted probability distribution is hard as a priori the number of possible experimental implementations grows exponentially with the number of modes and the number of devices. To tackle this complexity, we introduce a method combining reinforcement learning and simulated annealing enabling the automated design of optical experiments producing results with the desired probability distributions. We illustrate the relevance of our method by applying it to a probability distribution favouring high violations of the Bell-CHSH inequality. As a result, we propose new unintuitive experiments leading to higher Bell-CHSH inequality violations than the best currently known setups. Our method might positively impact the usefulness of photonic experiments for device-independent quantum information processing.

Tree decompositions of real-world networks from simulated annealing

Decompositions of networks are useful not only for structural exploration. They also have implications and use in the analysis and computational solution of processes (such as the Ising model, percolation, or the SIR model) running on a given network. The tree and branch decompositions considered here directly represent network structure as trees for recursive computation of network properties. Unlike coarse-graining approximations in terms of community structure or metapopulations, tree decompositions of sufficiently small width allow for exact results on equilibrium processes. Here we use simulated annealing to find tree decompositions of narrow width for a set of medium-size empirical networks. Rather than optimizing tree decompositions directly, we employ a search space consisting of so-called elimination orders, i.e., permutations of the network's node set. For each network in a database of empirical networks with up to 1000 edges, we find a tree decomposition of low width.
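
The width associated with an elimination order can be computed by simulating the elimination process: eliminating a vertex turns its remaining neighbours into a clique, and the width is the largest such neighbour set encountered. The sketch below is a standard construction with names of our own choosing; simulated annealing would then search over permutations for an order of small width:

```python
def elimination_width(adj, order):
    """Width of the tree decomposition induced by an elimination order.
    adj maps each vertex to its neighbours; order is a permutation of vertices.
    Each bag is a vertex plus its remaining neighbours, so the decomposition
    width equals the largest remaining-neighbour set seen."""
    nbrs = {v: set(ns) for v, ns in adj.items()}
    width = 0
    for v in order:
        rest = nbrs[v]                   # still-uneliminated neighbours of v
        width = max(width, len(rest))
        for a in rest:
            nbrs[a] |= rest - {a}        # fill-in: neighbours become a clique
            nbrs[a].discard(v)
        del nbrs[v]
    return width
```

An annealing move on this search space is typically a swap of two positions in the order, with `elimination_width` as the energy to minimize.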

Structural Model Updating Using Adaptive Multi-Response Gaussian Process Meta-modeling

Finite element model updating utilizing frequency response functions as inputs is an important procedure in structural analysis, design and control. This paper presents a highly efficient framework that is built upon Gaussian process emulation to inversely identify model parameters through sampling. In particular, a multi-response Gaussian process (MRGP) meta-modeling approach is formulated that can accurately construct the error response surface, i.e., the discrepancies between the frequency response predictions and actual measurement. In order to reduce the computational cost of repeated finite element simulations, an adaptive sampling strategy is established, where the search of unknown parameters is guided by the response surface features. Meanwhile, the information of previously sampled model parameters and the corresponding errors is utilized as additional training data to refine the MRGP meta-model. Two stochastic optimization techniques, i.e., particle swarm and simulated annealing, are employed to train the MRGP meta-model for comparison. Systematic case studies are conducted to examine the accuracy and robustness of the new framework of model updating.

Data-Driven Optimization for Police Beat Design in South Fulton, Georgia

We redesign the police patrol beat in South Fulton, Georgia, in collaboration with the South Fulton Police Department (SFPD), using a predictive data-driven optimization approach. Due to rapid urban development and population growth, the original police beat design, done in the 1970s, was far from efficient, leading to low policing efficiency and long 911 call response times. We balance the police workload among different regions of the city, improve operational efficiency, and reduce 911 call response time by redesigning beat boundaries for the SFPD. We discretize the city into small geographical atoms, which correspond to our decision variables; the decision is to map the atoms into ``beats'', the basic unit of police operation. We analyze the workload and trend in each atom using a rich dataset of police incident reports and U.S. census data, and predict future police workload for each atom using spatial statistical regression models. Based on this, we formulate the optimal beat design as a mixed-integer programming (MIP) problem with contiguity and compactness constraints on the shape of the beats. The optimization problem is solved using simulated annealing due to its large-scale and non-convex nature. According to our simulation, the resulting beat design can reduce workload variance by over 90\%.

A stochastic approach to handle knapsack problems in the creation of ensembles

Ensemble-based methods are highly popular approaches that increase the accuracy of a decision by aggregating the opinions of individual voters. The common point is to maximize accuracy; however, a natural limitation occurs if incremental costs are also assigned to the individual voters. Consequently, we investigate creating ensembles under an additional constraint on the total cost of the members. This task can be formulated as a knapsack problem, where the energy is the ensemble accuracy formed by some aggregation rules. However, the generally applied aggregation rules lead to a nonseparable energy function, which takes the common solution tools -- such as dynamic programming -- out of action. We introduce a novel stochastic approach that considers the energy as the joint probability function of the member accuracies. This type of knowledge can be efficiently incorporated in a stochastic search process as a stopping rule, since we have the information on the expected accuracy or, alternatively, the probability of finding more accurate ensembles. Experimental analyses of the created ensembles of pattern classifiers and object detectors confirm the efficiency of our approach. Moreover, we propose a novel stochastic search strategy that better fits the energy, compared with general approaches such as simulated annealing.

A Neural Architecture Search based Framework for Liquid State Machine Design

Liquid State Machine (LSM), also known as the recurrent version of Spiking Neural Networks (SNNs), has attracted great research interest thanks to its high computational power, biological plausibility, simple structure and low training complexity. By exploring the design space in network architectures and parameters, recent works have demonstrated great potential for improving the accuracy of LSM models with low complexity. However, these works are based on manually defined network architectures or predefined parameters. Considering the diversity and uniqueness of brain structure, the design of LSM models should be explored in the largest search space possible. In this paper, we propose a Neural Architecture Search (NAS) based framework to explore both the architecture and parameter design space for automatic, dataset-oriented LSM models. To handle the exponentially growing design space, we adopt a three-step search for LSM: multi-liquid architecture search, variation in the number of neurons, and search over parameters such as percentage connectivity and excitatory neuron ratio within each liquid. In addition, we propose to use the Simulated Annealing (SA) algorithm to implement this three-step heuristic search. Three datasets, the image datasets MNIST and NMNIST and the speech dataset FSDD, are used to test the effectiveness of the proposed framework. Simulation results show that our framework can produce dataset-oriented optimal LSM models with high accuracy and low complexity. The best classification accuracies on the three datasets are 93.2%, 92.5% and 84%, respectively, with only 1000 spiking neurons, and the network connections can be reduced by 61.4% on average compared with a single LSM. Moreover, we find that the total number of neurons in the optimal LSM models on the three datasets can be further reduced by 20% with only about 0.5% accuracy loss.

Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs

In computer science there exist a large number of optimization problems defined on graphs, where the goal is to find a best node-state configuration or a network structure such that a designed objective function is optimized under given constraints. These problems are notorious for their hardness, as most of them are NP-hard or NP-complete. Although traditional general methods such as simulated annealing (SA) and genetic algorithms (GA) have been applied to these hard problems, their accuracy and time consumption are often unsatisfying in practice. In this work, we propose a simple, fast, and general algorithm framework based on advanced automatic differentiation techniques provided by deep learning frameworks. By introducing the Gumbel-softmax technique, we can optimize the objective function directly by gradient descent regardless of the discrete nature of the variables. We also introduce an evolution strategy for a parallel version of our algorithm. We test our algorithm on representative optimization problems on graphs, including modularity optimization from network science, the Sherrington-Kirkpatrick (SK) model from statistical physics, and the maximum independent set (MIS) and minimum vertex cover (MVC) problems from combinatorial optimization. High-quality solutions can be obtained with much less computation time compared to traditional approaches.
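The Gumbel-softmax trick the framework relies on can be illustrated in a few lines of standard-library Python (the paper's implementation, of course, lives inside a deep learning framework so the relaxed sample can be backpropagated through):

```python
import math
import random

def gumbel_softmax(logits, tau, rng):
    """Relaxed one-hot sample from a categorical distribution over the logits."""
    # Perturb each logit with Gumbel(0, 1) noise: g = logit - log(-log(U)).
    g = [l - math.log(-math.log(rng.random())) for l in logits]
    # Tempered softmax of the perturbed logits (max-subtracted for stability).
    m = max(x / tau for x in g)
    exps = [math.exp(x / tau - m) for x in g]
    z = sum(exps)
    return [e / z for e in exps]
```

As the temperature `tau` approaches zero the output approaches a one-hot vector (recovering the Gumbel-max trick), while remaining a smooth function of the logits, which is what makes direct gradient descent on discrete-variable objectives possible.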

Exponential Upper Bounds for the Runtime of Randomized Search Heuristics

We argue that proven exponential upper bounds on runtimes, an established area in classic algorithms, are also interesting in evolutionary computation, and we prove several such results. We show that any of randomized local search, the Metropolis algorithm, simulated annealing, and the (1+1) evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function under a large set of noise assumptions in a runtime that is at most exponential in the problem dimension~$n$. This drastically extends a previous such result, limited to the (1+1) EA, the LeadingOnes function, and one-bit or bit-wise prior noise with noise probability at most $1/2$, and at the same time simplifies its proof. With the same general argument, among others, we also derive a sub-exponential upper bound for the runtime of the $(1,\lambda)$ evolutionary algorithm on the OneMax problem when the offspring population size $\lambda$ is logarithmic, but below the efficiency threshold.

On the identifiability of Bayesian factor analytic models

A well-known identifiability issue in factor analytic models is the invariance with respect to orthogonal transformations. This problem burdens the inference under a Bayesian setup, where Markov chain Monte Carlo (MCMC) methods are used to generate samples from the posterior distribution. We introduce a post-processing scheme in order to deal with rotation, sign and permutation invariance of the MCMC sample. The exact version of the contributed algorithm requires solving $2^q$ assignment problems per (retained) MCMC iteration, where $q$ denotes the number of factors of the fitted model. For large numbers of factors, two approximate schemes based on simulated annealing are also discussed. We demonstrate that the proposed method leads to interpretable posterior distributions using synthetic and publicly available data from typical factor analytic models as well as mixtures of factor analyzers. An R package is available on CRAN.
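For intuition, the sign and permutation part of the post-processing (setting rotation aside) can be brute-forced for small $q$, mirroring the $2^q$ assignment problems: for each of the $2^q$ sign combinations we search over column permutations, here exhaustively rather than with an assignment solver. A toy sketch with hypothetical loading matrices, not the R package's code:

```python
import itertools

def align_loadings(sample, reference):
    """Align a p x q loading matrix (list of rows) to a reference by
    column sign flips and permutations, minimizing squared distance."""
    p, q = len(sample), len(sample[0])
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(q)):
        for signs in itertools.product([1, -1], repeat=q):
            cost = 0.0
            for i in range(p):
                for j in range(q):
                    d = signs[j] * sample[i][perm[j]] - reference[i][j]
                    cost += d * d
            if cost < best_cost:
                best_cost = cost
                best = [[signs[j] * sample[i][perm[j]] for j in range(q)]
                        for i in range(p)]
    return best, best_cost
```

The exhaustive permutation search is $O(q!)$; for larger $q$ one would solve each of the $2^q$ sign cases as an assignment problem, or fall back on the paper's simulated-annealing approximations.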

HopGAT: Hop-aware Supervision Graph Attention Networks for Sparsely Labeled Graphs

Due to the cost of labeling nodes, classifying a node in a sparsely labeled graph while maintaining prediction accuracy deserves attention. The key point is how the algorithm learns sufficient information from more neighbors at different hop distances. This study first proposes a hop-aware attention supervision mechanism for the node classification task. A simulated annealing learning strategy is then adopted to balance the two learning tasks, node classification and the hop-aware attention coefficients, along the training timeline. Compared with state-of-the-art models, the experimental results demonstrate the superior effectiveness of the proposed Hop-aware Supervision Graph Attention Networks (HopGAT) model. In particular, for the protein-protein interaction network, in a 40% labeled graph, the performance loss is only 3.9%, from 98.5% to 94.6%, compared to the fully labeled graph. Extensive experiments also demonstrate the effectiveness of the supervised attention coefficients and the learning strategy.

Hybrid Quantum Annealing via Molecular Dynamics

A novel quantum-classical hybrid scheme is proposed to efficiently solve large-scale combinatorial optimization problems. The key concept is to introduce a Hamiltonian dynamics of the classical flux variables associated with the quantum spins of the transverse-field Ising model. Molecular dynamics of the classical fluxes can be used as a powerful preconditioner to sort out the frozen and ambivalent spins for quantum annealers. The performance and accuracy of our smooth hybridization in comparison to the standard classical algorithms (the tabu search and the simulated annealing) are demonstrated by employing the MAX-CUT and Ising spin-glass problems.

A two-stage reconstruction of microstructures with arbitrarily shaped inclusions

The main goal of our research is to develop an effective method with a wide range of applications for the statistical reconstruction of heterogeneous microstructures with compact inclusions of any shape, such as highly irregular grains. The devised approach uses multi-scale extended entropic descriptors (ED) that quantify the degree of spatial non-uniformity of configurations of finite-sized objects. This technique is an innovative development of previously elaborated entropy methods for statistical reconstruction. For simplicity, we discuss the two-dimensional case, but this method can be generalized into three dimensions. At the first stage, the developed procedure creates a set of black synthetic clusters that serve as surrogate inclusions. The clusters have the same individual areas and interfaces as their target counterparts, but random shapes. Then, from a given number of easy-to-generate synthetic cluster configurations, we choose the one with the lowest value of the cost function defined by us using extended ED. At the second stage, we make a significant change in the standard technique of simulated annealing (SA). Instead of swapping pixels of different phases, we randomly move each of the selected synthetic clusters. To verify our method, we analyze a sample selected from the literature dealing with silica randomly dispersed in a rubber matrix. The results show that our method provides convincing realizations for this type of complex microstructures. The advantages of the two-stage reconstruction (TSR) include the ease of obtaining synthetic microstructures, very low computational costs and satisfactory mapping in the statistical context of inclusion shapes. Finally, the simplicity of TSR should greatly facilitate independent applications.

Energy-aware Allocation of Graph Jobs in Vehicular Cloud Computing-enabled Software-defined IoV

Software-defined internet of vehicles (SDIoV) has emerged as a promising paradigm to realize flexible and comprehensive resource management for next-generation automobile transportation systems. In this paper, a vehicular cloud computing-based SDIoV framework is studied, wherein the joint allocation of transmission power and graph jobs is formulated as a nonlinear integer programming problem. To address the problem effectively, a structure-preservation-based two-stage allocation scheme is proposed that decouples template searching from power allocation. Specifically, a hierarchical tree-based random subgraph isomorphism mechanism is applied in the first stage to identify potential mappings (templates) between the components of graph jobs and service providers. A structure-preserving simulated annealing-based power allocation algorithm is adopted in the second stage to achieve a trade-off between job completion time and energy consumption. Extensive simulations are conducted to verify the performance of the proposed algorithms.

An Artificially-intelligent Means to Escape Discreetly from the Departmental Holiday Party; guide for the socially awkward

We employ simulated annealing to identify the global solution of a dynamical model, to make a favorable impression upon colleagues at the departmental holiday party and then exit undetected as soon as possible. The procedure, Gradual Freeze-out of an Optimal Estimation via Optimization of Parameter Quantification - GFOOEOPQ, is designed for the socially awkward. The socially awkward among us possess little instinct for pulling off such a maneuver, and may benefit from a machine to do it for us. The method rests upon Bayes' Theorem, where the probability of a future model state depends on current knowledge of the model. Here, model state vectors are party attendees, and the future event of interest is their disposition toward us at times following the party. We want these dispositions to be favorable. To this end, we first interact so as to make favorable impressions, or at least ensure that these people remember having seen us there. Then we identify the exit that minimizes the chance that anyone notes how early we high-tailed it. Now, poorly-resolved estimates will correspond to degenerate solutions. As noted, we possess no instinct to identify a global optimum by ourselves. This can have disastrous consequences. For this reason, GFOOEOPQ employs annealing to iteratively home in on this optimum. The method is illustrated via a simulated event hosted by someone in the physics department (I am not sure who), in a two-bedroom apartment on the fifth floor of an elevator building in Manhattan, with viable Exit parameters front door, side door to a stairwell, fire escape, and a bathroom window that opens onto the fire escape. Preliminary tests are reported at two real social celebrations. The procedure is generalizable to corporate events and family gatherings. Readers are encouraged to report novel applications of GFOOEOPQ, to expand the algorithm.

Evolutionary Bin Packing for Memory-Efficient Dataflow Inference Acceleration on FPGA

Convolutional neural network (CNN) dataflow inference accelerators implemented in Field Programmable Gate Arrays (FPGAs) have demonstrated increased energy efficiency and lower latency compared to CNN execution on CPUs or GPUs. However, the complex shapes of CNN parameter memories do not typically map well to FPGA on-chip memories (OCM), which results in poor OCM utilization and ultimately limits the size and types of CNNs which can be effectively accelerated on FPGAs. In this work, we present a design methodology that improves the mapping efficiency of CNN parameters to FPGA OCM. We frame the mapping as a bin packing problem and determine that traditional bin packing algorithms are not well suited to solve the problem within FPGA- and CNN-specific constraints. We hybridize genetic algorithms and simulated annealing with traditional bin packing heuristics to create flexible mappers capable of grouping parameter memories such that each group optimally fits FPGA on-chip memories. We evaluate these algorithms on a variety of FPGA inference accelerators. Our hybrid mappers converge to optimal solutions in a matter of seconds for all CNN use-cases, achieve an increase of up to 65% in OCM utilization efficiency for deep CNNs, and are up to 200$\times$ faster than current state-of-the-art simulated annealing approaches.

Efficient Gaussian Process Bandits by Believing only Informative Actions

Bayesian optimization is a framework for global search via maximum a posteriori updates rather than simulated annealing, and it has gained prominence for decision-making under uncertainty. In this work, we cast Bayesian optimization as a multi-armed bandit problem, where the payoff function is sampled from a Gaussian process (GP). Further, we focus on action selection via the upper confidence bound (UCB) or expected improvement (EI) due to their prevalent use in practice. Prior works using GPs for bandits cannot allow the iteration horizon $T$ to be large, as the complexity of computing the posterior parameters scales cubically with the number of past observations. To circumvent this computational burden, we propose a simple statistical test: an action is only incorporated into the GP posterior when its conditional entropy exceeds an $\epsilon$ threshold. Doing so permits us to derive sublinear regret bounds of GP bandit algorithms, up to factors depending on the compression parameter $\epsilon$, for both discrete and continuous action sets. Moreover, the complexity of the GP posterior remains provably finite. Experimentally, we observe state-of-the-art accuracy and complexity tradeoffs for GP bandit algorithms applied to global optimization, suggesting the merits of compressed GPs in bandit settings.

Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM

The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with hidden variables. Many authors have modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC) approximations, Markov chain Monte Carlo approximations, and tempered approximations, among others. Most of the well-studied approximations belong to the stochastic class. By comparison, the literature is lacking when it comes to deterministic approximations. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse theoretically and empirically several approximations that fit into this framework. First, for cases with intractable E steps, we introduce a deterministic alternative to the MC-EM using Riemann sums. This method is easy to implement and does not require the tuning of hyper-parameters. Then, we consider the tempered approximation, borrowed from the Simulated Annealing optimisation technique and meant to improve the EM solution. We prove that the tempered EM satisfies the convergence guarantees for a wide range of temperature profiles. We showcase empirically how it is able to escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations to accomplish both their purposes.
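A minimal sketch of the tempering idea on a toy two-component Gaussian mixture of our own choosing (not the authors' code): the E-step responsibilities are raised to the power $1/T$ and renormalised, so temperatures above one flatten the posterior and make it easier for EM to move away from a poor initialisation.

```python
import math

def tempered_responsibilities(x, means, sigmas, weights, temp):
    """Tempered posterior probability of each mixture component for x."""
    dens = [w * math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
            for m, s, w in zip(means, sigmas, weights)]
    powered = [d ** (1.0 / temp) for d in dens]  # temper: flatten when T > 1
    z = sum(powered)
    return [p / z for p in powered]
```

At `temp=1` this is the standard E step; as `temp` grows the responsibilities approach uniform, and a decreasing temperature profile anneals back to the ordinary EM objective.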

Simulated annealing based heuristic for multiple agile satellites scheduling under cloud coverage uncertainty

Agile satellites are the new generation of Earth observation satellites (EOSs), with stronger attitude-maneuvering capability. Since optical remote sensing instruments on satellites cannot see through clouds, cloud coverage has a significant influence on satellite observation missions. We are the first to address the multiple agile EOS scheduling problem under cloud coverage uncertainty, where the objective is to maximize the entire observation profit. The chance-constrained programming model is adopted to describe the uncertainty, and the observation profit under cloud coverage uncertainty is then calculated via a sample approximation method. Subsequently, an improved simulated annealing based heuristic combined with a fast insertion strategy is proposed for large-scale observation missions. The experimental results show that the improved simulated annealing heuristic outperforms other algorithms for the multiple agile EOS scheduling problem under cloud coverage uncertainty, which verifies the efficiency and effectiveness of the proposed algorithm.

Traffic Signal Optimization on a Square Lattice using the D-Wave Quantum Annealer

The spread of intelligent transportation systems in urban cities has caused heavy computational loads, requiring a novel architecture for managing large-scale traffic. In this study, we develop a method for globally controlling traffic signals arranged on a square lattice by means of a quantum annealing machine, namely the D-Wave quantum annealer. We first formulate a signal optimization problem that minimizes the imbalance of traffic flows in two orthogonal directions. Then we reformulate this problem as an Ising Hamiltonian, which is fully compatible with quantum annealers. The new control method is compared with a conventional local control method for a large 50-by-50 city, and the results exhibit the superiority of our global control method in suppressing traffic imbalance over wide parameter ranges. Furthermore, the solutions to the global control method obtained with the quantum annealing machine are better than those obtained with conventional simulated annealing. In addition, we prove analytically that the local and global control methods converge in the limit where cars have equal probabilities of turning and going straight. These results are verified with numerical experiments.
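To make the Ising reformulation concrete, here is a small stand-in of our own (a simplified Hamiltonian, not the paper's exact one): spin $s=+1$ means north-south green and $s=-1$ east-west green, a field $h$ encodes the local flow imbalance at each intersection, and a coupling $J$ between lattice neighbours encourages coordinated signals. On a tiny lattice the ground state can be found by brute force; an annealer would be used for the full 50-by-50 problem.

```python
import itertools

def ising_energy(spins, h, j, n):
    """Energy of a spin configuration on an n x n lattice.
    spins: dict mapping (row, col) to +1 or -1."""
    e = sum(h[rc] * s for rc, s in spins.items())
    for r in range(n):
        for c in range(n):
            if r + 1 < n:
                e += j * spins[(r, c)] * spins[(r + 1, c)]
            if c + 1 < n:
                e += j * spins[(r, c)] * spins[(r, c + 1)]
    return e

def brute_force_ground_state(h, j, n):
    """Exhaustive minimization; only feasible for very small lattices."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    best, best_e = None, float("inf")
    for assign in itertools.product([1, -1], repeat=len(cells)):
        spins = dict(zip(cells, assign))
        e = ising_energy(spins, h, j, n)
        if e < best_e:
            best, best_e = spins, e
    return best, best_e
```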

On the Generalised Langevin Equation for Simulated Annealing

In this paper, we consider the generalised (higher order) Langevin equation for the purpose of simulated annealing and optimisation of nonconvex functions. Our approach modifies the underdamped Langevin equation by replacing the Brownian noise with an appropriate Ornstein-Uhlenbeck process to account for memory in the system. Under reasonable conditions on the loss function and the annealing schedule, we establish convergence of the continuous time dynamics to a global minimum. In addition, we investigate the performance numerically and show better performance and higher exploration of the state space compared to the underdamped and overdamped Langevin dynamics with the same annealing schedule.

On The Simulated Annealing In $\mathbf{R}^d$

Using a localization procedure and the result of Holley-Kusuoka-Stroock [7] in the torus, we widely weaken the usual growth assumptions concerning the success of the continuous-time simulated annealing in $\mathbf{R}^d$. Our only assumption is the existence of an invariant probability measure for a sufficiently low temperature. We also prove, in an appendix, a non-explosion criterion for a class of time-inhomogeneous diffusions.

Learning Complexity of Simulated Annealing

Simulated annealing is an effective and general means of optimization. It is in fact inspired by metallurgy, where the temperature of a material determines its behavior in thermodynamics. Likewise, in simulated annealing, the actions that the algorithm takes depend entirely on the value of a variable which captures the notion of temperature. Typically, simulated annealing starts with a high temperature, which makes the algorithm fairly unpredictable, and gradually cools the temperature down, making the algorithm more stable. A key component that plays a crucial role in the performance of simulated annealing is the criterion under which the temperature changes, namely, the cooling schedule. Motivated by this, we study the following question in this work: "Given enough samples of the instances of a specific class of optimization problems, can we design optimal (or approximately optimal) cooling schedules that minimize the runtime or maximize the success rate of the algorithm on average when the underlying problem is drawn uniformly at random from the same class?" We provide positive results both in terms of sample complexity and simulation complexity. For sample complexity, we show that $\tilde O(\sqrt{m})$ samples suffice to find an approximately optimal cooling schedule of length $m$. We complement this result by giving a lower bound of $\tilde \Omega(m^{1/3})$ on the sample complexity of any learning algorithm that provides an almost optimal cooling schedule. These results are general and rely on no assumptions. For simulation complexity, however, we make additional assumptions to measure the success rate of an algorithm. To this end, we introduce the monotone stationary graph, which models the performance of simulated annealing. Based on this model, we present polynomial-time algorithms with provable guarantees for the learning problem.
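The sample-based setting can be illustrated with a toy experiment, entirely our own construction (a synthetic instance class of random 1-D quartics and two hand-picked candidate schedules, not the paper's algorithms): estimate each candidate cooling schedule's empirical success rate on sampled instances and keep the best.

```python
import math
import random

def sa_success(instance_seed, schedule, trials=20):
    """Fraction of SA runs that find the minimum of a sampled quartic."""
    rng = random.Random(instance_seed)
    a = rng.uniform(0.5, 2.0)           # instance parameter
    f = lambda x: (x * x - a) ** 2      # minima at +/- sqrt(a)
    hits = 0
    for t in range(trials):
        run = random.Random(instance_seed * 1000 + t)
        x, fx = 3.0, f(3.0)
        for temp in schedule:           # one Metropolis step per schedule entry
            cand = x + run.gauss(0.0, 0.4)
            fc = f(cand)
            if fc <= fx or run.random() < math.exp(-(fc - fx) / temp):
                x, fx = cand, fc
        hits += fx < 1e-2
    return hits / trials

def best_schedule(schedules, sample_seeds):
    """Pick the schedule with the best average success rate on the samples."""
    rates = [sum(sa_success(s, sch) for s in sample_seeds) / len(sample_seeds)
             for sch in schedules]
    return max(range(len(schedules)), key=lambda i: rates[i]), rates
```

This empirical-risk-style selection is only meant to convey what "learning a cooling schedule from samples" means; the paper's contribution is the sample- and simulation-complexity analysis of such learners.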

Approximating Optimal Bidirectional Macro Schemes

Lempel-Ziv is an easy-to-compute member of a wide family of so-called macro schemes; it restricts pointers to go in one direction only. Optimal bidirectional macro schemes are NP-complete to find, but they may provide much better compression on highly repetitive sequences. We consider the problem of approximating optimal bidirectional macro schemes. We describe a simulated annealing algorithm that usually converges quickly. Moreover, in some cases, we obtain bidirectional macro schemes that are provably a 2-approximation of the optimal. We test our algorithm on a number of artificial repetitive texts and verify that it is efficient in practice and outperforms Lempel-Ziv, sometimes by a wide margin.

Variational Auto-Encoder: not all failures are equal

We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distribution class used for the observation model. A first theoretical and experimental contribution of the paper is to establish that, even in the large sample limit with arbitrarily powerful neural architectures and latent space, the VAE fails if the sharpness of the distribution class does not match the scale of the data. Our second claim is that the distribution sharpness should preferably be learned by the VAE (as opposed to being fixed and optimized offline). Autonomously adjusting this sharpness allows the VAE to dynamically control the trade-off between the optimization of the reconstruction loss and the latent compression. A second empirical contribution is to show how the control of this trade-off is instrumental in escaping poor local optima, akin to a simulated annealing schedule. Both claims are backed by experiments on artificial data, MNIST, and CelebA, showing how sharpness learning addresses the notorious VAE blurriness issue.

Explicitly Trained Spiking Sparsity in Spiking Neural Networks with Backpropagation

Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations. Many recent works have demonstrated effective backpropagation for deep Spiking Neural Networks (SNNs) by approximating gradients over discontinuous neuron spikes or firing events. A beneficial side-effect of these surrogate gradient spiking backpropagation algorithms is that the spikes, which trigger additional computations, may now themselves be directly considered in the gradient calculations. We propose an explicit inclusion of spike counts in the loss function, along with a traditional error loss, causing the backpropagation learning algorithms to optimize weight parameters for both accuracy and spiking sparsity. As supported by existing theory of over-parameterized neural networks, there are many solution states with effectively equivalent accuracy. As such, appropriate weighting of the two loss goals during training in this multi-objective optimization process can yield an improvement in spiking sparsity without a significant loss of accuracy. We additionally explore a simulated annealing-inspired loss weighting technique to increase the weighting for sparsity as training time increases. Our preliminary results on the Cifar-10 dataset show up to 70.1% reduction in spiking activity with iso-accuracy compared to an equivalent SNN trained only for accuracy and up to 73.3% reduction in spiking activity if allowed a trade-off of 1% reduction in classification accuracy.
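The annealing-inspired weighting can be as simple as a monotone schedule on the sparsity term. The following is a hypothetical schedule of our own, with a made-up ramp constant; the paper's exact schedule may differ.

```python
import math

def sparsity_weight(epoch, total_epochs, w_max=1.0):
    """Monotone ramp from 0 toward w_max as training progresses, so early
    epochs prioritise accuracy and later epochs trade spikes for sparsity."""
    return w_max * (1.0 - math.exp(-5.0 * epoch / total_epochs))

def combined_loss(error_loss, spike_count, epoch, total_epochs):
    """Multi-objective loss: task error plus annealed spike-count penalty."""
    return error_loss + sparsity_weight(epoch, total_epochs) * spike_count
```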

Controlled Online Optimization Learning (COOL): Finding the ground state of spin Hamiltonians with reinforcement learning

Reinforcement learning (RL) has become a proven method for optimizing a procedure for which success has been defined, but the specific actions needed to achieve it have not. We apply the so-called "black box" method of RL to what has been referred to as the "black art" of simulated annealing (SA), demonstrating that an RL agent based on proximal policy optimization can, through experience alone, arrive at a temperature schedule that surpasses the performance of standard heuristic temperature schedules for two classes of Hamiltonians. When the system is initialized at a cool temperature, the RL agent learns to heat the system to "melt" it, and then slowly cool it in an effort to anneal to the ground state; if the system is initialized at a high temperature, the algorithm immediately cools the system. We investigate the performance of our RL-driven SA agent in generalizing to all Hamiltonians of a specific class; when trained on random Hamiltonians of nearest-neighbour spin glasses, the RL agent is able to control the SA process for other Hamiltonians, reaching the ground state with a higher probability than a simple linear annealing schedule. Furthermore, the scaling performance (with respect to system size) of the RL approach is far more favourable, achieving a performance improvement of one order of magnitude on L=14x14 systems. We demonstrate the robustness of the RL approach when the system operates in a "destructive observation" mode, an allusion to a quantum system where measurements destroy the state of the system. The success of the RL agent could have far-reaching impact, from classical optimization, to quantum annealing, to the simulation of physical systems.

Learning Optimal Temperature Region for Solving Mixed Integer Functional DCOPs

Distributed Constraint Optimization Problems (DCOPs) are an important framework that models coordinated decision-making problems in multi-agent systems with a set of discrete variables. Later work has extended this to model problems with a set of continuous variables (F-DCOPs). In this paper, we combine both of these models into the Mixed Integer Functional DCOP (MIF-DCOP) model, which can deal with problems regardless of their variables' types. We then propose a novel algorithm, called Distributed Parallel Simulated Annealing (DPSA), in which agents cooperatively learn the optimal parameter configuration for the algorithm while also solving the given problem using the learned knowledge. Finally, we empirically benchmark our approach in DCOP, F-DCOP and MIF-DCOP settings and show that DPSA produces solutions of significantly better quality than state-of-the-art non-exact algorithms in their corresponding settings.

RapidLayout: Fast Hard Block Placement of FPGA-optimized Systolic Arrays using Evolutionary Algorithms

Evolutionary algorithms can outperform conventional simulated annealing placement on metrics such as runtime, wirelength, pipelining cost, and clock frequency when mapping FPGA hard block intensive designs such as systolic arrays on Xilinx UltraScale+ FPGAs. Such designs can take advantage of repeatable design organization of the arrays, the columnar arrangement of hard blocks such as DSPs and RAMs, coupled with cascade nearest-neighbor interconnect for systolic-friendly data movement. However, the commercial-grade Xilinx Vivado CAD tool is unable to provide a legal routing solution for such hard block intensive designs without tedious manual placement constraints. Instead, we formulate an automatic FPGA placement algorithm for these hard blocks as a multi-objective optimization problem that targets wirelength squared and maximum bounding box size metrics. We build an end-to-end placement and routing flow called RapidLayout using the Xilinx RapidWright framework. RapidLayout runs 5--6$\times$ faster than Vivado with manual constraints, and eliminates the weeks long effort to manually generate placement constraints for the hard blocks. We also perform automated post-placement pipelining of the long wires inside each convolution block to target 650 MHz URAM-limited operation. SLR replication in RapidWright allows us to clone the placement and routing image for one SLR to the multi-SLR FPGA chip in a couple of minutes. When compared to a conventional simulated annealing placer, RapidLayout's evolutionary engine delivers up to 27% improvement in wirelength, 11% lower bounding box size, 4.2$\times$ reduction in runtime, and $\approx$7% reduction in pipeline registers for 650 MHz operation.

Charge ordering in the three-dimensional Coulomb glass at finite temperatures and low disorders

In this paper, we study the three-dimensional Coulomb glass lattice model at half-filling using Monte Carlo simulations. Annealing of the system shows a second-order transition from the paramagnetic to the charge-ordered phase for zero as well as small disorders. We also calculate the critical exponents and the transition temperature using a finite-size scaling approach. The Monte Carlo simulation uses the Metropolis algorithm, which allowed us to study larger lattice sizes. The transition temperature and the critical exponents at zero disorder match previous studies within numerical error. We find that the transition temperature decreases as the disorder increases. The values of the critical exponents $\alpha$ and $\gamma$ were smaller, and the value of $\nu$ larger, than the corresponding zero-disorder values. The use of large system sizes led to the correct variation of the critical exponents with the disorder.

Simulated Annealing with Adaptive Cooling Rates

As one of the most robust global optimization methods, simulated annealing has received considerable attention, with many variations that attempt to improve the cooling schedule. This paper introduces a variant of simulated annealing that is useful for optimizing atomistic structures, and makes use of the statistical mechanical properties of the system, determined on the fly during optimization, to adaptively control the cooling rate. The adaptive cooling approach is demonstrated to be more computationally efficient than classical simulated annealing, when applied to Lennard-Jones clusters. This increase in efficiency is approximately a factor of two for clusters with 25--40 atoms, and improves with the size of the system.
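One concrete, much simplified version of the idea (our own rule using recent energy fluctuations as a stand-in for the on-the-fly statistical-mechanical measurement; the paper's criterion for atomistic systems is more principled): cool slowly when the energy variance over a sliding window is large, since large fluctuations signal a region where hasty cooling would trap the system.

```python
import math
import random

def adaptive_sa(f, x0, steps=5000, t0=5.0, window=50, seed=0):
    """Simulated annealing on a 1-D function f with a fluctuation-driven
    cooling rate: larger recent energy variance -> smaller cooling step."""
    rng = random.Random(seed)
    x, fx, temp = x0, f(x0), t0
    best_x, best_f = x, fx
    recent = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = f(cand)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        recent.append(fx)
        if len(recent) > window:
            recent.pop(0)
            mean = sum(recent) / window
            var = sum((e - mean) ** 2 for e in recent) / window
            # adaptive cooling: shrink the step when fluctuations are large
            temp *= 1.0 - 0.01 / (1.0 + var / (temp * temp))
    return best_x, best_f
```

A classical geometric schedule would replace the last block with a fixed `temp *= alpha`; the adaptive variant spends more of its step budget near "phase-transition-like" temperatures, which is the efficiency gain the paper quantifies for Lennard-Jones clusters.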

The attractive Hubbard model as an $SO(3)$ system of competing phases: supersolid order and its thermal melting

Competition between superconductivity and charge order is a recurring theme in contemporary condensed matter physics. This is quintessentially captured in the attractive Hubbard model, a simple theoretical model where the competition can be directly tuned. In previous studies by the current authors, it has been suggested that the Hubbard model maps to an $SO(3)$ non-linear sigma model, where the phase competition becomes manifest. In this article, we rigorously demonstrate this mapping and use it to study thermal disordering of a supersolid. Starting with the attractive Hubbard model in the presence of an orbital field, we take the limit of strong coupling where a pseudospin description emerges. The in-plane pseudospin components represent superconducting pairing while the out-of-plane component encodes charge density wave order. We obtain an effective spin-$1/2$ Hamiltonian with ferromagnetic in-plane couplings and antiferromagnetic z-z couplings. In addition, the orbital field gives rise to a textured Dzyaloshinskii-Moriya interaction that has the same periodicity as the magnetic unit cell. In order to examine the nature of ordering in this spin model, we consider it in the classical limit. We assume slowly varying fields, leading to the $SO(3)$ non-linear sigma model description. As an application of these ideas, we study the nature of ordering using simulated annealing and classical Monte Carlo simulations. The ground state represents a supersolid with coexisting superconductivity and charge order. It can be viewed as a `meron crystal', a regular arrangement of superconducting vortices with charge-ordered cores. The coherent overlap of core regions gives rise to coherent long-ranged charge order. As the temperature is raised, this charge order is lost via a sharp phase transition in the Ising universality class.

A Simulated Annealing Algorithm for the Directed Steiner Tree Problem

In \cite{siebert2019linear} the authors present a set of integer programs (IPs) for the Steiner tree problem, which can be used for both the directed and the undirected setting of the problem. Each IP finds an optimal Steiner tree with a specific structure; a solution with the lowest cost corresponds to an optimal solution to the entire problem. The authors show that the linear programming relaxation of each IP is integral and that each IP is polynomial in the size of the instance; consequently, each can be solved in polynomial time. The main issue is that the number of IPs to solve grows exponentially with the number of terminal nodes, which makes this approach impractical for large instances. In this paper, we propose a local search procedure for the directed Steiner tree problem using the approach presented in \cite{siebert2019linear}. To do so, we present a dynamic programming algorithm to solve each IP efficiently. We then provide a characterization of the neighborhood of each tree structure. Finally, we use the proposed algorithm and the neighborhood characterization to solve the problem in a simulated annealing framework. Computational experiments show that the quality of the solutions delivered by our approach is better than that of the ones presented in the literature for the directed Steiner tree problem.

Fast Stable Parameter Estimation for Linear Dynamical Systems

Dynamical systems describe the changes in processes that arise naturally from their underlying physical principles, such as the laws of motion or the conservation of mass, energy or momentum. These models facilitate a causal explanation for the drivers and impediments of the processes. But do they describe the behaviour of the observed data? And how can we quantify the models' parameters that cannot be measured directly? This paper addresses these two questions by providing a methodology for estimating the solution and the parameters of linear dynamical systems from incomplete and noisy observations of the processes. The proposed procedure builds on the parameter cascading approach, where a linear combination of basis functions approximates the implicitly defined solution of the dynamical system. The system's parameters are then estimated so that this approximating solution adheres to the data. By taking advantage of the linearity of the system, we simplify the parameter cascading estimation procedure, and by developing a new iterative scheme, we achieve fast and stable computation. We illustrate our approach by obtaining a linear differential equation that represents real data from biomechanics. Comparing our approach with popular methods for estimating the parameters of linear dynamical systems, namely the non-linear least-squares approach, simulated annealing, parameter cascading and smooth functional tempering, reveals a considerable reduction in computation and improved bias and sampling variance.

Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks

An online resource scheduling framework is proposed for minimizing the sum of weighted task latencies for all Internet of Things (IoT) users by optimizing the offloading decisions, transmission power and resource allocation in a large-scale mobile edge computing (MEC) system. Towards this end, a deep reinforcement learning (DRL) based solution is proposed, which includes the following components. First, a related and regularized stacked auto-encoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which reduces the state space for DRL. Second, we present an adaptive simulated annealing based approach (ASA) as the action search method of the DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance the search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy. Numerical results demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.

Autonomous Control of a Line Follower Robot Using a Q-Learning Controller

In this paper, a MIMO simulated annealing (SA) based Q-learning method is proposed to control a line follower robot. The conventional controller for these types of robots is the proportional (P) controller. Given the unknown mechanical characteristics of the robot and uncertainties such as friction and slippery surfaces, system modeling and controller design can be extremely challenging. The mathematical model of the robot is presented in this paper, and a simulator is designed based on this model. Basic Q-learning methods are based on pure exploitation, and epsilon-greedy methods, which encourage exploration, can harm the controller's performance after learning is complete by exploring nonoptimal actions. The simulated annealing based Q-learning method tackles this drawback by decreasing the exploration rate as learning progresses. Simulation and experimental results are provided to evaluate the effectiveness of the proposed controller.
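The exploration mechanism described above can be sketched with Boltzmann (softmax) action selection, where an SA-style temperature shrinks as learning progresses. The toy two-armed bandit, learning rate, and decay factor below are illustrative assumptions, not the paper's MIMO robot setup.

```python
import math
import random

def boltzmann_action(q_values, temperature, rng):
    """Pick an action with probability proportional to exp(Q/T):
    high T gives near-uniform exploration, low T pure exploitation."""
    m = max(q_values)  # subtract the max for numerical stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    r = rng.random() * sum(weights)
    acc = 0.0
    for action, w in enumerate(weights):
        acc += w
        if r <= acc:
            return action
    return len(q_values) - 1

# Toy 2-armed bandit: arm 1 pays more on average.
rng = random.Random(1)
q = [0.0, 0.0]
alpha, temperature = 0.1, 2.0
for episode in range(500):
    a = boltzmann_action(q, temperature, rng)
    reward = rng.gauss(1.0 if a == 1 else 0.2, 0.1)
    q[a] += alpha * (reward - q[a])               # standard Q-update
    temperature = max(0.05, temperature * 0.99)   # annealed exploration
```

As the temperature falls, the policy smoothly shifts from exploring both arms to exploiting the better one, avoiding the persistent random moves of fixed epsilon-greedy.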

Algorithms for Tensor Network Contraction Ordering

Contracting tensor networks is often computationally demanding. Well-designed contraction sequences can dramatically reduce the contraction cost. We explore the performance of simulated annealing and genetic algorithms, two common discrete optimization techniques, on this ordering problem. We benchmark their performance as well as that of the commonly used greedy search on physically relevant tensor networks. Where computationally feasible, we also compare them with the optimal contraction sequence obtained by an exhaustive search. We find that the algorithms we consider consistently outperform a greedy search given equal computational resources, with an advantage that scales with tensor network size. We compare the obtained contraction sequences and identify signs of highly non-local optimization, with the more sophisticated algorithms sacrificing run-time early in the contraction for better overall performance.
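A generic SA search over contraction sequences can be sketched as follows; the toy cost function stands in for a real contraction-cost evaluator, and the swap move and linear schedule are assumptions rather than the paper's exact configuration.

```python
import math
import random

def anneal_order(cost, n, iters=20000, t0=10.0, seed=0):
    """Generic SA over permutations: the neighbor move swaps two entries
    of the candidate contraction sequence; `cost` stands in for the
    (expensive) contraction-cost evaluation of a real tensor network."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    c = cost(order)
    best, best_c = order[:], c
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9  # linear cooling schedule
        i, j = rng.randrange(n), rng.randrange(n)
        order[i], order[j] = order[j], order[i]
        c_new = cost(order)
        if c_new <= c or rng.random() < math.exp((c - c_new) / t):
            c = c_new
            if c < best_c:
                best, best_c = order[:], c
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return best, best_c

# Toy stand-in cost whose optimum is the identity order.
toy_cost = lambda p: sum(abs(v - i) for i, v in enumerate(p))
order, c = anneal_order(toy_cost, 8)
```

A greedy search would commit to the best swap at every step; the finite temperature here lets the search accept temporarily worse sequences, which is where the non-local optimization the paper observes can come from.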

Adaptive Large Neighborhood Search for Circle Bin Packing Problem

We address a new variant of packing problem, the circle bin packing problem (CBPP), which asks for a dense packing of circle items into multiple square bins so as to minimize the number of used bins. To this end, we propose an adaptive large neighborhood search (ALNS) algorithm, which uses our Greedy Algorithm with Corner Occupying Action (GACOA) to construct an initial layout. The greedy solution is usually trapped in a local optimum, and ALNS performs a multiple-neighborhood search governed by a stochastic annealing schedule to avoid getting stuck in local minima. Specifically, ALNS perturbs the current layout to jump out of a local optimum by iteratively reassigning some circles, and accepts the new layout with some probability during the search. The acceptance probability is adjusted adaptively using simulated annealing, which fine-tunes the search direction in order to reach the global optimum. We benchmark computational results against GACOA on heterogeneous instances. ALNS always outperforms GACOA in the objective function, and in several cases achieves a significant reduction in the number of bins used in the packing.
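The destroy-and-repair loop with an SA acceptance rule can be sketched on 1-D bin packing, a simpler stand-in for circle packing; GACOA is replaced here by a plain first-fit repair, and all names and parameters are illustrative assumptions.

```python
import math
import random

def alns_bins(sizes, capacity, iters=3000, seed=0):
    """ALNS skeleton with a simulated-annealing acceptance rule, sketched
    on 1-D bin packing: destroy removes a few random items, repair
    reinserts them first-fit in decreasing size order."""
    rng = random.Random(seed)

    def repair(bins, items):
        for s in sorted(items, reverse=True):
            for b in bins:
                if sum(b) + s <= capacity:
                    b.append(s)
                    break
            else:
                bins.append([s])  # no bin fits: open a new one
        return [b for b in bins if b]

    current = repair([], list(sizes))
    best = [b[:] for b in current]
    t = 1.0
    for _ in range(iters):
        # Destroy: pull a few random items out of their bins.
        cand = [b[:] for b in current]
        removed = []
        for _ in range(3):
            b = rng.choice(cand)
            if b:
                removed.append(b.pop(rng.randrange(len(b))))
        cand = repair([b for b in cand if b], removed)
        delta = len(cand) - len(current)
        # SA acceptance: take improvements, sometimes accept worse layouts.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if len(current) < len(best):
                best = [b[:] for b in current]
        t *= 0.999  # annealing schedule for the acceptance probability
    return best

sizes = [5, 7, 5, 2, 4, 2, 5, 1, 6, 3]
bins = alns_bins(sizes, capacity=10)
```

The annealed acceptance probability is what lets the search escape the greedy layout's local optimum while still converging as the temperature drops.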

A Physical Embedding Model for Knowledge Graphs

Knowledge graph embedding methods learn continuous vector representations for entities in knowledge graphs and have been used successfully in a large number of applications. We present a novel and scalable paradigm for the computation of knowledge graph embeddings, which we dub PYKE. Our approach combines a physical model based on Hooke's law and its inverse with ideas from simulated annealing to compute embeddings for knowledge graphs efficiently. We prove that PYKE achieves a linear space complexity. While the time complexity for the initialization of our approach is quadratic, the time complexity of each of its iterations is linear in the size of the input knowledge graph. Hence, PYKE's overall runtime is close to linear. Consequently, our approach easily scales up to knowledge graphs containing millions of triples. We evaluate our approach against six state-of-the-art embedding approaches on the DrugBank and DBpedia datasets in two series of experiments. The first series shows that the cluster purity achieved by PYKE is up to 26% (absolute) better than that of the state of the art. In addition, PYKE is more than 22 times faster than existing embedding solutions in the best case. The results of our second series of experiments show that PYKE is up to 23% (absolute) better than the state of the art on the task of type prediction while maintaining its superior scalability. Our implementation and results are open-source and are available at this http URL.
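The Hooke's-law intuition can be sketched with a force-directed layout in which connected entities attract like springs and unconnected ones repel; this is a generic illustration under our own assumptions (toy update rule, step decay, parameters), not PYKE's actual algorithm.

```python
import math
import random

def hooke_embed(edges, n, dim=2, iters=200, k_attr=0.05, k_rep=0.05, seed=0):
    """Force-directed embedding sketch: connected nodes attract with a
    Hooke's-law force (proportional to distance), unconnected nodes repel
    (inversely proportional to distance). Purely illustrative."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    connected = set(edges) | {(b, a) for a, b in edges}
    step = 1.0
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = [pos[j][c] - pos[i][c] for c in range(dim)]
                dist = max(math.sqrt(sum(x * x for x in d)), 1e-9)
                # Attract neighbours, repel non-neighbours.
                f = k_attr * dist if (i, j) in connected else -k_rep / dist
                for c in range(dim):
                    pos[i][c] += step * f * d[c] / dist
        step *= 0.97  # annealing-style decay of the step size
    return pos

# Two 3-cliques with no edges between them: each clique should end up
# compact, with the two cliques pushed apart.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
pos = hooke_embed(edges, 6)
```

The decaying step size plays the role that the simulated-annealing ingredient plays in the paper: large early moves to escape bad configurations, small late moves to settle.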

On the Computational Viability of Quantum Optimization for PMU Placement

Using optimal phasor measurement unit placement as a prototypical problem, we assess the computational viability of the current generation D-Wave Systems 2000Q quantum annealer for power systems design problems. We reformulate minimum dominating set for the annealer hardware, solve the reformulation for a standard set of IEEE test systems, and benchmark solution quality and time to solution against the CPLEX Optimizer and simulated annealing. For some problem instances the 2000Q outpaces CPLEX. For instances where the 2000Q underperforms with respect to CPLEX and simulated annealing, we suggest hardware improvements for the next generation of quantum annealers.

BioMETA: A multiple-specification parameter estimation system for stochastic biochemical models

The inherent behavioral variability exhibited by stochastic biochemical systems makes them challenging for human experts to analyze manually. Computational modeling of such systems helps in investigating and predicting the behaviors of the underlying biochemical processes, but at the same time introduces several unknown parameters. A key challenge in this scenario is to determine the values of these unknown parameters against known behavioral specifications. The solutions presented so far estimate the parameters of a given model against a single specification, whereas a correct model is expected to satisfy all the behavioral specifications when instantiated with a single set of parameter values. We present a new method, BioMETA, to address this problem such that a single set of parameter values causes a parameterized stochastic biochemical model to satisfy all the given probabilistic temporal logic behavioral specifications simultaneously. Our method combines a statistical model checking technique based on multiple hypothesis testing with simulated annealing search to look for a single set of parameter values so that the given parameterized model satisfies multiple probabilistic behavioral specifications. We study two stochastic rule-based models of biochemical receptors, namely Fc$\epsilon$RI and T-cell, as benchmarks to evaluate the usefulness of the presented method. Our experimental results successfully estimate $26$ parameters of the Fc$\epsilon$RI model and $29$ parameters of the T-cell receptor model against three probabilistic temporal logic behavioral specifications each.

A Deterministic Model for the Transshipment Problem of a Fast Fashion Retailer under Capacity Constraints

In this paper we present a novel transshipment problem for a large apparel retailer that operates an extensive retail network. Our problem is inspired by the logistics operations of a very large fast fashion retailer in Turkey, LC Waikiki, with over 450 retail branches and thousands of products. The purpose of transshipments is to rebalance stocks across the retail network to better match supply with demand. We formulate this problem as a large mixed integer linear program and develop a Lagrangian relaxation with a primal-dual approach to find upper bounds, and a simulated annealing based metaheuristic to find promising solutions, both of which have proven to be quite effective. While our metaheuristic does not always produce better solutions than a commercial optimizer, it has consistently produced solutions with optimality gaps below 7%, while the commercial optimizer may produce very poor solutions with optimality gaps as high as almost 300%. We have also conducted a set of numerical experiments to uncover the implications of various operational practices of LC Waikiki for its system's performance and to derive important managerial insights.

Clustering Binary Data by Application of Combinatorial Optimization Heuristics

We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters. Five new and original methods are introduced, based on neighborhood and population-behavior combinatorial optimization metaheuristics: the first three are simulated annealing, threshold accepting and tabu search, and the others are a genetic algorithm and ant colony optimization. The methods are implemented, with proper calibration of the heuristics' parameters to ensure good results. On a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregation criteria using the L1 dissimilarity, against hierarchical clustering and a version of k-means: partitioning around medoids, or PAM. Simulated annealing performs very well, especially compared to the classical methods.

Model Predictive Control for Finite Input Systems using the D-Wave Quantum Annealer

The D-Wave quantum annealer has emerged as a novel computational architecture that is attracting significant interest, but there have been only a few practical algorithms exploiting the power of quantum annealers. Here we present a model predictive control (MPC) algorithm using a quantum annealer for a system allowing a finite number of input values. Such an MPC problem is classified as a non-deterministic polynomial-time-hard combinatorial problem, and thus real-time sequential optimization is difficult to obtain with conventional computational systems. We circumvent this difficulty by converting the original MPC problem into a quadratic unconstrained binary optimization problem, which is then solved by the D-Wave quantum annealer. Two practical applications, namely stabilization of a spring-mass-damper system and dynamic audio quantization, are demonstrated. For both, the D-Wave method exhibits better performance than the classical simulated annealing method. Our results suggest new applications of quantum annealers in the direction of dynamic control problems.

TCM-ICP: Transformation Compatibility Measure for Registering Multiple LiDAR Scans

Rigid registration of multi-view and multi-platform LiDAR scans is a fundamental problem in 3D mapping, robotic navigation, and large-scale urban modeling applications. Data acquisition with LiDAR sensors involves scanning multiple areas from different points of view, thus generating partially overlapping point clouds of real-world scenes. Traditionally, the ICP (Iterative Closest Point) algorithm is used to register the acquired point clouds into a single point cloud that captures the scanned scene. Conventional ICP faces local-minima issues and often needs a coarse initial alignment to converge to the optimum. In this work, we present an algorithm for registering multiple, overlapping LiDAR scans. We introduce a geometric metric called the Transformation Compatibility Measure (TCM), which aids in choosing the most similar point clouds for registration in each iteration of the algorithm. The LiDAR scan most similar to the reference scan is then transformed using the simplex technique. An optimization of the transformation using gradient descent and simulated annealing is then applied to improve the resulting registration. We evaluate the proposed algorithm on four different real-world scenes, and experimental results show that the registration performance of the proposed method is comparable or superior to the traditionally used registration methods. Further, the algorithm achieves superior registration even when dealing with outliers.

Estimation of the yield curve for Costa Rica using combinatorial optimization metaheuristics applied to nonlinear regression

The term structure of interest rates, or yield curve, is a function relating the interest rate to its own term. The nonlinear regression models of Nelson-Siegel and Svensson were used to estimate the yield curve from a sample of historical data supplied by the National Stock Exchange of Costa Rica. The optimization problem involved in estimating the model parameters is addressed with four well-known combinatorial optimization metaheuristics: ant colony optimization, genetic algorithm, particle swarm optimization and simulated annealing. The aim of the study is to improve on the local minima obtained by a classical quasi-Newton optimization method using a descent direction. Good results are achieved with at least two metaheuristics, particle swarm optimization and simulated annealing. Keywords: yield curve, nonlinear regression, Nelson-

A comparative study of metaheuristics for graph coloring problems

A classic graph coloring problem is to assign colors to the vertices of a graph so that adjacent vertices receive distinct colors. Optimal graph coloring colors a graph with the minimum number of colors, its chromatic number. Determining the chromatic number is a combinatorial optimization problem proven to be computationally intractable, meaning that no algorithm is known that solves large instances of the problem in reasonable time. For this reason, approximate methods and metaheuristics form a set of techniques that do not guarantee optimality but obtain good solutions in reasonable time. This paper reports a comparative study of the Hill-Climbing, Simulated Annealing, Tabu Search, and Iterated Local Search metaheuristics for the classic graph coloring problem, considering their time efficiency on the DSJC125 and DSJC250 instances of the DIMACS benchmark.
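For concreteness, a minimal SA variant for k-coloring can be sketched as follows: the cost is the number of conflicting edges and a move recolors one random vertex. The parameters and schedule are illustrative, not the settings benchmarked in the paper.

```python
import math
import random

def color_sa(edges, n, k, iters=5000, t0=1.0, seed=0):
    """SA for k-coloring: cost = number of conflicting (monochromatic)
    edges; a neighbour move recolors one random vertex."""
    rng = random.Random(seed)
    color = [rng.randrange(k) for _ in range(n)]
    conflicts = lambda c: sum(1 for a, b in edges if c[a] == c[b])
    cost = conflicts(color)
    for step in range(iters):
        t = t0 * (1.0 - step / iters) + 1e-9  # linear cooling
        v, new = rng.randrange(n), rng.randrange(k)
        old = color[v]
        color[v] = new
        new_cost = conflicts(color)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            color[v] = old  # reject: restore the previous color
        if cost == 0:
            break  # proper coloring found
    return color, cost

# A 5-cycle is 3-colorable (its chromatic number is 3).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring, conflicts_left = color_sa(edges, n=5, k=3)
```

Hill-Climbing, Tabu Search and Iterated Local Search differ from this loop only in the acceptance rule and the memory of past moves.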

Diffuse optical tomography by simulated annealing via a spin Hamiltonian

The inverse problem of diffuse optical tomography is solved by Markov chain Monte Carlo using the Metropolis (single-component Metropolis-Hastings) algorithm. The values of the unknown parameter are discretized, and a spin Hamiltonian is introduced in the cost function. An initial random spin configuration is then brought to a converged configuration by simulated annealing.
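A minimal version of this procedure, sketched on a 1-D Ising-type Hamiltonian (with couplings and fields standing in for the paper's discretized optical parameters), looks as follows; all values are illustrative assumptions.

```python
import math
import random

def anneal_spins(n, couple, field, sweeps=400, t0=2.0, seed=0):
    """Single-spin-flip Metropolis with a decreasing temperature applied
    to a 1-D Ising-type cost H = -J sum s_i s_{i+1} - sum h_i s_i
    (open chain). J and h_i are illustrative stand-ins."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    t = t0
    for _ in range(sweeps):
        for i in range(n):
            # Energy change from flipping spin i.
            de = 0.0
            if i > 0:
                de += 2.0 * couple * s[i] * s[i - 1]
            if i < n - 1:
                de += 2.0 * couple * s[i] * s[i + 1]
            de += 2.0 * field[i] * s[i]
            if de <= 0 or rng.random() < math.exp(-de / t):
                s[i] = -s[i]
        t *= 0.99  # simulated annealing: lower the temperature each sweep
    return s

# Ferromagnetic chain with a uniform field favouring all spins up.
n = 20
spins = anneal_spins(n, couple=1.0, field=[0.5] * n)
```

Sweeping one spin (one discretized parameter) at a time is exactly the single-component update the abstract refers to; the cooling schedule drives the random configuration toward the cost minimum.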

Fair Sampling by Simulated Annealing on Quantum Annealer

Conventional quantum annealing does not sample all ground states fairly. We demonstrate that fair sampling can be achieved by performing simulated annealing on a quantum annealer. We discuss the problems that occur when implementing this method and propose an alternative way to overcome them. We numerically verify the fair sampling ability of our method in a small-scale toy model.

An adaptive simulated annealing EM algorithm for inference on non-homogeneous hidden Markov models

Non-homogeneous hidden Markov models (NHHMM) are a subclass of dependent mixture models used for semi-supervised learning, where both transition probabilities between the latent states and mean parameter of the probability distribution of the responses (for a given state) depend on the set of $p$ covariates. A priori we do not know which (and how) covariates influence the transition probabilities and the mean parameters. This induces a complex combinatorial optimization problem for model selection with $4^p$ potential configurations. To address the problem, in this article we propose an adaptive (A) simulated annealing (SA) expectation maximization (EM) algorithm (ASA-EM) for joint optimization of models and their parameters with respect to a criterion of interest.

Quadrupole Arrangements and the Ground State of Solid Hydrogen

The electric quadrupole-quadrupole ($\mathcal{E}_{qq}$) interaction is believed to play an important role in the broken symmetry transition from Phase I to II in solid hydrogen. To evaluate this, we study structures adopted by purely classical quadrupoles using Markov chain Monte Carlo simulations of fcc and hcp quadrupolar lattices. Both undergo first-order phase transitions from rotationally ordered to disordered structures, as indicated by a discontinuity in both the quadrupole interaction energy ($\mathcal{E}_{qq}$) and its heat capacity. Cooling fcc reliably induced a transition to the P$a3$ structure, whereas cooling hcp gave inconsistent, frustrated and $c/a$-ratio-dependent broken symmetry states. Analysing the lowest-energy hcp states using simulated annealing, we recovered the P$6_3/m$ and P$ca2_1$ structures previously identified as minimum-energy structures in full electronic structure calculations. The candidate structures for hydrogen Phases III-V were not observed. This demonstrates that $\mathcal{E}_{qq}$ is the dominant interaction determining the symmetry breaking in Phase II. The disorder transition occurs at significantly lower temperature in hcp than fcc, showing that $\mathcal{E}_{qq}$ cannot be responsible for hydrogen Phase II being based on hcp.

Improving center vortex detection by usage of center regions as guidance for the direct maximal center gauge

The center vortex model of quantum chromodynamics states that vortices, closed color-magnetic flux, percolate the vacuum. Vortices are seen as the relevant excitations of the vacuum, causing confinement and dynamical chiral symmetry breaking. In an appropriate gauge, such as the \textit{direct maximal center gauge}, vortices are detected by projecting onto the center degrees of freedom. Such gauges suffer from a Gribov copy problem: different local maxima of the corresponding gauge functional can result in different predictions of the string tension. By using non-trivial center regions, that is, regions whose boundary evaluates to a non-trivial center element, a resolution of this issue seems possible. We use such non-trivial center regions to guide simulated annealing procedures, preventing an underestimation of the string tension in order to resolve the Gribov copy problem.

A New Inventory Control Approach For Considering Customer Classes In An Integrated Supply Chain Management

Supply chain management is an integrated approach to planning and controlling materials, information, and finances as they move in a process that begins with suppliers and ends with customers. As distribution network planning is done strategically, the related decisions should be optimized. Such supply chain planning involves transportation, facility location, and inventory control decisions. This study presents a new approach for incorporating customer differentiation into an integrated location-allocation and inventory control supply chain model. The proposed model is multi-product, single-period, and has stochastic demands. Additionally, warehouses have multilevel capacity limitations. For more realism, the possibility of transportation through different vehicles, different transportation capacities, and transportation costs are also taken into consideration. The customers are divided into strategic and nonstrategic groups by adopting a critical-level policy. An exact calculation method is employed for small-scale instances, while hybrid metaheuristic algorithms (genetic and simulated annealing) are developed for real-sized samples. The efficiency and quality of the solutions are examined via ANOVA to report proper and near-optimum solutions. Finally, a sensitivity analysis is carried out for different instances to evaluate the effect of different indexes on CPU time and the objective function value.

Electronic transport through correlated electron systems with nonhomogeneous charge orderings

Away from the particle-hole symmetric point, the spinless Falicov-Kimball model exhibits different stable nonhomogeneous charge orderings. These include the well-known charge stripes and a variety of orderings with phase-separated domains, which can significantly influence charge transport through the correlated electron system. We show this by investigating a heterostructure in which the Falicov-Kimball model on a finite two-dimensional lattice is located between two noninteracting semi-infinite leads. We use a combination of nonequilibrium Green's function techniques with a sign-problem-free Monte Carlo method at finite temperatures, or a simulated annealing technique for the ground state, to address steady-state transport through the system. We show that different ground-state phases of the central system can lead to simple metallic-like or insulating charge-transport characteristics, but also to more complicated current-voltage dependencies reflecting the multi-band character of the transmission function. Interestingly, with increasing temperature, the orderings tend to form transient phases before the system reaches the disordered phase. This leads to nontrivial temperature dependencies of the transmission function and the charge current.

Parton Hadron Quantum Molecular Dynamics (PHQMD) -- a Novel Microscopic N-Body Transport Approach for Heavy-Ion Dynamics and Hypernuclei Production

We present the novel microscopic n-body dynamical transport approach PHQMD (Parton-Hadron Quantum Molecular Dynamics) for the description of particle production and cluster formation in heavy-ion reactions at relativistic energies. PHQMD extends the established PHSD (Parton-Hadron-String Dynamics) transport approach by replacing the mean field with density-dependent two-body interactions, in a similar way as in Quantum Molecular Dynamics (QMD) models. This allows for the calculation of the time evolution of the n-body Wigner density and therefore for a dynamical description of cluster and hypernuclei formation. The clusters are identified with the MST ('Minimum Spanning Tree') or the SACA ('Simulated Annealing Cluster Algorithm') algorithm, which finds the most bound configuration of nucleons and clusters by regrouping the nucleons into single nucleons and noninteracting clusters. Selected results on cluster and hypernuclei production from Ref. arXiv:1907.03860 are discussed in this contribution.

An introductory guide to aligning networks using SANA, the Simulated Annealing Network Aligner

Sequence alignment has had an enormous impact on our understanding of biology, evolution, and disease. The alignment of biological {\em networks} holds similar promise. Biological networks generally model interactions between biomolecules such as proteins, genes, metabolites, or mRNAs. There is strong evidence that network topology -- the "structure" of the network -- is correlated with the functions performed, so that network topology can be used to help predict or understand function. However, unlike sequence comparison and alignment -- which is an essentially solved problem -- network comparison and alignment is an NP-complete problem for which heuristic algorithms must be used. Here we introduce SANA, the {\it Simulated Annealing Network Aligner}. SANA is one of many algorithms proposed for the arena of biological network alignment. In the context of global network alignment, SANA stands out for its speed, memory efficiency, ease of use, and flexibility in producing alignments between two or more networks. SANA produces better alignments in minutes on a laptop than most other algorithms can produce in hours or days of CPU time on large server-class machines. We walk the user through how to use SANA for several types of biomolecular networks. Availability: this https URL

The PHQMD model for the formation of nuclear clusters and hypernuclei in heavy-ion collisions

Modeling the formation of nuclear clusters in hot nuclear matter is a challenging task. We present the novel n-body dynamical transport approach PHQMD (Parton-Hadron Quantum Molecular Dynamics) [1] for the description of heavy-ion collisions as well as cluster and hypernuclei formation. PHQMD extends the well-established PHSD (Parton-Hadron-String Dynamics) approach, which incorporates explicit partonic degrees of freedom (quarks and gluons), an equation of state from lattice QCD, as well as dynamical hadronization and hadronic elastic and inelastic collisions in the final reaction phase, by an n-body quantum molecular dynamics propagation of hadrons, which allows the choice of equations of state with different compression moduli. The formation of clusters, including hypernuclei, is realized by incorporating the Simulated Annealing Clusterization Algorithm (SACA). We present first results from PHQMD on the production rates of strange hadrons, nuclear clusters and hypernuclei in elementary and heavy-ion collisions at NICA energies. In particular, the sensitivity of "bulk" observables to the "hard" and "soft" equations of state within the PHQMD model was investigated.

Black-box Combinatorial Optimization using Models with Integer-valued Minima

When a black-box optimization objective can only be evaluated with costly or noisy measurements, most standard optimization algorithms are unsuited to finding the optimal solution. Specialized algorithms that deal with exactly this situation make use of surrogate models. These models are usually continuous and smooth, which is beneficial for continuous optimization problems, but not necessarily for combinatorial problems. However, by choosing the basis functions of the surrogate model in a certain way, we show that it can be guaranteed that the optimal solution of the surrogate model is integer. This approach outperforms random search, simulated annealing and one Bayesian optimization algorithm on the problem of finding robust routes for a noise-perturbed traveling salesman benchmark problem, with performance similar to another Bayesian optimization algorithm, and outperforms all compared algorithms on a convex binary optimization problem with a large number of variables.

State Space Emulation and Annealed Sequential Monte Carlo for High Dimensional Optimization

Many high dimensional optimization problems can be reformulated as finding the optimal state path under an equivalent state space model. In this article, we present a general emulation strategy for developing a state space model whose likelihood function (or posterior distribution) shares the same general landscape as the objective function of the original optimization problem. The solution of the optimization problem is then the optimal state path that maximizes the likelihood function or the posterior distribution under the emulated system. To find such an optimal path, we adapt a simulated annealing approach by inserting a temperature control into the emulated dynamic system and propose a novel annealed Sequential Monte Carlo (SMC) method that effectively generates Monte Carlo sample paths by utilizing samples obtained at higher temperatures. Compared to a vanilla simulated annealing implementation, annealed SMC is an iterative algorithm for state space model optimization that directly generates state paths from the equilibrium distributions of a decreasing sequence of temperatures through sequential importance sampling, and hence does not require burn-in or mixing iterations to ensure a quasi-equilibrium condition. Several applications of state space model emulation and the corresponding annealed SMC results are demonstrated.
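The annealed-SMC machinery can be sketched in a stripped-down form (a one-dimensional static target rather than a state path, and an invented multimodal objective): particles are reweighted between successive inverse temperatures, resampled, and rejuvenated with a Metropolis move, so samples at high temperature seed the colder distributions.

```python
import math
import random

def f(x):
    # Invented multimodal objective; global minimum at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def annealed_smc(n=500, seed=2):
    rng = random.Random(seed)
    betas = [0.01 * 1.5 ** k for k in range(20)]   # increasing inverse temperatures
    xs = [rng.uniform(-5.0, 5.0) for _ in range(n)]
    beta_old = 0.0
    for beta in betas:
        # Importance weights bridge exp(-beta_old * f) -> exp(-beta * f).
        ws = [math.exp(-(beta - beta_old) * f(x)) for x in xs]
        xs = rng.choices(xs, weights=ws, k=n)      # multinomial resampling
        # One Metropolis rejuvenation move per particle at the current temperature.
        moved = []
        for x in xs:
            y = x + rng.gauss(0.0, 0.3)
            if rng.random() < math.exp(min(0.0, -beta * (f(y) - f(x)))):
                x = y
            moved.append(x)
        xs = moved
        beta_old = beta
    return min(xs, key=f)

best = annealed_smc()
```

Because each stage starts from samples already equilibrated at the previous (warmer) temperature, no burn-in phase is needed, which is the point the abstract makes against vanilla simulated annealing.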

Improved algorithm for neuronal ensemble inference by Monte Carlo method

Neuronal ensemble inference is a significant problem in the study of biological neural networks. Various methods have been proposed for inferring ensembles from experimentally recorded activity data. Here we focus on a recently proposed Bayesian inference approach based on a generative model of ensembles. This method, however, requires a large computational cost, and the result sometimes gets stuck in a poor local maximum of the Bayesian inference. In this work, we give an improved Bayesian inference algorithm that addresses these problems. We modify the ensemble generation rule in the Markov chain Monte Carlo method and introduce simulated annealing for hyperparameter control. We also compare the performance of ensemble inference between our algorithm and the original one.
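The role of annealing in escaping poor local maxima of a posterior can be illustrated with a toy sketch (not the paper's generative model): a binary membership vector is inferred from invented noisy observations by Metropolis bit-flips whose temperature hyperparameter is annealed toward the MAP solution.

```python
import math
import random

def neg_log_post(s, y, sigma=0.6):
    # Gaussian likelihood for a binary membership vector s given noisy activity y.
    return sum((yi - si) ** 2 for yi, si in zip(y, s)) / (2 * sigma ** 2)

def annealed_map(y, steps=5000, t0=2.0, seed=3):
    rng = random.Random(seed)
    s = [rng.randint(0, 1) for _ in y]
    e = neg_log_post(s, y)
    for step in range(steps):
        t = max(t0 * (1.0 - step / steps), 1e-3)   # annealed temperature hyperparameter
        i = rng.randrange(len(s))
        s[i] ^= 1                                  # propose flipping one membership bit
        e_new = neg_log_post(s, y)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                              # accept
        else:
            s[i] ^= 1                              # reject: flip back
    return s

# Invented noisy observations of a hidden binary membership pattern [1,0,1,1,0]:
y = [0.9, 0.2, 1.1, 0.7, -0.1]
s_map = annealed_map(y)
```

Early high-temperature steps accept uphill flips and explore; as the temperature falls, the chain freezes into the maximum of the posterior.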

Restricted Boltzmann Machines for galaxy morphology classification with a quantum annealer

We present the application of Restricted Boltzmann Machines (RBMs) to the task of astronomical image classification using a quantum annealer built by D-Wave Systems. Morphological analysis of galaxies provides critical information for studying their formation and evolution across cosmic time scales. We compress galaxy images using principal component analysis so that their representation fits on the quantum hardware. Then, we train RBMs with discriminative and generative algorithms, including contrastive divergence and hybrid generative-discriminative approaches, to classify different galaxy morphologies. The sampling methods we compare include Quantum Annealing (QA), Markov Chain Monte Carlo (MCMC) Gibbs Sampling, and Simulated Annealing (SA), alongside machine learning algorithms such as gradient boosted decision trees. We find that RBMs implemented on D-Wave hardware perform well and show some classification-performance advantages on small datasets, but do not offer a broad strategic advantage for this task. During this exploration, we analyzed the steps required for Boltzmann sampling with the D-Wave 2000Q, including a study of temperature estimation, and examined the impact of qubit noise by comparing and contrasting the original D-Wave 2000Q to the lower-noise version recently made available. While these analyses ultimately had minimal impact on the performance of the RBMs, we include them for reference.
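For orientation, the classical contrastive-divergence (CD-1) training mentioned above can be sketched on a deliberately tiny, invented dataset; this is a generic Bernoulli RBM, not the paper's D-Wave pipeline.

```python
import math
import random

class RBM:
    """Tiny Bernoulli RBM trained with CD-1 Gibbs sampling (classical stand-in
    for the QA/MCMC/SA samplers compared in the abstract)."""

    def __init__(self, n_vis, n_hid, seed=4):
        self.rng = random.Random(seed)
        self.w = [[self.rng.gauss(0, 0.1) for _ in range(n_hid)] for _ in range(n_vis)]
        self.a = [0.0] * n_vis   # visible biases
        self.b = [0.0] * n_hid   # hidden biases

    def _sig(self, x):
        return 1.0 / (1.0 + math.exp(-x))

    def h_probs(self, v):
        return [self._sig(self.b[j] + sum(v[i] * self.w[i][j] for i in range(len(v))))
                for j in range(len(self.b))]

    def v_probs(self, h):
        return [self._sig(self.a[i] + sum(h[j] * self.w[i][j] for j in range(len(h))))
                for i in range(len(self.a))]

    def cd1_update(self, v0, lr=0.1):
        ph0 = self.h_probs(v0)
        h0 = [1 if self.rng.random() < p else 0 for p in ph0]  # sample hidden units
        v1 = self.v_probs(h0)                                  # mean reconstruction
        ph1 = self.h_probs(v1)
        for i in range(len(v0)):                               # positive - negative phase
            for j in range(len(ph0)):
                self.w[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
        for i in range(len(v0)):
            self.a[i] += lr * (v0[i] - v1[i])
        for j in range(len(ph0)):
            self.b[j] += lr * (ph0[j] - ph1[j])

    def reconstruct(self, v):
        # Deterministic mean-field reconstruction through the hidden layer.
        return self.v_probs(self.h_probs(v))

# Two invented "morphology" prototypes as binary vectors:
data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]
rbm = RBM(n_vis=6, n_hid=3)
for epoch in range(2000):
    for v in data:
        rbm.cd1_update(v)
```

After training, reconstructions of each prototype concentrate probability on the correct half of the visible units, which is the generative behavior the classifier variants build on.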

Investigating Constraint Programming and Hybrid Methods for Real World Industrial Test Laboratory Scheduling

In this paper we deal with a complex real world scheduling problem closely related to the well-known Resource-Constrained Project Scheduling Problem (RCPSP). The problem concerns industrial test laboratories in which a large number of tests have to be performed by qualified personnel using specialised equipment, while respecting deadlines and other constraints. We present different constraint programming models and search strategies for this problem. Furthermore, we propose a Very Large Neighborhood Search (VLNS) approach based on our CP methods. Our models are evaluated using CP solvers and a MIP solver, both on real-world test laboratory data and on a set of generated instances of different sizes based on the real-world data. Further, we compare the exact approaches with VLNS and a Simulated Annealing heuristic. We found feasible solutions for all instances and optimal solutions for several of them, and we show that VLNS improves upon the results of the other approaches.
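The destroy-and-repair principle behind large neighborhood search can be sketched on a much simpler, invented problem (machine scheduling for makespan, not the laboratory problem of the paper): a few jobs are unassigned each iteration and greedily reinserted, and the repaired schedule is kept if it is no worse.

```python
import random

def makespan(assign, durations, m):
    loads = [0] * m
    for job, mach in enumerate(assign):
        loads[mach] += durations[job]
    return max(loads)

def lns_schedule(durations, m, iters=300, destroy=3, seed=5):
    """Destroy-and-repair neighborhood search minimising makespan on m machines."""
    rng = random.Random(seed)
    assign = [rng.randrange(m) for _ in durations]
    best = makespan(assign, durations, m)
    for _ in range(iters):
        cand = assign[:]
        removed = rng.sample(range(len(durations)), destroy)   # destroy: unassign jobs
        loads = [0] * m
        for job, mach in enumerate(cand):
            if job not in removed:
                loads[mach] += durations[job]
        for job in sorted(removed, key=lambda j: -durations[j]):  # repair: greedy reinsert
            mach = min(range(m), key=lambda k: loads[k])
            cand[job] = mach
            loads[mach] += durations[job]
        cost = makespan(cand, durations, m)
        if cost <= best:                    # accept repairs that are no worse
            assign, best = cand, cost
    return assign, best

durations = [7, 5, 5, 4, 4, 3, 3, 2, 2, 1]
assign, best = lns_schedule(durations, m=3)
```

The "very large" neighborhood comes from letting the repair step (a CP solver in the paper; a greedy rule here) reoptimize the destroyed part exactly, rather than examining single-move neighbors.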
