Symbolic regression is a method that discovers mathematical formulas from data without making assumptions about what those formulas should look like. Given a set of input variables x1, x2, x3, etc., and a target variable y, it searches by trial and error for a function f such that y = f(x1, x2, x3, …).
The method is very general: the target variable y can be anything, and a variety of error metrics can be chosen for the search. Here we enumerate a few creative applications to give the reader some ideas.
All of these problems can be modeled out of the box with the TuringBot symbolic regression software.
1. Forecast the next values of a time series
Say you have a sequence of numbers and you want to predict the next one. This could be the monthly revenue of a company or the daily prices of a stock, for instance.
In special cases, this kind of problem can be solved by simply fitting a line to the data and extrapolating to the next point, a task that can be easily accomplished with numpy.polyfit. While this will work just fine in many cases, it will not be useful if the time series evolves in a nonlinear way.
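For the linear special case, the fit-and-extrapolate approach takes only a few lines. A minimal sketch with invented revenue figures (the numbers are illustrative, not real data):

```python
import numpy as np

# Hypothetical monthly revenue figures (illustrative, not real data)
y = np.array([10.2, 11.1, 12.3, 13.0, 14.2, 15.1])
x = np.arange(1, len(y) + 1)

# Fit a straight line (degree-1 polynomial) to the series
coeffs = np.polyfit(x, y, deg=1)

# Extrapolate the fitted line to the next index to forecast the next value
next_value = np.polyval(coeffs, len(y) + 1)
```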
Symbolic regression offers a more general alternative. One can look for formulas for y = f(index), where y are the values of the series and index = 1, 2, 3, etc. A prediction can then be made by evaluating the resulting formulas at a future index.
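Once a formula has been found, forecasting reduces to evaluating it at future indices. A sketch, assuming the search returned the following hypothetical expression (the coefficients and functional form are invented for illustration):

```python
import math

# Hypothetical formula returned by the search for y = f(index)
# (the coefficients and form are invented for illustration)
def f(index):
    return 2.0 * index + 3.0 * math.sin(0.5 * index)

# Forecast the next three values past the last observed index
last_observed = 12
forecast = [f(i) for i in range(last_observed + 1, last_observed + 4)]
```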
This is not a conventional way to go about this kind of problem, but the simplicity of the resulting models can make them much more informative than mainstream forecasting methods such as the Monte Carlo simulations used, for instance, by Facebook’s Prophet library.
2. Predict binary outcomes
A machine learning problem of great practical importance is to predict whether something will happen or not. This is a central problem in options trading, gambling, and finance (“will a recession happen?”).
Numerically, this problem translates to predicting 0 or 1 based on a set of input features.
Symbolic regression allows binary problems to be solved by using classification accuracy as the error metric for the search. In order to minimize the error, the optimization will converge without supervision towards formulas that only output 0 or 1, usually involving floor/ceil/round of some bounded function like tanh(x) or cos(x).
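To see why rounding a bounded function yields a binary classifier, consider this toy sketch. The dataset and the candidate formula are invented for illustration, not actual TuringBot output:

```python
import numpy as np

# Toy binary dataset: the label is 1 whenever x1 > x2 (illustrative)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] > X[:, 1]).astype(int)

# A candidate formula of the kind the search converges to:
# a bounded function (tanh) squashed to {0, 1} by rounding
def candidate(x1, x2):
    return np.round(0.5 * (np.tanh(10 * (x1 - x2)) + 1))

# Classification accuracy, the error metric driving the search
pred = candidate(X[:, 0], X[:, 1])
accuracy = np.mean(pred == y)
```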
3. Predict continuous outcomes
A generalization of the problem of making a binary prediction is the problem of predicting a continuous quantity in the future.
For instance, in agriculture one could be interested in predicting the time for a crop to mature given parameters known at the time of sowing, such as soil composition, the month of the year, temperature, etc.
Usually, few data points will be available to train the model in this kind of scenario, but since symbolic models are simple, they are less prone to overfitting the data than more complex models. The problem can be modeled by running the optimization with a standard error metric like root-mean-square error or mean error.
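As an illustration of the search metric itself, here is how root-mean-square error compares a formula's predictions against observed values (the crop numbers are invented):

```python
import numpy as np

# Hypothetical crop data: days to maturity as the target (illustrative)
y_true = np.array([92.0, 105.0, 88.0, 110.0, 97.0])
y_pred = np.array([90.5, 103.0, 91.0, 108.0, 99.0])  # a formula's predictions

# Root-mean-square error, a standard search metric for continuous targets
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
```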
4. Solve classification problems
Classification problems, in general, can be solved by symbolic regression with a simple trick: representing the different categories as different integers.
If your data points have 10 possible labels that should be predicted based on a set of input features, you can use symbolic regression to find formulas that output integers from 1 to 10 based on these features.
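A minimal sketch of the encoding side, with hypothetical class names. Clamping and rounding keep a formula's raw output inside the valid label range:

```python
# Map categorical labels to integers so a formula can output them
labels = ["wheat", "corn", "rice"]  # hypothetical classes
encoding = {name: i + 1 for i, name in enumerate(labels)}

# A formula found by the search would then output values near 1, 2, or 3;
# clamping and rounding keep its output inside the valid label range
def to_label(formula_output):
    idx = int(round(min(max(formula_output, 1), len(labels))))
    return labels[idx - 1]
```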
This may sound like asking too much, since a formula capable of that is highly specific. But a good symbolic regression engine will be thorough in its search over the space of all mathematical formulas and will eventually find appropriate solutions.
5. Classify rare events
Another interesting class of classification problems involves highly imbalanced datasets, in which only a handful of rows contain the relevant label and the rest are negatives. Examples include medical diagnostic images and fraudulent credit card transactions.
For this kind of problem, the usual classification accuracy search metric is not appropriate, since f(x1, x2, x3, …) = 0 will have a very high accuracy while being a useless function.
Special search metrics exist for this kind of problem, the most popular of which is the F1 score, the harmonic mean of precision and recall. This search metric is available in TuringBot, allowing this kind of problem to be easily modeled.
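The computation behind the F1 score is short enough to spell out. A sketch on a toy imbalanced dataset (the labels are invented):

```python
import numpy as np

# Toy imbalanced predictions: 1 marks the rare positive class
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 0, 1])
y_pred = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

precision = tp / (tp + fp)  # fraction of flagged cases that are real positives
recall = tp / (tp + fn)     # fraction of real positives that were flagged
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

# An all-zeros predictor would score 70% accuracy here yet catch no positives
```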
6. Compress data
A mathematical formula is perhaps the shortest possible representation of a dataset. If the target variable features some kind of regularity, symbolic regression can turn gigabytes of data into something that can be equivalently expressed in one line.
An example target variable could be the RGB color of an image as a function of (x, y) pixel coordinates. We have tried finding a formula for the Mona Lisa, but unfortunately, nothing simple could be found in this case.
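As a toy illustration of the idea, here is an invented table whose megabytes of values collapse into a single expression once that expression is known:

```python
import numpy as np

# A large table that a one-line formula reproduces exactly (illustrative)
x = np.arange(1_000_000, dtype=np.float64)
table = 3.0 * x + 2.0  # roughly 8 MB in memory

# If symbolic regression recovers the expression, only this line need be stored
def formula(x):
    return 3.0 * x + 2.0

# The formula regenerates the table on demand
assert np.array_equal(formula(x), table)
```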
7. Interpolate data
Say you have a table of numbers and you want to compute the target variable for intermediate values not present in the table itself.
One way to go about this is to generate a spline interpolation from the table, which is a somewhat cumbersome and non-portable solution.
With symbolic regression, one can turn the entire table into a mathematical expression, and then proceed to do the interpolation without the need for specialized libraries or data structures, and also without the need to store the table itself anywhere.
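A sketch of the idea, assuming (hypothetically) that the search recovered the closed form y = x**2 + 1 from the table:

```python
import numpy as np

# A small table of (x, y) values; suppose symbolic regression recovered
# the closed form y = x**2 + 1 from it (hypothetical result)
table_x = np.array([0.0, 1.0, 2.0, 3.0])
table_y = table_x ** 2 + 1

def f(x):
    return x ** 2 + 1  # the discovered expression, evaluable anywhere

# Interpolate at a point not present in the table, no spline required
value = f(1.5)
```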
8. Discover upper or lower bounds for a function
In problems of engineering and applied mathematics, one is often interested not in the particular value of a variable but in how fast this variable grows or how large it can be given an input. In this case, it is more informative to obtain an upper bound for the function than an approximation for the function itself.
With symbolic regression, this can be accomplished by discarding formulas that are not always larger or always smaller than the target variable. This kind of search is available out of the box in TuringBot with its “Bound search mode” option.
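The discarding step can be sketched as a simple filter over candidate formulas. The candidates below are invented, and this is only a cartoon of what a bound-search mode does internally:

```python
import numpy as np

# Target values and two candidate formulas evaluated on the same inputs
x = np.linspace(0.1, 2.0, 20)
target = np.sin(x)

candidates = {
    "x": x,          # valid upper bound: sin(x) <= x for x >= 0
    "x - 1": x - 1,  # not a bound: crosses below the target
}

# Keep only candidates that exceed the target at every data point,
# mimicking how a bound search discards non-bounding formulas
upper_bounds = {name: vals for name, vals in candidates.items()
                if np.all(vals >= target)}
```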
9. Discover the most predictive variables
When creating a machine learning model, it is extremely useful to know which input variables are the most relevant in predicting the target variable.
With black-box methods like neural networks, answering this kind of question is nontrivial because all variables are used at once indiscriminately.
But with symbolic regression the situation is different: since the formulas are kept as short as possible, variables that are not predictive end up not appearing, making it trivial to spot which variables are actually predictive and relevant.
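Reading the relevant variables off a formula is mechanical. A sketch with a hypothetical expression over four input variables:

```python
import re

# Hypothetical formula returned for a dataset with four input variables
formula = "sin(x1) + 0.5*x3"
all_vars = ["x1", "x2", "x3", "x4"]

# Variables absent from the expression turned out not to be predictive
used = [v for v in all_vars if re.search(rf"\b{v}\b", formula)]
```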
10. Explore the limits of computability
Few people are aware of this, but the notion of computability was first introduced by Alan Turing himself in his famous paper “On Computable Numbers, with an Application to the Entscheidungsproblem”.
Some things are easy to compute, for instance the function f(x) = x or common functions like sin(x) and exp(x), which can be expanded into simple series. But other things are much harder to compute, for instance, the N-th prime number.
With symbolic regression, one can try to derandomize tables of numbers and discover highly nonlinear patterns connecting variables. Since this is done in a very free way, even absurd solutions like tan(tan(tan(tan(x)))) end up being a possibility. This makes the method operate on the edge of computability.