How to find formulas from values

Finding mathematical formulas from data is an extremely useful machine learning task. A formula is the most compact representation of a table: it condenses a large amount of data into a simple expression, while also making explicit the relationship between the different variables.

In this tutorial, we are going to generate a dataset and try to recover the original formula using the symbolic regression software TuringBot, without any previous knowledge of what that formula was.

What symbolic regression is

Symbolic regression is a machine learning technique that tries to find explicit mathematical formulas that connect variables. The technique starts from a set of base functions to be used in the search, for instance, addition, multiplication, sin(x), exp(x), etc, and then tries to combine those functions in such a way that the target variable is accurately predicted.

Simplicity is as important as accuracy in a symbolic regression model. Every dataset can be represented with perfect accuracy by a polynomial, but that is uninformative, since the number of free parameters in the model is the same as the number of training data points. For this reason, a symbolic regression optimization penalizes large formulas, favoring simpler ones that perform just as well.
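The polynomial point is easy to demonstrate: with as many free parameters as data points, a "perfect" fit is guaranteed but meaningless. A quick illustration with NumPy (toy data, not the dataset used below):

```python
import numpy as np

# Ten noisy data points.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(5 * x) + rng.normal(0, 0.1, len(x))

# A degree-9 polynomial has 10 free parameters -- one per data point --
# so it passes through every point, noise included.
coeffs = np.polyfit(x, y, deg=len(x) - 1)
residuals = y - np.polyval(coeffs, x)
print(np.max(np.abs(residuals)))  # essentially zero: the noise was memorized
```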

Generating an example dataset

Let’s give an explicit example of how symbolic regression can be used to find a formula from data. We will generate a dataset that consists of the formula x*cos(10*x) + 2, add noise to this data, and then see if we can recover this formula using symbolic regression.

The following Python script generates the input data:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 100)
y = np.cos(10*x)*x + 2 + np.random.random(len(x))*0.1

plt.scatter(x, y, s=10)
plt.show()

And this is what the result looks like:

The input data that we have generated.

Now we are going to try to find a formula for this data and see what happens.

Finding a formula using TuringBot

The usage of TuringBot is very simple. All we have to do is load the input data using its interface and start the search. First, we save the data to an input file:

arr = np.column_stack((x, y))
np.savetxt('input.txt', arr, fmt='%f')

After loading input.txt into TuringBot, starting the search, and letting it work for a minute, these were the formulas that it found, ordered by complexity:

The formulas found by TuringBot for our input dataset.

It can be seen that it has successfully found our original formula!

Conclusion

Here we have seen how symbolic regression can be used to automatically find mathematical formulas from data values. The example that we have given was a simple one, but the same procedure would also work for a real-world dataset in which the dependencies between the variables were not known beforehand, and in which more than one input variable was present.

If you are interested in trying to find formulas from your own dataset, you can download TuringBot for free from the official website.


How to create an AI trading system

Predicting whether the price of a stock will rise or fall is perhaps one of the most difficult machine learning tasks. Signals must be found in datasets dominated by noise, and in a way robust enough not to overfit the training data.

In this tutorial, we are going to show how an AI trading system can be created using a technique called symbolic regression. The idea will be to try to find a formula that classifies whether the price of a stock will rise or fall in the following day based on its price candles (open, high, low, close) in the last 14 days.

AI trading system concept

Our AI trading system will be a classification algorithm: it will take past data as input, and output 0 if the stock is likely to fall in the following day and 1 if it is likely to rise. The first step in generating this model is to prepare a training dataset in which each row contains all the relevant past data and also a 0 or 1 label based on what happened in the following day.

We can be very creative about what past data to use as input while generating the model. For instance, we could include technical indicators such as RSI and MACD, sentiment data, etc. But for the sake of this example, all we are going to use are the OHLC prices of the last 14 candles.

Our training dataset should then contain the following columns:

 open_1,high_1,low_1,close_1,...,open_14,high_14,low_14,close_14,label

Here the index 1 denotes the last trading day, the index 2 the trading day prior to that, etc.

Generating the training dataset

To make things interesting, we are going to train our model on data for the S&P 500 index over the last year, as retrieved from Yahoo Finance. The raw dataset can be found here: S&P 500.csv.

To process this CSV file into the format that we need for the training, we have created the following Python script which uses the Pandas library:

import pandas as pd

df = pd.read_csv('S&P 500.csv')

training_data = []

for i,row in df.iterrows():
    if i < 13 or i+1 >= len(df):
        continue

    features = []
    for j in range(i, i-14, -1):
        features.append(df.iloc[j]['Open'])
        features.append(df.iloc[j]['High'])
        features.append(df.iloc[j]['Low'])
        features.append(df.iloc[j]['Close'])
    if df.iloc[i+1]['Close'] > row['Close']:
        features.append(1)
    else:
        features.append(0)
    
    training_data.append(features)
    
columns = []
for i in range(1, 15):
    columns.append('open_%d' % i)
    columns.append('high_%d' % i)
    columns.append('low_%d' % i)
    columns.append('close_%d' % i)
columns.append('label')

training_data = pd.DataFrame(training_data, columns=columns)

training_data.to_csv('training.csv', index=False)

All this script does is iterate through the rows in the Yahoo Finance data and generate rows with the OHLC prices of the last 14 candles, and an additional ‘label’ column based on what happened in the following day. The result can be found here: training.csv.

Creating a model with symbolic regression

Now that we have the training dataset, we are going to try to find formulas that predict what will happen to the S&P 500 in the following day. For that, we are going to use the desktop symbolic regression software TuringBot. This is what the interface of the program looks like:

The interface of the TuringBot symbolic regression software.

The input file is selected from the menu on the upper left. We also select the following settings:

  • Search metric: classification accuracy.
  • Test/train split: 50/50. This will allow us to easily discard overfit models.
  • Test sample: the last points. The other option is “chosen randomly”, which would make it easier to overfit the data due to autocorrelation.
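The "last points" option amounts to a plain chronological split. For reference, this is what the same split would look like by hand in pandas (toy values standing in for training.csv):

```python
import pandas as pd

# Toy stand-in for training.csv (hypothetical values).
df = pd.DataFrame({'close_1': range(10), 'label': [0, 1] * 5})

# Chronological 50/50 split: the most recent rows are held out for testing,
# so autocorrelated neighboring rows do not leak between train and test.
split = len(df) // 2
train, test = df.iloc[:split], df.iloc[split:]
print(len(train), len(test))  # 5 5
```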

With these settings in place, we can start the search by clicking on the play button at the top of the interface. The best solutions found so far will be shown in real time, ordered by complexity, and their out-of-sample errors can be seen by toggling the “show cross validation” button on the upper right.

After letting the optimization run for a few minutes, these were the models that were encountered:

Symbolic models found for predicting S&P 500 returns.

The one with the best out-of-sample accuracy turned out to be the one with size 23. Its win rate in the test domain was 60.5%. This is the model:

label = 1-floor((open_5-high_4+open_12+tan(-0.541879*low_1-high_1))/high_13)

It can be seen that the model depends on the low and high of the most recent day, and also on a few key prices from earlier days.
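Transcribed by hand to Python, the model looks like this (the constants are as displayed by the software, so this is a sketch rather than the exact internal model):

```python
import math

# label = 1 - floor((open_5 - high_4 + open_12
#                    + tan(-0.541879*low_1 - high_1)) / high_13)
def predict(open_5, high_4, open_12, low_1, high_1, high_13):
    inner = open_5 - high_4 + open_12 + math.tan(-0.541879 * low_1 - high_1)
    return 1 - math.floor(inner / high_13)
```

In a backtest, the inputs would be filled from the OHLC columns of training.csv row by row.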

Conclusion

In this tutorial, we have generated an AI trading signal using symbolic regression. This model had good out-of-sample accuracy in predicting what the S&P 500 would do the next day, using for that nothing but the OHLC prices of the last 14 trading days. Even better models could probably be obtained if more interesting past data was used for the training, such as technical indicators (RSI, MACD, etc).

You can generate your own models by downloading TuringBot for free from the official website. We encourage you to experiment with different stocks and timeframes to see what you can find.


How to create an equation for data points

In order to find an equation from a list of values, a special technique called symbolic regression must be used. The idea is to search over the space of all possible mathematical formulas for the ones with the greatest accuracy, while trying to keep those formulas as simple as possible.

In this tutorial, we are going to show how to find formulas using the desktop symbolic regression software TuringBot, which is very easy to use.

How symbolic regression works

Symbolic regression starts from a set of base functions to be used in the search, such as addition, multiplication, sin(x), exp(x), etc, and then tries to combine those functions in all possible ways with the goal of finding a model that will be as accurate as possible in predicting a target variable. Some examples of base functions used by TuringBot are the following:

Some base functions that TuringBot uses for symbolic regression.

As important as the accuracy of a formula is its simplicity. A huge formula can predict the data points with perfect accuracy, but if the number of free parameters in the model is the same as the number of points, then the model is not really informative. For this reason, a symbolic regression optimization will discard a larger formula if it finds a smaller one that performs just as well.

Finding a formula with TuringBot

Finding equations from data points with TuringBot is a simple process. The first step is selecting the input file with the data through the interface. This input file should be in TXT or CSV format. After it has been loaded, the target variable can be selected (by default it will be the last column in the file), and the search can be started. This is what the interface looks like:

The interface of the TuringBot symbolic regression software.

Several options are available on the menus on the left, such as setting a test/train split to be able to detect overfit solutions, selecting which base functions should be used, and selecting the search metric, which by default is root-mean-square error, but that can also be set to classification accuracy, mean relative error and others. For this example, we are going to keep it simple and just use the defaults.

The optimization is started by clicking on the play button at the top of the interface. The best formulas found so far will be shown in the solutions box, ordered by complexity:

The formulas found by TuringBot for an example dataset.

The software allows the solutions to be exported to common programming languages from the menu, and also to simply be exported as text. Here are the formulas in the example above exported in text format:

Complexity   Error      Function
1            1.91399    -0.0967549
3            1.46283    0.384409*x
4            1.362      atan(x)
5            1.18186    0.546317*x-1.00748
6            1.11019    asinh(x)-0.881587
9            1.0365     ceil(asinh(x))-1.4131
13           0.985787   round(tan(floor(0.277692*x)))
15           0.319857   cos(x)*(1.96036-x)*tan(x)
19           0.311375   cos(x)*(1.98862-1.02261*x)*tan(1.00118*x)
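For instance, the most accurate formula in the list (complexity 19) can be transcribed to Python by hand:

```python
import math

# Complexity-19 solution from the table above:
def f(x):
    return math.cos(x) * (1.98862 - 1.02261 * x) * math.tan(1.00118 * x)
```

This is essentially what the built-in "export as Python" option produces automatically.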

Conclusion

In this tutorial, we have seen how symbolic regression can be used to find formulas from values. Symbolic regression is very different from regular curve-fitting methods, since no assumption is made about what the shape of the formulas should be. This allows patterns to be found in datasets with an arbitrary number of dimensions, making symbolic regression a general purpose machine learning technique.


Machine learning black box models: some alternatives

In this article, we will discuss a very basic question regarding machine learning: is every model a black box? Certainly most methods seem to be, but as we will see, there are very interesting exceptions to this.

What is a black box method?

A method is said to be a black box when it performs complicated computations under the hood that cannot be clearly explained and understood. Data is fed into the model, internal transformations are performed on this data and an output is given, but these transformations are such that basic questions cannot be answered in a straightforward way:

  • Which of the input variables contributed the most to generating the output?
  • Exactly what features did the model derive from the input data?
  • How does the output change as a function of one of the variables?

Not only are black box models hard to understand, they are also hard to move around: since complicated data structures are necessary for the relevant computations, they cannot be readily translated to different programming languages.

Can there be machine learning without black boxes?

The answer to that question is yes. In the simplest case, a machine learning model can be a linear regression and consist of a line defined by an explicit algebraic equation. This is not a black box method, since it is clear how the variables are being used to compute an output.

But linear models are quite limited and cannot perform the same kinds of tasks that neural networks do, for example. So a more interesting question is: is there a machine learning method capable of finding nonlinear patterns in an explicit and understandable way?

It turns out that such a method exists, and it is called symbolic regression.

Symbolic regression as an alternative

The idea of symbolic regression is to find explicit mathematical formulas that connect input variables to an output, while trying to keep those formulas as simple as possible. The resulting models end up being explicit equations that can be written on a sheet of paper, making it apparent how the input variables are being used despite the presence of nonlinear computations.

To give a clearer picture, consider some models found by TuringBot, a symbolic regression software for PC:

Symbolic models found by the TuringBot symbolic regression software.

In the “Solutions” box above, a typical result of a symbolic regression optimization can be seen. A set of formulas of increasing complexity was found, with more complex formulas only being shown if they perform better than all simpler alternatives. A nonlinearity in the input dataset was successfully recovered through the use of nonlinear base functions like cos(x), atan(x) and multiplication.

Symbolic regression is a very general technique: although the most obvious use case is to solve regression problems, it can also be used to solve classification problems by representing categorical variables as different integer numbers, and running the optimization with classification accuracy as the search metric instead of RMS error. Both of these options are available in TuringBot.
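The integer encoding of categories mentioned above takes one line with pandas; for instance, with hypothetical labels:

```python
import pandas as pd

# Hypothetical categorical labels mapped to integer codes, as required
# for running a classification search:
species = pd.Series(['setosa', 'versicolor', 'virginica', 'setosa'])
codes = species.astype('category').cat.codes
print(codes.tolist())  # [0, 1, 2, 0]
```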

Conclusion

In this article, we have seen that despite most machine learning methods indeed being black boxes, not all of them are. A simple counterexample is the linear model, which is explicit and hence not a black box. More interestingly, we have seen how symbolic regression is capable of solving machine learning tasks where nonlinear patterns are present, generating models that are mathematical equations that can be analyzed and interpreted.


A regression model example and how to generate it

Regression models are perhaps the most important class of machine learning models. In this tutorial, we will show how to easily generate a regression model from data values.

What is regression

The goal of a regression model is to be able to predict a target variable taking as input one or more input variables. The simplest case is that of a linear relationship between the variables, in which case basic methods such as least squares regression can be used.

In real-world datasets, the relationship between the variables is often highly non-linear. This motivates the use of more sophisticated machine learning techniques to solve regression problems, including for instance neural networks and random forests.

A regression problem example is to predict the value of a house from its characteristics (location, number of bedrooms, total area, etc), using for that information from other houses which are not identical to it but for which the prices are known.

Regression model example

To give a concrete example, let’s consider the following dataset of house prices: house_prices.txt. It contains the following columns:

Index;
Local selling prices, in hundreds of dollars;
Number of bathrooms;
Area of the site in thousands of square feet;
Size of the living space in thousands of square feet;
Number of garages;
Number of rooms;
Number of bedrooms;
Age in years;
Construction type (1=brick, 2=brick/wood, 3=aluminum/wood, 4=wood);
Number of fire places;
Selling price.
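To make the layout concrete, here is a sketch of loading a couple of rows in that column order with pandas. The values and the short column names are made up for illustration, and the file is assumed to be whitespace-separated:

```python
import pandas as pd
from io import StringIO

# Two toy rows in the column order described above (hypothetical values).
data = StringIO("""\
1 4.9176 1.0 3.472 0.998 1.0 7 4 42 3 0 25.9
2 5.0208 1.0 3.531 1.500 2.0 7 4 62 1 0 29.5
""")
cols = ['index', 'local_price', 'bathrooms', 'site_area', 'living_area',
        'garages', 'rooms', 'bedrooms', 'age', 'construction',
        'fire_place', 'price']
df = pd.read_csv(data, sep=r'\s+', names=cols)
```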

The goal is to predict the last column, the selling price, as a function of all the other variables. In order to do that, we are going to use a technique called symbolic regression, which attempts to find explicit mathematical formulas that connect the input variables to the target variable.

We will use the desktop software TuringBot, which can be downloaded for free, to find that regression model. The usage is quite straightforward: you load the input file through the interface, select which variable is the target and which variables should be used as input, and then start the search. This is what its interface looks like with the data loaded in:

The TuringBot interface.

We have also enabled the cross validation feature with a 50/50 test/train split (see the “Search options” menu in the image above). This will allow us to easily discard overfit formulas.

After running the optimization for a few minutes, the formulas found by the program and their corresponding out-of-sample errors were the following:

The regression models found for the house prices.

The highlighted one turned out to be the best — more complex solutions did not offer increased out-of-sample accuracy. Its mean relative error in the test dataset was roughly 8%. Here is that formula:

price = fire_place+15.5668+(1.66153+bathrooms)*local_price

The variables that are present in it are only three: the number of bathrooms, the number of fire places and the local price. It is a completely non-trivial fact that the house price should only depend on these three parameters, but the symbolic regression optimization made this fact evident.
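Written out as a Python function (constants as displayed in the interface), the model is simply:

```python
# price = fire_place + 15.5668 + (1.66153 + bathrooms)*local_price
def price(fire_place, bathrooms, local_price):
    return fire_place + 15.5668 + (1.66153 + bathrooms) * local_price
```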

Conclusion

In this tutorial, we have seen an example of generating a regression model. The technique that we used was symbolic regression, implemented in the desktop software TuringBot. The model that was found had a good out-of-sample accuracy in predicting the prices of houses based on their characteristics, and it allowed us to clearly see the most relevant variables in estimating that price.


Neural networks: what are the alternatives?

In this article, we will see some alternatives to neural networks that can be used to solve the same types of machine learning tasks that they do.

What are neural networks

Neural networks are by far the most popular machine learning method. They are capable of automatically learning hidden features from input data prior to computing an output value, and established algorithms exist for finding the optimal internal parameters (weights and biases) based on a training dataset.

The basic architecture is the following. The building blocks are perceptrons, which take values as input, calculate a weighted sum of those values, and apply a non-linear activation function to the result. The output is then either fed into the perceptrons of the next layer, or sent to the output if that was the last layer.

The basic architecture of a neural network. Blue circles are perceptrons.

This architecture is directly inspired by the workings of the human brain. Combined with a neural network’s ability to learn from data, a strong association between this machine learning method and the notion of artificial intelligence can be drawn.

Alternatives to neural networks

Despite being so popular, neural networks are not the only machine learning method available. Several alternatives exist, and in many contexts they may perform better.

Some noteworthy alternatives are the following:

  • Random forests, which consist of an ensemble of decision trees, each trained with a random subset of the training dataset. This method corrects a decision tree’s tendency to overfit the input data.
  • Support vector machines, which attempt to map the input data into a space where it is linearly separable into different categories.
  • k-nearest neighbors algorithm (KNN), which looks for the values in the training dataset that are closest to a new input, and combines the target variables associated with those nearest neighbors into a new prediction.
  • Symbolic regression, a technique which tries to find explicit mathematical formulas that connect the input variables to the target variable.

A noteworthy alternative

Among the alternatives above, all but symbolic regression involve implicit computations under the hood that cannot be easily interpreted. With symbolic regression, the model is an explicit mathematical formula that can be written on a sheet of paper, making this technique an alternative to neural networks of particular interest.

Here is how it works: given a set of base functions, for instance sin(x), exp(x), addition, multiplication, etc, a training algorithm tries to find the combinations of those functions that best predict the output variable taking as input the input variables. It is important that the formulas encountered are the simplest ones possible, so the algorithm will automatically discard a formula if it finds a simpler one that performs just as well.
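As a toy illustration of the idea, a random search over a handful of base functions can already recover a simple hidden formula. This caricature is not how real engines like TuringBot search, which use far more sophisticated optimization, but the principle is the same:

```python
import math
import random

random.seed(1)

# Data generated by a hidden "true" formula: y = sin(x) + 2.
X = [i / 10 for i in range(1, 50)]
Y = [math.sin(x) + 2 for x in X]

# Base functions available to the search.
BASE = [('sin', math.sin), ('exp', math.exp), ('identity', lambda v: v)]

best_err, best_expr = float('inf'), None
for _ in range(20000):
    name, f = random.choice(BASE)
    c = round(random.uniform(-3, 3), 1)  # one free constant
    err = sum((f(x) + c - y) ** 2 for x, y in zip(X, Y))
    if err < best_err:
        best_err, best_expr = err, f'{name}(x) + {c}'

print(best_expr)  # sin(x) + 2.0
```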

Here is an example of the output of a symbolic regression optimization, in which a set of formulas of increasing complexity was found to describe the input dataset. The symbolic regression package used is called TuringBot, a desktop application that can be downloaded for free.

Formulas found with a symbolic regression optimization.

This method very much resembles a scientist looking for mathematical laws that explain data, like Kepler did with data on the positions of planets in the sky to find his laws of planetary motion.

Conclusion

In this article, we have seen some alternatives to neural networks based on completely different ideas, including for instance symbolic regression which generates models that are explicit and more explainable than a neural network. Exploring different models is very valuable, because they may perform differently in different particular contexts.


A free AI software for PC

If you are interested in solving AI problems and would like an easy-to-use desktop software that yields state-of-the-art results, you might like TuringBot. In this article, we will show you how it can be used to easily solve classification and regression problems, and explain the methodology that it uses, which is called symbolic regression.

The software

TuringBot is a desktop application that runs on both Windows and Linux, and that can be downloaded for free from the official website. This is what its interface looks like:

The interface of TuringBot.

The usage is simple: you load your data in CSV or TXT format through the interface, select which column should be predicted and which columns should be used as input, and start the search. The program will look for explicit mathematical formulas that predict this target variable, and show the results in the Solutions box.

Symbolic regression

The name of this technique, which looks for explicit formulas that solve AI problems, is symbolic regression. It is capable of solving the same problems as neural networks, but in an explicit way that does not involve black box computations.

Think of what Kepler did when he extracted his laws of planetary motion from observations. He looked for algebraic equations that could explain this data, and found timeless patterns that are taught to this day in schools. What TuringBot does is something similar to that, but millions of times faster than a human could ever do.

An important point in symbolic regression is that it is not sufficient for a model to be accurate — it also has to be simple. This is why TuringBot’s algorithm tries to find the best formulas of all possible sizes simultaneously, discarding larger formulas that do not perform better than simpler alternatives.

The problems that it can solve

Some examples of problems that can be solved by the program are the following:

  • Regression problems, in which a continuous target variable should be predicted. See here a tutorial in which we use the program to recover a mathematical formula without previous knowledge of what that formula was.
  • Classification problems, in which the goal is to classify inputs into two or more different categories. The rationale of solving this kind of problem using symbolic regression is to represent different categorical variables as different integer numbers, and run the optimization with “classification accuracy” as the search metric (this can easily be selected through the interface). In this article, we teach how to use the program to classify the Iris dataset.
  • Classification of rare events, in which a classification task must be solved on highly imbalanced datasets. The logic is similar to that of a regular classification problem, but in this case a special metric called F1 score should be used (also available in TuringBot). In this article, we found a formula that successfully classified credit card frauds on a real-world dataset that is highly imbalanced.

Getting TuringBot

If you liked the concept of TuringBot, you can download it for free from the official website. There you can also find the official documentation, with more information about the search metrics that are available, the input file formats and the various features that the program offers.


How to find a formula for the nth term of a sequence

Given a sequence of numbers, finding an explicit mathematical formula that computes the nth term of the sequence can be challenging, except in very special cases like arithmetic and geometric sequences.

In the general case, this task involves searching over the space of all mathematical formulas for the most appropriate one. A special technique exists that does just that: symbolic regression. Here we will introduce how it works, and use it to find a formula for the nth term in the Fibonacci sequence (A000045 in the OEIS) as an example.

What symbolic regression is

Regression is the task of establishing a relationship between an output variable and one or more input variables. Symbolic regression solves this task by searching over the space of all possible mathematical formulas for the ones with the greatest accuracy, while trying to keep those formulas as simple as possible.

The technique starts from a set of base functions — for instance, sin(x), exp(x), addition, multiplication, etc. Then it tries to combine those base functions in various ways using an optimization algorithm, keeping track of the most accurate ones found so far.

An important point in symbolic regression is simplicity. It is easy to find a polynomial that will fit any sequence of numbers with perfect accuracy, but that does not really tell you anything, since the number of free parameters in the model is the same as the number of data points. For this reason, a symbolic regression procedure will discard a larger formula if it finds a smaller one that performs just as well.

Finding the nth Fibonacci term

Now let’s show how symbolic regression can be used in practice by trying to find a formula for the Fibonacci sequence using the desktop symbolic regression software TuringBot. The first two terms of the sequence are 1 and 1, and every next term is defined as the sum of the previous two terms. Its first terms are the following, where the first column is the index:

1 1
2 1
3 2
4 3
5 5
6 8
7 13
8 21
9 34
10 55

A list of the first 30 terms can be found on this file: fibonacci.txt.
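That file can be generated with a few lines of Python, one index and one term per row:

```python
# Write the first 30 Fibonacci terms in "index term" format.
a, b = 1, 1
with open('fibonacci.txt', 'w') as out:
    for i in range(1, 31):
        out.write(f'{i} {a}\n')
        a, b = b, a + b
```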

TuringBot takes as input TXT or CSV files with one variable per column, and efficiently finds formulas that connect those variables. This is what it looks like after we load fibonacci.txt and run the optimization:

Finding a formula for the nth Fibonacci term with TuringBot.

The software finds not only a single formula, but the best formulas of all possible complexities. A larger formula is only shown if it performs better than all simpler alternatives. In this case, the last formula turned out to predict with perfect accuracy every single one of the first 30 Fibonacci terms. The formula is the following:

f(x) = floor(cosh(-0.111572+0.481212*x))

Clearly a very elegant solution. The same procedure can be used to find a formula for the nth term of any other sequence (if it exists).
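It is easy to sanity-check this result. Since the constants above are shown rounded to six decimal places, we verify the first 25 terms here (the displayed precision can shift the floor for the very largest terms):

```python
import math

def fib(n):
    # nth Fibonacci term by direct iteration.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def formula(x):
    return math.floor(math.cosh(-0.111572 + 0.481212 * x))

for x in range(1, 26):
    assert formula(x) == fib(x), x
print('first 25 terms match')
```

The formula works because cosh(-0.111572 + 0.481212·x) ≈ φ^x/√5: the fitted constant 0.481212 is ln φ and e^(-0.111572)/2 ≈ 1/√5, so this is Binet's formula in disguise.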

Conclusion

In this tutorial, we have seen how the symbolic regression software TuringBot can be used to find a closed-form expression for the nth term in a sequence of numbers. We found a very short formula for the Fibonacci sequence by simply writing it into a text file with one number per row and loading this file into the software.

If you are interested in trying TuringBot on your own data, you can download it from the official website. It is available for both Windows and Linux.


A machine learning software for data science

Data science is becoming more and more widespread, pushed by companies that are finding that very valuable and actionable information can be extracted from their databases.

It can be challenging to develop useful models from raw data. Here we will introduce a tool that makes it very easy to develop state of the art models from any dataset.

What is TuringBot

TuringBot is a desktop machine learning software. It runs on both Windows and Linux, and what it does is generate models that predict some target variable taking as input one or more input variables. It does that through a technique called symbolic regression. This is what its interface looks like:

TuringBot’s interface.

The idea of symbolic regression is to search over the space of all possible mathematical formulas for the ones that best connect the input variables to the target variable, while trying to keep those formulas as simple as possible. The target variable can be anything: for instance, it can represent different categorical variables as different integer numbers, allowing the program to solve classification problems, or it can be a regular continuous variable.

Machine learning with TuringBot

The usage of TuringBot is very straightforward. All you have to do is save your data in CSV or TXT format, with one variable per column, and load this input file through the program’s interface.

Once the data is loaded, you can select the target variable and which variables should be used as input, as well as the search metric, and then start the search. Several search metrics are available, including RMS error, mean error and classification accuracy. A list of formulas encountered so far will be shown in real time, ordered by complexity. Those formulas can be easily exported as Python, C or text from the interface:

Some solutions found by TuringBot. They can readily be exported to common programming languages.

Most machine learning methods are black boxes, which carry out complex computations under the hood before giving a result. This is how neural networks and random forests work, for instance. A great advantage of TuringBot over these methods is that the models that it generates are very explicit, allowing some understanding to be gained into the data. This turns data science into something much more similar to natural science and its search for mathematical laws that explain the world.

How to get the software

If you are interested in trying TuringBot on your own data, you can download it for free from the official website. There you can also find the official documentation, with detailed information about all the features and parameters of the software. Many engineers and data scientists are already making use of the software to find hidden patterns in their data.


Symbolic regression tutorial with TuringBot

In this tutorial, we are going to show how you can find a formula from your data using the symbolic regression software TuringBot. It is a desktop application that runs on both Windows and Linux, and as you will see, it is very simple to use.

Preparing the data

TuringBot takes as input .txt or .csv files containing one variable per column. The first row may contain the names of the variables; otherwise, they will be labelled col1, col2, col3, etc.

For instance, the following is a valid input file:

x y z w classification
5.20 2.70 3.90 1.40 1
6.50 2.80 4.60 1.50 1
7.70 2.80 6.70 2.00 2
5.90 3.20 4.80 1.80 1
5.00 3.50 1.60 0.60 0
5.10 3.50 1.40 0.20 0
4.60 3.10 1.50 0.20 0
6.90 3.20 5.70 2.30 2
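A file in this format can be produced with any tool that writes plain text; as a sketch, the first few rows of the table above could be written from Python like this (the file name `input.txt` is arbitrary):

```python
# Write a whitespace-separated .txt file with the variable names in
# the first row, in the layout that TuringBot loads directly.
rows = [
    ("x", "y", "z", "w", "classification"),
    (5.20, 2.70, 3.90, 1.40, 1),
    (6.50, 2.80, 4.60, 1.50, 1),
    (5.00, 3.50, 1.60, 0.60, 0),
]
with open("input.txt", "w") as f:
    for row in rows:
        f.write(" ".join(str(v) for v in row) + "\n")
```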

Loading the data into TuringBot

This is what the program looks like when you open it:

The TuringBot interface.

By clicking on the “Input file” button on the upper left, you can select your input file and load it. Different search metrics are available, including for instance classification accuracy. A handy cross-validation feature can also be enabled in the “Search options” box: if enabled, it automatically creates a train/test split and lets you track the out-of-sample error as the optimization progresses. In this example, however, we are going to keep things simple and just use the defaults.
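The idea behind that cross-validation option can be sketched in a few lines: hold out a fraction of the rows so the error on unseen data can be monitored during the search. TuringBot handles this internally when the feature is enabled; the random array below is just a stand-in for a loaded dataset:

```python
import numpy as np

# Stand-in for a loaded dataset: 100 rows, 2 columns.
rng = np.random.default_rng(0)
data = rng.random((100, 2))

# Roughly 80/20 train/test split, chosen at random per row.
mask = rng.random(len(data)) < 0.8
train, test = data[mask], data[~mask]
```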

Finding the formulas

After loading the data, you can click on the play button at the top of the interface to start the optimization. The best formulas found so far will be shown in the “Solutions” box, in ascending order of complexity. A formula is only shown if its accuracy exceeds that of every simpler alternative: in symbolic regression, the goal is not simply to find a formula, but to find the simplest formulas possible.
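That "shown only if better than every simpler alternative" rule amounts to keeping the Pareto front of complexity versus error. A minimal sketch, using hypothetical (complexity, error) pairs for candidate formulas:

```python
# Hypothetical candidate formulas as (complexity, error) pairs.
candidates = [(1, 0.90), (3, 0.50), (4, 0.55), (7, 0.10)]

# Keep a formula only if it is more accurate than every simpler one.
pareto = []
for size, error in sorted(candidates):
    if not pareto or error < pareto[-1][1]:
        pareto.append((size, error))

print(pareto)  # -> [(1, 0.9), (3, 0.5), (7, 0.1)]
```

The candidate of complexity 4 is dropped because a simpler formula (complexity 3) already achieves a lower error.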

Here are the formulas it found for an example dataset:

Finding formulas with TuringBot.

The formulas are all written in a format that is compatible out of the box with Python and C. Indeed, the menu on the upper right allows you to export the solutions to these languages:

Exporting solutions to different languages.
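As a hypothetical illustration of what an exported solution might look like in Python (the actual output depends on the formulas found for your data):

```python
from math import sqrt

# Hypothetical exported solution: a plain function of the inputs.
def solution(x):
    return sqrt(x)

print(solution(4.0))  # -> 2.0
```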

In this example, the true formula turned out to be sqrt(x), which was recovered in a few seconds. The methodology would be the same for a real-world dataset with many input variables and an unknown dependency between them.
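One can check a recovered formula against the data directly; a quick sketch, assuming the dataset was generated from sqrt(x) as in this example:

```python
import numpy as np

# Data generated from the true formula sqrt(x).
x = np.linspace(0, 4, 50)
y = np.sqrt(x)

recovered = lambda v: v ** 0.5  # the formula found by the search
rms = np.sqrt(np.mean((recovered(x) - y) ** 2))
print(rms)  # negligible error
```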

How to get TuringBot

If you liked this tutorial, we encourage you to download TuringBot for free from the official website. As we have shown, it is very simple to use, and its powerful mathematical modelling capabilities allow you to find very subtle numerical patterns in your data, much like a scientist would from empirical observations, but automatically and millions of times faster.


Machine learning with symbolic regression

Many machine learning methods are available today, including for instance neural networks, random forests and support vector machines. In this article, we will talk about a relatively unexplored technique called symbolic regression, and show how it can be used to solve machine learning problems in a very transparent and explicit way.

What is machine learning

Machine learning concerns algorithms capable of predicting numerical values (regression) and assigning classes (classification), among other tasks. The real world is messy and randomness appears everywhere, so a major challenge for these algorithms is discerning meaningful signals from the noise contained in training datasets.

What most machine learning methods have in common is that they are very implicit and resemble black boxes: numbers are fed into the model, and it spits out a result after performing a series of complex computations under the hood. This kind of processing of information is strongly connected to the notion of “artificial intelligence”, since the inner workings of the human brain are also very hard to describe, while it is capable of learning and recognizing patterns across a very wide range of domains.

Symbolic regression

Symbolic regression is a technique that looks for mathematical formulas that predict a target variable from one or more input variables. A symbolic model is thus nothing more than an algebraic formula that can be written on a piece of paper.

A simple case of a symbolic model is a polynomial. Any dataset can be represented with perfect accuracy by a polynomial, but that is not very interesting, because polynomials quickly diverge outside the training domain and contain as many free parameters as there are training data points. So they do not really compress information in any way.
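This is easy to demonstrate with NumPy: a polynomial of degree n-1 passes exactly through any n points, so it "fits" arbitrary data while compressing nothing.

```python
import numpy as np

# Six arbitrary data points.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 6)
y = rng.random(6)

# A degree-5 polynomial has 6 coefficients: one per data point.
coeffs = np.polyfit(x, y, deg=len(x) - 1)
fit = np.polyval(coeffs, x)
print(np.allclose(fit, y))  # -> True: perfect interpolation of noise
```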

More interesting models are found by combining a set of base functions and trying to find the simplest combinations that predict some target variable. Examples of base functions are trigonometric functions, exponentials, sum, multiplication, division, etc.
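As a sketch of what "combining base functions" means, a candidate model is a composition of such primitives applied to the input variables; the particular composition below is just an illustrative example:

```python
import numpy as np

# A small set of base functions for the search to combine.
base = {"cos": np.cos, "exp": np.exp, "add": np.add, "mul": np.multiply}

# One candidate composition: x*cos(10*x) + 2.
x = np.linspace(0, 1, 5)
candidate = base["add"](base["mul"](x, base["cos"](10 * x)), 2)
```

The search then explores the space of such compositions, scoring each one against the target variable.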

For instance, these are some of the base functions used by the symbolic regression software TuringBot:

Base functions used by TuringBot.

After the base functions are defined, the task is to combine them in such a way that the target variable is successfully predicted from the input variables. There is more than one way to carry out this optimization: one might be interested in maximizing classification accuracy, or in recovering the overall shape of a curve without much regard for outliers. For this reason, TuringBot allows many different search metrics to be used:

The search metrics available in TuringBot.
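Two of these metrics can be sketched in a few lines of Python; the rounding-based accuracy below assumes integer-encoded class labels:

```python
import numpy as np

def rms_error(y_true, y_pred):
    # Root-mean-square deviation between predictions and targets.
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def classification_accuracy(y_true, y_pred):
    # Fraction of predictions that round to the correct class label.
    return np.mean(np.round(y_pred) == np.asarray(y_true))

print(rms_error([1, 2, 3], [1, 2, 5]))                       # ~1.155
print(classification_accuracy([0, 1, 1], [0.2, 0.9, 0.4]))   # ~0.667
```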

The method is clearly very general: it can be creatively applied to a wide range of problems, from predicting continuous variables to solving classification tasks.

Conclusion

In this article, we have seen how symbolic regression is an alternative machine learning method capable of generating explicit models and solving various classes of problems in an elegant way. If you are interested in generating symbolic models from your own data and seeing what patterns it can find, you can download the symbolic regression software TuringBot, which works on both Windows and Linux, for free.
