In this article, we will look at some alternatives to neural networks that can be used to solve the same types of machine learning tasks.
What are neural networks?
Neural networks are by far the most popular machine learning method. They are capable of automatically learning hidden features from input data prior to computing an output value, and established algorithms exist for finding the optimal internal parameters (weights and biases) based on a training dataset.
The basic architecture is the following: the building blocks are perceptrons, which take values as input, calculate a weighted sum of those values, and apply a non-linear activation function to the result. That result is then either fed into the perceptrons of the next layer, or, if that was the last layer, returned as the network's output.
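To make this concrete, here is a minimal sketch of a perceptron and a tiny two-layer arrangement of perceptrons, assuming NumPy is available and using a sigmoid as the non-linear activation. The specific weight and bias values are arbitrary illustrations, not learned parameters:

```python
import numpy as np

def perceptron(x, weights, bias):
    """A single perceptron: weighted sum of the inputs plus a bias,
    passed through a non-linear activation (here, a sigmoid)."""
    z = np.dot(weights, x) + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid squashes the result into (0, 1)

# A tiny two-layer network: two hidden perceptrons feed one output perceptron.
x = np.array([0.5, -1.2])  # input values
hidden = np.array([
    perceptron(x, np.array([0.8, -0.4]), 0.1),
    perceptron(x, np.array([-0.3, 0.9]), -0.2),
])
output = perceptron(hidden, np.array([1.5, -1.1]), 0.05)
```

Training a neural network means adjusting all of those weights and biases so that the final output matches the training data as closely as possible.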
This architecture is directly inspired by the workings of the human brain. Combined with a neural network's ability to learn from data, this makes it easy to draw a strong association between this machine learning method and the notion of artificial intelligence.
Alternatives to neural networks
Despite being so popular, neural networks are not the only machine learning method available. Several alternatives exist, and in many contexts these alternatives may outperform neural networks.
Some noteworthy alternatives are the following:
- Random forests, which consist of an ensemble of decision trees, each trained with a random subset of the training dataset. This method corrects a decision tree’s tendency to overfit the input data.
- Support vector machines, which attempt to map the input data into a space where it is linearly separable into different categories.
- k-nearest neighbors (KNN), which looks for the values in the training dataset that are closest to a new input and combines the target variables associated with those nearest neighbors into a new prediction.
- Symbolic regression, a technique which tries to find explicit mathematical formulas that connect the input variables to the target variable.
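As an illustration of how the first three alternatives can be tried out in practice, the following sketch fits a random forest, a support vector machine, and a KNN classifier to a toy dataset, assuming scikit-learn is available. The model settings shown are ordinary defaults, not tuned values:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# A small synthetic classification dataset, split into train and test parts.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "support vector machine": SVC(kernel="rbf"),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}

# Fit each model and report its accuracy on the held-out test set.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: {model.score(X_test, y_test):.2f}")
```

Which of the three scores best depends on the dataset, which is precisely why it is worth trying several of them.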
A noteworthy alternative
Among the alternatives above, all but symbolic regression involve implicit computations under the hood that cannot be easily interpreted. With symbolic regression, the model is an explicit mathematical formula that can be written on a sheet of paper, which makes this technique a particularly interesting alternative to neural networks.
Here is how it works: given a set of base functions, for instance sin(x), exp(x), addition, multiplication, etc., a training algorithm tries to find the combinations of those functions that best predict the output variable from the input variables. It is important for the resulting formulas to be as simple as possible, so the algorithm will automatically discard a formula if it finds a simpler one that performs just as well.
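The following toy sketch, written from scratch for illustration (it is not how any real symbolic regression engine works internally), captures the idea: candidate formulas built from a few base functions are tried in order of increasing complexity, and the simplest one that fits the data well enough is kept:

```python
import math

# Data generated from a hidden formula: y = x + sin(x).
xs = [0.1 * i for i in range(1, 50)]
ys = [x + math.sin(x) for x in xs]

# Candidate formulas, listed from simplest to most complex.
candidates = [
    ("x",          lambda x: x),
    ("sin(x)",     lambda x: math.sin(x)),
    ("exp(x)",     lambda x: math.exp(x)),
    ("x + sin(x)", lambda x: x + math.sin(x)),
    ("x * sin(x)", lambda x: x * math.sin(x)),
]

def mean_squared_error(f):
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Scan in order of complexity and stop at the first good fit,
# so simpler formulas are automatically preferred.
best = next(name for name, f in candidates if mean_squared_error(f) < 1e-9)
print(best)  # prints "x + sin(x)"
```

A real symbolic regression tool searches a vastly larger space of formulas with a genetic or similar optimization algorithm rather than a fixed list, but the simplicity-first principle is the same.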
Here is an example of output from a symbolic regression optimization, in which a set of formulas of increasing complexity was found that describes the input dataset. The symbolic regression package used is called TuringBot, a desktop application that can be downloaded for free.
This method closely resembles the way a scientist searches for mathematical laws that explain data, much as Kepler analyzed data on the positions of planets in the sky to arrive at his laws of planetary motion.
In this article, we have seen some alternatives to neural networks based on completely different ideas, including symbolic regression, which generates models that are explicit and more explainable than a neural network. Exploring different models is very valuable, because each may perform better in particular contexts.