In machine learning, normalizing your data (zero mean, unit variance per feature) often leads to faster convergence when training models. Here we show how to normalize a dataset in Python using either NumPy or Pandas.
To normalize a NumPy array, you can use:
import numpy as np

data = np.loadtxt('data.txt')
for col in range(data.shape[1]):  # iterate over columns
    data[:, col] -= np.mean(data[:, col])  # shift mean to 0
    data[:, col] /= np.std(data[:, col])   # scale std to 1
Here data.shape[1] is the number of columns in the dataset, and the loop shifts and scales each column so that its mean becomes 0 and its standard deviation becomes 1.
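The column-by-column loop can also be written as a single vectorized expression using NumPy broadcasting, which is both shorter and faster. A minimal sketch, using a small in-memory array in place of the file loaded above:

```python
import numpy as np

# Small example array standing in for data loaded from a file.
data = np.array([[1.0, 10.0],
                 [2.0, 20.0],
                 [3.0, 30.0]])

# Broadcasting normalizes every column at once: subtract the
# per-column mean, then divide by the per-column standard deviation.
data = (data - data.mean(axis=0)) / data.std(axis=0)
```

After this, data.mean(axis=0) is (numerically) all zeros and data.std(axis=0) is all ones, matching what the loop produces.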
Normalizing a Pandas dataframe is even easier:
import pandas as pd

df = pd.read_csv('data.csv')
df = (df - df.mean()) / df.std()
This will normalize each numeric column of the dataframe. Note one subtlety: Pandas' std() computes the sample standard deviation (ddof=1) by default, while NumPy's np.std() computes the population standard deviation (ddof=0), so the two approaches differ slightly; pass ddof=0 to the Pandas std() call if you want results identical to the NumPy version.
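A quick way to sanity-check the result is to inspect the column statistics afterwards. A minimal sketch, using a small hand-built dataframe in place of the hypothetical data.csv:

```python
import pandas as pd

# Toy dataframe standing in for data read from a CSV file.
df = pd.DataFrame({'a': [1.0, 2.0, 3.0],
                   'b': [10.0, 20.0, 30.0]})

df = (df - df.mean()) / df.std()

# Each column now has mean 0 and (sample) standard deviation 1.
print(df.mean())
print(df.std())
```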