import pandas as pd
print(pd.__version__)
Download link: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
Summary: Predict the price of a vehicle given other information about it
Parameters:
The first argument is the location of the file (not necessarily a .csv file)
sep specifies the separator and can also be a regular expression. Here we trim the whitespace around the commas
engine selects the parsing engine, either 'c' or 'python'. The C engine is faster, but the Python engine supports more features, such as regular-expression separators
auto_data = pd.read_csv('../data/imports-85.data', sep=r'\s*,\s*', engine='python')
auto_data.head()
Numeric packages typically have special functions for dealing with not-a-number (NaN) values, so it's useful to replace our question marks with NaN, which we'll do using the NumPy library.
import numpy as np
auto_data = auto_data.replace('?', np.nan)
auto_data.head()
Note that the automobile price is not present in the default describe() output, because describe() only summarizes numeric columns
auto_data.describe()
auto_data.describe(include='all')
If you examine the price column alone, you'll find that it is of type object. An object column cannot be summarized with numerical statistics, so we need to convert it to type float.
This step is part of data cleaning, also called data cleansing: identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting the dirty or coarse records.
auto_data['price'].describe()
If a value cannot be converted, set it to NaN (by passing errors='coerce')
auto_data['price'] = pd.to_numeric(auto_data['price'], errors='coerce')
auto_data['price'].describe()
# Drop the normalized-losses column, which contains many missing values
auto_data = auto_data.drop('normalized-losses', axis=1)
auto_data.head()
auto_data.describe()
auto_data['horsepower'].describe()
auto_data['horsepower'] = pd.to_numeric(auto_data['horsepower'], errors='coerce')
auto_data['horsepower'].describe()
auto_data['num-of-cylinders'].describe()
cylinders_dict = {'two': 2, 'three': 3, 'four': 4,
                  'five': 5, 'six': 6, 'eight': 8, 'twelve': 12}
auto_data['num-of-cylinders'] = auto_data['num-of-cylinders'].replace(cylinders_dict)
auto_data.head()
# One-hot encode the categorical columns: each category becomes its own 0/1 indicator column
auto_data = pd.get_dummies(auto_data,
columns=['make',
'fuel-type',
'aspiration',
'num-of-doors',
'body-style',
'drive-wheels',
'engine-location',
'engine-type',
'fuel-system'])
auto_data.head()
# Drop any remaining rows that still contain missing values
auto_data = auto_data.dropna()
auto_data.head()
# Verify the cleanup: this should return an empty DataFrame
auto_data[auto_data.isnull().any(axis=1)]
We can now use our data to build our models.
train_test_split offers an easy way to split our data set into training instances and test instances. While building machine learning models we want to make sure the model works well on new instances it has never seen before. The model has already seen all of the training data; that is what we use to tune its parameters. The test data is what we use to check whether the model generalizes.
The features that form the input to our model include all of the columns in our dataframe except the price column, which is the label. So we drop the price column from X and keep it as our Y variable.
from sklearn.model_selection import train_test_split
X = auto_data.drop('price', axis=1)
# Taking the labels (price)
Y = auto_data['price']
# Splitting into an 80% training set and a 20% test set so we can evaluate accuracy on unseen data
X_train, x_test, Y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
Instantiate a linear regression object, called an estimator. An estimator is a high-level API that scikit-learn offers and that implements the fit method; any object in scikit-learn that learns from data, i.e. that can be trained, is an estimator. Here LinearRegression is our estimator object, and we call its fit method, passing in the X data as well as the Y variable, to start the training process.
from sklearn.linear_model import LinearRegression
linear_model = LinearRegression()
linear_model.fit(X_train, Y_train)
Linear regression involves fitting a line using a slope as well as an intercept. fit_intercept defaults to True, indicating that an intercept should be fitted because the data is not centered around zero. In older scikit-learn versions, normalize defaults to False, indicating that we do not want to normalize the data. Normalization is similar to standardization, except that it subtracts the mean and then divides by the L2 norm rather than by the standard deviation.
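To make that distinction concrete, here is a minimal sketch on a made-up vector (the numbers are purely illustrative):
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

# Standardization: subtract the mean, divide by the standard deviation
standardized = (x - x.mean()) / x.std()

# Normalization in the sense described above: subtract the mean,
# then divide by the L2 norm of the centered values
centered = x - x.mean()
normalized = centered / np.linalg.norm(centered)

print(standardized)
print(normalized)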
linear_model.score(X_train, Y_train)
linear_model.coef_
List of features and their coefficients, ordered by coefficient value.
First we get the feature names from the columns of our training data set, then set up a pandas Series that associates each coefficient with its feature name and sorts them by coefficient. This association lets us see how much weight is given to a particular feature in the final regression model.
We can see which factors really drive the price of automobiles. You can immediately see that some features have negative coefficients, which means those features are negatively correlated with the price.
predictors = X_train.columns
coef = pd.Series(linear_model.coef_,predictors).sort_values()
print(coef)
y_predict = linear_model.predict(x_test)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15, 5)
plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')
plt.legend()
plt.show()
For our model, how well do the features describe the price? The score method returns the R² (coefficient of determination) on the test set.
r_square = linear_model.score(x_test, y_test)
r_square
from sklearn.metrics import mean_squared_error
linear_model_mse = mean_squared_error(y_test, y_predict)
linear_model_mse
import math
math.sqrt(linear_model_mse)
An interesting metric for any regression model is the root mean square error, or RMSE, which is the square root of the mean squared error. The RMSE describes, on average, how far our predicted label is from the actual label.
The RMSE is around 5,100 for our model, which means that on average the price predicted by our model is around $5,100 away from the actual price, in either the positive or negative direction.
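As a side note, recent scikit-learn releases can compute the RMSE directly; a minimal sketch, assuming scikit-learn 1.4 or later where root_mean_squared_error is available:
from sklearn.metrics import root_mean_squared_error

# Equivalent to math.sqrt(mean_squared_error(y_test, y_predict))
root_mean_squared_error(y_test, y_predict)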
One way to prevent overfitting in regression is to use regularized regression models. Lasso regression penalizes large regression coefficients. Remember that the bias-variance trade-off pulls in opposite directions: regularization reduces the variance error but increases the bias error, so we have to find the right balance.
Cost Function: RSS + α*(sum of absolute values of coefficients)
RSS = Residual Sum of Squares
Larger values of α result in smaller coefficients, since the overall cost function has to be minimized
α is a hyperparameter: a tuning parameter for your machine learning model that you adjust to find the best possible model for your data set. In Lasso, α also helps you eliminate unimportant features, because it can drive their coefficients to exactly zero (see the cross-validation sketch at the end of this section)
from sklearn.linear_model import Lasso
# Note: the normalize option was removed in scikit-learn 1.2; on newer
# versions, scale the features beforehand instead (e.g. with StandardScaler)
lasso_model = Lasso(alpha=0.5, normalize=True)
lasso_model.fit(X_train, Y_train)
lasso_model.score(X_train, Y_train)
coef = pd.Series(lasso_model.coef_,predictors).sort_values()
print(coef)
y_predict = lasso_model.predict(x_test)
plt.rcParams['figure.figsize'] = (15, 5)
plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')
plt.legend()
plt.show()
r_square = lasso_model.score(x_test, y_test)
r_square
lasso_model_mse = mean_squared_error(y_test, y_predict)
math.sqrt(lasso_model_mse)
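The α = 0.5 above was picked by hand. Since α is a hyperparameter, a common way to choose it is a cross-validated search over several candidate values. Here is a minimal sketch of that idea; it is not part of the original walkthrough, the candidate values of α are arbitrary, and the features are standardized in a Pipeline because newer scikit-learn versions no longer accept normalize=True:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

# Standardize the features, then fit a Lasso model, as one pipeline
pipeline = make_pipeline(StandardScaler(), Lasso(max_iter=10000))

# Candidate values of alpha, chosen arbitrarily for illustration
param_grid = {'lasso__alpha': [0.01, 0.1, 0.5, 1.0, 5.0]}

# 5-fold cross-validation on the training set; scoring defaults to R squared
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, Y_train)

print(search.best_params_)
print(search.score(x_test, y_test))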
Cost Function: RSS + α*(sum of squares of coefficients)
RSS = Residual Sum of Squares
Larger values of α result in smaller coefficients, since the overall cost function has to be minimized
Ridge regression penalizes large coefficients even more heavily than Lasso, because the coefficients are squared in the cost function; a small numeric comparison of the two penalties follows below
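To see the difference between the two penalty terms numerically, here is a tiny sketch on a made-up coefficient vector (the values are purely illustrative):
import numpy as np

coefs = np.array([0.1, -2.0, 5.0])

l1_penalty = np.abs(coefs).sum()   # Lasso penalty term: 7.1
l2_penalty = (coefs ** 2).sum()    # Ridge penalty term: 29.01 - the largest coefficient dominates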
from sklearn.linear_model import Ridge
# As with Lasso above, the normalize option was removed in scikit-learn 1.2
ridge_model = Ridge(alpha=0.05, normalize=True)
ridge_model.fit(X_train, Y_train)
ridge_model.score(X_train, Y_train)
coef = pd.Series(ridge_model.coef_,predictors).sort_values()
print(coef)
y_predict = ridge_model.predict(x_test)
plt.rcParams['figure.figsize'] = (15, 6)
plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')
plt.legend()
plt.show()
r_square = ridge_model.score(x_test, y_test)
r_square
ridge_model_mse = mean_squared_error(y_test, y_predict)
math.sqrt(ridge_model_mse)
We can also fit a Support Vector Regression (SVR) model as our estimator. With a linear kernel the learned coefficients are exposed through coef_, so we can inspect them just as we did for the linear models above.
from sklearn.svm import SVR
regression_model = SVR(kernel='linear', C=1.0)
regression_model.fit(X_train, Y_train)
regression_model.score(X_train, Y_train)
coef = pd.Series(regression_model.coef_[0], predictors).sort_values()
print(coef)
y_predict = regression_model.predict(x_test)
plt.rcParams['figure.figsize'] = (15, 5)
plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')
plt.legend()
plt.show()
r_square = regression_model.score(x_test, y_test)
r_square
regression_model_mse = mean_squared_error(y_test, y_predict)
math.sqrt(regression_model_mse)