Lasso and Ridge Regression

First, use Linear Regression to predict automobile prices. Then apply Lasso and Ridge Regression models to the same data and compare the results.
In [1]:
import pandas as pd
In [2]:
print(pd.__version__)
0.23.4

Download the Automobile data set

Download link: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data

Summary: Predict the price of a vehicle given other information about it

Parameters:
The 1st argument is the location of the file (not necessarily a .csv file)
sep specifies the separator, which can also be a regular expression. Here we trim whitespace around the commas
engine selects the parsing engine, either c or python. The C engine is faster, but the Python engine supports more features, including the regular-expression separator used here

In [3]:
auto_data = pd.read_csv('../data/imports-85.data', sep=r'\s*,\s*', engine='python')
auto_data.head()
Out[3]:
symboling normalized-losses make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 ? alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 13495
1 3 ? alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 16500
2 1 ? alfa-romero gas std two hatchback rwd front 94.5 ... 152 mpfi 2.68 3.47 9.0 154 5000 19 26 16500
3 2 164 audi gas std four sedan fwd front 99.8 ... 109 mpfi 3.19 3.4 10.0 102 5500 24 30 13950
4 2 164 audi gas std four sedan 4wd front 99.4 ... 136 mpfi 3.19 3.4 8.0 115 5500 18 22 17450

5 rows × 26 columns
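Note: the raw file at the download link above has no header row, so the local copy read here presumably had one prepended. If you work directly with the UCI file, supply the column names yourself; a sketch, with names taken from the accompanying imports-85.names documentation:

# Column names from the UCI imports-85.names documentation
columns = ['symboling', 'normalized-losses', 'make', 'fuel-type', 'aspiration',
           'num-of-doors', 'body-style', 'drive-wheels', 'engine-location',
           'wheel-base', 'length', 'width', 'height', 'curb-weight',
           'engine-type', 'num-of-cylinders', 'engine-size', 'fuel-system',
           'bore', 'stroke', 'compression-ratio', 'horsepower', 'peak-rpm',
           'city-mpg', 'highway-mpg', 'price']

auto_data = pd.read_csv('../data/imports-85.data', sep=r'\s*,\s*',
                        engine='python', header=None, names=columns)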

Replace the '?' placeholders with NaN

Numeric packages typically have special functions for dealing with not-a-number (NaN) values, so it's useful to replace our question marks with NaNs, which we do using the NumPy library.

In [4]:
import numpy as np

auto_data = auto_data.replace('?', np.nan)
auto_data.head()
Out[4]:
symboling normalized-losses make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 NaN alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 13495
1 3 NaN alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 16500
2 1 NaN alfa-romero gas std two hatchback rwd front 94.5 ... 152 mpfi 2.68 3.47 9.0 154 5000 19 26 16500
3 2 164 audi gas std four sedan fwd front 99.8 ... 109 mpfi 3.19 3.4 10.0 102 5500 24 30 13950
4 2 164 audi gas std four sedan 4wd front 99.4 ... 136 mpfi 3.19 3.4 8.0 115 5500 18 22 17450

5 rows × 26 columns
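Before cleaning further, it helps to know where the NaNs landed. A quick per-column count (a suggested addition, not part of the original notebook):

print(auto_data.isnull().sum().sort_values(ascending=False).head(10))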

Information about numeric fields in our dataframe

Note that the automobile price is not present: describe() summarizes only numeric columns, and price is currently stored as a non-numeric type

In [5]:
auto_data.describe()
Out[5]:
symboling wheel-base length width height curb-weight engine-size compression-ratio city-mpg highway-mpg
count 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000
mean 0.834146 98.756585 174.049268 65.907805 53.724878 2555.565854 126.907317 10.142537 25.219512 30.751220
std 1.245307 6.021776 12.337289 2.145204 2.443522 520.680204 41.642693 3.972040 6.542142 6.886443
min -2.000000 86.600000 141.100000 60.300000 47.800000 1488.000000 61.000000 7.000000 13.000000 16.000000
25% 0.000000 94.500000 166.300000 64.100000 52.000000 2145.000000 97.000000 8.600000 19.000000 25.000000
50% 1.000000 97.000000 173.200000 65.500000 54.100000 2414.000000 120.000000 9.000000 24.000000 30.000000
75% 2.000000 102.400000 183.100000 66.900000 55.500000 2935.000000 141.000000 9.400000 30.000000 34.000000
max 3.000000 120.900000 208.100000 72.300000 59.800000 4066.000000 326.000000 23.000000 49.000000 54.000000

Information about all fields in our dataframe

In [6]:
auto_data.describe(include='all')
Out[6]:
symboling normalized-losses make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
count 205.000000 164 205 205 205 203 205 205 205 205.000000 ... 205.000000 205 201 201 205.000000 203 203 205.000000 205.000000 201
unique NaN 51 22 2 2 2 5 3 2 NaN ... NaN 8 38 36 NaN 59 23 NaN NaN 186
top NaN 161 toyota gas std four sedan fwd front NaN ... NaN mpfi 3.62 3.4 NaN 68 5500 NaN NaN 18150
freq NaN 11 32 185 168 114 96 120 202 NaN ... NaN 94 23 20 NaN 19 37 NaN NaN 2
mean 0.834146 NaN NaN NaN NaN NaN NaN NaN NaN 98.756585 ... 126.907317 NaN NaN NaN 10.142537 NaN NaN 25.219512 30.751220 NaN
std 1.245307 NaN NaN NaN NaN NaN NaN NaN NaN 6.021776 ... 41.642693 NaN NaN NaN 3.972040 NaN NaN 6.542142 6.886443 NaN
min -2.000000 NaN NaN NaN NaN NaN NaN NaN NaN 86.600000 ... 61.000000 NaN NaN NaN 7.000000 NaN NaN 13.000000 16.000000 NaN
25% 0.000000 NaN NaN NaN NaN NaN NaN NaN NaN 94.500000 ... 97.000000 NaN NaN NaN 8.600000 NaN NaN 19.000000 25.000000 NaN
50% 1.000000 NaN NaN NaN NaN NaN NaN NaN NaN 97.000000 ... 120.000000 NaN NaN NaN 9.000000 NaN NaN 24.000000 30.000000 NaN
75% 2.000000 NaN NaN NaN NaN NaN NaN NaN NaN 102.400000 ... 141.000000 NaN NaN NaN 9.400000 NaN NaN 30.000000 34.000000 NaN
max 3.000000 NaN NaN NaN NaN NaN NaN NaN NaN 120.900000 ... 326.000000 NaN NaN NaN 23.000000 NaN NaN 49.000000 54.000000 NaN

11 rows × 26 columns

If you examine the price column alone, you'll find that it is of type object. An object column cannot be summarized with numerical statistics, so we need to convert it to type float.

Data Cleaning

Also called data cleansing, this involves identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting the dirty or coarse records.

What data type is price?

In [7]:
auto_data['price'].describe()
Out[7]:
count       201
unique      186
top       18150
freq          2
Name: price, dtype: object

Convert the values in the price column to numeric values

If a value cannot be converted, it is set to NaN (via errors='coerce')

In [8]:
auto_data['price'] = pd.to_numeric(auto_data['price'], errors='coerce') 
In [9]:
auto_data['price'].describe()
Out[9]:
count      201.000000
mean     13207.129353
std       7947.066342
min       5118.000000
25%       7775.000000
50%      10295.000000
75%      16500.000000
max      45400.000000
Name: price, dtype: float64
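As a quick check (a suggested addition), we can list the rows whose price is missing; these are the rows excluded from the count of 201 above:

print(auto_data[auto_data['price'].isnull()].index.tolist())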

Dropping a column which we deem unnecessary

In [10]:
auto_data = auto_data.drop('normalized-losses', axis=1)
auto_data.head()
Out[10]:
symboling make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base length ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 13495.0
1 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 16500.0
2 1 alfa-romero gas std two hatchback rwd front 94.5 171.2 ... 152 mpfi 2.68 3.47 9.0 154 5000 19 26 16500.0
3 2 audi gas std four sedan fwd front 99.8 176.6 ... 109 mpfi 3.19 3.4 10.0 102 5500 24 30 13950.0
4 2 audi gas std four sedan 4wd front 99.4 176.6 ... 136 mpfi 3.19 3.4 8.0 115 5500 18 22 17450.0

5 rows × 25 columns

In [11]:
auto_data.describe()
Out[11]:
symboling wheel-base length width height curb-weight engine-size compression-ratio city-mpg highway-mpg price
count 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 201.000000
mean 0.834146 98.756585 174.049268 65.907805 53.724878 2555.565854 126.907317 10.142537 25.219512 30.751220 13207.129353
std 1.245307 6.021776 12.337289 2.145204 2.443522 520.680204 41.642693 3.972040 6.542142 6.886443 7947.066342
min -2.000000 86.600000 141.100000 60.300000 47.800000 1488.000000 61.000000 7.000000 13.000000 16.000000 5118.000000
25% 0.000000 94.500000 166.300000 64.100000 52.000000 2145.000000 97.000000 8.600000 19.000000 25.000000 7775.000000
50% 1.000000 97.000000 173.200000 65.500000 54.100000 2414.000000 120.000000 9.000000 24.000000 30.000000 10295.000000
75% 2.000000 102.400000 183.100000 66.900000 55.500000 2935.000000 141.000000 9.400000 30.000000 34.000000 16500.000000
max 3.000000 120.900000 208.100000 72.300000 59.800000 4066.000000 326.000000 23.000000 49.000000 54.000000 45400.000000

Horsepower is also non-numeric...

In [12]:
auto_data['horsepower'].describe()
Out[12]:
count     203
unique     59
top        68
freq       19
Name: horsepower, dtype: object

...so this is also converted to a numeric value

In [13]:
auto_data['horsepower'] = pd.to_numeric(auto_data['horsepower'], errors='coerce') 
In [14]:
auto_data['horsepower'].describe()
Out[14]:
count    203.000000
mean     104.256158
std       39.714369
min       48.000000
25%       70.000000
50%       95.000000
75%      116.000000
max      288.000000
Name: horsepower, dtype: float64
In [15]:
auto_data['num-of-cylinders'].describe()
Out[15]:
count      205
unique       7
top       four
freq       159
Name: num-of-cylinders, dtype: object

Since there are only 7 unique values, we can explicitly set the corresponding numeric values

In [16]:
cylinders_dict = {'two': 2, 
                  'three': 3, 'four': 4, 'five': 5, 'six': 6, 'eight': 8, 'twelve': 12}
auto_data['num-of-cylinders'].replace(cylinders_dict, inplace=True)

auto_data.head()
Out[16]:
symboling make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base length ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111.0 5000 21 27 13495.0
1 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111.0 5000 21 27 16500.0
2 1 alfa-romero gas std two hatchback rwd front 94.5 171.2 ... 152 mpfi 2.68 3.47 9.0 154.0 5000 19 26 16500.0
3 2 audi gas std four sedan fwd front 99.8 176.6 ... 109 mpfi 3.19 3.4 10.0 102.0 5500 24 30 13950.0
4 2 audi gas std four sedan 4wd front 99.4 176.6 ... 136 mpfi 3.19 3.4 8.0 115.0 5500 18 22 17450.0

5 rows × 25 columns
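A quick sanity check (a suggested addition) that the mapping left the column purely numeric:

print(auto_data['num-of-cylinders'].unique())
print(auto_data['num-of-cylinders'].dtype)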

All other non-numeric fields can be made into usable features by applying one-hot encoding

In [17]:
auto_data = pd.get_dummies(auto_data, 
                           columns=['make', 
                                    'fuel-type', 
                                    'aspiration', 
                                    'num-of-doors', 
                                    'body-style', 
                                    'drive-wheels', 
                                    'engine-location', 
                                    'engine-type', 
                                    'fuel-system'])
auto_data.head()
Out[17]:
symboling wheel-base length width height curb-weight num-of-cylinders engine-size bore stroke ... engine-type_ohcv engine-type_rotor fuel-system_1bbl fuel-system_2bbl fuel-system_4bbl fuel-system_idi fuel-system_mfi fuel-system_mpfi fuel-system_spdi fuel-system_spfi
0 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
1 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
2 1 94.5 171.2 65.5 52.4 2823 6 152 2.68 3.47 ... 1 0 0 0 0 0 0 1 0 0
3 2 99.8 176.6 66.2 54.3 2337 4 109 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0
4 2 99.4 176.6 66.4 54.3 2824 5 136 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0

5 rows × 69 columns
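One caveat: get_dummies creates one column per category, so two-level categories produce perfectly collinear pairs (for example, fuel-type_gas is exactly 1 minus fuel-type_diesel), which shows up later as mirrored coefficients in the linear model. A common remedy, sketched here as an alternative to the cell above rather than what this notebook actually does, is to pass drop_first=True:

# Alternative sketch: drop one dummy per category to avoid collinear pairs;
# this would replace the get_dummies call above
categorical_cols = ['make', 'fuel-type', 'aspiration', 'num-of-doors',
                    'body-style', 'drive-wheels', 'engine-location',
                    'engine-type', 'fuel-system']
auto_data_alt = pd.get_dummies(auto_data, columns=categorical_cols,
                               drop_first=True)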

Drop rows containing missing values

In [18]:
auto_data = auto_data.dropna()
auto_data.head()
Out[18]:
symboling wheel-base length width height curb-weight num-of-cylinders engine-size bore stroke ... engine-type_ohcv engine-type_rotor fuel-system_1bbl fuel-system_2bbl fuel-system_4bbl fuel-system_idi fuel-system_mfi fuel-system_mpfi fuel-system_spdi fuel-system_spfi
0 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
1 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
2 1 94.5 171.2 65.5 52.4 2823 6 152 2.68 3.47 ... 1 0 0 0 0 0 0 1 0 0
3 2 99.8 176.6 66.2 54.3 2337 4 109 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0
4 2 99.4 176.6 66.4 54.3 2824 5 136 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0

5 rows × 69 columns

Verify that there are no null values in the data set

In [19]:
auto_data[auto_data.isnull().any(axis=1)]
Out[19]:
symboling wheel-base length width height curb-weight num-of-cylinders engine-size bore stroke ... engine-type_ohcv engine-type_rotor fuel-system_1bbl fuel-system_2bbl fuel-system_4bbl fuel-system_idi fuel-system_mfi fuel-system_mpfi fuel-system_spdi fuel-system_spfi

0 rows × 69 columns
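Note that dropping rows discards otherwise usable data. An alternative sketch, assuming scikit-learn 0.20 or later and applied instead of the dropna() call above, imputes missing numeric values with the column median:

from sklearn.impute import SimpleImputer

# After one-hot encoding every column is numeric, so median imputation applies
imputer = SimpleImputer(strategy='median')
auto_data_imputed = pd.DataFrame(imputer.fit_transform(auto_data),
                                 columns=auto_data.columns)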

Data Cleaning is now complete

We can now use our data to build our models.

train_test_split offers an easy way to split our data set into training instances and test instances. While building machine learning models, we want to make sure the model works well on new instances it has never seen before. The training data is what we use to tune the model parameters; the held-out test data is what we use to check whether the model generalizes.

The features that form the input to our model include all of the columns in our dataframe except price. The price column is the label, our Y variable, so we drop it from the feature matrix.

Create training and test data using train_test_split

In [20]:
from sklearn.model_selection import train_test_split

X = auto_data.drop('price', axis=1)

# Taking the labels (price)
Y = auto_data['price']

# Splitting into an 80% training set and a 20% test set so we can measure accuracy
X_train, x_test, Y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

Create a LinearRegression model with our training data

Instantiate a linear regression object, called an estimator. An estimator is a high-level API that scikit-learn offers: any object that learns from data, i.e. can be trained, is an estimator, and every estimator implements the fit method. Here LinearRegression is our estimator object, and we call its fit method, passing in the X data as well as the Y labels, to start the training process.

In [21]:
from sklearn.linear_model import LinearRegression

linear_model = LinearRegression()
linear_model.fit(X_train, Y_train)
Out[21]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None,
         normalize=False)

Linear Regression involves fitting a line using a slope as well as an intercept. fit_intercept defaults to True, indicating that an intercept should be used, since the data is not centered around zero. normalize is set to False, indicating that we do not want to normalize the data. Normalization is a process similar to standardization: it subtracts the mean but divides by the L2 norm rather than the standard deviation.
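For intuition, this is roughly what normalize=True would do to the feature matrix before fitting (a sketch for illustration, not code from the original notebook):

# Center each column, then divide by its column-wise L2 norm
# (note: the L2 norm, not the standard deviation)
X_centered = X_train - X_train.mean()
X_l2_normalized = X_centered / np.sqrt((X_centered ** 2).sum())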

Check R-square on training data

In [22]:
linear_model.score(X_train, Y_train)
Out[22]:
0.967922737092433

View coefficients for each feature

In [23]:
linear_model.coef_
Out[23]:
array([-5.80061464e+02,  3.06874682e+02, -1.06499194e+02,  1.52236727e+02,
       -5.05771408e+02,  5.30902624e+00, -4.15988710e+03,  2.82885047e+02,
       -1.75044342e+04, -4.49860407e+03, -1.59555122e+03, -4.82296063e+01,
        2.15673623e+00, -6.79852969e+01,  1.73362582e+02, -3.74284422e+03,
        3.71721668e+03,  6.86565908e+03, -4.12474818e+03, -4.48093478e+03,
       -2.94555422e+02, -4.74283895e+03,  3.47319261e+03, -4.08732786e+02,
        3.02841192e+03,  2.30284405e+03, -4.71448781e+03, -6.67651176e+02,
       -8.40276958e+03, -5.25933085e+03,  1.04713265e+04,  9.09494702e-12,
        8.36042927e+03, -4.81543507e+02, -2.06062392e+03,  6.21153583e+01,
        1.09986569e+03,  6.06615966e+03, -6.06615966e+03, -1.38232301e+03,
        1.38232301e+03,  3.37569880e+03,  3.28229109e+03,  2.99875643e+03,
       -1.00643481e+03, -1.20820627e+03, -3.92414032e+02, -3.91701310e+02,
       -4.99487338e+02, -6.59140532e+02,  1.15862787e+03, -2.46748485e+03,
        2.46748485e+03, -2.04927851e+02,  0.00000000e+00,  2.88374518e+03,
       -9.80899556e+02,  1.98594135e+03, -3.68385912e+03,  0.00000000e+00,
       -3.85312742e+03, -5.52868364e+02,  0.00000000e+00,  6.06615966e+03,
       -1.73448716e+03, -7.37469779e+02, -2.06266033e+03,  2.87445339e+03])

A better view of the coefficients

List of features and their coefficients, ordered by coefficient value.

First get the feature names from the columns of our training data set, then build a pandas Series that associates each coefficient with its feature name and sorts by coefficient value. This association lets us see how much weight is given to each feature in the final regression model.

We can see which factors really drive the price of automobiles. Some features have negative coefficients, meaning they are negatively associated with the price.

In [24]:
predictors = X_train.columns
coef = pd.Series(linear_model.coef_,predictors).sort_values()

print(coef)
bore                     -1.750443e+04
make_peugot              -8.402770e+03
fuel-type_gas            -6.066160e+03
make_plymouth            -5.259331e+03
make_isuzu               -4.742839e+03
make_mitsubishi          -4.714488e+03
stroke                   -4.498604e+03
make_dodge               -4.480935e+03
num-of-cylinders         -4.159887e+03
make_chevrolet           -4.124748e+03
fuel-system_1bbl         -3.853127e+03
make_alfa-romero         -3.742844e+03
engine-type_ohcv         -3.683859e+03
engine-location_front    -2.467485e+03
fuel-system_spdi         -2.062660e+03
make_toyota              -2.060624e+03
fuel-system_mfi          -1.734487e+03
compression-ratio        -1.595551e+03
aspiration_std           -1.382323e+03
body-style_hatchback     -1.208206e+03
body-style_hardtop       -1.006435e+03
engine-type_ohc          -9.808996e+02
fuel-system_mpfi         -7.374698e+02
make_nissan              -6.676512e+02
drive-wheels_fwd         -6.591405e+02
symboling                -5.800615e+02
fuel-system_2bbl         -5.528684e+02
height                   -5.057714e+02
drive-wheels_4wd         -4.994873e+02
make_subaru              -4.815435e+02
                              ...     
engine-type_rotor         0.000000e+00
engine-type_dohcv         0.000000e+00
fuel-system_4bbl          0.000000e+00
make_renault              9.094947e-12
peak-rpm                  2.156736e+00
curb-weight               5.309026e+00
make_volkswagen           6.211536e+01
width                     1.522367e+02
highway-mpg               1.733626e+02
engine-size               2.828850e+02
wheel-base                3.068747e+02
make_volvo                1.099866e+03
drive-wheels_rwd          1.158628e+03
aspiration_turbo          1.382323e+03
engine-type_ohcf          1.985941e+03
make_mercury              2.302844e+03
engine-location_rear      2.467485e+03
fuel-system_spfi          2.874453e+03
engine-type_l             2.883745e+03
body-style_convertible    2.998756e+03
make_mercedes-benz        3.028412e+03
num-of-doors_two          3.282291e+03
num-of-doors_four         3.375699e+03
make_jaguar               3.473193e+03
make_audi                 3.717217e+03
fuel-system_idi           6.066160e+03
fuel-type_diesel          6.066160e+03
make_bmw                  6.865659e+03
make_saab                 8.360429e+03
make_porsche              1.047133e+04
Length: 68, dtype: float64

Make predictions on test data

In [25]:
y_predict = linear_model.predict(x_test)

Compare predicted and actual values of Price

In [32]:
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 5)

plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')

plt.legend()
plt.show()
Populating the interactive namespace from numpy and matplotlib

R-square score

For our model, how well do the features describe the price?

In [33]:
r_square = linear_model.score(x_test, y_test)
r_square
Out[33]:
0.6322583416118098

Calculate Mean Square Error

In [34]:
from sklearn.metrics import mean_squared_error

linear_model_mse = mean_squared_error(y_predict, y_test)
linear_model_mse
Out[34]:
26099937.604333173

Root Mean Square Error

In [35]:
import math

math.sqrt(linear_model_mse)
Out[35]:
5108.809803108075

An interesting metric for any regression model is the root mean square error, or RMSE, which is the square root of the mean square error. The RMSE describes how much, on average, our predicted label differs from the actual label.

The RMSE is around 5,100 for our model, which means that on average the price predicted by our model is about $5,100 away from the actual price, in either direction.

Lasso Regression

Regularized regression models help prevent overfitting; Lasso regression does this by penalizing large regression coefficients. Remember that the bias-variance trade-off pulls in opposite directions: regularization reduces the variance error but increases the bias error, so we have to find the right balance.

Cost Function: RSS + α*(sum of absolute values of coefficients)

RSS = Residual Sum of Squares

Larger values of α result in smaller coefficients, since the cost function must be minimized

α is a hyperparameter: a tuning parameter you adjust to find the best possible model for your data set. With Lasso, a larger α can drive the coefficients of unimportant features all the way to zero, effectively eliminating them.
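In the cell below, α is simply fixed at 0.5 by hand. A more systematic approach, sketched here with an illustrative grid of candidate values, is to let cross-validation choose it:

from sklearn.linear_model import LassoCV

# Candidate alphas are illustrative; LassoCV picks the best by cross-validation
lasso_cv = LassoCV(alphas=[0.01, 0.1, 0.5, 1.0, 5.0], cv=5, normalize=True)
lasso_cv.fit(X_train, Y_train)
print(lasso_cv.alpha_)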

In [36]:
from sklearn.linear_model import Lasso

lasso_model = Lasso(alpha=0.5, normalize=True)
lasso_model.fit(X_train, Y_train)
Out[36]:
Lasso(alpha=0.5, copy_X=True, fit_intercept=True, max_iter=1000,
   normalize=True, positive=False, precompute=False, random_state=None,
   selection='cyclic', tol=0.0001, warm_start=False)

Check R-square on training data

In [37]:
lasso_model.score(X_train, Y_train)
Out[37]:
0.965108127252755

Coefficients when using Lasso

In [38]:
coef = pd.Series(lasso_model.coef_,predictors).sort_values()
print(coef)
bore                     -9.092176e+03
make_peugot              -6.467778e+03
make_jaguar              -6.193545e+03
engine-location_front    -4.537584e+03
engine-type_ohcv         -4.223274e+03
make_plymouth            -3.546019e+03
stroke                   -3.543804e+03
make_dodge               -3.152300e+03
make_mitsubishi          -3.142282e+03
make_chevrolet           -2.942939e+03
make_isuzu               -2.910845e+03
aspiration_std           -2.498593e+03
fuel-system_1bbl         -2.184594e+03
num-of-cylinders         -1.361323e+03
make_toyota              -1.303444e+03
engine-type_dohc         -1.156469e+03
drive-wheels_fwd         -9.650933e+02
symboling                -5.782481e+02
make_nissan              -5.647727e+02
compression-ratio        -3.531524e+02
height                   -3.477801e+02
body-style_hatchback     -3.119457e+02
make_alfa-romero         -2.269308e+02
body-style_hardtop       -1.024258e+02
length                   -8.684394e+01
fuel-system_spdi         -7.328266e+01
fuel-system_mpfi         -4.383129e+01
drive-wheels_4wd          0.000000e+00
engine-type_ohc          -0.000000e+00
engine-type_dohcv         0.000000e+00
                              ...     
city-mpg                 -0.000000e+00
make_mercury              0.000000e+00
aspiration_turbo          9.267945e-11
peak-rpm                  1.970555e+00
curb-weight               3.968390e+00
num-of-doors_two          1.270225e+01
fuel-system_mfi           6.146399e+01
body-style_sedan          9.572899e+01
highway-mpg               1.068108e+02
make_volvo                1.445752e+02
width                     1.874224e+02
engine-size               1.895810e+02
wheel-base                2.402207e+02
engine-type_ohcf          2.660292e+02
num-of-doors_four         2.970371e+02
drive-wheels_rwd          3.464650e+02
make_honda                4.344793e+02
make_mazda                4.668807e+02
fuel-system_2bbl          6.156961e+02
fuel-system_idi           7.518829e+02
make_volkswagen           1.190833e+03
fuel-system_spfi          2.703127e+03
fuel-type_diesel          2.875479e+03
body-style_convertible    3.621342e+03
make_audi                 3.866295e+03
engine-type_l             4.268718e+03
make_mercedes-benz        4.728695e+03
make_bmw                  6.586646e+03
make_saab                 6.762817e+03
make_porsche              7.787668e+03
Length: 68, dtype: float64
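Notice that several coefficients are now exactly zero: Lasso has removed those features from the model entirely. A quick count (a suggested addition):

print('features eliminated by Lasso:', int((lasso_model.coef_ == 0).sum()))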

Make predictions on test data

In [39]:
y_predict = lasso_model.predict(x_test)

Compare predicted and actual values of Price

In [40]:
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 5)

plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')

plt.legend()
plt.show()
Populating the interactive namespace from numpy and matplotlib

Check R-square value on test data

In [41]:
r_square = lasso_model.score(x_test, y_test)
r_square
Out[41]:
0.887194953444848

Is the root mean square error any better?

In [42]:
lasso_model_mse = mean_squared_error(y_predict, y_test)
math.sqrt(lasso_model_mse)
Out[42]:
2829.518800848218

Ridge Regression

Cost Function: RSS + α*(sum of squares of coefficients)

RSS = Residual Sum of Squares

Larger values of α result in smaller coefficients, since the cost function must be minimized

Ridge Regression penalizes large coefficients even more heavily than Lasso, since the coefficients are squared in the cost function. Unlike Lasso, however, Ridge shrinks coefficients toward zero without usually making them exactly zero.
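As with Lasso, α is better chosen by cross-validation than by hand; a sketch with an illustrative grid of candidates:

from sklearn.linear_model import RidgeCV

ridge_cv = RidgeCV(alphas=[0.01, 0.05, 0.1, 0.5, 1.0], cv=5)
ridge_cv.fit(X_train, Y_train)
print(ridge_cv.alpha_)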

In [43]:
from sklearn.linear_model import Ridge

ridge_model = Ridge(alpha=0.05, normalize=True)
ridge_model.fit(X_train, Y_train)
Out[43]:
Ridge(alpha=0.05, copy_X=True, fit_intercept=True, max_iter=None,
   normalize=True, random_state=None, solver='auto', tol=0.001)

Check R-square on training data

In [44]:
ridge_model.score(X_train, Y_train)
Out[44]:
0.9538248035642568

Coefficients when using Ridge

In [45]:
coef = pd.Series(ridge_model.coef_,predictors).sort_values()
print(coef)
make_jaguar              -4721.702089
engine-location_front    -3375.974878
make_peugot              -3042.959224
make_isuzu               -2186.005015
engine-type_ohcv         -2070.071555
make_mercury             -1886.420238
make_plymouth            -1866.358266
make_mitsubishi          -1849.750707
make_dodge               -1424.482791
make_toyota              -1217.908816
make_nissan              -1136.396198
make_subaru              -1125.402397
fuel-system_1bbl         -1029.775416
stroke                    -923.794701
fuel-system_mfi           -895.203607
aspiration_std            -803.283234
engine-type_dohc          -721.693473
bore                      -596.120417
drive-wheels_fwd          -572.601182
body-style_wagon          -512.619400
fuel-system_spdi          -408.012399
fuel-type_gas             -330.762925
symboling                 -290.993542
body-style_hatchback      -288.551757
make_volvo                -175.881081
fuel-system_mpfi          -115.001771
height                    -113.378836
compression-ratio          -69.634278
engine-type_ohcf           -50.303944
body-style_sedan           -42.693856
                             ...     
horsepower                  28.209908
highway-mpg                 43.790147
engine-size                 53.865198
num-of-doors_two            88.101865
wheel-base                 105.716263
make_mazda                 138.696560
num-of-doors_four          216.318361
body-style_hardtop         260.140775
fuel-system_2bbl           299.762791
fuel-system_idi            330.762925
fuel-type_diesel           330.762925
make_chevrolet             359.937733
engine-type_ohc            391.467043
make_honda                 408.525572
width                      419.126401
drive-wheels_4wd           485.184562
make_volkswagen            495.001272
drive-wheels_rwd           516.328229
aspiration_turbo           803.283234
make_alfa-romero          1014.731294
num-of-cylinders          1196.377925
fuel-system_spfi          1579.941974
engine-type_l             1745.877239
make_audi                 2554.419827
make_saab                 3164.670139
body-style_convertible    3298.480263
engine-location_rear      3375.974878
make_porsche              5452.966732
make_mercedes-benz        6665.125605
make_bmw                  7055.960924
Length: 68, dtype: float64

Make predictions on test data

In [46]:
y_predict = ridge_model.predict(x_test)

Compare predicted and actual values of Price

In [47]:
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 6)

plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')

plt.legend()
plt.show()
Populating the interactive namespace from numpy and matplotlib

Get R-square value for test data

In [48]:
r_square = ridge_model.score(x_test, y_test)
r_square
Out[48]:
0.8875663034165489
In [49]:
ridge_model_mse = mean_squared_error(y_predict, y_test)
math.sqrt(ridge_model_mse)
Out[49]:
2824.857626833409

Apply Support Vector Regression (SVR) to this data set

In [50]:
from sklearn.svm import SVR

regression_model = SVR(kernel='linear', C=1.0)
regression_model.fit(X_train, Y_train)
Out[50]:
SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1,
  gamma='auto_deprecated', kernel='linear', max_iter=-1, shrinking=True,
  tol=0.001, verbose=False)

R-square on training data

In [51]:
regression_model.score(X_train, Y_train)
Out[51]:
0.7601810603162245
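The training R-square is noticeably lower than for the linear models. SVR is sensitive to feature scale, and these features are unscaled; a sketch (an assumption, not part of the original notebook) that standardizes the features in a pipeline before fitting may improve it:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize each feature to zero mean and unit variance before the SVR fit
scaled_svr = make_pipeline(StandardScaler(), SVR(kernel='linear', C=1.0))
scaled_svr.fit(X_train, Y_train)
print(scaled_svr.score(x_test, y_test))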
In [52]:
coef = pd.Series(regression_model.coef_[0], predictors).sort_values()
print(coef)
body-style_hatchback     -15.913994
drive-wheels_fwd         -10.684989
make_mitsubishi           -9.000000
fuel-system_spdi          -8.000000
stroke                    -7.631257
body-style_wagon          -6.000000
make_subaru               -4.806859
num-of-doors_two          -4.625734
make_toyota               -4.000000
make_plymouth             -4.000000
engine-type_ohcv          -3.940745
fuel-system_1bbl          -3.000000
engine-location_front     -3.000000
make_isuzu                -2.000000
make_nissan               -2.000000
engine-type_ohcf          -1.806859
make_honda                -1.770995
aspiration_std            -1.625734
fuel-type_gas             -1.518599
make_peugot               -1.481401
bore                      -1.480885
fuel-system_mfi           -1.000000
make_mercury              -1.000000
fuel-system_spfi          -1.000000
make_jaguar               -0.940745
engine-type_l             -0.481401
fuel-system_4bbl           0.000000
make_dodge                 0.000000
engine-type_rotor          0.000000
engine-type_dohcv          0.000000
                            ...    
fuel-type_diesel           1.518599
fuel-system_idi            1.518599
aspiration_turbo           1.625734
make_volkswagen            2.000000
make_chevrolet             2.000000
engine-location_rear       3.000000
make_mazda                 3.000000
make_volvo                 3.000000
make_audi                  3.000000
body-style_hardtop         3.000000
curb-weight                3.891324
make_porsche               4.000000
make_mercedes-benz         5.000000
engine-type_ohc            5.229005
num-of-doors_four          5.625734
body-style_convertible     6.000000
make_bmw                   7.000000
num-of-cylinders           9.474039
drive-wheels_rwd           9.577854
fuel-system_mpfi          10.166390
city-mpg                  10.823610
body-style_sedan          12.913994
width                     26.846516
highway-mpg               26.993361
height                    27.352161
compression-ratio         32.209153
length                    33.213934
wheel-base                40.150685
horsepower                46.862621
engine-size               52.193945
Length: 68, dtype: float64
In [53]:
y_predict = regression_model.predict(x_test)
In [54]:
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 5)

plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')

plt.legend()
plt.show()
Populating the interactive namespace from numpy and matplotlib

R-square on test data

In [55]:
r_square = regression_model.score(x_test, y_test)
r_square
Out[55]:
0.8719400287912358
In [56]:
regression_model_mse = mean_squared_error(y_predict, y_test)
math.sqrt(regression_model_mse)
Out[56]:
3014.775792136513
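To wrap up, a small recap (a suggested addition, reusing the MSE values computed above) prints the test RMSE of all four models side by side:

for name, mse in [('Linear', linear_model_mse), ('Lasso', lasso_model_mse),
                  ('Ridge', ridge_model_mse), ('SVR', regression_model_mse)]:
    print(name, round(math.sqrt(mse), 1))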