Decision Trees Make Up Gradient Boosting in scikit-learn

The whole idea behind decision trees is that the decision rules are not specialized rules that you supply in advance. Instead, they are learned by the machine learning model from the training data. The rules themselves are important, and the order in which these decision variables are applied is also important. Our machine learning model needs to pick the right decisions to make, ask the right questions, and apply those decisions in the right order. A decision tree of this kind is a CART tree, where CART stands for Classification and Regression Tree.

Decision trees are also non-parametric machine learning models, and there isn't a lot of hyperparameter tuning involved; the defaults typically yield a reasonable model. An additional advantage is that they require little data preparation: you can usually feed in your training data as is.
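
To see how little setup this takes, here is a minimal sketch (an illustrative addition, not part of the original notebook) that fits a CART regressor on a toy dataset using scikit-learn's defaults:

from sklearn.tree import DecisionTreeRegressor

# Toy data: one numeric feature, one numeric target
X_toy = [[1.0], [2.0], [3.0], [4.0], [5.0]]
y_toy = [1.2, 1.9, 3.1, 3.9, 5.2]

# No scaling or encoding is needed; hyperparameters are left at their defaults
tree = DecisionTreeRegressor()
tree.fit(X_toy, y_toy)
print(tree.predict([[2.5]]))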

Gradient Boosting Model for Regression

Using Gradient Boosting to predict the price of an automobile

Often, several ML models can work together as an ensemble to form a stronger model. We will see how this is done in gradient boosting regression, which combines many weak decision trees into a strong regression model.

In [1]:
import pandas as pd

Download the Automobile data set

Download link: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data

Summary: Predict the price of a vehicle given other information about it
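
Note that the raw imports-85.data file from UCI has no header row; the local copy read below evidently carries one. If yours does not, you can pass the 26 attribute names from the accompanying imports-85.names file explicitly (a sketch, assuming the raw file sits in the working directory):

column_names = ['symboling', 'normalized-losses', 'make', 'fuel-type', 'aspiration',
                'num-of-doors', 'body-style', 'drive-wheels', 'engine-location',
                'wheel-base', 'length', 'width', 'height', 'curb-weight', 'engine-type',
                'num-of-cylinders', 'engine-size', 'fuel-system', 'bore', 'stroke',
                'compression-ratio', 'horsepower', 'peak-rpm', 'city-mpg',
                'highway-mpg', 'price']
# header=None because the raw file starts directly with data rows
raw = pd.read_csv('imports-85.data', header=None, names=column_names)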

In [25]:
auto_data = pd.read_csv('../data/imports-85.data', sep=r'\s*,\s*', engine='python')
auto_data.head(5)
Out[25]:
symboling normalized-losses make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 ? alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 13495
1 3 ? alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 16500
2 1 ? alfa-romero gas std two hatchback rwd front 94.5 ... 152 mpfi 2.68 3.47 9.0 154 5000 19 26 16500
3 2 164 audi gas std four sedan fwd front 99.8 ... 109 mpfi 3.19 3.4 10.0 102 5500 24 30 13950
4 2 164 audi gas std four sedan 4wd front 99.4 ... 136 mpfi 3.19 3.4 8.0 115 5500 18 22 17450

5 rows × 26 columns

Replace the '?' placeholders used for missing values with NaN

In [26]:
import numpy as np

auto_data = auto_data.replace('?', np.nan)
auto_data.head()
Out[26]:
symboling normalized-losses make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 NaN alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 13495
1 3 NaN alfa-romero gas std two convertible rwd front 88.6 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 16500
2 1 NaN alfa-romero gas std two hatchback rwd front 94.5 ... 152 mpfi 2.68 3.47 9.0 154 5000 19 26 16500
3 2 164 audi gas std four sedan fwd front 99.8 ... 109 mpfi 3.19 3.4 10.0 102 5500 24 30 13950
4 2 164 audi gas std four sedan 4wd front 99.4 ... 136 mpfi 3.19 3.4 8.0 115 5500 18 22 17450

5 rows × 26 columns

Information about numeric fields in our dataframe

In [27]:
auto_data.describe()
Out[27]:
symboling wheel-base length width height curb-weight engine-size compression-ratio city-mpg highway-mpg
count 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000
mean 0.834146 98.756585 174.049268 65.907805 53.724878 2555.565854 126.907317 10.142537 25.219512 30.751220
std 1.245307 6.021776 12.337289 2.145204 2.443522 520.680204 41.642693 3.972040 6.542142 6.886443
min -2.000000 86.600000 141.100000 60.300000 47.800000 1488.000000 61.000000 7.000000 13.000000 16.000000
25% 0.000000 94.500000 166.300000 64.100000 52.000000 2145.000000 97.000000 8.600000 19.000000 25.000000
50% 1.000000 97.000000 173.200000 65.500000 54.100000 2414.000000 120.000000 9.000000 24.000000 30.000000
75% 2.000000 102.400000 183.100000 66.900000 55.500000 2935.000000 141.000000 9.400000 30.000000 34.000000
max 3.000000 120.900000 208.100000 72.300000 59.800000 4066.000000 326.000000 23.000000 49.000000 54.000000

Information about all fields in our dataframe

In [28]:
auto_data.describe(include='all')
Out[28]:
symboling normalized-losses make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
count 205.000000 164 205 205 205 203 205 205 205 205.000000 ... 205.000000 205 201 201 205.000000 203 203 205.000000 205.000000 201
unique NaN 51 22 2 2 2 5 3 2 NaN ... NaN 8 38 36 NaN 59 23 NaN NaN 186
top NaN 161 toyota gas std four sedan fwd front NaN ... NaN mpfi 3.62 3.4 NaN 68 5500 NaN NaN 6229
freq NaN 11 32 185 168 114 96 120 202 NaN ... NaN 94 23 20 NaN 19 37 NaN NaN 2
mean 0.834146 NaN NaN NaN NaN NaN NaN NaN NaN 98.756585 ... 126.907317 NaN NaN NaN 10.142537 NaN NaN 25.219512 30.751220 NaN
std 1.245307 NaN NaN NaN NaN NaN NaN NaN NaN 6.021776 ... 41.642693 NaN NaN NaN 3.972040 NaN NaN 6.542142 6.886443 NaN
min -2.000000 NaN NaN NaN NaN NaN NaN NaN NaN 86.600000 ... 61.000000 NaN NaN NaN 7.000000 NaN NaN 13.000000 16.000000 NaN
25% 0.000000 NaN NaN NaN NaN NaN NaN NaN NaN 94.500000 ... 97.000000 NaN NaN NaN 8.600000 NaN NaN 19.000000 25.000000 NaN
50% 1.000000 NaN NaN NaN NaN NaN NaN NaN NaN 97.000000 ... 120.000000 NaN NaN NaN 9.000000 NaN NaN 24.000000 30.000000 NaN
75% 2.000000 NaN NaN NaN NaN NaN NaN NaN NaN 102.400000 ... 141.000000 NaN NaN NaN 9.400000 NaN NaN 30.000000 34.000000 NaN
max 3.000000 NaN NaN NaN NaN NaN NaN NaN NaN 120.900000 ... 326.000000 NaN NaN NaN 23.000000 NaN NaN 49.000000 54.000000 NaN

11 rows × 26 columns

What data type is price?

In [29]:
auto_data['price'].describe()
Out[29]:
count      201
unique     186
top       6229
freq         2
Name: price, dtype: object

Convert the values in the price column to numeric values

If a value cannot be converted, set it to NaN (by setting errors='coerce')
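
As a quick illustration of the coercion behavior (not a cell from the original notebook):

pd.to_numeric(pd.Series(['13495', '?']), errors='coerce')
# 0    13495.0
# 1        NaN
# dtype: float64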

In [30]:
auto_data['price'] = pd.to_numeric(auto_data['price'], errors='coerce') 
In [31]:
auto_data['price'].describe()
Out[31]:
count      201.000000
mean     13207.129353
std       7947.066342
min       5118.000000
25%       7775.000000
50%      10295.000000
75%      16500.000000
max      45400.000000
Name: price, dtype: float64

Drop the normalized-losses column, which we deem unnecessary (it is missing in 41 of the 205 rows, as its count of 164 above shows)

In [32]:
auto_data = auto_data.drop('normalized-losses', axis=1)
auto_data.head()
Out[32]:
symboling make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base length ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 13495.0
1 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111 5000 21 27 16500.0
2 1 alfa-romero gas std two hatchback rwd front 94.5 171.2 ... 152 mpfi 2.68 3.47 9.0 154 5000 19 26 16500.0
3 2 audi gas std four sedan fwd front 99.8 176.6 ... 109 mpfi 3.19 3.4 10.0 102 5500 24 30 13950.0
4 2 audi gas std four sedan 4wd front 99.4 176.6 ... 136 mpfi 3.19 3.4 8.0 115 5500 18 22 17450.0

5 rows × 25 columns

In [33]:
auto_data.describe()
Out[33]:
symboling wheel-base length width height curb-weight engine-size compression-ratio city-mpg highway-mpg price
count 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 205.000000 201.000000
mean 0.834146 98.756585 174.049268 65.907805 53.724878 2555.565854 126.907317 10.142537 25.219512 30.751220 13207.129353
std 1.245307 6.021776 12.337289 2.145204 2.443522 520.680204 41.642693 3.972040 6.542142 6.886443 7947.066342
min -2.000000 86.600000 141.100000 60.300000 47.800000 1488.000000 61.000000 7.000000 13.000000 16.000000 5118.000000
25% 0.000000 94.500000 166.300000 64.100000 52.000000 2145.000000 97.000000 8.600000 19.000000 25.000000 7775.000000
50% 1.000000 97.000000 173.200000 65.500000 54.100000 2414.000000 120.000000 9.000000 24.000000 30.000000 10295.000000
75% 2.000000 102.400000 183.100000 66.900000 55.500000 2935.000000 141.000000 9.400000 30.000000 34.000000 16500.000000
max 3.000000 120.900000 208.100000 72.300000 59.800000 4066.000000 326.000000 23.000000 49.000000 54.000000 45400.000000

Horsepower is also non-numeric...

In [34]:
auto_data['horsepower'].describe()
Out[34]:
count     203
unique     59
top        68
freq       19
Name: horsepower, dtype: object

...so this is also converted to a numeric value

In [35]:
auto_data['horsepower'] = pd.to_numeric(auto_data['horsepower'], errors='coerce') 
In [36]:
auto_data['horsepower'].describe()
Out[36]:
count    203.000000
mean     104.256158
std       39.714369
min       48.000000
25%       70.000000
50%       95.000000
75%      116.000000
max      288.000000
Name: horsepower, dtype: float64
In [37]:
auto_data['num-of-cylinders'].describe()
Out[37]:
count      205
unique       7
top       four
freq       159
Name: num-of-cylinders, dtype: object

Since there are only 7 unique values, we can explicitly map each word to its numeric value

In [38]:
cylinders_dict = {'two': 2, 'three': 3, 'four': 4, 'five': 5,
                  'six': 6, 'eight': 8, 'twelve': 12}
auto_data['num-of-cylinders'] = auto_data['num-of-cylinders'].replace(cylinders_dict)

auto_data.head()
Out[38]:
symboling make fuel-type aspiration num-of-doors body-style drive-wheels engine-location wheel-base length ... engine-size fuel-system bore stroke compression-ratio horsepower peak-rpm city-mpg highway-mpg price
0 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111.0 5000 21 27 13495.0
1 3 alfa-romero gas std two convertible rwd front 88.6 168.8 ... 130 mpfi 3.47 2.68 9.0 111.0 5000 21 27 16500.0
2 1 alfa-romero gas std two hatchback rwd front 94.5 171.2 ... 152 mpfi 2.68 3.47 9.0 154.0 5000 19 26 16500.0
3 2 audi gas std four sedan fwd front 99.8 176.6 ... 109 mpfi 3.19 3.4 10.0 102.0 5500 24 30 13950.0
4 2 audi gas std four sedan 4wd front 99.4 176.6 ... 136 mpfi 3.19 3.4 8.0 115.0 5500 18 22 17450.0

5 rows × 25 columns

All other non-numeric fields can be made into usable features by applying one-hot encoding

In [39]:
auto_data = pd.get_dummies(auto_data, 
                           columns=['make', 'fuel-type', 'aspiration', 'num-of-doors', 
                                    'body-style', 'drive-wheels', 'engine-location', 
                                   'engine-type', 'fuel-system'])
auto_data.head()
Out[39]:
symboling wheel-base length width height curb-weight num-of-cylinders engine-size bore stroke ... engine-type_ohcv engine-type_rotor fuel-system_1bbl fuel-system_2bbl fuel-system_4bbl fuel-system_idi fuel-system_mfi fuel-system_mpfi fuel-system_spdi fuel-system_spfi
0 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
1 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
2 1 94.5 171.2 65.5 52.4 2823 6 152 2.68 3.47 ... 1 0 0 0 0 0 0 1 0 0
3 2 99.8 176.6 66.2 54.3 2337 4 109 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0
4 2 99.4 176.6 66.4 54.3 2824 5 136 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0

5 rows × 69 columns

Drop rows containing missing values

In [40]:
auto_data = auto_data.dropna()
auto_data.head()
Out[40]:
symboling wheel-base length width height curb-weight num-of-cylinders engine-size bore stroke ... engine-type_ohcv engine-type_rotor fuel-system_1bbl fuel-system_2bbl fuel-system_4bbl fuel-system_idi fuel-system_mfi fuel-system_mpfi fuel-system_spdi fuel-system_spfi
0 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
1 3 88.6 168.8 64.1 48.8 2548 4 130 3.47 2.68 ... 0 0 0 0 0 0 0 1 0 0
2 1 94.5 171.2 65.5 52.4 2823 6 152 2.68 3.47 ... 1 0 0 0 0 0 0 1 0 0
3 2 99.8 176.6 66.2 54.3 2337 4 109 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0
4 2 99.4 176.6 66.4 54.3 2824 5 136 3.19 3.4 ... 0 0 0 0 0 0 0 1 0 0

5 rows × 69 columns

Verify that there are no null values in the data set

In [41]:
auto_data[auto_data.isnull().any(axis=1)]
Out[41]:
symboling wheel-base length width height curb-weight num-of-cylinders engine-size bore stroke ... engine-type_ohcv engine-type_rotor fuel-system_1bbl fuel-system_2bbl fuel-system_4bbl fuel-system_idi fuel-system_mfi fuel-system_mpfi fuel-system_spdi fuel-system_spfi

0 rows × 69 columns

Create training and test data using train_test_split

In [42]:
from sklearn.model_selection import train_test_split

X = auto_data.drop('price', axis=1)

# Taking the labels (price)
Y = auto_data['price']

# Splitting into 80% for the training set and 20% for the testing set so we can evaluate accuracy
X_train, x_test, Y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

Gradient Boosting:

  • Start off by learning a very simple model
  • Take the error residuals from the first model and then try to predict the errors in the next iteration (also with a simple learner)
  • Combine the two simple models to obtain a slightly better overall model
  • At each iteration, the learner tries to reduce the errors (not eliminate them) by an amount controlled by the learning rate; each correction follows the gradient of the loss, which gives the method its name
  • Keep iterating over the error residuals until you have an ensemble of simple learners which combine to produce a more complex model (a from-scratch sketch of this loop follows below)
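
To make the residual-fitting loop concrete, here is a minimal from-scratch sketch (an illustration under simplifying assumptions, not scikit-learn's actual implementation) that boosts shallow regression trees on squared-error residuals:

from sklearn.tree import DecisionTreeRegressor
import numpy as np

def simple_gradient_boost(X, y, n_estimators=100, learning_rate=0.1, max_depth=2):
    # X: 2-D array of features, y: 1-D numpy array of targets
    # Start from a constant prediction: the mean minimizes squared error
    base = np.mean(y)
    prediction = np.full(len(y), base)
    trees = []
    for _ in range(n_estimators):
        residuals = y - prediction             # errors of the current ensemble
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)                 # weak learner predicts the residuals
        prediction = prediction + learning_rate * tree.predict(X)  # shrunken correction
        trees.append(tree)
    return base, trees

def boosted_predict(base, trees, X, learning_rate=0.1):
    prediction = np.full(len(X), base)
    for tree in trees:
        prediction = prediction + learning_rate * tree.predict(X)
    return prediction

Run on X_train.values and Y_train.values, this loop mirrors what GradientBoostingRegressor does for the least-squares loss, minus its many refinements.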

Parameters:

  • n_estimators: Number of boosting stages
  • max_depth: Maximum depth of each estimator tree
  • min_samples_split: Minimum number of samples required to split an internal node
  • learning_rate: Shrinks the contribution of each tree; controls how quickly the ensemble converges
  • loss: Type of loss function to optimize ('ls' == least squares; renamed to 'squared_error' in newer scikit-learn releases)
In [43]:
from sklearn.ensemble import GradientBoostingRegressor

params = {'n_estimators': 500, 'max_depth': 6, 'min_samples_split': 2,
          'learning_rate': 0.01, 'loss': 'ls'}
gbr_model = GradientBoostingRegressor(**params)
gbr_model.fit(X_train, Y_train)
Out[43]:
GradientBoostingRegressor(alpha=0.9, criterion='friedman_mse', init=None,
             learning_rate=0.01, loss='ls', max_depth=6, max_features=None,
             max_leaf_nodes=None, min_impurity_decrease=0.0,
             min_impurity_split=None, min_samples_leaf=1,
             min_samples_split=2, min_weight_fraction_leaf=0.0,
             n_estimators=500, n_iter_no_change=None, presort='auto',
             random_state=None, subsample=1.0, tol=0.0001,
             validation_fraction=0.1, verbose=0, warm_start=False)

Get R-square on training data

In [44]:
gbr_model.score(X_train, Y_train)
Out[44]:
0.9983862634096469

Make predictions on test data and compare with actual values

In [45]:
y_predict = gbr_model.predict(x_test)
In [46]:
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 6)

plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')

plt.legend()
plt.show()
Populating the interactive namespace from numpy and matplotlib

Calculate R-square score on test data

In [47]:
r_square = gbr_model.score(x_test, y_test)
r_square
Out[47]:
0.9348449512584064

Calculate the Mean Squared Error (MSE)

In [48]:
from sklearn.metrics import mean_squared_error

gbr_model_mse = mean_squared_error(y_test, y_predict)
gbr_model_mse
Out[48]:
4624286.2835185705

Root Mean Squared Error (RMSE)

In [49]:
import math

math.sqrt(gbr_model_mse)
Out[49]:
2150.4153746470865
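
An RMSE of about 2150 is best judged against the scale of the target; a quick check (an illustrative addition, not an original cell) compares it with the mean price of the cleaned data:

# RMSE as a fraction of the average vehicle price
relative_error = math.sqrt(gbr_model_mse) / auto_data['price'].mean()
print(f'RMSE is {relative_error:.1%} of the mean price')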

n_estimators vs learning_rate:

  • A higher learning rate should result in convergence with fewer estimators
  • Too high a learning rate risks overshooting the optimal solution
  • A low learning rate tends toward high bias, a high rate toward high variance
  • Need to strike the right balance between n_estimators and learning_rate
In [58]:
from sklearn.model_selection import GridSearchCV

num_estimators = [100, 200, 500]
learn_rates = [0.01, 0.02, 0.05, 0.1]
max_depths = [4, 6, 8]

param_grid = {'n_estimators': num_estimators,
              'learning_rate': learn_rates,
              'max_depth': max_depths}

grid_search = GridSearchCV(GradientBoostingRegressor(min_samples_split=2, loss='ls'),
                           param_grid, cv=3, return_train_score=True)
grid_search.fit(X_train, Y_train)

grid_search.best_params_
Out[58]:
{'learning_rate': 0.05, 'max_depth': 4, 'n_estimators': 100}
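
Since GridSearchCV refits the best configuration on the full training set by default (refit=True), the tuned model can also be used directly (an illustrative addition, not an original cell):

# best_estimator_ is already fit on X_train with the best parameters found
best_model = grid_search.best_estimator_
print(best_model.score(x_test, y_test))  # R-square on the held-out test set
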
In [51]:
grid_search.cv_results_
Out[51]:
{'mean_fit_time': array([0.05829946, 0.10128117, 0.2455256 , 0.07828156, 0.15325061,
        0.36646183, 0.10260201, 0.20021176, 0.52303282, 0.04964383,
        0.09661118, 0.2185475 , 0.07928705, 0.16656462, 0.46939683,
        0.14690971, 0.25552615, 0.73224656, 0.09561125, 0.08862257,
        0.28416403, 0.09794339, 0.15191865, 0.48039254, 0.12160238,
        0.20587595, 0.46972442, 0.04530716, 0.08661644, 0.21587571,
        0.0712924 , 0.14159179, 0.31581807, 0.09994221, 0.31082153,
        0.34314084]),
 'std_fit_time': array([0.00654679, 0.00573868, 0.01021691, 0.00532431, 0.00448703,
        0.01595586, 0.00478992, 0.01165992, 0.03218184, 0.00169275,
        0.00262151, 0.00816674, 0.00262221, 0.01985273, 0.02093807,
        0.02237228, 0.04005751, 0.03944316, 0.04279525, 0.0016961 ,
        0.0497536 , 0.00423874, 0.00779112, 0.09398944, 0.01772832,
        0.00647432, 0.05037975, 0.00124649, 0.00376849, 0.0053501 ,
        0.0024832 , 0.01640335, 0.01958797, 0.00647984, 0.15820893,
        0.01086188]),
 'mean_score_time': array([0.00166639, 0.00166575, 0.00233237, 0.00099937, 0.00199906,
        0.00233277, 0.00200534, 0.00233841, 0.00333762, 0.00099945,
        0.00133244, 0.00199922, 0.00166631, 0.00233809, 0.00299795,
        0.00200494, 0.00199882, 0.00566363, 0.00166575, 0.00166512,
        0.00299819, 0.00233324, 0.00198714, 0.00332991, 0.00166647,
        0.00200534, 0.00333142, 0.00166599, 0.00166575, 0.00199914,
        0.00133387, 0.00166504, 0.00299835, 0.00199906, 0.00233277,
        0.00299827]),
 'std_score_time': array([4.71426640e-04, 4.71145571e-04, 4.71089385e-04, 2.97360213e-07,
        2.24783192e-07, 4.71988518e-04, 8.26135571e-06, 4.79462707e-04,
        4.80811248e-04, 8.92080638e-07, 4.71426560e-04, 4.49566384e-07,
        4.71370394e-04, 4.66095185e-04, 8.16049002e-04, 9.78065223e-06,
        5.61957980e-07, 1.69833442e-03, 4.73000122e-04, 4.70864843e-04,
        1.41270620e-03, 4.71988518e-04, 8.09052510e-04, 4.71598391e-04,
        4.71315374e-04, 9.10580037e-06, 4.71201776e-04, 4.71314168e-04,
        4.71145933e-04, 3.37174788e-07, 4.72269527e-04, 4.71145611e-04,
        5.15042996e-07, 4.49566384e-07, 4.71145731e-04, 1.12391596e-07]),
 'param_learning_rate': masked_array(data=[0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01,
                    0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02,
                    0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
                    0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
              mask=[False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False],
        fill_value='?',
             dtype=object),
 'param_max_depth': masked_array(data=[4, 4, 4, 6, 6, 6, 8, 8, 8, 4, 4, 4, 6, 6, 6, 8, 8, 8,
                    4, 4, 4, 6, 6, 6, 8, 8, 8, 4, 4, 4, 6, 6, 6, 8, 8, 8],
              mask=[False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False],
        fill_value='?',
             dtype=object),
 'param_n_estimators': masked_array(data=[100, 200, 500, 100, 200, 500, 100, 200, 500, 100, 200,
                    500, 100, 200, 500, 100, 200, 500, 100, 200, 500, 100,
                    200, 500, 100, 200, 500, 100, 200, 500, 100, 200, 500,
                    100, 200, 500],
              mask=[False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False, False, False, False, False,
                    False, False, False, False],
        fill_value='?',
             dtype=object),
 'params': [{'learning_rate': 0.01, 'max_depth': 4, 'n_estimators': 100},
  {'learning_rate': 0.01, 'max_depth': 4, 'n_estimators': 200},
  {'learning_rate': 0.01, 'max_depth': 4, 'n_estimators': 500},
  {'learning_rate': 0.01, 'max_depth': 6, 'n_estimators': 100},
  {'learning_rate': 0.01, 'max_depth': 6, 'n_estimators': 200},
  {'learning_rate': 0.01, 'max_depth': 6, 'n_estimators': 500},
  {'learning_rate': 0.01, 'max_depth': 8, 'n_estimators': 100},
  {'learning_rate': 0.01, 'max_depth': 8, 'n_estimators': 200},
  {'learning_rate': 0.01, 'max_depth': 8, 'n_estimators': 500},
  {'learning_rate': 0.02, 'max_depth': 4, 'n_estimators': 100},
  {'learning_rate': 0.02, 'max_depth': 4, 'n_estimators': 200},
  {'learning_rate': 0.02, 'max_depth': 4, 'n_estimators': 500},
  {'learning_rate': 0.02, 'max_depth': 6, 'n_estimators': 100},
  {'learning_rate': 0.02, 'max_depth': 6, 'n_estimators': 200},
  {'learning_rate': 0.02, 'max_depth': 6, 'n_estimators': 500},
  {'learning_rate': 0.02, 'max_depth': 8, 'n_estimators': 100},
  {'learning_rate': 0.02, 'max_depth': 8, 'n_estimators': 200},
  {'learning_rate': 0.02, 'max_depth': 8, 'n_estimators': 500},
  {'learning_rate': 0.05, 'max_depth': 4, 'n_estimators': 100},
  {'learning_rate': 0.05, 'max_depth': 4, 'n_estimators': 200},
  {'learning_rate': 0.05, 'max_depth': 4, 'n_estimators': 500},
  {'learning_rate': 0.05, 'max_depth': 6, 'n_estimators': 100},
  {'learning_rate': 0.05, 'max_depth': 6, 'n_estimators': 200},
  {'learning_rate': 0.05, 'max_depth': 6, 'n_estimators': 500},
  {'learning_rate': 0.05, 'max_depth': 8, 'n_estimators': 100},
  {'learning_rate': 0.05, 'max_depth': 8, 'n_estimators': 200},
  {'learning_rate': 0.05, 'max_depth': 8, 'n_estimators': 500},
  {'learning_rate': 0.1, 'max_depth': 4, 'n_estimators': 100},
  {'learning_rate': 0.1, 'max_depth': 4, 'n_estimators': 200},
  {'learning_rate': 0.1, 'max_depth': 4, 'n_estimators': 500},
  {'learning_rate': 0.1, 'max_depth': 6, 'n_estimators': 100},
  {'learning_rate': 0.1, 'max_depth': 6, 'n_estimators': 200},
  {'learning_rate': 0.1, 'max_depth': 6, 'n_estimators': 500},
  {'learning_rate': 0.1, 'max_depth': 8, 'n_estimators': 100},
  {'learning_rate': 0.1, 'max_depth': 8, 'n_estimators': 200},
  {'learning_rate': 0.1, 'max_depth': 8, 'n_estimators': 500}],
 'split0_test_score': array([0.74837347, 0.82812257, 0.82611433, 0.76909701, 0.83446945,
        0.81942997, 0.77746544, 0.84187354, 0.81970051, 0.82752175,
        0.83308408, 0.82589542, 0.82920777, 0.82227061, 0.82232003,
        0.8377833 , 0.82517639, 0.81577606, 0.83385095, 0.83537237,
        0.82926143, 0.81034517, 0.81823015, 0.80980625, 0.82644015,
        0.81557691, 0.81848833, 0.83536149, 0.82916693, 0.82481438,
        0.80913786, 0.81317088, 0.81343852, 0.81603124, 0.81238828,
        0.82863825]),
 'split1_test_score': array([0.72693109, 0.83968945, 0.86771021, 0.72573607, 0.85138575,
        0.87326575, 0.71444563, 0.82697239, 0.85418117, 0.84366892,
        0.86818572, 0.86765514, 0.8465344 , 0.87448476, 0.87326698,
        0.82798323, 0.85004778, 0.85268517, 0.86738326, 0.86033785,
        0.86143041, 0.86959762, 0.87464225, 0.8688638 , 0.8552056 ,
        0.85468188, 0.85762344, 0.84882307, 0.86567037, 0.85413482,
        0.86903295, 0.8710516 , 0.86397489, 0.85625107, 0.85131415,
        0.85771156]),
 'split2_test_score': array([0.6716327 , 0.84485593, 0.89925896, 0.67121203, 0.83352649,
        0.88674743, 0.67409599, 0.83641969, 0.88492517, 0.84338082,
        0.8963377 , 0.90478001, 0.83667441, 0.88238572, 0.88247623,
        0.84260405, 0.88389881, 0.88498716, 0.89693303, 0.9040837 ,
        0.90012311, 0.88839177, 0.88258455, 0.88485887, 0.88966564,
        0.88722935, 0.89025853, 0.90196415, 0.9071301 , 0.91300863,
        0.89158269, 0.89510198, 0.88902682, 0.87045707, 0.88379675,
        0.87411637]),
 'mean_test_score': array([0.71564575, 0.83755598, 0.86436117, 0.72201503, 0.8397939 ,
        0.85981438, 0.72200235, 0.83508854, 0.85293562, 0.8381905 ,
        0.86586916, 0.86611019, 0.83747219, 0.85971369, 0.85935441,
        0.83612353, 0.85304099, 0.85114946, 0.86605575, 0.86659798,
        0.86360498, 0.85611152, 0.85848565, 0.85450964, 0.8571038 ,
        0.85249605, 0.85545677, 0.86204957, 0.86732247, 0.86398594,
        0.8565845 , 0.85977482, 0.85548007, 0.84757979, 0.84916639,
        0.85348873]),
 'std_test_score': array([0.03232961, 0.00699596, 0.02995493, 0.0400479 , 0.00820571,
        0.02908166, 0.04253735, 0.00615576, 0.02664242, 0.00754486,
        0.02587508, 0.03222302, 0.00709603, 0.02667202, 0.02645577,
        0.00608322, 0.02406657, 0.02827617, 0.02577025, 0.0283984 ,
        0.02897   , 0.03325883, 0.02864901, 0.0322775 , 0.02584657,
        0.02929279, 0.02934009, 0.02875394, 0.03184976, 0.03667279,
        0.03478994, 0.03438557, 0.03143797, 0.02304975, 0.02919191,
        0.01880495]),
 'rank_test_score': array([36, 30,  6, 34, 28, 10, 35, 33, 23, 29,  5,  3, 31, 12, 13, 32, 22,
        25,  4,  2,  8, 17, 14, 20, 15, 24, 19,  9,  1,  7, 16, 11, 18, 27,
        26, 21]),
 'split0_train_score': array([0.84277013, 0.9669226 , 0.99554034, 0.85924661, 0.97810248,
        0.99892697, 0.86304902, 0.98045091, 0.99911384, 0.96730126,
        0.99383348, 0.99809333, 0.9785135 , 0.99844208, 0.99914711,
        0.98090896, 0.99881668, 0.99916582, 0.99591465, 0.99811997,
        0.99914093, 0.99894008, 0.99915553, 0.99916582, 0.99912292,
        0.9991658 , 0.99916583, 0.99825353, 0.99911287, 0.99916579,
        0.99915607, 0.99916576, 0.99916583, 0.99916581, 0.99916583,
        0.99916583]),
 'split1_train_score': array([0.84496489, 0.97009297, 0.99733479, 0.8605333 , 0.97882426,
        0.999709  , 0.86579573, 0.98181695, 0.99977725, 0.97060099,
        0.99583137, 0.99904005, 0.97942473, 0.99933608, 0.99981885,
        0.98217867, 0.9995066 , 0.99982236, 0.99728531, 0.99909394,
        0.99980847, 0.99971805, 0.99981962, 0.99982236, 0.99978611,
        0.99982236, 0.99982236, 0.99926815, 0.99978818, 0.99982235,
        0.99981879, 0.99982235, 0.99982236, 0.99982236, 0.99982236,
        0.99982236]),
 'split2_train_score': array([0.83853589, 0.96503188, 0.99589588, 0.85791547, 0.97764538,
        0.99863015, 0.86392962, 0.98037666, 0.99894347, 0.96550042,
        0.99409388, 0.99829897, 0.97824152, 0.99813342, 0.99897943,
        0.98110702, 0.99865261, 0.99899538, 0.99592403, 0.99828623,
        0.99897941, 0.99861736, 0.99897903, 0.99899537, 0.99895368,
        0.99899538, 0.99899539, 0.99844493, 0.99896912, 0.99899538,
        0.99899009, 0.99899536, 0.99899539, 0.99899538, 0.99899539,
        0.99899539]),
 'mean_train_score': array([0.8420903 , 0.96734915, 0.996257  , 0.85923179, 0.97819071,
        0.99908871, 0.86425813, 0.9808815 , 0.99927819, 0.96780089,
        0.99458625, 0.99847745, 0.97872658, 0.9986372 , 0.99931513,
        0.98139822, 0.99899196, 0.99932785, 0.99637466, 0.99850004,
        0.9993096 , 0.99909183, 0.99931806, 0.99932785, 0.99928757,
        0.99932785, 0.99932786, 0.99865554, 0.99929006, 0.99932784,
        0.99932165, 0.99932782, 0.99932786, 0.99932785, 0.99932786,
        0.99932786]),
 'std_train_score': array([0.00266829, 0.00208808, 0.00077581, 0.00106878, 0.0004853 ,
        0.00045505, 0.00114515, 0.00066215, 0.00035968, 0.00211206,
        0.00088683, 0.00040658, 0.000506  , 0.00051   , 0.0003627 ,
        0.00055776, 0.00037002, 0.00035652, 0.00064393, 0.00042539,
        0.00035886, 0.00046199, 0.0003619 , 0.00035653, 0.00035923,
        0.00035652, 0.00035652, 0.00044017, 0.00035708, 0.00035652,
        0.000358  , 0.00035653, 0.00035652, 0.00035653, 0.00035652,
        0.00035652])}

Extract the useful values from the Grid Search results

In [52]:
for i in range(len(grid_search.cv_results_['params'])):
    print('Parameters: ', grid_search.cv_results_['params'][i])
    print('Mean Test Score: ', grid_search.cv_results_['mean_test_score'][i])
    print('Rank: ', grid_search.cv_results_['rank_test_score'][i])
    print()
Parameters:  {'learning_rate': 0.01, 'max_depth': 4, 'n_estimators': 100}
Mean Test Score:  0.715645751776125
Rank:  36

Parameters:  {'learning_rate': 0.01, 'max_depth': 4, 'n_estimators': 200}
Mean Test Score:  0.837555984966126
Rank:  30

Parameters:  {'learning_rate': 0.01, 'max_depth': 4, 'n_estimators': 500}
Mean Test Score:  0.864361165072617
Rank:  6

Parameters:  {'learning_rate': 0.01, 'max_depth': 6, 'n_estimators': 100}
Mean Test Score:  0.7220150338663449
Rank:  34

Parameters:  {'learning_rate': 0.01, 'max_depth': 6, 'n_estimators': 200}
Mean Test Score:  0.839793899512928
Rank:  28

Parameters:  {'learning_rate': 0.01, 'max_depth': 6, 'n_estimators': 500}
Mean Test Score:  0.8598143846160325
Rank:  10

Parameters:  {'learning_rate': 0.01, 'max_depth': 8, 'n_estimators': 100}
Mean Test Score:  0.722002352600005
Rank:  35

Parameters:  {'learning_rate': 0.01, 'max_depth': 8, 'n_estimators': 200}
Mean Test Score:  0.8350885393063743
Rank:  33

Parameters:  {'learning_rate': 0.01, 'max_depth': 8, 'n_estimators': 500}
Mean Test Score:  0.8529356168862819
Rank:  23

Parameters:  {'learning_rate': 0.02, 'max_depth': 4, 'n_estimators': 100}
Mean Test Score:  0.8381904972953808
Rank:  29

Parameters:  {'learning_rate': 0.02, 'max_depth': 4, 'n_estimators': 200}
Mean Test Score:  0.8658691644274202
Rank:  5

Parameters:  {'learning_rate': 0.02, 'max_depth': 4, 'n_estimators': 500}
Mean Test Score:  0.8661101903930495
Rank:  3

Parameters:  {'learning_rate': 0.02, 'max_depth': 6, 'n_estimators': 100}
Mean Test Score:  0.8374721941037021
Rank:  31

Parameters:  {'learning_rate': 0.02, 'max_depth': 6, 'n_estimators': 200}
Mean Test Score:  0.859713694517364
Rank:  12

Parameters:  {'learning_rate': 0.02, 'max_depth': 6, 'n_estimators': 500}
Mean Test Score:  0.859354412026299
Rank:  13

Parameters:  {'learning_rate': 0.02, 'max_depth': 8, 'n_estimators': 100}
Mean Test Score:  0.8361235260921839
Rank:  32

Parameters:  {'learning_rate': 0.02, 'max_depth': 8, 'n_estimators': 200}
Mean Test Score:  0.8530409945337474
Rank:  22

Parameters:  {'learning_rate': 0.02, 'max_depth': 8, 'n_estimators': 500}
Mean Test Score:  0.8511494643101667
Rank:  25

Parameters:  {'learning_rate': 0.05, 'max_depth': 4, 'n_estimators': 100}
Mean Test Score:  0.8660557471901884
Rank:  4

Parameters:  {'learning_rate': 0.05, 'max_depth': 4, 'n_estimators': 200}
Mean Test Score:  0.8665979754511083
Rank:  2

Parameters:  {'learning_rate': 0.05, 'max_depth': 4, 'n_estimators': 500}
Mean Test Score:  0.8636049822418079
Rank:  8

Parameters:  {'learning_rate': 0.05, 'max_depth': 6, 'n_estimators': 100}
Mean Test Score:  0.8561115211460318
Rank:  17

Parameters:  {'learning_rate': 0.05, 'max_depth': 6, 'n_estimators': 200}
Mean Test Score:  0.8584856478781284
Rank:  14

Parameters:  {'learning_rate': 0.05, 'max_depth': 6, 'n_estimators': 500}
Mean Test Score:  0.8545096417337072
Rank:  20

Parameters:  {'learning_rate': 0.05, 'max_depth': 8, 'n_estimators': 100}
Mean Test Score:  0.8571037972581078
Rank:  15

Parameters:  {'learning_rate': 0.05, 'max_depth': 8, 'n_estimators': 200}
Mean Test Score:  0.8524960481447974
Rank:  24

Parameters:  {'learning_rate': 0.05, 'max_depth': 8, 'n_estimators': 500}
Mean Test Score:  0.8554567685098962
Rank:  19

Parameters:  {'learning_rate': 0.1, 'max_depth': 4, 'n_estimators': 100}
Mean Test Score:  0.8620495721672722
Rank:  9

Parameters:  {'learning_rate': 0.1, 'max_depth': 4, 'n_estimators': 200}
Mean Test Score:  0.8673224651406056
Rank:  1

Parameters:  {'learning_rate': 0.1, 'max_depth': 4, 'n_estimators': 500}
Mean Test Score:  0.8639859449527799
Rank:  7

Parameters:  {'learning_rate': 0.1, 'max_depth': 6, 'n_estimators': 100}
Mean Test Score:  0.8565845001745418
Rank:  16

Parameters:  {'learning_rate': 0.1, 'max_depth': 6, 'n_estimators': 200}
Mean Test Score:  0.8597748193821942
Rank:  11

Parameters:  {'learning_rate': 0.1, 'max_depth': 6, 'n_estimators': 500}
Mean Test Score:  0.8554800737507327
Rank:  18

Parameters:  {'learning_rate': 0.1, 'max_depth': 8, 'n_estimators': 100}
Mean Test Score:  0.8475797930502663
Rank:  27

Parameters:  {'learning_rate': 0.1, 'max_depth': 8, 'n_estimators': 200}
Mean Test Score:  0.8491663926761586
Rank:  26

Parameters:  {'learning_rate': 0.1, 'max_depth': 8, 'n_estimators': 500}
Mean Test Score:  0.8534887262871022
Rank:  21

In [59]:
params = {'n_estimators': 100, 'max_depth': 4, 'min_samples_split': 2,
          'learning_rate': 0.05, 'loss': 'ls'}
gbr_model = GradientBoostingRegressor(**params)
gbr_model.fit(X_train, Y_train)
Out[59]:
GradientBoostingRegressor(alpha=0.9, criterion='friedman_mse', init=None,
             learning_rate=0.05, loss='ls', max_depth=4, max_features=None,
             max_leaf_nodes=None, min_impurity_decrease=0.0,
             min_impurity_split=None, min_samples_leaf=1,
             min_samples_split=2, min_weight_fraction_leaf=0.0,
             n_estimators=100, n_iter_no_change=None, presort='auto',
             random_state=None, subsample=1.0, tol=0.0001,
             validation_fraction=0.1, verbose=0, warm_start=False)

Compare predictions vs actual values

In [60]:
y_predict = gbr_model.predict(x_test)
In [61]:
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 5)

plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')

plt.legend()
plt.show()
Populating the interactive namespace from numpy and matplotlib

R-square on test data

In [62]:
r_square = gbr_model.score(x_test, y_test)
r_square
Out[62]:
0.9673044004870682
In [63]:
gbr_model_mse = mean_squared_error(y_test, y_predict)
math.sqrt(gbr_model_mse)
Out[63]:
1523.3264114728431