The whole idea behind decision trees is that the rules are not specialized rules you write by hand. Instead, they are learned by the machine learning model from the training data. The rules themselves are important, and the order in which these decision variables are applied is also important: our machine learning model needs to pick the right decisions to make, ask the right questions, and apply those decisions in the right order. The decision tree model depicted on the right is a CART tree, where CART stands for Classification And Regression Tree.
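To see that the rules really are discovered by the model, here is a minimal sketch (toy generated data and feature names, not the automobile dataset used below) that fits a small CART regressor with scikit-learn and prints the split rules it learned:

from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy data: the tree, not us, decides which feature to split on and where
X_toy, y_toy = make_regression(n_samples=100, n_features=2, random_state=0)
toy_tree = DecisionTreeRegressor(max_depth=2, random_state=0)
toy_tree.fit(X_toy, y_toy)
print(export_text(toy_tree, feature_names=['feature_0', 'feature_1']))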
Decision trees are also non-parametric machine learning models, so there isn't a lot of hyperparameter tuning involved: you typically get a reasonable model without much fiddling. An additional advantage is that they require little data preparation; you can usually feed in your training data as is.
Often you can have many ML models work together as an ensemble to build a stronger model. We will see how this is done in gradient boosting regression, which combines several weak decision trees into a stronger regression model.
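To make that concrete before we get to scikit-learn's implementation, here is a minimal sketch (toy sine-wave data, two boosting rounds done by hand) of the core idea behind gradient boosting with squared-error loss: each shallow tree is fit to the residuals the ensemble so far still gets wrong, and its predictions are added in, scaled by a learning rate:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X_toy = rng.uniform(0, 10, size=(100, 1))
y_toy = np.sin(X_toy).ravel() + rng.normal(scale=0.1, size=100)

learning_rate = 0.5
prediction = np.full(100, y_toy.mean())   # round 0: predict the mean everywhere
for _ in range(2):                        # two boosting rounds
    residuals = y_toy - prediction        # what the ensemble still gets wrong
    weak_tree = DecisionTreeRegressor(max_depth=2).fit(X_toy, residuals)
    prediction += learning_rate * weak_tree.predict(X_toy)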
import pandas as pd
Download link: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
Summary: Predict the price of a vehicle given other information about it
# The raw UCI file has no header row, so supply the documented column names
column_names = ['symboling', 'normalized-losses', 'make', 'fuel-type', 'aspiration',
                'num-of-doors', 'body-style', 'drive-wheels', 'engine-location',
                'wheel-base', 'length', 'width', 'height', 'curb-weight',
                'engine-type', 'num-of-cylinders', 'engine-size', 'fuel-system',
                'bore', 'stroke', 'compression-ratio', 'horsepower', 'peak-rpm',
                'city-mpg', 'highway-mpg', 'price']
auto_data = pd.read_csv('../data/imports-85.data', sep=r'\s*,\s*',
                        engine='python', header=None, names=column_names)
auto_data.head(5)
import numpy as np
# Missing values in this dataset are recorded as '?'; convert them to NaN
auto_data = auto_data.replace('?', np.nan)
auto_data.head()
auto_data.describe()
auto_data.describe(include='all')
auto_data['price'].describe()
If a value cannot be converted to a number, set it to NaN (by setting errors='coerce'):
auto_data['price'] = pd.to_numeric(auto_data['price'], errors='coerce')
auto_data['price'].describe()
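As a standalone illustration of what errors='coerce' does (toy values, not from the dataset):

pd.to_numeric(pd.Series(['13495', '?']), errors='coerce')  # -> 13495.0, NaN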
# normalized-losses has a large number of missing values, so drop the column entirely
auto_data = auto_data.drop('normalized-losses', axis=1)
auto_data.head()
auto_data.describe()
auto_data['horsepower'].describe()
auto_data['horsepower'] = pd.to_numeric(auto_data['horsepower'], errors='coerce')
auto_data['horsepower'].describe()
auto_data['num-of-cylinders'].describe()
cylinders_dict = {'two': 2, 'three': 3, 'four': 4, 'five': 5,
                  'six': 6, 'eight': 8, 'twelve': 12}
auto_data['num-of-cylinders'] = auto_data['num-of-cylinders'].replace(cylinders_dict)
auto_data.head()
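A quick sanity check worth adding here (not in the original notebook): confirm the dictionary covered every category, since any unmapped string would survive the replace unchanged:

auto_data['num-of-cylinders'].unique()  # should contain only integers now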
# One-hot encode the categorical columns so the regressor can use them
auto_data = pd.get_dummies(auto_data,
columns=['make', 'fuel-type', 'aspiration', 'num-of-doors',
'body-style', 'drive-wheels', 'engine-location',
'engine-type', 'fuel-system'])
auto_data.head()
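If one-hot encoding is new to you, this toy frame (hypothetical values) shows what get_dummies does to a single categorical column:

# Produces indicator columns fuel-type_diesel and fuel-type_gas
pd.get_dummies(pd.DataFrame({'fuel-type': ['gas', 'diesel', 'gas']}))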
# Drop the remaining rows that still contain missing values
auto_data = auto_data.dropna()
auto_data.head()
# Confirm that no rows with missing values remain
auto_data[auto_data.isnull().any(axis=1)]
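Dropping rows is the simplest option, but it throws data away. As an alternative sketch (not what this notebook does), scikit-learn's SimpleImputer can fill the numeric NaNs with each column's median instead of dropping the rows:

from sklearn.impute import SimpleImputer

# Fill NaNs in the numeric columns with the column median
numeric_cols = auto_data.select_dtypes(include=[np.number]).columns
imputer = SimpleImputer(strategy='median')
auto_data[numeric_cols] = imputer.fit_transform(auto_data[numeric_cols])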
from sklearn.model_selection import train_test_split
# Features: everything except the label column
X = auto_data.drop('price', axis=1)
# Labels: the price column we want to predict
Y = auto_data['price']
# Splitting into 80% for the training set and 20% for the test set so we can measure accuracy
X_train, x_test, Y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
from sklearn.ensemble import GradientBoostingRegressor
params = {'n_estimators': 500, 'max_depth': 6, 'min_samples_split': 2,
          'learning_rate': 0.01, 'loss': 'squared_error'}  # 'ls' in scikit-learn < 1.0
gbr_model = GradientBoostingRegressor(**params)
gbr_model.fit(X_train, Y_train)
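# score() returns the coefficient of determination R^2 on the data passed in; here, the fit on the training set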
gbr_model.score(X_train, Y_train)
y_predict = gbr_model.predict(x_test)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15, 6)
plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')
plt.legend()
plt.show()
r_square = gbr_model.score(x_test, y_test)
r_square
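score() on the test set is the R² of the predictions. As a sketch of what that number means, R² = 1 - SS_res/SS_tot, which you can verify by hand:

ss_res = ((y_test - y_predict) ** 2).sum()      # residual sum of squares
ss_tot = ((y_test - y_test.mean()) ** 2).sum()  # total sum of squares
1 - ss_res / ss_tot                             # should match gbr_model.score(x_test, y_test)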
from sklearn.metrics import mean_squared_error
gbr_model_mse = mean_squared_error(y_test, y_predict)
gbr_model_mse
import math
math.sqrt(gbr_model_mse)  # RMSE: same units as price, so it reads as a typical error in dollars
from sklearn.model_selection import GridSearchCV
num_estimators = [100, 200, 500]
learn_rates = [0.01, 0.02, 0.05, 0.1]
max_depths = [4, 6, 8]
param_grid = {'n_estimators': num_estimators,
'learning_rate': learn_rates,
'max_depth': max_depths}
grid_search = GridSearchCV(GradientBoostingRegressor(min_samples_split=2, loss='squared_error'),
                           param_grid, cv=3, return_train_score=True)
grid_search.fit(X_train, Y_train)
grid_search.best_params_
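Since GridSearchCV refits the best configuration on the whole training set by default (refit=True), you can also grab that fitted model directly instead of retraining by hand as we do below:

best_model = grid_search.best_estimator_  # already fit on X_train with the best parameters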
grid_search.cv_results_
# 3 values of n_estimators x 4 learning rates x 3 max depths = 36 combinations
for i in range(36):
    print('Parameters: ', grid_search.cv_results_['params'][i])
    print('Mean Test Score: ', grid_search.cv_results_['mean_test_score'][i])
    print('Rank: ', grid_search.cv_results_['rank_test_score'][i])
    print()
params = {'n_estimators': 100, 'max_depth': 4, 'min_samples_split': 2,
          'learning_rate': 0.05, 'loss': 'squared_error'}  # 'ls' in scikit-learn < 1.0
gbr_model = GradientBoostingRegressor(**params)
gbr_model.fit(X_train, Y_train)
y_predict = gbr_model.predict(x_test)
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 5)
plt.plot(y_predict, label='Predicted')
plt.plot(y_test.values, label='Actual')
plt.ylabel('Price')
plt.legend()
plt.show()
r_square = gbr_model.score(x_test, y_test)
r_square
gbr_model_mse = mean_squared_error(y_test, y_predict)
math.sqrt(gbr_model_mse)  # RMSE of the tuned model