Working with Categorical and Numeric Data

Categorical variables take on a discrete set of values and usually need special treatment and preprocessing before you can feed them into a machine learning model. This is because most machine learning algorithms accept only numeric input.

There are many ways in which categorical data can be encoded as numbers. Simply mapping each category to a number is one of them; another common approach is one-hot encoding. Consider the days of the week, Sunday through Saturday. One-hot encoding assigns each day one position in an array, with Sunday at position zero and Saturday at position six. The numeric vector representing Monday then has a one at Monday's position and zeros everywhere else.
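
As a minimal sketch of the idea (plain Python, separate from the notebook's own numbered cells):

In [ ]:
# Illustrative sketch: one-hot encode the days of the week with a fixed ordering
days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']

def one_hot_day(day):
    # Return a 7-element vector with a 1 at the position of the given day
    return [1 if d == day else 0 for d in days]

print(one_hot_day('Monday'))  # [0, 1, 0, 0, 0, 0, 0]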

In [15]:
import pandas as pd
In [16]:
print(pd.__version__)
0.23.4

Sample data representing students and their exam scores

Download link: http://roycekimmons.com/system/generate_data.php?dataset=exams&n=100

In [18]:
exam_data = pd.read_csv('../data/exams.csv', quotechar='"')
exam_data.head(10)
Out[18]:
gender race/ethnicity parental level of education lunch test preparation course math score reading score writing score
0 male group E associate's degree standard completed 79 75 81
1 female group C associate's degree free/reduced none 56 65 64
2 male group D bachelor's degree standard none 86 68 74
3 female group A bachelor's degree standard none 68 78 76
4 female group D high school free/reduced none 49 68 61
5 male group D some high school free/reduced none 49 47 41
6 male group C some high school standard none 87 78 70
7 male group C associate's degree standard none 74 80 76
8 female group B some high school standard none 50 44 48
9 male group A some college standard none 40 23 23
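
Before encoding anything, it can help to check which columns are non-numeric (a quick sketch):

In [ ]:
# The object-dtype columns are the categorical ones that will need encoding later
exam_data.dtypes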

Check out the average score for each exam

In [19]:
math_average = exam_data['math score'].mean()
reading_average = exam_data['reading score'].mean()
writing_average = exam_data['writing score'].mean()

print('Math Avg: ', math_average)
print('Reading Avg: ', reading_average)
print('Writing Avg: ', writing_average)
Math Avg:  65.06
Reading Avg:  67.28
Writing Avg:  66.47

Data Standardization:

  • Apply scaling to the test scores to express them as z-scores
  • A z-score expresses a value as the number of standard deviations it lies from the mean
  • The effect is to give a score that is relative to the distribution of values for that column (a manual sketch of the calculation follows below)
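
As a sketch of what the scaling below computes, the z-score can also be written out by hand (shown here for the raw math scores, before the scale() cell runs):

In [ ]:
# Manual z-score: (value - mean) / standard deviation
math = exam_data['math score']
math_z = (math - math.mean()) / math.std(ddof=0)  # ddof=0 matches sklearn's population std
math_z.head()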

In [20]:
from sklearn import preprocessing

exam_data[['math score']] = preprocessing.scale(exam_data[['math score']])
exam_data[['reading score']] = preprocessing.scale(exam_data[['reading score']])
exam_data[['writing score']] = preprocessing.scale(exam_data[['writing score']])
C:\Users\Vinay\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: DataConversionWarning: Data with input dtype int64 were all converted to float64 by the scale function.
  This is separate from the ipykernel package so we can avoid doing imports until
C:\Users\Vinay\Anaconda3\lib\site-packages\ipykernel_launcher.py:4: DataConversionWarning: Data with input dtype int64 were all converted to float64 by the scale function.
  after removing the cwd from sys.path.
C:\Users\Vinay\Anaconda3\lib\site-packages\ipykernel_launcher.py:5: DataConversionWarning: Data with input dtype int64 were all converted to float64 by the scale function.
  """
In [21]:
exam_data.head(10)
Out[21]:
gender race/ethnicity parental level of education lunch test preparation course math score reading score writing score
0 male group E associate's degree standard completed 0.994557 0.574138 1.049901
1 female group C associate's degree free/reduced none -0.646391 -0.169564 -0.178476
2 male group D bachelor's degree standard none 1.493976 0.053547 0.544099
3 female group A bachelor's degree standard none 0.209756 0.797248 0.688613
4 female group D high school free/reduced none -1.145810 0.053547 -0.395248
5 male group D some high school free/reduced none -1.145810 -1.508227 -1.840397
6 male group C some high school standard none 1.565321 0.797248 0.255069
7 male group C associate's degree standard none 0.637829 0.945989 0.688613
8 female group B some high school standard none -1.074464 -1.731338 -1.334595
9 male group A some college standard none -1.787920 -3.293112 -3.141031

Explore averages after scaling

In [22]:
math_average = exam_data['math score'].mean()
reading_average = exam_data['reading score'].mean()
writing_average = exam_data['writing score'].mean()

print('Math Avg: ', math_average)
print('Reading Avg: ', reading_average)
print('Writing Avg: ', writing_average)
Math Avg:  -1.5693175925424186e-16
Reading Avg:  -1.4432899320127036e-16
Writing Avg:  6.217248937900877e-17
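
The averages are now effectively zero (the tiny values are floating-point rounding noise). A quick check of the standard deviations should show they are close to 1 as well (a sketch):

In [ ]:
# After z-scoring, each column should have a (population) standard deviation of ~1
print(exam_data['math score'].std(ddof=0))
print(exam_data['reading score'].std(ddof=0))
print(exam_data['writing score'].std(ddof=0))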

Label Encoding:

Convert text values to numbers. Label encoding is appropriate in the following situations:

  • There are only two values for a column in your data. The values will then become 0/1, effectively a binary representation
  • The values have an ordinal relationship with each other, so comparisons are meaningful (e.g. low < medium < high); an explicit mapping for this case is sketched after the gender example below
In [23]:
le = preprocessing.LabelEncoder()
exam_data['gender'] = le.fit_transform(exam_data['gender'].astype(str))
In [24]:
type(exam_data['gender'][0])
Out[24]:
numpy.int32
In [25]:
exam_data.head(10)
Out[25]:
gender race/ethnicity parental level of education lunch test preparation course math score reading score writing score
0 1 group E associate's degree standard completed 0.994557 0.574138 1.049901
1 0 group C associate's degree free/reduced none -0.646391 -0.169564 -0.178476
2 1 group D bachelor's degree standard none 1.493976 0.053547 0.544099
3 0 group A bachelor's degree standard none 0.209756 0.797248 0.688613
4 0 group D high school free/reduced none -1.145810 0.053547 -0.395248
5 1 group D some high school free/reduced none -1.145810 -1.508227 -1.840397
6 1 group C some high school standard none 1.565321 0.797248 0.255069
7 1 group C associate's degree standard none 0.637829 0.945989 0.688613
8 0 group B some high school standard none -1.074464 -1.731338 -1.334595
9 1 group A some college standard none -1.787920 -3.293112 -3.141031
In [26]:
le.classes_
Out[26]:
array(['female', 'male'], dtype=object)
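
The gender column above illustrates the two-value case. For the ordered case, an explicit mapping gives full control over which number each level receives; a sketch using a hypothetical 'effort' column that is not part of this data set:

In [ ]:
# Hypothetical ordered categories mapped to integers explicitly
effort_levels = {'low': 0, 'medium': 1, 'high': 2}
pd.Series(['low', 'high', 'medium']).map(effort_levels)
# exam_data['effort'] = exam_data['effort'].map(effort_levels)  # 'effort' does not exist in this data set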

One-Hot Encoding:

  • Use when there is no meaningful comparison between values in the column
  • Creates a new column for each unique value of the specified feature in the data set
In [ ]:
pd.get_dummies(exam_data['race/ethnicity'])
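
Note that pd.get_dummies also accepts drop_first=True, which drops one dummy column per feature; some models (e.g. linear regression) benefit from this because the dropped column is fully implied by the others (a sketch, not applied to this data set):

In [ ]:
# Optional variant: drop the first dummy level for each encoded feature
pd.get_dummies(exam_data['race/ethnicity'], drop_first=True).head()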

Include the dummy columns in our data set

In [28]:
exam_data = pd.get_dummies(exam_data, columns=['race/ethnicity'])
In [30]:
exam_data.head(10)
Out[30]:
gender parental level of education lunch test preparation course math score reading score writing score race/ethnicity_group A race/ethnicity_group B race/ethnicity_group C race/ethnicity_group D race/ethnicity_group E
0 1 associate's degree standard completed 0.994557 0.574138 1.049901 0 0 0 0 1
1 0 associate's degree free/reduced none -0.646391 -0.169564 -0.178476 0 0 1 0 0
2 1 bachelor's degree standard none 1.493976 0.053547 0.544099 0 0 0 1 0
3 0 bachelor's degree standard none 0.209756 0.797248 0.688613 1 0 0 0 0
4 0 high school free/reduced none -1.145810 0.053547 -0.395248 0 0 0 1 0
5 1 some high school free/reduced none -1.145810 -1.508227 -1.840397 0 0 0 1 0
6 1 some high school standard none 1.565321 0.797248 0.255069 0 0 1 0 0
7 1 associate's degree standard none 0.637829 0.945989 0.688613 0 0 1 0 0
8 0 some high school standard none -1.074464 -1.731338 -1.334595 0 1 0 0 0
9 1 some college standard none -1.787920 -3.293112 -3.141031 1 0 0 0 0

Apply one-hot encoding to the remaining non-numeric features

In [31]:
exam_data = pd.get_dummies(exam_data, columns=['parental level of education', 
                                               'lunch', 
                                               'test preparation course'])

The data is now ready to be used to train a model

In [32]:
exam_data.head(10)
Out[32]:
gender math score reading score writing score race/ethnicity_group A race/ethnicity_group B race/ethnicity_group C race/ethnicity_group D race/ethnicity_group E parental level of education_associate's degree parental level of education_bachelor's degree parental level of education_high school parental level of education_master's degree parental level of education_some college parental level of education_some high school lunch_free/reduced lunch_standard test preparation course_completed test preparation course_none
0 1 0.994557 0.574138 1.049901 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0
1 0 -0.646391 -0.169564 -0.178476 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1
2 1 1.493976 0.053547 0.544099 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1
3 0 0.209756 0.797248 0.688613 1 0 0 0 0 0 1 0 0 0 0 0 1 0 1
4 0 -1.145810 0.053547 -0.395248 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1
5 1 -1.145810 -1.508227 -1.840397 0 0 0 1 0 0 0 0 0 0 1 1 0 0 1
6 1 1.565321 0.797248 0.255069 0 0 1 0 0 0 0 0 0 0 1 0 1 0 1
7 1 0.637829 0.945989 0.688613 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1
8 0 -1.074464 -1.731338 -1.334595 0 1 0 0 0 0 0 0 0 0 1 0 1 0 1
9 1 -1.787920 -3.293112 -3.141031 1 0 0 0 0 0 0 0 0 1 0 0 1 0 1
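
As an illustration of how this frame could now be consumed by a model (a sketch that is not part of the original notebook; it arbitrarily treats math score as the target):

In [ ]:
# Sketch: split into features/target and fit a simple regression on the encoded data
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X = exam_data.drop('math score', axis=1)
y = exam_data['math score']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on the held-out split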