Let's explore ways of fixing some common issues in data. Here's a toy dataset I created for this lesson. It has eleven instances of user-product interactions online, recording whether the user liked the product, how long they viewed the product, whether it was on a website or through a mobile app, and what time they started viewing the product. Can you spot any potential issues in this data?
import pandas as pd
df = pd.read_csv('product_view_data.csv')
df
df.info()
There are two issues here: missing values (NaN in the view_duration column) and an incorrect datatype (timestamp is represented as a string).
In the dataframe above, you can see null values represented as NaN, which stands for "not a number". From the output of df.info(), you can see that view_duration has 8 non-null values, which leaves 3 null values out of the 11 entries. Missing data is an issue that should be handled differently depending on several factors, such as the reason those values are missing and whether the occurrences seem random. One way of handling them is imputing them with the mean. You can do this quickly and efficiently with a convenient function from Pandas.
# get the mean of the column with missing data
mean = df['view_duration'].mean()
print(mean)
# replace NaN values with the mean
df['view_duration'].fillna(mean)
Let's look at the dataframe now - did this fix the problem?
df
Nope! Instead of making changes to the original column, fillna() just returned a new Series with the changes, which we didn't store anywhere. To keep the changes, make sure to assign the result back to the original column like this:
df['view_duration'] = df['view_duration'].fillna(mean)
Alternatively, you can use an extra parameter as shown in the cell below.
# replace NaN values and make changes in place
df['view_duration'].fillna(mean, inplace=True)
df
Success!
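If you want a quick sanity check, you can count how many null values remain in the column - it should be zero now.
# verify there are no remaining missing values in view_duration
df['view_duration'].isnull().sum()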
There are multiple reasons you may end up with duplicated data, like combined data sources or human error. Here's a simple scenario where two rows (3 and 4) are identical. This toy dataset is small enough for us to count visually. For bigger datasets, you can use this function to see which rows are duplicates.
# By default, this marks duplicates as True excluding the first instance,
# and it considers a row to be a duplicate only if the values in all
# columns match. You can change both of these with its parameters.
df.duplicated()
# For larger datasets, it would probably be more helpful to get a count
# of duplicates in the dataset like this
sum(df.duplicated())
# You can drop duplicated data with this function. Remember to assign
# the result to the original dataframe or use inplace to keep the changes!
df.drop_duplicates(inplace=True)
df
Awesome! You can see we've dropped row 4 - the row marked as a duplicate. This was a simple situation where the entire row was identical. You could imagine duplicated data scenarios that are a bit more complicated.
For example, let's say we had data on patients from a hospital. What happens when you come across two rows with the same patient id but different medical exam results? Do you combine them? Keep only the latest one? This is a situation you'd have to investigate further. For this scenario, you'd likely identify duplicates based only on the column recording the patient's id. You can use the subset parameter in duplicated() and drop_duplicates() to do this, as sketched below.
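Here's a minimal sketch of what that might look like, using a hypothetical dataframe with patient_id and exam_result columns (these names are just for illustration, not from the lesson's dataset):
import pandas as pd

# hypothetical patient data - two records share patient_id 1002
patients = pd.DataFrame({
    'patient_id': [1001, 1002, 1002],
    'exam_result': [7.1, 5.4, 5.9]
})

# mark rows whose patient_id already appeared earlier,
# even though the other columns differ
print(patients.duplicated(subset=['patient_id']))

# keep only the last record for each patient_id
patients = patients.drop_duplicates(subset=['patient_id'], keep='last')
Whether keeping the last record is the right call depends on your investigation - you might instead aggregate the rows or keep the first one with keep='first'.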
Incorrect datatypes are another problem data analysts frequently come across. In this case, the timestamps are represented as strings instead of datetimes. This isn't critical, but datetimes are much more convenient to work with if you want to extract specific information from them or filter them more easily.
# This shows the datatype of timestamp is not yet datetime
df.info()
# Let's use this awesome function to convert this column to datetime
df['timestamp'] = pd.to_datetime(df['timestamp'])
# Now we can see timestamp is represented as a datetime
df.info()
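Now that the column holds datetimes, here's a quick sketch of the kind of extraction and filtering the conversion makes easy (the cutoff date below is just an example value, not one taken from this dataset):
# extract pieces of the timestamp, like the hour of day
df['timestamp'].dt.hour

# filter to views that started after an example cutoff date
df[df['timestamp'] > '2017-01-01']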
Note that even if you save this to a csv file after making this change, the column will be read as a string by default the next time you open the file. You'll either have to convert it again after reading the csv, or use the parse_dates parameter in read_csv(). If the strings you have to parse are formatted unconventionally, to_datetime() provides parameters like format for more control.
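Here's a sketch of both options (the format string below is only an example for unconventionally formatted dates, not the format used in this dataset):
# option 1: parse the column as datetime while reading the csv
df = pd.read_csv('product_view_data.csv', parse_dates=['timestamp'])

# option 2: convert after reading, specifying the format explicitly
# (example format only - adjust it to match your strings)
df['timestamp'] = pd.to_datetime(df['timestamp'], format='%d-%m-%Y %H:%M')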