Extracting Features from Text

Frequency Based Encoding: Count Vectors

In order to preserve the frequency information of individual words in text, you can use frequency-based encoding.

  • Count vectors, which track how many times each word occurs in a document
  • The TF-IDF algorithm, which stands for term frequency-inverse document frequency. This captures two pieces of information: how often a particular word occurs within a document, as well as how often it occurs across the entire corpus.
In [1]:
from sklearn.feature_extraction.text import CountVectorizer

Define a corpus of 4 documents with some repeated values

In [2]:
corpus = ['This is the first document.',
          'This is the second document.', 
          'Third document. Document number three', 
          'Number four. To repeat, number four']

Use CountVectorizer to convert a collection of text documents to a "bag of words"

The bag of words is a sparse 4 x 12 matrix: four documents and a total vocabulary of 12 words.

In [3]:
vectorizer = CountVectorizer()
bag_of_words = vectorizer.fit_transform(corpus)

bag_of_words
Out[3]:
<4x12 sparse matrix of type '<class 'numpy.int64'>'
	with 18 stored elements in Compressed Sparse Row format>

View what the "bag" looks like

In [4]:
print(bag_of_words)
  (0, 0)	1
  (0, 1)	1
  (0, 7)	1
  (0, 3)	1
  (0, 9)	1
  (1, 6)	1
  (1, 0)	1
  (1, 7)	1
  (1, 3)	1
  (1, 9)	1
  (2, 10)	1
  (2, 4)	1
  (2, 8)	1
  (2, 0)	2
  (3, 5)	1
  (3, 11)	1
  (3, 2)	2
  (3, 4)	2

Get the value to which a word is mapped

You can access the ID that corresponds to a particular word by calling the vectorizer.vocabulary_.get method with that word. The word 'document' corresponds to ID zero.

In [5]:
vectorizer.vocabulary_.get('document')
Out[5]:
0
In [6]:
vectorizer.vocabulary_
Out[6]:
{'this': 9,
 'is': 3,
 'the': 7,
 'first': 1,
 'document': 0,
 'second': 6,
 'third': 8,
 'number': 4,
 'three': 10,
 'four': 2,
 'to': 11,
 'repeat': 5}
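
The vocabulary contains only lowercased tokens with the punctuation stripped, because CountVectorizer's default analyzer lowercases the text and keeps tokens of two or more word characters. A minimal sketch of that tokenization step (not a cell from the original run), using the analyzer built by the fitted vectorizer:

# The analyzer reproduces CountVectorizer's preprocessing and tokenization.
analyzer = vectorizer.build_analyzer()
analyzer('Third document. Document number three')
# expected: ['third', 'document', 'document', 'number', 'three']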
In [7]:
import pandas as pd

print(pd.__version__)
0.23.4
In [8]:
pd.DataFrame(bag_of_words.toarray(), columns=vectorizer.get_feature_names())
Out[8]:
   document  first  four  is  number  repeat  second  the  third  this  three  to
0         1      1     0   1       0       0       0    1      0     1      0   0
1         1      0     0   1       0       0       1    1      0     1      0   0
2         2      0     0   0       1       0       0    0      1     0      1   0
3         0      0     2   0       2       1       0    0      0     0      0   1
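
You can also read a single count straight out of the sparse matrix by combining the learned vocabulary with row/column indexing. A small sketch, using the third document of the corpus:

# 'document' appears twice in 'Third document. Document number three'
bag_of_words[2, vectorizer.vocabulary_['document']]
# expected: 2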

Extend bag of words with TF-IDF weights

Every word in every document is associated with a score: each document ID and word ID combination maps to a TF-IDF weight instead of a raw count. Words that are frequent within a document but rare across the corpus receive higher weights.

In [9]:
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
bag_of_words = vectorizer.fit_transform(corpus)

print(bag_of_words)
  (0, 9)	0.43584673254990375
  (0, 3)	0.43584673254990375
  (0, 7)	0.43584673254990375
  (0, 1)	0.5528163151092931
  (0, 0)	0.3528554929793508
  (1, 9)	0.43584673254990375
  (1, 3)	0.43584673254990375
  (1, 7)	0.43584673254990375
  (1, 0)	0.3528554929793508
  (1, 6)	0.5528163151092931
  (2, 0)	0.6191395067937654
  (2, 8)	0.4850008395708102
  (2, 4)	0.3823802326982809
  (2, 10)	0.4850008395708102
  (3, 4)	0.5412799489419371
  (3, 2)	0.6865449812276998
  (3, 11)	0.3432724906138499
  (3, 5)	0.3432724906138499
In [10]:
vectorizer.vocabulary_.get('document')
Out[10]:
0
In [11]:
pd.DataFrame(bag_of_words.toarray(), columns=vectorizer.get_feature_names())
Out[11]:
   document     first      four        is   number    repeat    second       the     third      this     three        to
0  0.352855  0.552816  0.000000  0.435847  0.00000  0.000000  0.000000  0.435847  0.000000  0.435847  0.000000  0.000000
1  0.352855  0.000000  0.000000  0.435847  0.00000  0.000000  0.552816  0.435847  0.000000  0.435847  0.000000  0.000000
2  0.619140  0.000000  0.000000  0.000000  0.38238  0.000000  0.000000  0.000000  0.485001  0.000000  0.485001  0.000000
3  0.000000  0.000000  0.686545  0.000000  0.54128  0.343272  0.000000  0.000000  0.000000  0.000000  0.000000  0.343272
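
These weights follow from scikit-learn's defaults: a smoothed IDF, idf(t) = ln((1 + n) / (1 + df(t))) + 1, multiplied by the raw term count and followed by L2 normalization of each row. A minimal sketch (not a cell from the original run) that recomputes the first row by hand, assuming those defaults:

import numpy as np

n_docs = 4                                         # documents in the corpus
idf = lambda df: np.log((1 + n_docs) / (1 + df)) + 1

# Document 0 is 'This is the first document.'
# Document frequencies in the corpus: document=3, first=1, is=2, the=2, this=2
raw = np.array([idf(3), idf(1), idf(2), idf(2), idf(2)])
raw / np.linalg.norm(raw)
# expected: roughly [0.3529, 0.5528, 0.4358, 0.4358, 0.4358], matching row 0 above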

View all the words and their corresponding values

In [12]:
vectorizer.vocabulary_
Out[12]:
{'this': 9,
 'is': 3,
 'the': 7,
 'first': 1,
 'document': 0,
 'second': 6,
 'third': 8,
 'number': 4,
 'three': 10,
 'four': 2,
 'to': 11,
 'repeat': 5}

Hashing Vectorizer

  • One issue with CountVectorizer and TF-IDF Vectorizer is that the number of features can get very large if the vocabulary is very large
  • The whole vocabulary will be stored in memory, and this may end up taking a lot of space
  • With Hashing Vectorizer, one can limit the number of features, let's say to a number n
  • Each word will be hashed to one of the n values
  • There will be collisions, where different words are hashed to the same value
  • In many instances, performance does not suffer much in spite of the collisions
In [13]:
from sklearn.feature_extraction.text import HashingVectorizer

vectorizer = HashingVectorizer(n_features=8)
feature_vector = vectorizer.fit_transform(corpus)
print(feature_vector)
  (0, 0)	-0.8944271909999159
  (0, 5)	0.4472135954999579
  (0, 6)	0.0
  (1, 0)	-0.5773502691896258
  (1, 3)	0.5773502691896258
  (1, 5)	0.5773502691896258
  (1, 6)	0.0
  (2, 0)	-0.7559289460184544
  (2, 3)	0.3779644730092272
  (2, 5)	0.3779644730092272
  (2, 7)	0.3779644730092272
  (3, 0)	0.31622776601683794
  (3, 3)	0.31622776601683794
  (3, 5)	0.6324555320336759
  (3, 7)	0.6324555320336759

There is no way to compute the inverse transform to get the words from the hashed value
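
Although the transform cannot be inverted, you can still check which bucket a known word lands in by hashing that word on its own. A small sketch (not a cell from the original run) using the vocabulary from the CountVectorizer example above; with only 8 buckets, several of the 12 words necessarily collide, which is exactly why the mapping is not invertible:

# Hash each known word individually and report its bucket (column index).
# HashingVectorizer is stateless, so it keeps no vocabulary_ attribute in memory.
words = ['this', 'is', 'the', 'first', 'document', 'second',
         'third', 'number', 'three', 'four', 'to', 'repeat']
for word in words:
    bucket = vectorizer.transform([word]).nonzero()[1][0]
    print(word, '-> bucket', bucket)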