import nltk
nltk.corpus.gutenberg.fileids()
emma = nltk.corpus.gutenberg.words('austen-emma.txt')
len(emma)
from nltk.corpus import gutenberg
gutenberg.fileids()
emma = gutenberg.words('austen-emma.txt')
# displays three statistics for each text:
# average word length, average sentence length, and the number of times each vocabulary item appears in the text on average
for fileid in gutenberg.fileids():
    num_chars = len(gutenberg.raw(fileid))
    num_words = len(gutenberg.words(fileid))
    num_sents = len(gutenberg.sents(fileid))
    num_vocab = len(set(w.lower() for w in gutenberg.words(fileid)))
    print(round(num_chars/num_words), round(num_words/num_sents), round(num_words/num_vocab), fileid)
macbeth_sentences = gutenberg.sents('shakespeare-macbeth.txt')
macbeth_sentences
print(macbeth_sentences[1116])
longest_len = max(len(s) for s in macbeth_sentences)
longest_len
# prints the longest sentence
[s for s in macbeth_sentences if len(s) == longest_len]
from nltk.corpus import webtext
for fileid in webtext.fileids():
    print(fileid, webtext.raw(fileid)[:65], '...')
from nltk.corpus import nps_chat
chatroom = nps_chat.posts('10-19-20s_706posts.xml')
print(chatroom[123])
The Brown Corpus is a convenient resource for studying systematic differences between genres, a kind of linguistic inquiry known as stylistics.
from nltk.corpus import brown
brown.categories()
brown.words(categories='news')
brown.words(fileids=['cg22'])
brown.sents(categories=['news', 'editorial', 'reviews'])
# end=' ' makes the print function put its output on a single line
news_text = brown.words(categories='news')
fdist = nltk.FreqDist(w.lower() for w in news_text)
modals = ['can', 'could', 'may', 'might', 'must', 'will']
for m in modals:
    print(m + ':', fdist[m], end=' ')
cfd = nltk.ConditionalFreqDist(
(genre, word)
for genre in brown.categories()
for word in brown.words(categories=genre))
genres = ['news', 'religion', 'hobbies', 'science_fiction', 'romance', 'humor']
modals = ['can', 'could', 'may', 'might', 'must', 'will']
cfd.tabulate(conditions=genres, samples=modals)
The documents in the Reuters Corpus have been classified into 90 topics, and grouped into two sets, called "training" and "test".
from nltk.corpus import reuters
# 10,788 documents
reuters.fileids()
print(reuters.categories())
# topics covered by one or more documents
reuters.categories('training/9865')
# topics covered by one or more documents
reuters.categories(['training/9865', 'training/9880'])
# documents included in one or more categories
reuters.fileids('barley')
# documents included in one or more categories
reuters.fileids(['barley', 'corn'])
reuters.words('training/9865')[:14]
['FRENCH', 'FREE', 'MARKET', 'CEREAL', 'EXPORT', 'BIDS',
'DETAILED', 'French', 'operators', 'have', 'requested', 'licences', 'to', 'export']
reuters.words(['training/9865', 'training/9880'])
['FRENCH', 'FREE', 'MARKET', 'CEREAL', 'EXPORT', ...]
reuters.words(categories='barley')
['FRENCH', 'FREE', 'MARKET', 'CEREAL', 'EXPORT', ...]
reuters.words(categories=['barley', 'corn'])
['THAI', 'TRADE', 'DEFICIT', 'WIDENS', 'IN', 'FIRST', ...]
from nltk.corpus import inaugural
# collection of 55 texts, one for each presidential address
inaugural.fileids()
# To get the year out of each filename, we extract the first four characters, using fileid[:4]
[fileid[:4] for fileid in inaugural.fileids()]
All words in the Inaugural Address Corpus that begin with america or citizen are counted; separate counts are kept for each address; these are plotted so that trends in usage over time can be observed; counts are not normalized for document length
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=[14,5])
cfd = nltk.ConditionalFreqDist(
(target, fileid[:4])
for fileid in inaugural.fileids()
for w in inaugural.words(fileid)
for target in ['america', 'citizen']
if w.lower().startswith(target))
cfd.plot()
Basic Corpus Functionality defined in NLTK:
Example | Description |
---|---|
fileids() | the files of the corpus |
fileids([categories]) | the files of the corpus corresponding to these categories |
categories() | the categories of the corpus |
categories([fileids]) | the categories of the corpus corresponding to these files |
raw() | the raw content of the corpus |
raw(fileids=[f1,f2,f3]) | the raw content of the specified files |
raw(categories=[c1,c2]) | the raw content of the specified categories |
words() | the words of the whole corpus |
words(fileids=[f1,f2,f3]) | the words of the specified fileids |
words(categories=[c1,c2]) | the words of the specified categories |
sents() | the sentences of the whole corpus |
sents(fileids=[f1,f2,f3]) | the sentences of the specified fileids |
sents(categories=[c1,c2]) | the sentences of the specified categories |
abspath(fileid) | the location of the given file on disk |
encoding(fileid) | the encoding of the file (if known) |
open(fileid) | open a stream for reading the given corpus file |
root | the path to the root of the locally installed corpus |
readme() | the contents of the README file of the corpus |
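As a quick sanity check of the table above, here is a minimal sketch applying a few of these functions to the Gutenberg corpus imported earlier (the choice of blake-poems.txt is arbitrary):
raw = gutenberg.raw('blake-poems.txt')      # raw text of one file
words = gutenberg.words('blake-poems.txt')  # its tokenized words
sents = gutenberg.sents('blake-poems.txt')  # its tokenized sentences
print(len(raw), len(words), len(sents))
print(gutenberg.readme()[:60])              # start of the corpus README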
nltk.corpus.cess_esp.words()
nltk.corpus.floresta.words()
nltk.corpus.indian.words('hindi.pos')
# nltk.corpus.udhr.fileids()
nltk.corpus.udhr.words('Javanese-Latin1')[11:]
from nltk.corpus import udhr
plt.figure(figsize=[12,6])
languages = ['Chickasaw', 'English', 'German_Deutsch',
'Greenlandic_Inuktikut', 'Hungarian_Magyar', 'Ibibio_Efik']
cfd = nltk.ConditionalFreqDist(
(lang, len(word))
for lang in languages
for word in udhr.words(lang + '-Latin1'))
cfd.plot(cumulative=True)
The above cumulative plot shows that most English words are seven letters or fewer.
from nltk.corpus import PlaintextCorpusReader
corpus_root = 'C:/nltk_data/own'
wordlists = PlaintextCorpusReader(corpus_root, '.*')
wordlists.fileids()
wordlists.words('one.txt')
A conditional frequency distribution needs to pair each event with a condition. So instead of processing a sequence of words [1], we have to process a sequence of pairs [2]:
text = ['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', ...]             # [1]
pairs = [('news', 'The'), ('news', 'Fulton'), ('news', 'County'), ...]       # [2]
# For each genre [2], we loop over every word in the genre [3], producing pairs consisting of the genre and the word [1]
genre_word = [(genre, word) #1
for genre in ['news', 'romance'] #2
for word in brown.words(categories=genre)] #3
len(genre_word)
genre_word[:4]
genre_word[-4:]
cfd = nltk.ConditionalFreqDist(genre_word)
# type the name of the variable to inspect it
cfd
# verify it has two conditions
cfd.conditions()
print(cfd['news'])
print(cfd['romance'])
cfd['romance'].most_common(10)
cfd['romance']['could']
In the plot() and tabulate() methods, we can optionally specify which conditions to display with a conditions parameter.
# (this call refers to the cfd of word lengths built from the udhr corpus above)
cfd.tabulate(conditions=['English', 'German_Deutsch'], samples=range(10), cumulative=True)
The bigrams() function takes a list of words and builds a list of consecutive word pairs.
sent = ['In', 'the', 'beginning', 'God', 'created', 'the', 'heaven',
'and', 'the', 'earth', '.']
list(nltk.bigrams(sent))
def generate_model(cfdist, word, num=15):
    for i in range(num):
        print(word, end=' ')
        word = cfdist[word].max()
text = nltk.corpus.genesis.words('english-kjv.txt')
bigrams = nltk.bigrams(text)
cfd = nltk.ConditionalFreqDist(bigrams)
cfd['living']
This program obtains all bigrams from the text of the book of Genesis, then constructs a conditional frequency distribution to record which words are most likely to follow a given word; e.g., after the word living, the most likely word is creature; the generate_model() function uses this data, and a seed word, to generate random text.
generate_model(cfd, 'living')
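Because generate_model() always takes the single most likely follower with max(), its output quickly settles into a repetitive loop. A purely illustrative variant (the name generate_model_random and the sampling strategy are not from the original) instead samples the next word in proportion to its bigram frequency:
import random
def generate_model_random(cfdist, word, num=15):
    # choose the next word at random, weighted by how often it follows the current word
    for i in range(num):
        print(word, end=' ')
        followers = list(cfdist[word].keys())
        counts = list(cfdist[word].values())
        word = random.choices(followers, weights=counts)[0]
generate_model_random(cfd, 'living')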
NLTK's Conditional Frequency Distributions: commonly-used methods and idioms for defining, accessing, and visualizing a conditional frequency distribution of counters.
Example | Distribution |
---|---|
cfdist = ConditionalFreqDist(pairs) | create a conditional frequency distribution from a list of pairs |
cfdist.conditions() | the conditions |
cfdist[condition] | the frequency distribution for this condition |
cfdist[condition][sample] | frequency for the given sample for this condition |
cfdist.tabulate() | tabulate the conditional frequency distribution |
cfdist.tabulate(samples, conditions) | tabulation limited to the specified samples and conditions |
cfdist.plot() | graphical plot of the conditional frequency distribution |
cfdist.plot(samples, conditions) | graphical plot limited to the specified samples and conditions |
cfdist1 < cfdist2 | test if samples in cfdist1 occur less frequently than in cfdist2 |
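As a brief sketch, the following exercises a few of these methods on the news/romance distribution built earlier (the sample words are chosen arbitrarily for illustration):
cfd = nltk.ConditionalFreqDist(genre_word)   # same cfd as above
print(cfd.conditions())                      # the two conditions
print(cfd['news']['could'])                  # frequency of one sample under one condition
cfd.tabulate(conditions=['news', 'romance'], samples=['could', 'love', 'said'])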
# not always correct
def plural(word):
    if word.endswith('y'):
        return word[:-1] + 'ies'
    elif word[-1] in 'sx' or word[-2:] in ['sh', 'ch']:
        return word + 'es'
    elif word.endswith('an'):
        return word[:-2] + 'en'
    else:
        return word + 's'
plural('fairy')
plural('woman')
NLTK's code for processing the Brown Corpus is an example of a module.
NLTK itself is a set of packages, sometimes called a library.
from text_proc import plural
plural('wish')
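For the import above to succeed, the plural() function has to live in a file named text_proc.py on Python's module search path (for example, the current working directory). A purely illustrative sketch that writes such a module programmatically; in practice you would simply save the function with an editor:
module_source = '''
def plural(word):
    if word.endswith('y'):
        return word[:-1] + 'ies'
    elif word[-1] in 'sx' or word[-2:] in ['sh', 'ch']:
        return word + 'es'
    elif word.endswith('an'):
        return word[:-2] + 'en'
    else:
        return word + 's'
'''
with open('text_proc.py', 'w') as f:   # creates text_proc.py in the current directory
    f.write(module_source)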
A lexicon, or lexical resource, is a collection of words and/or phrases along with associated information such as part of speech and sense definitions.
For example, if we have defined a text my_text, then vocab = sorted(set(my_text)) builds the vocabulary of my_text, while word_freq = FreqDist(my_text) counts the frequency of each word in the text. Both vocab and word_freq are simple lexical resources.
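As a minimal sketch, both resources can be built from the Emma text loaded at the start of this section:
my_text = nltk.corpus.gutenberg.words('austen-emma.txt')
vocab = sorted(set(my_text))        # the vocabulary of the text
word_freq = nltk.FreqDist(my_text)  # the frequency of each word
print(len(vocab), word_freq['Emma'])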
A lexical entry consists of a headword (also known as a lemma) along with additional information such as the part of speech and the sense definition. Two distinct words having the same spelling are called homonyms.
def unusual_words(text):
    text_vocab = set(w.lower() for w in text if w.isalpha())
    english_vocab = set(w.lower() for w in nltk.corpus.words.words())
    unusual = text_vocab - english_vocab
    return sorted(unusual)
# unusual_words(nltk.corpus.gutenberg.words('austen-sense.txt'))
print(['abbeyland', 'abhorred', 'abilities', 'abounded', 'abridgement', 'abused', 'abuses',
'accents', 'accepting', 'accommodations', 'accompanied', 'accounted', 'accounts',
'accustomary', 'aches', 'acknowledging', 'acknowledgment', 'acknowledgments', ...])
# unusual_words(nltk.corpus.nps_chat.words())
print(['aaaaaaaaaaaaaaaaa', 'aaahhhh', 'abortions', 'abou', 'abourted', 'abs', 'ack',
'acros', 'actualy', 'adams', 'adds', 'adduser', 'adjusts', 'adoted', 'adreniline',
'ads', 'adults', 'afe', 'affairs', 'affari', 'affects', 'afk', 'agaibn', 'ages', ...])
There is also a corpus of stopwords, that is, high-frequency words like the, to and also that we sometimes want to filter out of a document before further processing. Stopwords usually have little lexical content, and their presence in a text fails to distinguish it from other texts.
from nltk.corpus import stopwords
print(stopwords.words('english'))
# Let's define a function to compute what fraction of words in a text are not in the stopwords list:
def content_fraction(text):
    stopwords = nltk.corpus.stopwords.words('english')
    content = [w for w in text if w.lower() not in stopwords]
    return len(content) / len(text)
content_fraction(nltk.corpus.reuters.words())
puzzle_letters = nltk.FreqDist('egivrvonl')
obligatory = 'r'
wordlist = nltk.corpus.words.words()
[w for w in wordlist if len(w) >= 6
and obligatory in w
and nltk.FreqDist(w) <= puzzle_letters] #3
# It is trickier to check that candidate solutions only use combinations of the supplied letters,
# especially since some of the supplied letters appear twice (here, the letter v).
# The FreqDist comparison method [3] permits us to check that the frequency of each letter in the candidate word is less than or equal to the frequency of the corresponding letter in the puzzle.
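A quick, purely illustrative check of that comparison in isolation:
print(nltk.FreqDist('glover') <= puzzle_letters)   # True: every letter fits within the puzzle letters
print(nltk.FreqDist('lovers') <= puzzle_letters)   # False: 's' does not occur in the puzzle letters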
names = nltk.corpus.names
names.fileids()
male_names = names.words('male.txt')
female_names = names.words('female.txt')
[w for w in male_names if w in female_names]
print(['Abbey', 'Abbie', 'Abby', 'Addie', 'Adrian', 'Adrien', 'Ajay', 'Alex', 'Alexis',
'Alfie', 'Ali', 'Alix', 'Allie', 'Allyn', 'Andie', 'Andrea', 'Andy', 'Angel',
'Angie', 'Ariel', 'Ashley', 'Aubrey', 'Augustine', 'Austin', 'Averil', ...])
# It is well known that names ending in the letters a and e are almost always female
cfd = nltk.ConditionalFreqDist(
(fileid, name[-1])
for fileid in names.fileids()
for name in names.words(fileid))
plt.figure(figsize=[12,6])
cfd.plot()
A slightly richer kind of lexical resource is a table (or spreadsheet), containing a word plus some properties in each row. NLTK includes the CMU Pronouncing Dictionary for US English, which was designed for use by speech synthesizers.
entries = nltk.corpus.cmudict.entries()
len(entries)
# For each word, this lexicon provides a list of phonetic codes — distinct labels for each contrastive sound — known as phones
for entry in entries[42371:42379]:
    print(entry)
Observe that fire has two pronunciations (in US English): the one-syllable F AY1 R, and the two-syllable F AY1 ER0.
The symbols in the CMU Pronouncing Dictionary are from the Arpabet
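To see both entries for fire directly, one can filter the entries list (illustrative):
print([pron for word, pron in entries if word == 'fire'])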
for word, pron in entries:               #1
    if len(pron) == 3:                   #2
        ph1, ph2, ph3 = pron             #3
        if ph1 == 'P' and ph3 == 'T':
            print(word, ph2, end=' ')
[2] The above program scans the lexicon looking for entries whose pronunciation consists of three phones.
[3] If the condition is true, it assigns the contents of pron to three new variables ph1, ph2 and ph3.
# Here's another example of the same for statement, this time used inside a list comprehension
syllable = ['N', 'IH0', 'K', 'S']
print([word for word, pron in entries if pron[-4:] == syllable])
[w for w, pron in entries if pron[-1] == 'M' and w[-1] == 'n']
Notice that the one pronunciation is spelt in several ways: nics, niks, nix, even ntic's with a silent t
sorted(set(w[:2] for w, pron in entries if pron[0] == 'N' and w[0] != 'n'))
The phones contain digits to represent primary stress (1), secondary stress (2) and no stress (0). As our final example, we define a function to extract the stress digits and then scan our lexicon to find words having a particular stress pattern.
def stress(pron):
    return [char for phone in pron for char in phone if char.isdigit()]
[w for w, pron in entries if stress(pron) == ['0', '1', '0', '2', '0']]
print(['abbreviated', 'abbreviated', 'abbreviating', 'accelerated', 'accelerating',
'accelerator', 'accelerators', 'accentuated', 'accentuating', 'accommodated',
'accommodating', 'accommodative', 'accumulated', 'accumulating', 'accumulative', ...])
[w for w, pron in entries if stress(pron) == ['0', '2', '0', '1', '0']]
print(['abbreviation', 'abbreviations', 'abomination', 'abortifacient', 'abortifacients',
'academicians', 'accommodation', 'accommodations', 'accreditation', 'accreditations',
'accumulation', 'accumulations', 'acetylcholine', 'acetylcholine', 'adjudication', ])
We can use a conditional frequency distribution to help us find minimally-contrasting sets of words. Here we find all the p-words consisting of three sounds [2], and group them according to their first and last sounds [1].
p3 = [(pron[0]+'-'+pron[2], word) #1
for (word, pron) in entries
if pron[0] == 'P' and len(pron) == 3] #2
cfd = nltk.ConditionalFreqDist(p3)
for template in sorted(cfd.conditions()):
    if len(cfd[template]) > 10:
        words = sorted(cfd[template])
        wordstring = ' '.join(words)
        print(template, wordstring[:70] + "...")
# Rather than iterating over the whole dictionary, we can also access it by looking up particular words
prondict = nltk.corpus.cmudict.dict()
prondict['fire']
prondict['blog'] = [['B', 'L', 'AA1', 'G']]
prondict['blog']
We can use any lexical resource to process a text, e.g. to filter out words having some lexical property (like nouns), or to map every word of the text.
For example, the following text-to-speech function looks up each word of the text in the pronunciation dictionary.
text = ['natural', 'language', 'processing']
print([ph for w in text for ph in prondict[w][0]])
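Note that prondict[w] raises a KeyError for any word missing from the dictionary; a more defensive sketch simply skips unknown words (the last word here is made up):
text = ['natural', 'language', 'processing', 'blarghification']
print([ph for w in text if w in prondict for ph in prondict[w][0]])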
NLTK includes so-called Swadesh wordlists, lists of about 200 common words in several languages
from nltk.corpus import swadesh
print(swadesh.fileids())
swadesh.words('en')
print(['I', 'you (singular), thou', 'he', 'we', 'you (plural)', 'they', 'this', 'that',
'here', 'there', 'who', 'what', 'where', 'when', 'how', 'not', 'all', 'many', 'some',
'few', 'other', 'one', 'two', 'three', 'four', 'five', 'big', 'long', 'wide', ...])
fr2en = swadesh.entries(['fr', 'en'])
print([('je', 'I'), ('tu, vous', 'you (singular), thou'), ('il', 'he'), ...])
translate = dict(fr2en)
translate['chien']
de2en = swadesh.entries(['de', 'en']) # German-English
es2en = swadesh.entries(['es', 'en']) # Spanish-English
translate.update(dict(de2en))
translate.update(dict(es2en))
translate['Hund']
translate['perro']
# We can compare words in various Germanic and Romance languages:
languages = ['en', 'de', 'nl', 'es', 'fr', 'pt', 'la']
for i in [139, 140, 141, 142]:
    print(swadesh.entries(languages)[i])
Perhaps the single most popular tool used by linguists for managing data is Toolbox, previously known as Shoebox since it replaces the field linguist's traditional shoebox full of file cards.
A Toolbox file consists of a collection of entries, where each entry is made up of one or more fields. Most fields are optional or repeatable, which means that this kind of lexical resource cannot be treated as a table or spreadsheet
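NLTK can nevertheless read Toolbox files; for example, the sample Rotokas dictionary shipped with the NLTK data (assuming it has been downloaded) can be inspected as follows:
from nltk.corpus import toolbox
toolbox.entries('rotokas.dic')[0]   # one entry: a headword paired with its list of (field, value) pairs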
WordNet is a semantically-oriented dictionary of English, similar to a traditional thesaurus but with a richer structure. NLTK includes the English WordNet, with 155,287 words and 117,659 synonym sets. We'll begin by looking at synonyms and how they are accessed in WordNet.
from nltk.corpus import wordnet as wn
wn.synsets('motorcar')
The entity car.n.01 is called a synset, or "synonym set", a collection of synonymous words (or "lemmas")
wn.synset('car.n.01').lemma_names()
Each word of a synset can have several meanings, e.g. car can also signify a train carriage, a gondola, or an elevator car. However, we are only interested in the single meaning that is common to all words of the above synset.
Synsets also come with a prose definition and some example sentences:
wn.synset('car.n.01').definition()
wn.synset('car.n.01').examples()
Although definitions help humans to understand the intended meaning of a synset, the words of the synset are often more useful for our programs. To eliminate ambiguity, we will identify these words as car.n.01.automobile, car.n.01.motorcar, and so on. This pairing of a synset with a word is called a lemma.
wn.synset('car.n.01').lemmas() #1
wn.lemma('car.n.01.automobile') #2
wn.lemma('car.n.01.automobile').synset() #3
wn.lemma('car.n.01.automobile').name() #4
# Unlike the word motorcar, which is unambiguous and has one synset, the word car is ambiguous, having five synsets:
wn.synsets('car')
for synset in wn.synsets('car'):
    print(synset.lemma_names())
# For convenience, we can access all the lemmas involving the word car as follows.
wn.lemmas('car')
In the WordNet concept hierarchy, nodes correspond to synsets and edges indicate the hypernym/hyponym relation, i.e. the relation between superordinate and subordinate concepts.
WordNet makes it easy to navigate between concepts. For example, given a concept like motorcar, we can look at the concepts that are more specific; the (immediate) hyponyms.
motorcar = wn.synset('car.n.01')
types_of_motorcar = motorcar.hyponyms()
types_of_motorcar[0]
print(sorted(lemma.name() for synset in types_of_motorcar for lemma in synset.lemmas()))
We can also navigate up the hierarchy by visiting hypernyms. Some words have multiple paths, because they can be classified in more than one way. There are two paths between car.n.01 and entity.n.01 because wheeled_vehicle.n.01 can be classified as both a vehicle and a container.
motorcar.hypernyms()
paths = motorcar.hypernym_paths()
len(paths)
[synset.name() for synset in paths[0]]
[synset.name() for synset in paths[1]]
We can get the most general hypernyms (or root hypernyms) of a synset as follows:
motorcar.root_hypernyms()
Hypernyms and hyponyms are called lexical relations because they relate one synset to another. These two relations navigate up and down the "is-a" hierarchy. Another important way to navigate the WordNet network is from items to their components (meronyms) or to the things they are contained in (holonyms).
# the parts of a tree are its trunk, crown, and so on; the part_meronyms().
wn.synset('tree.n.01').part_meronyms()
# The substance a tree is made of includes heartwood and sapwood; the substance_meronyms()
wn.synset('tree.n.01').substance_meronyms()
# A collection of trees forms a forest; the member_holonyms()
wn.synset('tree.n.01').member_holonyms()
To see just how intricate things can get, consider the word mint, which has several closely-related senses. We can see that mint.n.04 is part of mint.n.02 and the substance from which mint.n.05 is made.
for synset in wn.synsets('mint', wn.NOUN):
    print(synset.name() + ':', synset.definition())
wn.synset('mint.n.04').part_holonyms()
wn.synset('mint.n.04').substance_holonyms()
There are also relationships between verbs. For example, the act of walking involves the act of stepping, so walking entails stepping. Some verbs have multiple entailments:
wn.synset('walk.v.01').entailments()
wn.synset('eat.v.01').entailments()
wn.synset('tease.v.03').entailments()
Some lexical relationships hold between lemmas, e.g., antonymy:
wn.lemma('supply.n.02.supply').antonyms()
wn.lemma('rush.v.01.rush').antonyms()
wn.lemma('horizontal.a.01.horizontal').antonyms()
wn.lemma('staccato.r.01.staccato').antonyms()
We have seen that synsets are linked by a complex network of lexical relations. Given a particular synset, we can traverse the WordNet network to find synsets with related meanings. Knowing which words are semantically related is useful for indexing a collection of texts, so that a search for a general term like vehicle will match documents containing specific terms like limousine.
Recall that each synset has one or more hypernym paths that link it to a root hypernym such as entity.n.01. Two synsets linked to the same root may have several hypernyms in common. If two synsets share a very specific hypernym — one that is low down in the hypernym hierarchy — they must be closely related.
right = wn.synset('right_whale.n.01')
orca = wn.synset('orca.n.01')
minke = wn.synset('minke_whale.n.01')
tortoise = wn.synset('tortoise.n.01')
novel = wn.synset('novel.n.01')
right.lowest_common_hypernyms(minke)
right.lowest_common_hypernyms(orca)
right.lowest_common_hypernyms(tortoise)
right.lowest_common_hypernyms(novel)
Of course we know that whale is very specific (and baleen whale even more so), while vertebrate is more general and entity is completely general. We can quantify this concept of generality by looking up the depth of each synset:
wn.synset('baleen_whale.n.01').min_depth()
wn.synset('whale.n.02').min_depth()
wn.synset('vertebrate.n.01').min_depth()
wn.synset('entity.n.01').min_depth()
Similarity measures have been defined over the collection of WordNet synsets which incorporate the above insight. For example, path_similarity assigns a score in the range 0–1 based on the shortest path that connects the concepts in the hypernym hierarchy (-1 is returned in those cases where a path cannot be found). Comparing a synset with itself will return 1. Consider the following similarity scores, relating right whale to minke whale, orca, tortoise, and novel. Although the numbers won't mean much, they decrease as we move away from the semantic space of sea creatures to inanimate objects.
right.path_similarity(minke)
right.path_similarity(orca)
right.path_similarity(tortoise)
right.path_similarity(novel)
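A compact way to see the decreasing trend at once (illustrative):
for synset in [minke, orca, tortoise, novel]:
    print(synset.name(), right.path_similarity(synset))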