By Yuli Vasiliev
October 18, 2019
Creating intents and entities is one of the most time-consuming tasks that Oracle Digital Assistant developers may need to accomplish when defining a new skill (chatbot). Of course, rather than creating intent and entity definitions one at a time in the Bot Builder, you can import CSV files containing the intent and entity definitions, respectively.
However, if you are creating a skill from scratch, you most likely don’t have those definitions in advance, even if you have a large volume of utterances—what the users say—gathered from real requests submitted by your customers. You still need to sort the utterances, based on the intent—user intention—behind them. And to create entity definitions, you’ll need to identify entities and look for synonyms for each entity—because an entity modifies an intent.
This is where using natural-language processing (NLP) tools such as spaCy comes in very handy, enabling you to perform these tasks programmatically and, as a result, automating the process of generating entity and intent definitions.
This article discusses NLP techniques you can use in a Python script that employs spaCy—the leading open source library for NLP in Python—to automatically derive the user’s intent from an utterance and then label that utterance accordingly. You will also see how to simplify the process of creating entity definitions, deriving necessary information from the customer input data.
Be warned, however, that the techniques discussed in the article are implemented in their simplest form, leaving you plenty of room for improvement. But even these simple implementations will be able to correctly process perhaps up to 90% of the request utterances in your customer input.
Extracting meaning from raw text can be quite challenging. You can’t rely on the meaning of individual words in a sentence, because the order of words may invert the whole point. Moreover, the same word may have different meanings, depending on the context in which it is used. To address this problem, NLP tools enable you to access linguistic features (also known as linguistic annotations) such as part-of-speech tags and syntactic dependency labels.
In essence, linguistic features are the attributes assigned to an individual token—a word, number, or punctuation mark—in a sentence. Each token in a sentence plays a specific role (syntactic function), which is characterized by the linguistic features associated with it. Thus, the same word can play different roles in different sentences and be assigned a different set of linguistic attributes. For example, the word pizza in the utterance The pizza is delightful is the subject, whereas the same word in I want a pizza is the direct object.
To recognize intent in an utterance, the first things to look at are the syntactic dependency labels assigned to the utterance’s words, searching in particular for those that are labeled as the direct object and the transitive verb.
To follow along with the examples in this article, download and install the following software on your PC:
Refer to the respective sites for general installation instructions. One important note: after installing spaCy, you must download a statistical model, using the command python -m spacy download model_name.
With Python, spaCy, and the model installed, download and extract the zip file for this article. The zip file includes the following files and content:
- entities_origin.csv, which contains some entity definitions without synonyms
- gen_entities.py, which processes the utterances in the utterances_ents.txt file, looking for synonyms for the entities taken from the entities_origin.csv file, and creates the entities.csv file to save the updated entity definitions
- gen_intents.py, which extracts user intents from the utterances in the utterances.txt file and creates the intents.csv file to save the generated intent definitions
- utterances.txt, which contains some utterances from which to generate intent definitions
- utterances_ents.txt, which contains some utterances from which to obtain synonyms for the entities in the entities_origin.csv file

Once you’re done with the software installations, the simplest way to evaluate the readiness of your environment is to run the following script—you can run it either from a Python session or as a separate script.
import spacy
nlp = spacy.load('en')
doc = nlp(u'I want a pizza.')
for token in doc:
    print(token.text, token.pos_, token.dep_)
In this script, you start by importing the spaCy library to get access to its functionality. Then you load a model package with the spacy.load() function, passing in the en shortcut link to load a statistical model pretrained for English. As a result, spacy.load() creates a text-processing pipeline, which you then apply to a sample sentence (it might also be multisentence text), creating a doc object instance. The doc object in spaCy is a container for token objects, so you can iterate over a doc in a loop, processing one token on each iteration. In this particular example, you simply output some attributes of each token: its text (text), part-of-speech tag (pos_), and dependency label (dep_).
If everything works correctly, you should see the following output (tabulated here for readability):
I PRON nsubj
want VERB ROOT
a DET det
pizza NOUN dobj
. PUNCT punct
As you might expect, there are many more token attributes that you can use to get insights into a sentence’s grammatical structure. That discussion, however, is beyond the scope of this article. For details, refer to the spaCy API documentation.
The customer input you may have collected in a file—for example, the utterances.txt file included with this article—may include hundreds of utterances, each of which you need to process programmatically. With spaCy you don’t have to perform any manual preprocessing to split input text into sentences (utterances). You can easily do it with the Doc.sents property, as illustrated in the code snippet below. (This technique is used in both the gen_entities.py and gen_intents.py Python scripts accompanying this article.)
...
f = open("utterances.txt", "rb")
contents = f.read()
doc = nlp(contents.decode('utf8'))
for sent in doc.sents:
    for token in sent:
        # process each token in a sentence
By default, spaCy performs sentence boundary detection based on the syntactic dependency parse rather than punctuation. For example, the following quick test shows that the submitted text has two sentences—even though the first one ends with no punctuation.
...
doc = nlp(u'I know it You know it.')
print(len(list(doc.sents)))
2
Sentence boundary detection enables you to achieve better accuracy with informal texts, which can be quite useful when processing customer input. Alternatively, you can define sentence segmentation by using a rule-based strategy, specifying a list of punctuation characters to mark sentence ends. For more information, see the “Sentencizer” section in the spaCy API documentation.
Splitting text into sentences (utterances) is a breeze with spaCy, but how do you extract intent from an utterance? This is where using linguistic features comes in. One way to capture the intent of an utterance is to use syntactic dependency labels.
When you need to determine a customer’s intent based on a request utterance, you can start by identifying the transitive-verb/direct-object pair in the utterance, because this syntactic pair is typically best suited to describe what the customer wants.
For example, this intent recognition approach will work perfectly for the utterances I want a pizza and I’d like to order a pizza, giving you the want pizza and order pizza verb/direct-object pairs, respectively.
With spaCy, extracting necessary elements from the syntactic dependency structure of an utterance can be as easy as iterating through the utterance’s tokens and checking their dependency labels. The following code snippet illustrates how you can extract the transitive verb and its direct object from an utterance and then merge them into an intent identifier.
...
for token in sent:
    if token.dep_ == 'dobj':
        intent = token.head.text.capitalize() + token.text.capitalize()
For example, when given the utterance I'd like to order a pizza, this code will generate the string OrderPizza as the value of the intent variable. In terms of syntactic dependency parsing, a transitive verb and its direct object are associated in a relationship in which the transitive verb is the syntactic parent, or head, and the direct object is the child. spaCy enables you to obtain the syntactic head of a token through its head attribute.
In English the same intent can be expressed in different ways and with different words. A customer may say, Give me a pie or Make me a pizza, expressing in both cases the same thing and the same intent. However, the intent extraction approach based on using the verb and its direct object will give you different intent identifiers for these utterances.
To work around the challenge of different words’ having the same meaning, you can use predefined lists of synonyms to replace the transitive verb and its direct object in an utterance with the words that can be used to generate an intent identifier recognizable by your application. For example, a list of verbs for the OrderPizza intent might include want, order, make, and give. When one of them—other than order—is found in an utterance, you can replace it with order in the intent identifier being generated. Another important thing to note about this list is that the verbs are not synonyms in the usual grammatical sense, but in the context of utterances for ordering a pizza, they can be used interchangeably.
The following code snippet from the gen_intents.py script illustrates how this synonym list strategy might be implemented programmatically:
...
intents = []
transVerbs = [('order', 'want', 'give', 'make'), ('cancel', 'drop', 'revoke', 'annul')]
directObjs = [('pizza', 'pie', 'pizzaz'), ('cola', 'soda')]
for sent in doc.sents:
    tverb = ''
    dobj = ''
    intent = ''
    for token in sent:
        if token.dep_ == 'dobj':
            tverb = token.head.text
            dobj = token.text
    verbSyns = [tpl for tpl in transVerbs if tverb in tpl]
    dobjSyns = [tpl for tpl in directObjs if dobj in tpl]
    if verbSyns != [] and dobjSyns != []:
        intent = verbSyns[0][0].capitalize() + dobjSyns[0][0].capitalize()
    if intent != '':
        intents.append((sent.text.rstrip(), intent))
...
The code defines the synonym lists for transitive verbs and direct objects, respectively, where each list includes a set of tuples to group words related to a specific intent together. If both the verb and its direct object extracted from an utterance are found in a tuple within a corresponding list, the first word in the tuple will become the intent identifier name.
Otherwise, the intent variable remains set to an empty string, indicating that the intent has not been recognized and you may want to try another approach.
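To see the tuple-lookup logic in isolation, here is a minimal sketch, independent of spaCy, that applies the same search to a few hypothetical verb/direct-object pairs extracted in advance (the helper function intent_from_pair is introduced here for illustration and is not part of the gen_intents.py script):

```python
# Synonym groups from the article; the first word in each tuple
# names the intent.
transVerbs = [('order', 'want', 'give', 'make'), ('cancel', 'drop', 'revoke', 'annul')]
directObjs = [('pizza', 'pie', 'pizzaz'), ('cola', 'soda')]

def intent_from_pair(tverb, dobj):
    # Find the synonym group (if any) containing each extracted word.
    verbSyns = [tpl for tpl in transVerbs if tverb in tpl]
    dobjSyns = [tpl for tpl in directObjs if dobj in tpl]
    if verbSyns and dobjSyns:
        # The first word of each group forms the intent identifier.
        return verbSyns[0][0].capitalize() + dobjSyns[0][0].capitalize()
    return ''  # intent not recognized

print(intent_from_pair('want', 'pie'))      # OrderPizza
print(intent_from_pair('drop', 'soda'))     # CancelCola
print(intent_from_pair('sell', 'tickets'))  # '' (unrecognized)
```

Because membership is tested against whole tuples, adding a new synonym is just a matter of extending the appropriate tuple.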
So far you have seen how to extract intent from utterances that are positive sentences, but negative utterances also happen. What typically makes an utterance negative is the presence of the negative word not, which follows an auxiliary verb (be, do, have) or a modal auxiliary (can, shall, must, might, will, and so on). Because negative utterances may be quite common in your customer input, you should definitely include them in a training set for your bot.
When it comes to negotiations with an ordering service, clients may use a phrase such as I don’t want (or I do not want) to cancel their orders, as in the following utterance: I don’t want this pizza. As you can see, this utterance has the same verb/direct-object pair as its positive counterpart but carries the reverse meaning, indicating that a customer wants to cancel a previously placed order.
spaCy’s parser marks not with the dependency label neg, so a simple way to determine whether an utterance is negative is to search for a token labeled neg and, if one is found, check whether its syntactic head is the main verb of the utterance—spaCy’s parser marks such a verb as ROOT. The following code snippet looks for a negative utterance that will cancel an order:
...
neg = False
for token in sent:
    if token.dep_ == 'neg' and token.head.dep_ == 'ROOT':
        # it's a negative utterance
        neg = True
if neg and 'Order' in intent:
    intent = intent.replace('Order', 'Cancel')
...
Interestingly, this negation check will also work for phrases such as I’m not hungry anymore, which may express an implied intent to cancel a previously placed order.
Customers may also express their intents implicitly. For example, someone may say, “I’m hungry,” in which case your chatbot skill should recognize the intent to order a pizza.
The I’m hungry utterance does not include a verb/direct-object pair. How can you determine the intent expressed in it, then? Well, you can rely on the semantic similarity calculated between each word in an utterance and the words a customer might use to express a specific intent.
For example, when checking for the OrderPizza intent in an utterance, you might assume that there could be a word in this utterance semantically related to, say, the word eating. Note that the accuracy of such an algorithm can be much lower than that of the algorithm based on the extraction of the verb/direct-object pair discussed previously. Nevertheless, recognizing implied intents by using semantic similarity can still be helpful in reducing your manual efforts when you’re creating intent definitions.
You might be wondering how to calculate the semantic similarity between two words. NLP models used by tools such as spaCy include word vectors that represent semantic meanings of natural-language words. Vectors of semantically related words are located close to each other, enabling you to programmatically measure the semantic similarity between two different words.
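Under the hood, this kind of word-vector comparison boils down to cosine similarity: the dot product of the two vectors divided by the product of their lengths. Here is a minimal sketch with made-up three-dimensional vectors (real spaCy word vectors have hundreds of dimensions, and the numbers below are purely illustrative):

```python
import math

def cosine_similarity(v1, v2):
    # Dot product of the vectors divided by the product of their norms.
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

# Hypothetical vectors: semantically related words point in similar
# directions, so their cosine similarity is close to 1.0.
hungry = [0.8, 0.1, 0.2]
eating = [0.7, 0.2, 0.1]
ticket = [0.1, 0.9, 0.0]

print(cosine_similarity(hungry, eating))  # high (close to 1.0)
print(cosine_similarity(hungry, ticket))  # much lower
```

A word compared with itself yields exactly 1.0, which is why that is the highest possible similarity value.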
The following code from the gen_intents.py script illustrates how you can use semantic similarity:
...
if intent == '':
    doc = nlp(u'eating')
    for token in sent:
        if token.similarity(doc[0]) > 0.2:
            if not neg:
                intent = 'OrderPizza'
            else:
                intent = 'CancelPizza'
In this particular example, you assume that the utterance must include a word whose semantic similarity to the word eating exceeds 0.2 to be considered related to the pizza-ordering (OrderPizza) intent. (The highest possible similarity value is 1.0.) For example, in the utterance I’m hungry, only the word hungry exceeds this threshold.
As you no doubt have realized, the accuracy of this method may be quite low. For that reason, you might not want to mix the definitions generated by this method with those generated by the method based on dependency labels; save them in different files instead.
You may still have utterances that the algorithms discussed have failed to classify, and this may be because an unclassified utterance does not express an intent relevant to your needs. You can mark such an utterance as expressing an unrecognized intent, as shown in this snippet.
...
if intent == '':
    intent = 'unrecognizedIntent'
For example, in the context of preparing intent definitions for this article’s examples, the utterance Do you sell tickets? should be classified as expressing an unrecognized intent.
After working through the utterances and generating the intents, I save the generated intent definitions to the intents.csv file, as follows:
import csv
with open('intents.csv', 'w') as out:
    csv_out = csv.writer(out)
    csv_out.writerow(['query', 'topIntent'])
    for row in intents:
        csv_out.writerow(row)
After execution of this code, the contents of my intents.csv file look like this:
query,topIntent
I want a Greek pizza.,OrderPizza
Could you give me a pie?,OrderPizza
I'm hungry.,OrderPizza
I'd like to order a pie.,OrderPizza
I don't want this pizza.,CancelPizza
I want to cancel my pizza.,CancelPizza
Do you sell tickets?,unrecognizedIntent
Your actual content will depend on your input.
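Because the file has a header row, a quick way to sanity-check the output is to read it back with Python’s csv.DictReader, which maps each row to the query and topIntent columns. A minimal sketch (the file contents are inlined here via io.StringIO for illustration; in practice you would pass an open('intents.csv') handle instead):

```python
import csv
import io

# A small excerpt of the generated file, inlined for the example.
data = io.StringIO(
    "query,topIntent\n"
    "I want a Greek pizza.,OrderPizza\n"
    "Do you sell tickets?,unrecognizedIntent\n"
)

# DictReader uses the first row as field names.
rows = list(csv.DictReader(data))
for row in rows:
    print(row['query'], '->', row['topIntent'])
```

This is also a convenient point to eyeball how many utterances ended up labeled unrecognizedIntent before importing the file into the Bot Builder.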
When you create entity definitions, you may also need to extract the synonyms your customers use instead of the values you have specified.
Suppose you want to get the synonyms for the types of pizza from the customer utterances you have collected. You have the following entity definitions in the entities_origin.csv file:
entity,value,synonyms
PizzaType,Mozzarella,
PizzaType,vegetarian,
You need to add synonyms to each entity, so that the definitions look like this after processing:
entity,value,synonyms
PizzaType,Mozzarella,Mozzarela:Mozarela
PizzaType,vegetarian,veg:veggie
For reference, here is the set of utterances from the utterances_ents.txt file that inspires the synonyms:
I want a veg pizza.
Do you make veggie pizzas?
Could you make me a Mozarela pizza?
I want two veg pizzas.
What kind of pizzas do you make?
Give me a Mozzarela pizza.
Assume that the pizza type is specified as a modifier of the noun pizza(s) found in an utterance. (The other utterances will simply be ignored without any impact on the algorithm’s accuracy.)
For example, you might have the following utterance: I want two veg pizzas, in which two and veg are modifiers of the noun pizzas. The first one represents a number, and the second is an adjective. In this particular example, you are interested in the second. spaCy enables you to distinguish between them by using part-of-speech tags. For this article’s examples, you’ll need to find a pizza’s adjective, noun, and proper noun modifiers. For example, Mozzarella will be detected as a proper noun and vegetarian as an adjective.
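The filtering rule can be pictured without spaCy by listing the left modifiers of pizzas in I want two veg pizzas as hypothetical (text, part-of-speech) pairs, using the coarse-grained tags spaCy would assign:

```python
# Hypothetical left modifiers of the noun "pizzas", as (text, pos) pairs.
modifiers = [('two', 'NUM'), ('veg', 'ADJ')]

# Keep only adjectives, nouns, and proper nouns; the numeral is dropped.
pizza_types = [text for text, pos in modifiers
               if pos in ('ADJ', 'NOUN', 'PROPN')]

print(pizza_types)  # ['veg']
```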
Now let’s look at how this approach can be implemented programmatically. The following code from the gen_entities.py script iterates over each utterance, looking for the word pizza (or pizzas). When the code finds pizza, it obtains pizza’s left syntactic children (its modifiers), picking up only those that are an adjective, noun, or proper noun (if any). The code then adds the chosen words to the pizza_types_syns list, excluding duplicates.
...
f = open("utterances_ents.txt", "rb")
contents = f.read()
doc = nlp(contents.decode('utf8'))
pizza_types_syns = []
for sent in doc.sents:
    for token in sent:
        if token.lemma_ == 'pizza':
            pizza_types_syns = pizza_types_syns + \
                [modifier.text for modifier in token.lefts
                 if modifier.pos_ == 'ADJ'
                 or modifier.pos_ == 'NOUN'
                 or modifier.pos_ == 'PROPN']
pizza_types_syns = list(dict.fromkeys(pizza_types_syns))
All that is left is to find synonyms for the pizza types in the just-created pizza_types_syns list. For simplicity in this example, consider two words synonyms if their first three characters are the same. Also, exclude full matches to avoid adding the names of the pizza types themselves to a synonym list. Finally, write each updated entity definition to a CSV file. Here is how this is implemented in the gen_entities.py script:
import csv
with open('entities_origin.csv') as csvfile, \
     open('entities.csv', 'w') as out:
    csv_out = csv.writer(out)
    csv_out.writerow(['entity', 'value', 'synonyms'])
    entityreader = csv.reader(csvfile, delimiter=',')
    headers = next(entityreader, None)
    for row in entityreader:
        synonyms = [synonym for synonym in pizza_types_syns
                    if synonym[0:3] == row[1][0:3] and row[1] != synonym]
        row[2] = ':'.join(synonyms)
        csv_out.writerow(row)
As a result, entity definitions for pizza types—with synonyms—should be saved in the entities.csv file.
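You can check the three-character prefix rule in isolation with this example’s data. The values Mozzarella and vegetarian come from entities_origin.csv; the candidate list below mimics what a run of gen_entities.py might extract from the utterances, and the helper function synonyms_for is introduced here for illustration only:

```python
# Candidates extracted from the utterances (hypothetical run).
pizza_types_syns = ['veg', 'veggie', 'Mozarela', 'Mozzarela']

def synonyms_for(value):
    # Two words are treated as synonyms if their first three
    # characters match; full matches are excluded.
    return [s for s in pizza_types_syns
            if s[0:3] == value[0:3] and s != value]

print(':'.join(synonyms_for('Mozzarella')))  # Mozarela:Mozzarela
print(':'.join(synonyms_for('vegetarian')))  # veg:veggie
```

The order of the synonyms in the joined string simply follows the order in which they were extracted from the utterances.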
In this article, you’ve seen how you can use the customer inputs you have at your disposal to your advantage, automatically generating the intent and entity definitions for your bot. You saw how to use spaCy—the leading open source library for NLP—to gather valuable semantic information from plain text, with just a few lines of Python code.
LEARN more about spaCy.
LEARN more about Oracle Digital Assistant.
DOWNLOAD the zip file with the data and code for this article.
Illustration by Wes Rowell
Yuli Vasiliev is a programmer, freelance author, and consultant currently specializing in open source development; Oracle database technologies; and, more recently, natural-language processing (NLP).