
Natural language understanding

This page goes more in-depth into Narratory's NLU (Natural Language Understanding) capabilities, which today largely rest on the shoulders of giants (Dialogflow/Google is used under the hood).


Intents#

Intents are an important concept in dialog design and typically refer to what a user intends when they say something. For example, "yes", "I want ice cream" and "why not" all likely map to the intent "I want to buy ice cream" if the bot has just asked "Do you want ice cream?". This highlights the fact that intents are almost always context-dependent, since a "yes" would mean something completely different if the bot had asked "Do you hate ice cream?" or "Have I seen you before?".

Intents can be defined in two ways: as simple arrays of strings or as Intent instances. In both cases, the intent really only consists of a selection of phrases that mean the same thing, or at least should be perceived as the same thing by the bot. To illustrate this ambiguity: you might not actually care whether a user says "yes" or "maybe" to the question of whether they had a good day, in which case you could include phrases for both in one intent.

// As an array of strings
const yes = ["Yes", "I have"]

// As an instance of type Intent
const no: Intent = {
  examples: [
    "No",
    "I have not",
    "never",
    "nope",
    "you are my first",
    "not until today"
  ]
}

One of the special gifts of Narratory is that intents always have a context, since you use them in a UserTurn. The exception is the globally available intents in the User Initiatives, which are always active for Narratory apps.

A note on intent classification: Machine learning is used to determine if something a user says matches an intent. This process is called intent classification and means that you don't have to provide the exact phrases users will say, only enough for the machine learning classifier to generalize. Usually this means at least 5-15 examples for each intent as a starting point. A production-ready app might have up to 50 examples per intent, since in testing you will for sure realize that users use phrases you haven't thought about in the design phase. This is the main reason why testing is crucial to building a robust conversational application.
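To make the 5-15 example guideline concrete, here is a sketch of an affirmative intent with eight training phrases. The hadGoodDay name is hypothetical, and the local Intent interface is a minimal stand-in for Narratory's own type (in a real app you would import { Intent } from "narratory"):

```typescript
// Minimal stand-in for Narratory's Intent type, so the sketch is self-contained
interface Intent {
  examples: string[]
}

// Hypothetical intent: affirmative answers to "Did you have a good day?"
const hadGoodDay: Intent = {
  examples: [
    "yes",
    "yeah it was great",
    "pretty good",
    "I had a wonderful day",
    "better than expected",
    "really good actually",
    "not bad at all",
    "it was a good day"
  ]
}
```

The point is variation: short answers, full sentences and colloquial phrasings all belong in the same intent, so the classifier can generalize beyond the exact wordings listed.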

Intents are very powerful constructs for branching dialog, but to really understand users we also have to include entities, which are always used as part of intents. See the next section.


Entities#

Entities can, a bit simplified, be seen as data points inside of intents. For example, to really capture the meaning of the sentence "I want a banana", a good way of modelling it would be to create an intent called "IWantFruit" and an entity "Fruit", which in this case would be a banana.

Entities are typically enumerations (lists) of nouns, verbs or adjectives that are used to capture the data needed to create a meaningful dialog. If intents serve to determine the intention of the user with a specific phrase, entities are the data points the bot needs to continue the dialog in a way that makes sense. For example, if an intent captures users' attempts at ordering a flight, the relevant entities are typically a destination, a departure city, a date and so on.

Typical examples are fruits (with options or enums "banana", "apple", "pear", "orange" etc), city names (with enums "New York", "Stockholm", "Paris" etc) and sizes (with enums "small", "medium", "large").

Note: currently only enum entities (i.e. entities with a fixed list of words that need to match exactly) are supported. Entities based on regular expressions and composite entities - i.e. entities built up from other entities - will be supported in the near future.

Defining your own entities#

You define entities in Narratory as shown below. An entity is an object of type Entity with a name and a list of enums, with each enum having a name and an array of alternatives/synonyms.

import { Entity } from "narratory"

const virtualAssistant: Entity = {
  name: "virtualAssistant",
  enums: [
    { name: "alexa", alts: ["Amazon Alexa", "The amazon one"] },
    {
      name: "Google home",
      alts: ["Google home", "Google assistant", "assistant from google"]
    },
    { name: "siri" },
    { name: "Cortana" }
  ]
}

As shown above, for the virtualAssistant entity there are three different alternatives for saying Alexa, but only one for saying Siri.

Using entities in intents and in speech output#

As described above, entities capture data from user utterances matching intents. They are linked to an Intent as shown below, in an entities object.

import { Entity, Intent, BotTurn } from "narratory"

// Definition of our entity
const virtualAssistant: Entity = {
  name: "virtualAssistant",
  enums: [
    { name: "Alexa", alts: ["Amazon Alexa", "The amazon one"] },
    {
      name: "Google home",
      alts: ["Google assistant", "assistant from google"]
    },
    { name: "Siri" },
    { name: "Cortana" }
  ]
}

// Definition of our intent, with a reference to the entity above
const favAssistant: Intent = {
  entities: {
    myFavAssistant: virtualAssistant
  },
  examples: [
    "I love siri",
    "I talk to alexa at home",
    "I have a google home",
    "I like _myFavAssistant", // This example uses the entity parameter name instead of one of the examples
    "my computer has cortana",
    "google assistant on my phone"
  ]
}

// A bot turn where we expect a user to trigger the above intent, and where we use the entity value in our reply to the user
const favVirtualAssistant: BotTurn = {
  say: [
    "Which is your favorite virtual assistant?",
    "Who is your favorite virtual assistant?"
  ],
  user: [
    { intent: favAssistant, bot: `Oh, I love _myFavAssistant` },
    {
      intent: ["I don't have a favorite", "No one", "none", "They all stink"],
      bot: "Oh, well. That's okay. Maybe you'll like me more!"
    }
  ]
}

Above, you first see our entity, then our favAssistant intent, which has an entities object with the key myFavAssistant (this is what you name the entity parameter) and the value virtualAssistant (this refers to the entity type defined above). The parameter name myFavAssistant is needed for you to be able to use the data from the captured entity, which we do in the follow-up reply in the BotTurn. The captured values of parameters persist for the rest of the interaction, unless you reset them.

Beta feature: All variables get attached to the user object for easy reference. You can access a captured entity in a response as ${user.myEntityParamName}. This requires you to run npm run watch in your terminal to auto-populate the user object.

In the favAssistant intent, it is also important to notice that every example utterance includes either an alternative of the virtualAssistant entity or the parameter reference itself. This means that the Narratory system considers the entity mandatory for this intent to be classified. The next section describes optional entities.

Resetting captured entity values#

Sometimes you might want to reset captured values, for example when a user makes a subsequent request: they might have booked a flight and then want to book another one right after. Since captured entities are saved as variables, you reset them as you would reset any variable. See Resetting variables.
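As a sketch of what that can look like, assuming the set property on a BotTurn covered under Resetting variables (where assigning null clears a variable; the turn and parameter names here are hypothetical):

```typescript
// Local stand-in for the relevant part of Narratory's BotTurn type
interface BotTurn {
  say: string | string[]
  set?: Record<string, string | number | boolean | null>
}

// After a completed booking, clear the captured parameters so a
// follow-up booking starts from a clean slate (assumed reset semantics)
const bookingDone: BotTurn = {
  say: "Your flight is booked! Anything else?",
  set: {
    fromCity: null,
    toCity: null,
    tickets: null
  }
}
```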

Built in entities#

Thanks to using Dialogflow under the hood, Narratory comes with a set of built-in entities that allow you to extract everything from numbers and currencies to cities, music artists, countries and (for some countries) addresses. These can be accessed from the entities object that can be imported from the narratory library.

import { entities, Intent, BotTurn } from "narratory"

const peopleTravelling: Intent = {
  examples: ["_tickets", "_tickets people", "_tickets tickets"],
  entities: {
    tickets: entities.numberInteger
  }
}

const askTickets: BotTurn = {
  say: "How many people are travelling?",
  user: [
    { intent: peopleTravelling, bot: `How nice, _tickets people.` }
  ]
}

Above, you see that all of the peopleTravelling intent's examples contain references to "_tickets". This refers to the tickets key in the entities object just below, and has the prefix "_" to separate it from the literal text "tickets". In other words, the "_" signals to the system that you want to catch the numberInteger entity, matching for example the utterances "5", "5 people" and "5 tickets".

In the askTickets BotTurn at the bottom, where we use our intent, you can see that the follow-up reply uses the same _tickets notation. This is an alternative to using ${} (a beta feature). The benefit of the latter is that if your application grows, it might be hard to remember all the variables that have been set.

Important: Not all built-in entities are available in all languages. See Dialogflow's list of System entities to learn what entities are available in your selected language.

Wildcard entities#

A special entity is the built-in entity any, which captures anything. This can be handy in several cases: if you want your backend to parse the answer, if you are capturing content that is hard to parse using intents and entities (such as free-text feedback), or if the built-in system entities are not supported in your preferred language.

Please be careful using this entity together with other entities in the same intent: an example like "I think that _myAnyEntity _myOtherEntity" can be problematic, since the wildcard can make it hard to capture the _myOtherEntity entity.

Below, the built-in entities.any is used to capture an address in Swedish, since the built-in address entity unfortunately does not support Swedish at this point.

import { Intent, entities } from "narratory"

export const orderIntent: Intent = {
  examples: ["Jag vill beställa ett bud till _toAddress"],
  entities: {
    toAddress: entities.any
  }
}

Optional entities#

Sometimes an entity is not mandatory in an intent. For example, it could be a data point that you only use to enrich the dialog, or one that you want to populate at a later point in the dialog (for an advanced use of this, see the section on slot-filling). Entities do not have to match every example in an intent. For example, you might want to accept that users say "I want to book a flight", "I want to book a flight to Paris" and "I want to book a flight to Paris from Stockholm", and handle the missing data points later if users don't provide all the information. Or the application might not need all the data points, but the dialog might feel more alive if you catch the data and use it in replies.

const travel: Intent = {
  examples: [
    "book a flight",
    "book tickets to _toCity",
    "I want to book a flight",
    "I want to book tickets",
    "I want to book _tickets tickets to _toCity",
    "I want to book _tickets tickets from _fromCity to _toCity",
    "I want to fly from _fromCity to _toCity",
    "I want to fly to _toCity",
    "I want to fly from _fromCity",
    "I wanna fly to _toCity",
    "I wanna fly from _fromCity to _toCity"
  ],
  entities: {
    tickets: entities.numberInteger,
    fromCity: entities.geoCity,
    toCity: entities.geoCity
  }
}

const intro: BotTurn = {
  say: "Hi! What can I help you with?",
  user: [
    {
      intent: travel,
      bot: {
        say: ["Excellent", "Sounds great"]
      }
    }
  ]
}

In the above example, the travel intent has many examples, with or without entities. As you can see, entities.geoCity (a built-in entity for cities) is used twice here, for both the fromCity and the toCity parameter. This highlights the need to give each parameter a unique name. Naturally, you probably DO want users to fill in all the missing slots here; for more on this, see slot-filling below.

ListEntities - more of the same entity#

It's a common use-case to want to capture lists of entities. For example, a user might say "I want apples and bananas" or "I have problems with my stomach and I have a fever", which should capture a list of fruits and symptoms, respectively.

Adding ListEntities#

Narratory handles lists of entities automatically based on the examples you provide to an intent. If any of your examples has more than one instance of an entity enum, or of the entity handle, the entity will be considered a ListEntity for that intent.

Let's say that we have a product entity defined like this:

const product: Entity = {
  name: "product",
  enums: [
    { name: "apple" },
    { name: "banana" },
    { name: "milk" },
    { name: "egg" },
    { name: "couscous" }
  ]
}

Now, let's say we want users to be able to list several products at once. Here is an example using entity handles, i.e. _variableName. The product entity in this intent will be considered a ListEntity since there is at least one example with two occurrences of _product.

const addProductsToList: Intent = {
  entities: {
    product: product
  },
  examples: [
    "I want _product",
    "I want _product and _product",
    "add _product and _product",
    "_product, _product and _product"
  ]
}

For removing products we do the same, but here we list actual product values in the examples instead, for illustration purposes. The product entity in this intent will be considered a ListEntity since there is at least one example with two occurrences of enum values of product.

// An intent to remove products
const removeProductsFromList: Intent = {
  entities: {
    product // We can omit the ": product" here since the parameter name and the const product have the same name
  },
  examples: [
    "Remove apple",
    "Remove banana and egg",
    "I don't want egg",
    "Take away egg"
  ]
}

Using captured ListEntities in bot speech/text output or API calls#

When it comes to speech/text output of ListEntities, for example if you want the bot to say "Oh, apples and bananas sound delicious" or "I see, stomach problems and fever", ListEntities are handled the same way as lists of variables (see Lists of variables).
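As a minimal sketch (the turn name is hypothetical, and the local interface stands in for Narratory's BotTurn type), a reply can simply reference the ListEntity parameter with the same _param notation used earlier, and the captured values are filled in as described under Lists of variables:

```typescript
// Local stand-in for the relevant part of Narratory's BotTurn type
interface BotTurn {
  say: string | string[]
}

// If the user said "add apples and bananas", the _product parameter here
// is filled with the whole captured list of values
const confirmAdded: BotTurn = {
  say: "Okay, I added _product to your shopping list"
}
```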

Populating entities dynamically#

Sometimes you want to create entities dynamically instead of hardcoding all enums. This is done by using the type DynamicEntity instead of the normal Entity.

Defining a DynamicEntity#

When creating a DynamicEntity, you choose when the enums should be dynamically populated by setting its type: BUILD, i.e. when you build your agent by running the start or build commands; SESSION, i.e. once per session with the agent; or TURN, which fetches enums on every turn that uses an intent where the dynamic entity in question is present.

You can always combine preset enums with dynamically populated ones. If you don't want any preset enums, you just provide an empty array instead.

import { DynamicEntity } from "narratory"

// This entity is populated on agent creation/build and has one preset value
const dynamicGameEntity: DynamicEntity = {
  name: "game",
  enums: [{ name: "hockey", alts: ["ice hockey"] }],
  url: "https://URL-TO-FETCH-GAME-ENUMS-FROM",
  type: "BUILD"
}

// This entity is populated at runtime once per session and has no preset values
const dynamicAddressEntity: DynamicEntity = {
  name: "address",
  enums: [],
  url: "https://URL-TO-FETCH-ADDRESSES-FROM",
  type: "SESSION"
}

Like any entity, the above dynamic entities can then be used in Intents like below:

const whatGamesIntent: Intent = {
  entities: { game: dynamicGameEntity },
  examples: ["How did it go in _game", "Who won in _game?"]
}

const queryDeliveryAddress: Intent = {
  entities: { address: dynamicAddressEntity },
  examples: [
    "Can you send a package to _address?",
    "Can you deliver to _address?"
  ]
}

Finally, we could have two user questions that answer these either statically or (more likely) dynamically (as shown in Advanced Turns, DynamicBotTurn) based on the caught entities.

// Static answer
const whatGamesQuestion: UserTurn = {
  intent: whatGamesIntent,
  bot: {
    say: "Oh, I don't know how it went in _game unfortunately."
  }
}

// Dynamically fetched answer, with the say populated by the API as shown above
const queryAddressQuestion: UserTurn = {
  intent: queryDeliveryAddress,
  bot: {
    url: "https://URL-TO-FETCH-RESPONSE",
    params: ["address"]
  }
}

Setting up an endpoint for a dynamic entity#

To serve your dialog with dynamic data for an entity, you have to provide a publicly available endpoint that returns an array of enums (formatted the same as when declared in the dialog script, i.e. JSON objects with a name string and an optional alts array of strings).

Narratory will always pass along the sessionId when dynamic entities are fetched during runtime.

This can for example be done using Express, Node.js and TypeScript, like below:

export const todaysSpecialsEndpoint = async (req, res) => {
  const { sessionId } = req.body

  // Likely fetch from a DB, maybe personalizing the offering based on the sessionId
  const specials: Array<{
    name: string
    synonyms: string[]
  }> = await fetchSpecialsFromDb(sessionId)

  res.send( => {
      return {
        name: item.synonyms[0], // First synonym becomes our name (main identifier, used in speech)
        alts: item.synonyms // Any other synonyms are used only for NLU in this app
      }
    })
  )
}
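For reference, the JSON payload such an endpoint returns could look like the following (hypothetical lunch specials), typed here as the expected array of enums:

```typescript
// The shape the endpoint should return: an array of enums,
// each with a name string and an optional alts array of synonyms
type EnumPayload = { name: string; alts?: string[] }

const sampleResponse: EnumPayload[] = [
  { name: "pasta carbonara", alts: ["carbonara", "the pasta"] },
  { name: "tomato soup", alts: ["soup of the day"] },
  { name: "veggie burger" }
]
```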