This page is a good place to start when you want to learn the basics of how Narratory works. You will find that creating your first app is only a little more complex than writing the type of scripts shown on the Introduction page.
Note: We are using TypeScript for this since it is an expressive language, is easy to get started with, works great for declarative code (which is exactly what we will be doing - we will declare what the bot should say and listen to, and let the Narratory system do all the heavy lifting to get it all working) and gives good support and autocomplete for creators.
Hello world - a BotTurn
The Narratory example of a Hello world application is a simple BotTurn, i.e. an interaction where the bot just says "Hello world". It is literally no harder than defining a variable in TypeScript:
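A minimal sketch of what this might look like (the exact phrases here are our own examples, not from the Narratory docs):

```typescript
// A bot turn can be defined as a plain string...
const greeting = "Hello world"

// ...or as an array of phrases, from which Narratory picks one at random:
const variedGreeting = [
  "Hello world",
  "Hi there, world",
  "Greetings, world"
]
```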
The difference between the top two alternatives is that for the latter, Narratory will randomly pick one of the three phrases. Variation is extremely important to avoid boring and irritating users (it turns out that most humans are allergic to literal repetition), so make sure you add variation down the line.
You can also define statements like these using BotTurn objects. Here, we import the BotTurn interface from the narratory library to get help with formatting.
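A sketch of the object form, using a simplified stand-in for the BotTurn interface that is normally imported from the narratory package:

```typescript
// Simplified sketch of the BotTurn interface, for illustration only.
interface BotTurn {
  say: string | string[]
}

const greeting: BotTurn = {
  say: "Hello world"
}

const variedGreeting: BotTurn = {
  say: ["Hello world", "Hi there, world", "Greetings, world"]
}
```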
These two snippets are equivalent to the above, but allow us to add user answers, conditions and other parameters. Curious? Keep reading!
Asking questions - adding a UserTurn
The next step is to add answers to our BotTurn. To do this, we add an array of UserTurns to the user parameter of our BotTurn object. Each UserTurn consists of two parts: an intent containing information about what the user might say, and one or several followup BotTurns, which were covered in the previous section.
Note that TypeScript automatically expects a BotTurn when you are defining a followup BotTurn, so you only have to declare the types of top-level turns, i.e. `const myTopLevelTurn: BotTurn`.
Below, we add two potential answers to a question:
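A sketch with simplified interfaces (in a real project, BotTurn and UserTurn come from the narratory package; the phrases are our own examples):

```typescript
// Simplified sketches of Narratory's interfaces, for illustration.
interface BotTurn {
  say: string | string[]
  user?: UserTurn[]
}
interface UserTurn {
  intent: string[]
  bot: string | BotTurn
}

const question: BotTurn = {
  say: "Do you like pizza?",
  user: [
    // First answer: followup defined as a plain string
    { intent: ["Yes", "Yeah", "I do"], bot: "Great, me too!" },
    // Second answer: followup defined as a BotTurn with a say parameter
    { intent: ["No", "Nope", "Not really"], bot: { say: "Too bad, more for me!" } }
  ]
}
```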
Here, you can see that the user parameter takes an array (arrays are defined with [ ]) containing two UserTurn answers. Each UserTurn has an array of phrases (the phrases that the user might say - see the Natural language understanding docs for more on this) and a followup BotTurn, defined in the first case as a string and in the second as a BotTurn (with a say parameter).
Now, there are several reasons why you might want to save the phrases that the user says, i.e. the intent, as a separate variable. By doing this, you:
- Can use it in more places.
- Get a cleaner, easier-to-read script, since you usually want at least 5-15 examples per intent to cover user variation.
- Can use Entities (see Entities on NLU page).
Defining intents can be done in two ways, as simple arrays of strings or as Intent instances:
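A sketch of the two styles, using a simplified Intent interface (the real one is imported from the narratory package; the example phrases are our own):

```typescript
// Simplified sketch of the Intent interface, for illustration.
interface Intent {
  examples: string[]
}

// As a simple array of strings...
const yes = ["Yes", "Yeah", "Sure", "Absolutely", "I do"]

// ...or as an Intent instance with an examples parameter:
const no: Intent = {
  examples: ["No", "Nope", "Not really", "Absolutely not", "I don't"]
}
```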
This allows us to write our question a bit more neatly (with the third answer removed for clarity) and to reuse the intents for a second question.
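A sketch of such reuse, with simplified interfaces and our own example phrases:

```typescript
// Simplified sketches of Narratory's interfaces, for illustration.
interface BotTurn {
  say: string
  user?: UserTurn[]
}
interface UserTurn {
  intent: string[]
  bot: string
}

const yes = ["Yes", "Yeah", "Sure"]
const no = ["No", "Nope", "Not really"]

const pizzaQuestion: BotTurn = {
  say: "Do you like pizza?",
  user: [
    { intent: yes, bot: "Great, me too!" },
    { intent: no, bot: "Too bad!" }
  ]
}

// The same intents, reused for a second question:
const pastaQuestion: BotTurn = {
  say: "How about pasta?",
  user: [
    { intent: yes, bot: "Excellent!" },
    { intent: no, bot: "Okay, noted." }
  ]
}
```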
If you want, you can also define your answer UserTurns as variables that can then be reused. This can make sense in a bigger application where you want to reuse the same behavior (it could be a complex subdialog with many layers of BotTurns and UserTurns) without repeating yourself.
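A sketch of a reusable answer, again with simplified interfaces and our own phrases:

```typescript
// Simplified sketches of Narratory's interfaces, for illustration.
interface UserTurn {
  intent: string[]
  bot: string
}
interface BotTurn {
  say: string
  user?: UserTurn[]
}

// A reusable answer, defined once...
const thanks: UserTurn = {
  intent: ["Thank you", "Thanks", "Thanks a lot"],
  bot: "You are very welcome!"
}

// ...and reused in two different turns:
const firstTurn: BotTurn = { say: "Here is your receipt.", user: [thanks] }
const secondTurn: BotTurn = { say: "And here is a bonus coupon!", user: [thanks] }
```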
Narrative - a sequence of BotTurns
Now that we know how to create both BotTurns and answers, it's time to put them together into a narrative. This is easy: just as you can group strings and answers in an array, you group your BotTurns into an array of BotTurns, which is your narrative.
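A sketch of a narrative, assuming (as above) a simplified BotTurn type and that plain strings and BotTurn objects can be mixed:

```typescript
// Simplified sketch of the BotTurn interface, for illustration.
interface BotTurn {
  say: string | string[]
}

// The narrative: an array of turns that the bot walks through in order.
const narrative: (string | BotTurn)[] = [
  "Hi there!",
  { say: ["Welcome to my app.", "Great to have you here."] },
  "Let's get started."
]
```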
User Initiatives - globally available UserTurns
As you might have read in the introduction, in addition to your narrative you can add a set of User Initiative UserTurns that are active at all times. In other words, these are UserTurns - questions and triggers - that users should be able to say at any time in the dialog. Once such a turn is completed (typically, the bot answering an out-of-narrative question that the user asked), the bot will say one of the bridge phrases defined in the Agent (see Agent - adding all turns together, below) and will then continue where it was in the narrative.
Here, we show how to create UserTurns using inline examples, with intents and with followup BotTurns respectively.
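A sketch of the three styles, with simplified interfaces and our own example phrases:

```typescript
// Simplified sketches of Narratory's interfaces, for illustration.
interface Intent {
  examples: string[]
}
interface BotTurn {
  say: string
  user?: UserTurn[]
}
interface UserTurn {
  intent: string[] | Intent
  bot: string | BotTurn
}

// With inline examples:
const whoAreYou: UserTurn = {
  intent: ["Who are you?", "What is your name?"],
  bot: "I am a Narratory bot!"
}

// With a separately defined intent:
const helpIntent: Intent = { examples: ["Help", "I need help", "What can I do?"] }
const help: UserTurn = {
  intent: helpIntent,
  bot: "Just answer my questions and we'll be fine!"
}

// With a followup BotTurn:
const joke: UserTurn = {
  intent: ["Tell me a joke"],
  bot: { say: "I would, but I am serious by design." }
}

// Finally, grouped together as an array of all UserTurns:
const userInitiatives = [whoAreYou, help, joke]
```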
As you see, once you learn how the two building blocks, BotTurn and UserTurn, work, it is possible to combine them in any way you would like. On the last row above, you see how you can group the questions together by creating an array of all UserTurns.
BotInitiatives - BotTurns outside of Narrative
Sometimes you want to add BotTurns that aren't part of your Narrative. These turns are not executed automatically like narrative BotTurns are; instead, you manually have to use goto("BOT_INITIATIVE_LABEL") to execute the turn. Like all BotTurns, a bot initiative BotTurn can be a nested structure of dialog, i.e. have subsequent BotTurns or UserTurns and use goto to move to other turns. A BotInitiative BotTurn will return to the narrative if you don't have a goto defined.
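A sketch of the idea. Note that the exact parameter names (label, goto) are assumptions here for illustration - check the Narratory docs for the precise API:

```typescript
// Simplified sketches, for illustration; label and goto are assumed names.
interface BotTurn {
  say: string
  label?: string   // assumed: identifies the turn so it can be jumped to
  goto?: string    // assumed: jumps to the turn with the matching label
  user?: UserTurn[]
}
interface UserTurn {
  intent: string[]
  bot: string | BotTurn
}

// A bot initiative, outside the narrative, identified by a label:
const upsell: BotTurn = {
  label: "UPSELL",
  say: "By the way, would you like to hear about our premium plan?",
  user: [
    { intent: ["Yes", "Sure"], bot: "Great! It is twice as good." },
    { intent: ["No", "No thanks"], bot: "No problem." }
  ]
  // No goto defined here, so the bot returns to the narrative afterwards.
}

// Somewhere in the narrative, we jump to it explicitly:
const checkout: BotTurn = {
  say: "Your order is complete.",
  goto: "UPSELL"
}
```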
Bridges - when returning to narrative
When you are done executing out-of-narrative turns - userInitiatives or botInitiatives - it is usually a good idea to let the user know that you are returning to the Narrative. How much bridging is appropriate usually depends on how long the detour from the narrative has been. This is done by adding bridges to your agent. A bridge can be either a list of strings or a list of BotTurns, allowing you to build more complex behavior if you want.
A simple example with a list of strings:
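A sketch (the phrases are our own examples):

```typescript
// Bridges as a simple list of strings; one is said when returning
// to the narrative:
const bridges = [
  "Anyway, back to where we were.",
  "So, where were we? Right.",
  "Okay, moving on."
]
```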
A more advanced example, where the bot asks if the user has more questions if the last response was from a userInitiative, is shown here:
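A sketch of the idea. The cond parameter and the fromUserInitiative variable below are hypothetical names used only for illustration - see the Narratory docs for the real way to express this condition:

```typescript
// Simplified sketches, for illustration; cond and fromUserInitiative
// are hypothetical.
interface UserTurn {
  intent: string[]
  bot: string
}
interface BotTurn {
  say: string
  cond?: { [variable: string]: boolean }  // hypothetical condition format
  user?: UserTurn[]
}

const bridges: BotTurn[] = [
  {
    cond: { fromUserInitiative: true },  // hypothetical variable
    say: "Do you have any more questions?",
    user: [
      { intent: ["Yes", "Yeah"], bot: "Go ahead!" },
      { intent: ["No", "Nope"], bot: "Alright, back to where we were." }
    ]
  },
  { say: "Okay, where were we? Right." }
]
```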
Agent - adding all turns together
Now that we have all the building blocks figured out, we can create our first Agent. The Agent connects the narrative, userInitiatives and botInitiatives, tells the Narratory system which language should be used (currently Narratory supports only one language per Agent, but multi-lingual support will be released in the near future) and holds other settings for your bot.
To create an agent, you write as follows:
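A sketch, with simplified stand-ins for the Agent class and Language enum that the narratory package provides (the agent name and phrases are our own examples):

```typescript
// Simplified stand-ins, for illustration only.
enum Language {
  English = "English"
}

class Agent {
  constructor(public config: {
    agentName: string
    language: Language
    narrative: (string | object)[]
    userInitiatives?: object[]
    botInitiatives?: object[]
    bridges?: string[]
    narratoryKey?: string
  }) {}
}

const narrative = ["Hi there!", "Welcome to my first Narratory app."]

const agent = new Agent({
  agentName: "My first agent",
  language: Language.English,
  narrative
})

export default agent
```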
All Agent parameters
All agent parameters are described below:
| Parameter | Type | Description |
| --- | --- | --- |
| agentName | string | A name used to identify your agent |
| language | any of the supported languages available on the Language class | The language of your agent |
| narrative | an array of the various types of BotTurns | The sequence of BotTurns that represents the main path through your dialog |
| userInitiatives | an array of UserTurns, optional | UserTurns that are active at any time in your application |
| botInitiatives | an array of the various types of BotTurns, optional | BotTurns that aren't part of the narrative and that you manually have to goto |
| bridges | an array of strings or an array of BotTurns, optional | Bridges that are executed before returning to the Narrative after a detour into userInitiatives or botInitiatives |
| narratoryKey | string | The credentials for Narratory that you get by signing up. Paste the key into the referenced file |
| googleCredentials | Google Service Account JSON | The credentials for the Google project that the Narratory agent should be built to. Follow the Setup guide to create a Dialogflow project, create a JSON key and then paste it into the referenced file |
| defaultFallbacks | an array of strings | Strings used to override the default fallbacks. See docs |
| skipQueryRepeat | boolean (default false) | Skips adding the default intent handlers for users saying "Sorry, can you repeat?", which repeat what was just said. See docs |
| logWebhook | a valid URL | A URL that will receive logs according to the set logLevel |
| logLevel | "NONE", "FALLBACKS" or "ALL" | Decides when to log to your set logWebhook |
| maxMessagesPerTurn | 1 or 2, optional | Compresses all bot messages into one message, i.e. the response of one turn and the next bot initiative become one sentence. This setting is only effective for Google Assistant. |
General design recommendations
Designing for chat and voice is a fairly new field so we provide a few tips to keep the dialog engaging and functional:
Keep it snappy
Being concise and to-the-point is key when you build spoken interfaces, and largely also for chat. The reason is that users have short attention spans and, specifically for voice, synthetic voices are not always great at keeping attention. Rather have several shorter turns than one long monologue!
Add variation
Humans tend to never repeat themselves word for word, probably for good reason since it drives many of us mad. Few things break the illusion of intelligence as much as a bot repeating the same thing over and over. The counter-measure here is to add variation. It might seem tedious, but it will pay off already during testing since YOU will not be driven mad (at least not as fast as with no variation ;-)).
Prepare to iterate on user input
It is impossible to add all variation to user input from day one, so prepare to iterate fast, especially during the first days of an app's life.
Do you really need to know?
Sometimes in conversation, it is acceptable to move on without fully understanding each other. This can definitely be exploited in chat and voice apps by silently accepting answers when it is not crucial to get an actual, measurable answer. For example, it might not be crucial to know whether the user had a good day if your app's job is to sell flights. So, asking "Sorry, what did you say?" on unknown input to an introductory "Hi, how are you doing?" question might be unnecessary, and a smoother tactic could be to say "I see", "Interesting" or something else generic and then move on. See error handling for more info and other tactics on handling errors.