
Basic turns

This page is a good place to start when you want to learn the basics of how Narratory works. You will find that creating your first app is only a little more complex than writing the type of scripts shown on the Introduction page.

Note: We are using TypeScript for this since it is easy to get started with, works great for declarative code (which is exactly what we will be doing - we declare what the bot should say and listen for, and let the Narratory system do all the heavy lifting to make it work) and gives creators good support and autocomplete.

Hello world - a BotTurn

The Narratory example of a Hello world application is a simple BotTurn, i.e. an interaction where the bot just says "hello world". It is literally no harder than defining a variable in TypeScript:

const greeting = "Hello world" // As a string
const withVariations = ["Hello world", "Hi world", "Howdy world"] // As an array of strings

The difference between the two alternatives is that for the latter, Narratory will pick one of the three phrases at random. Variation is extremely important to avoid boring and irritating users (it turns out that most humans are allergic to repeating themselves verbatim), so make sure you add variation down the line.

You can also define statements like these using BotTurn objects. Here, we import the BotTurn interface from the narratory library to get help with formatting.

import { BotTurn } from "narratory" // We import the type BotTurn from our library to get help with formatting
// As an instance of type BotTurn
const greeting: BotTurn = {
  say: "Hello world"
}

// As an instance of type BotTurn, with an array of strings
const withVariations: BotTurn = {
  say: ["Hello world", "Hi world", "Howdy world"]
}

These two snippets are equivalent to the above, but allow us to add user answers, conditions and other parameters. Curious? Keep reading!
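For instance, a BotTurn object can carry a cond parameter that gates when the turn is played (conditions appear again in the bridges example further down). A minimal sketch, using a local stand-in for the BotTurn type and a hypothetical hasMetBefore variable:

```typescript
// Minimal local stand-in for narratory's BotTurn type, for illustration only
type BotTurn = {
  say: string | string[]
  cond?: { [key: string]: boolean | string } // conditions that gate when the turn is played
}

// A hypothetical turn that only plays when a context variable is set
const returningGreeting: BotTurn = {
  cond: { hasMetBefore: true },
  say: ["Welcome back!", "Nice to see you again"]
}
```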

Asking questions - adding a UserTurn

The next step is to add answers to our BotTurn. To do this, we add an array of UserTurns to the user parameter of our BotTurn object. Each UserTurn consists of two parts: an intent containing information about what the user might say, and one or several followup BotTurns - which were defined in the previous section.

Note that TypeScript automatically expects a BotTurn when you are defining a followup BotTurn, so you only have to declare the types of top-level turns, i.e. const myTopLevelTurn: BotTurn.

Below, we add two potential answers to a question:

const question: BotTurn = {
  say: "Have you talked to a bot before?",
  user: [
    { intent: ["Yes", "I have"], bot: "Oh, fun!" },
    {
      intent: ["No", "I have not"],
      bot: {
        say: [
          "Wow, I'm glad to be your first!",
          "I can't believe I'm your first!"
        ]
      }
    }
  ]
}

Here, you can see that the user parameter takes an array (arrays are defined with []) containing two UserTurn answers. Each UserTurn has an array of phrases (the phrases the user might say - see the Natural language understanding docs for more on this) and a followup BotTurn, defined in the first case as a string and in the second as a BotTurn (with a say parameter).

Now, there are several reasons why you might want to save the phrases that the user says, the intent, as a separate variable. By doing this, you:

  1. Can use it in more places.
  2. Get a cleaner, easier-to-read script, since you usually want at least 5-15 examples per intent to cover user variation.
  3. Can use Entities (see Entities on the NLU page).

Defining intents can be done in two ways, as simple arrays of strings or as Intent instances:

// As an array of strings
const yes = ["Yes", "I have"]

// As an instance of type Intent
const no: Intent = {
  examples: [
    "No",
    "I have not",
    "never",
    "nope",
    "you are my first",
    "not until today"
  ]
}

This allows us to write our question a bit more neatly (with the third answer removed for clarity) and to reuse the intents for a second question.

const metBeforeQuestion: BotTurn = {
  say: "Have you talked to a bot before?",
  user: [
    { intent: yes, bot: "Oh, fun!" },
    {
      intent: no,
      bot: [
        "Wow, I'm glad to be your first!",
        "I can't believe I'm your first!"
      ]
    }
  ]
}

const queryGoodDay: BotTurn = {
  say: "Did you have a good day?",
  user: [
    { intent: yes, bot: "That makes me glad!" },
    { intent: no, bot: "I'm sad to hear that" }
  ]
}

If you want, you can also define your answer UserTurns as variables that can then be reused. This makes sense in a bigger application where you might want to reuse the same behavior (it could be a complex subdialog with many layers of BotTurns and UserTurns) and avoid repeating yourself.

const haveMetBefore: UserTurn = {
  intent: yes,
  bot: "Oh, fun!"
}

const haventMetBefore: UserTurn = {
  intent: no,
  bot: ["Wow, I'm glad to be your first!", "I can't believe I'm your first!"]
}

const metBeforeQuestion: BotTurn = {
  say: "Have you talked to a bot before?",
  user: [haveMetBefore, haventMetBefore]
}

Narrative - a sequence of BotTurns

Now that we know how to create both BotTurns and answers, it's time to put them together into a narrative. This is easy: similarly to how you can group strings into an array, you group your BotTurns into an array of BotTurns, which is your narrative.

/* Bot turns */
const greeting = "Hi there"
const metBeforeQuestion: BotTurn = {
  say: "Have you talked to a bot before?",
  user: [
    { intent: yes, bot: "Oh, fun!" },
    { intent: no, bot: "Wow, I'm glad to be your first!" }
  ]
}
const goodbye = ["good bye", "bye"]
// Creating the narrative of the three BotTurns
const narrative = [greeting, metBeforeQuestion, goodbye]

User Initiatives - globally available UserTurns

As you might have read in the introduction, in addition to your narrative you can add a set of User Initiative UserTurns that are active at all times. In other words, these are UserTurns consisting of questions and triggers that users should be able to say at any point in the dialog. Once such a turn is completed (typically, the bot answering an out-of-narrative question that the user asked), the bot will say one of the bridge phrases defined in the Agent (see Agent - adding all turns together below) and then continue where it was in the narrative.

Here, we show how to create UserTurns using inline examples, intents and followup BotTurns, respectively.

// A UserTurn with inline examples
const nameQuery: UserTurn = {
  intent: ["What is your name", "who are you", "what can I call you"],
  bot: "I don't have any name yet, unfortunately."
}

// A UserTurn with an intent
const ageQuestionIntent: Intent = {
  examples: ["how old are you", "what is your age"]
}

const ageQuery: UserTurn = {
  intent: ageQuestionIntent,
  bot: "Oh, age is a complex question for a bot"
}

// With a followup question
const costQuery: UserTurn = {
  intent: ["How much does it cost?", "What is the price?"],
  bot: {
    say: "Which model do you mean?",
    user: [
      {
        intent: ["The large one", "The big one"],
        bot: ["It is 50€", "the price is 50€"]
      },
      { intent: ["The small one", "The little one"], bot: "It is 30€" }
    ]
  }
}

// A UserTurn used to do branching in the narrative
const talkAboutSomethingElse: UserTurn = {
  intent: [
    "I want to talk about food instead",
    "I am hungry, can we talk about food"
  ],
  bot: {
    say: "Absolutely. Never do anything on an empty belly",
    goto: "FOOD_SECTION"
  }
}

// Create your user initiatives using the above UserTurns/questions
const userInitiatives = [nameQuery, costQuery, ageQuery]

As you can see, once you learn how the two building blocks, BotTurn and UserTurn, work, you can combine them any way you like. On the last row above, you see how the questions are added together by creating an array of all UserTurns.

BotInitiatives - BotTurns outside of Narrative

Sometimes you want to add BotTurns that aren't part of your narrative. These turns will not be executed automatically like narrative BotTurns are; instead, you manually have to use goto("BOT_INITIATIVE_LABEL") to execute the turn. Like all BotTurns, a bot initiative BotTurn can be a nested structure of dialog, i.e. have subsequent BotTurns or UserTurns and use goto to move to other turns. A BotInitiative BotTurn returns to the narrative if no goto is defined.

// In botInitiatives.ts
import { BotTurn } from "narratory"

const outOfDomainTurn: BotTurn = {
  label: "OUT_OF_DOMAIN_TURN",
  say:
    "This will only be executed if you manually goto OUT_OF_DOMAIN_TURN from another turn. Since there is no goto defined in this turn, the bot will go back to the narrative after saying this."
}

const myBotInitiatives = [outOfDomainTurn]
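To reach a bot initiative like the one above, some other turn has to jump to it with goto. Below is a sketch of such a jump, reusing the OUT_OF_DOMAIN_TURN label from the example; the types here are minimal local stand-ins for narratory's interfaces, and the turn itself is hypothetical:

```typescript
// Minimal local stand-ins for narratory's types, for illustration only
type UserTurn = { intent: string[]; bot: BotTurn | string }
type BotTurn = { label?: string; say: string; goto?: string; user?: UserTurn[] }

// A hypothetical narrative turn that jumps to the bot initiative
const handOver: BotTurn = {
  say: "Anything else on your mind?",
  user: [
    {
      intent: ["Yes, something unrelated"],
      // goto sends the dialog to the turn labeled OUT_OF_DOMAIN_TURN
      bot: { say: "Alright.", goto: "OUT_OF_DOMAIN_TURN" }
    }
  ]
}
```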

Bridges - when returning to narrative

When you are done executing out-of-narrative turns - userInitiatives or botInitiatives - it is usually a good idea to let the user know that you are returning to the narrative. How you do this usually depends on how long the detour from the narrative has been. You do this by adding bridges to your agent. A bridge can be either a list of strings or a list of BotTurns, allowing you to build more complex behavior if you want.

A simple example with a list of strings:

const bridgesAsStrings = ["So", "Where were we", "Now"] 

A more advanced example, where the bot asks if the user has more questions if the last response was from a userInitiative, is shown here:

import { BotTurn, ANYTHING } from "narratory"
import * as nlu from "./nlu"

const bridgeBotTurns: BotTurn[] = [
  {
    cond: {
      lastTurnType: "userInitiative" // A system variable that is set based on the last BotTurn
    },
    say: "Do you have any other questions for me?",
    user: [
      {
        intent: nlu.Yes,
        bot: {
          say: "Ok, what is your question?",
          repair: true // Using repair here to stay in this turn, allowing our userInitiatives to catch the question
        }
      },
      { intent: nlu.No, bot: "Okay, great. Let's get back" },
      { intent: ANYTHING, bot: "Sorry, I didn't get that. Let's move on" }
    ]
  },
  {
    say: ["Let's get back", "So, where were we"]
  }
]

Agent - adding all turns together

Now that we have all the building blocks figured out, we can create our first Agent. The Agent connects the narrative, userInitiatives and botInitiatives, informs the Narratory system which language should be used (currently Narratory supports only one language per Agent, but multilingual support will be released in the near future) and sets other settings for your bot.

To create an agent, you write as follows:

import { Agent, Language } from "narratory"
import { narrative } from "./narrative"
import { userInitiatives } from "./userInitiatives"
import { botInitiatives } from "./botInitiatives"
import { bridges } from "./bridges"

const agent: Agent = {
  agentName: "My first app",
  language: Language.English,
  narrative: narrative,
  userInitiatives, // Shorthand syntax when the variable is named the same as the parameter
  botInitiatives,
  bridges,
  narratoryKey: require("./narratory_credentials.json"),
  googleCredentials: require("./google_credentials.json")
}

All Agent parameters

All agent parameters are described below:

| Parameter | Type | Description |
| --- | --- | --- |
| agentName | string | A name used to identify your agent |
| language | any of the supported languages available on the Language class | The language of your agent |
| narrative | an array of the various types of BotTurns | The sequence of BotTurns that represent the main path through your dialog |
| userInitiatives | an array of UserTurns, optional | UserTurns that will be active at any time in your application |
| botInitiatives | an array of the various types of BotTurns, optional | BotTurns that aren't part of the narrative and that you manually have to goto |
| bridges | an array of strings or an array of BotTurns, optional | Bridges that will be executed before returning to the narrative after a detour in userInitiatives or botInitiatives |
| narratoryKey | string | The credentials for Narratory that you get by signing up. Paste the key into the referenced file, i.e. narratory_credentials.json in the root directory of your app. |
| googleCredentials | Google Service Account JSON | The credentials for the Google project that the Narratory agent should be built to. Follow the Setup guide to create a Dialogflow project, create a JSON key and then paste it into the referenced file, i.e. google_credentials.json in the root directory of your app. |
| defaultFallbacks | an array of strings | Strings used to override the default fallbacks. See docs |
| skipQueryRepeat | boolean (default false) | Skips adding default intent handlers for users saying "Sorry, can you repeat?" that repeat what was just said. See docs |
| logWebhook | a valid URL | A URL that will receive logs according to the set logLevel |
| logLevel | "NONE", "FALLBACKS" or "ALL" | Decides when to log to your set logWebhook |
| maxMessagesPerTurn | 1 or 2, optional | Compresses all bot messages into one message, i.e. the response of one turn and the next bot initiative would become one sentence. This setting is only effective for Google Assistant. |
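To illustrate a few of the optional parameters together, here is a sketch of an agent configured with custom fallbacks and logging. The webhook URL and fallback phrases are placeholders, and the type below is a minimal local stand-in for the real Agent interface:

```typescript
// Minimal local stand-in for the Agent parameters described above
type Agent = {
  agentName: string
  defaultFallbacks?: string[]
  skipQueryRepeat?: boolean
  logWebhook?: string
  logLevel?: "NONE" | "FALLBACKS" | "ALL"
  maxMessagesPerTurn?: 1 | 2
}

const tunedAgent: Agent = {
  agentName: "My first app",
  defaultFallbacks: ["Sorry, I missed that", "Could you say that again?"],
  logWebhook: "https://example.com/narratory-logs", // placeholder URL
  logLevel: "FALLBACKS", // only log turns where the bot fell back
  maxMessagesPerTurn: 2
}
```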

General design recommendations

Designing for chat and voice is a fairly new field, so we provide a few tips for keeping the dialog engaging and functional:

Keep it snappy

Being concise and to the point is key when you build spoken interfaces, but largely also for chat. The reason is that users have short attention spans and, specifically for voice, synthetic voices are not always great at keeping attention. Rather have several shorter turns than one long monologue!

Add variation

Humans tend never to repeat themselves word for word, and probably for good reason, since it drives many of us mad. Few things break the illusion of intelligence as much as a bot repeating the same thing over and over. The countermeasure is to add variation. It might seem tedious, but it will pay off already during testing, since YOU will not be driven mad (at least not as fast as with no variation ;-)).

Prepare to iterate on user input

It is impossible to cover all variation in user input from day one, so prepare to iterate fast, especially during the first days of an app's life.

Do you really need to know?

Sometimes in conversation, it is acceptable to move on without fully understanding each other. This can definitely be exploited in chat and voice apps by silently accepting answers when it is not crucial to get an actual, measurable answer. For example, it might not be crucial to know whether the user had a good day if your app's job is to sell flights. So, asking "Sorry, what did you say?" on unknown input to an introductory "Hi, how are you doing?" question might be unnecessary, and a smoother tactic could be to say "I see", "Interesting" or something else generic and then move on. See error handling for more info and other tactics on handling errors.
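The "silently accept" tactic can be sketched with a catch-all answer, similar to the ANYTHING intent used in the bridges example above. The types and the ANYTHING constant below are minimal local stand-ins for illustration (in narratory, ANYTHING is imported from the library):

```typescript
// Minimal local stand-ins for narratory's types, for illustration only
const ANYTHING = "ANYTHING"
type UserTurn = { intent: string[] | typeof ANYTHING; bot: string | string[] }
type BotTurn = { say: string; user: UserTurn[] }

const smallTalk: BotTurn = {
  say: "Hi, how are you doing?",
  user: [
    { intent: ["Good", "Great", "Fine"], bot: "Glad to hear it!" },
    // Catch-all: acknowledge whatever was said and let the narrative move on
    { intent: ANYTHING, bot: ["I see", "Interesting"] }
  ]
}
```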