Since chat and voice apps mimic human speech, they are by nature more personal than most graphical applications, which makes working on the expressiveness of your apps very important.
If your chat will be used with voice, you have to care about pronunciation (which, despite the improvements in text-to-speech over the last years, still sometimes leaves a lot to be desired), tempo, pauses, how to handle errors (see error handling), and variation (add it from the start!).
In graphical chat you have to adapt to the available screen estate, and you can enrich the user experience with rich content like images, cards, lists and buttons.
This page walks through different ways of tweaking the expressiveness of your bot and how you can give it the illusion of a personality.
A crucial step in the design process, and the easiest way to personalize your app, is selecting what language your agent should speak (talk in and listen for). Currently, Narratory agents speak one language, but multi-lingual support will come in the future. The language selection is done in the Settings tab.
The full list of supported languages is:
When choosing a language, it is important to be mindful that not all built-in entities (see built-in entities in the NLU docs) are available in all languages. See Dialogflow's list of System entities to learn which entities are available in your selected language.
To add expressiveness and rich content to your dialog, we provide the RichSay block, which looks like this:
The RichSay block is the key to adding suggestions (also called "quick replies" or "quick buttons"), content (such as images, cards, lists and buttons), SSML (special markup for pronunciation in spoken interaction) and conditions (see Conditionals on RichSay blocks).
To add these options to your RichSay block, you press the cog-wheel on the block and add the respective inputs you want, as shown here:
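As a rough sketch, the options of a RichSay block can be thought of as fields on a single object. The field names below are illustrative assumptions for this sketch, not the verified Narratory data model:

```typescript
// Illustrative shape of a RichSay block's inputs (field names are
// assumptions for this sketch, not the verified Narratory API).
interface CardContent {
  title: string
  imageUrl?: string
  buttons?: { text: string; url: string }[]
}

interface RichSay {
  text: string                 // plain text, shown in chat and kept in logs
  ssml?: string                // optional markup used on spoken devices
  suggestions?: string[]       // quick-reply buttons in graphical clients
  content?: CardContent        // rich content such as a card or image
  cond?: Record<string, any>   // condition controlling when the turn is used
}

const welcome: RichSay = {
  text: "Welcome! Do you want to get started?",
  suggestions: ["Yes please", "Not now"],
}
```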
For clients with a GUI, for example websites, Facebook Messenger and Google Assistant on mobile phones, adding graphical content usually adds to the user experience, both by enriching the conversation and by making it easier for users to navigate it.
To add graphical content, you use the content input of the RichSay block and then add the appropriate content block from the output menu.
Note: Each platform renders rich content differently. Some platforms may not support all types of rich content, in which case it is ignored. Contact us if you have questions about compatibility with a specific platform.
Suggestions (also referred to as "quick replies" or "quick buttons") are buttons that show up in graphical chat clients and allow users to navigate a chat app in a menu-like fashion.
Important: Clicking a suggestion button is treated the same way as if the user had said exactly what the button says. To act on a click, you therefore need an intent with example phrases that match the button texts.
Here is an example with two suggestions for a simple yes/no question. Note that the intents have to include examples that are similar enough to the button texts "Yes of course" and "No I don't", respectively.
Optionally, if you have only one button, for example "Continue", you can use the Anything block to move on.
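The matching rule above can be sketched in code. This is a minimal illustration of the principle, with hypothetical structures, not the actual Narratory intent format:

```typescript
// Hypothetical sketch: a yes/no question with suggestion buttons.
// A click is delivered as if the user typed the button text, so the
// intents' example phrases must cover the button texts.
const yesIntent = { examples: ["yes", "yes of course", "sure"] }
const noIntent = { examples: ["no", "no I don't", "nope"] }

const question = {
  text: "Do you like pizza?",
  suggestions: ["Yes of course", "No I don't"],
}

// Clicking "Yes of course" should match yesIntent, because one of its
// example phrases equals the button text (compared case-insensitively).
const clicked = "Yes of course"
const matched = yesIntent.examples.some(
  (ex) => ex.toLowerCase() === clicked.toLowerCase()
)
```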
Images can be added using an Image URL and an alt-text:
Image upload will be added in the future
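A minimal sketch of what the image content described above might look like as data, assuming illustrative field names (not the verified Narratory API):

```typescript
// Illustrative image content: a URL plus an alt-text for accessibility.
// The URL and field names here are placeholders, not real Narratory values.
const image = {
  url: "https://example.com/images/pizza.png",
  alt: "A freshly baked pizza",
}
```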
Buttons are currently supported as part of Cards but not stand-alone. This will be added shortly!
Cards have a title, an optional image, and optionally one or several buttons.
This example has a title, an image and two buttons each with a link to a website.
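Such a card could be sketched as the following object. The structure and URLs are illustrative placeholders, not the verified Narratory card format:

```typescript
// Sketch of a card with a title, an image, and two link buttons
// (field names and URLs are placeholders, not the real Narratory API).
const card = {
  title: "Visit us",
  imageUrl: "https://example.com/images/store.png",
  buttons: [
    { text: "Our website", url: "https://example.com" },
    { text: "Opening hours", url: "https://example.com/hours" },
  ],
}
```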
To be added shortly!
Not a Narratory-specific option, but your selection of gender and voice is an important one when it comes to designing a persona that fits your brand and/or communication. Each platform has different voices that you configure separately. For Google's ecosystem you can configure the voice directly in Dialogflow.
For voice apps, pronunciation is naturally key to making a good impression on your users. Usually the text-to-speech does a good job, but sometimes it is necessary to tweak how certain words are pronounced using SSML (Speech Synthesis Markup Language). With SSML, you can make your agent sound more natural by altering how words are said.
For example, if you want "1" to be pronounced "first" instead of "one", you can use the following SSML:
<speak>I like the <say-as interpret-as="ordinal">1</say-as></speak>
Note: In SSML, you normally have to wrap your text in <speak> and </speak> tags. For your convenience, these can be omitted in Narratory.
If your bot is voice-only, you can use SSML directly in the normal text outputs, like this:
However, we recommend using the SSML option in RichSay blocks, since keeping SSML tags out of the text makes it easier to review logs. Here, you provide a normal text for readability and for text-chat devices, and an SSML input for spoken devices:
This SSML will pronounce "bar" as individual characters, i.e. "B A R", on spoken devices, but when the chat is shown in text it will show "BAR".
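The text/SSML pairing just described might look like the following sketch, again using assumed field names rather than the verified Narratory API (the <speak> wrapper can reportedly be omitted in Narratory, but is included here since it is required in standard SSML):

```typescript
// Illustrative RichSay with both plain text and SSML: `text` is shown in
// chat clients and logs, while `ssml` drives spoken devices.
// (Field names are assumptions for this sketch.)
const turn = {
  text: "I work at BAR",
  ssml: '<speak>I work at <say-as interpret-as="characters">bar</say-as></speak>',
}
```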
Each platform has its own voice-synthesis engine, which unfortunately means they support different subsets of the SSML standard. For this reason, it is currently recommended to review each platform's own SSML reference material, for example Google's SSML reference and Amazon's SSML reference.