An exploration of ways to deliver personalised settings to hearing aid users.
Concurrent think-aloud testing
Five-Act Interview from Day 5 of the Design Sprint
Rasa & Rasa X, ngrok
Hearing aid users have a hard time adapting to a new device and often find certain sounds harsh or uncomfortable in some environments. Some background is needed here:
Oticon, in partnership with Eriksholm and the Technical University of Denmark, has investigated the need for personalised hearing aid fitting. [1] They want to build an application that detects the environment the user is in and tweaks the hearing aid settings accordingly. They discovered that hearing aid users prefer settings learned in real-world listening environments. To take the project further, they want to understand what the best interface between the application and the users is. Manual parameter tweaking - using a visual interface with sliders - is a tedious and demanding task because of the very complex search space the user has to traverse. [2] This is why a conversational interface was suggested.
The goal of the project was to research whether a conversational interface is suitable for delivering personalised fittings to hearing aid users.
We had no prior domain knowledge, so we started by reading scientific papers on hearing aid fitting, mapping what users say to the actions audiologists take in each case. We then interviewed two audiologists to verify our findings and understand their view of the process.
Based on initial research, we wrote the:
The language related to hearing is quite abstract, and it would be almost impossible to cover all possible user intents without the environment detection feature. We therefore decided to restrict the intents to the two most common ones. This did not affect our goal, because the goal was not to build a fully functional conversational interface. The two chosen intents are: “my own voice is too loud” and “the environment sounds too loud”.
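As an illustration, the two intents could be declared as NLU training data. This is a sketch in Rasa's 2.x YAML format; the intent names and example phrasings are our assumptions, not the project's exact training data:

```yaml
# nlu.yml - hypothetical training examples for the two supported intents
nlu:
- intent: own_voice_too_loud
  examples: |
    - my own voice is too loud
    - I sound too loud to myself
    - my voice booms when I speak
- intent: environment_too_loud
  examples: |
    - the environment sounds too loud
    - everything around me is too loud
    - there is too much background noise
```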
Following this, we created the chatbot flows, taking into account all the answers the audiologist could give. If no good hearing aid fitting is found for the current problem during the chat, the bot books an appointment with the audiologist and (in the ideal case) saves the characteristics of the current environment.
NOTE: When building a chatbot from scratch, interactive mockups are normally needed to test it without actually building it. However, Rasa X allows easy deployment of the chatbot and early testing with users (it offers conversation annotation and interactive learning). We therefore chose Rasa and Rasa X for development, to reach the goal with less effort.
Rasa is an open-source framework that offers the infrastructure needed to build an AI assistant. It has an NLU (natural language understanding) component, where the user intents are defined, and a Core component, made of the domain - where all the utterances (responses, actions, etc.) of the bot are defined - and the stories, which are the flows of the chat. Rasa also offers Rasa X, “a tool designed to make it easier to deploy and improve Rasa-powered assistants by learning from real conversations.” - Rasa Blog.
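To make that structure concrete, here is a minimal, hypothetical excerpt of a Rasa domain in the 2.x YAML format; the response names and wording are illustrative assumptions, not the assistant's actual content:

```yaml
# domain.yml - minimal excerpt: intents the bot knows and responses it can give
intents:
  - own_voice_too_loud
  - environment_too_loud

responses:
  utter_ask_environment:
  - text: "What kind of environment are you in right now?"
  utter_suggest_setting:
  - text: "Let's try lowering the overall volume a little. Does it sound better now?"
  utter_default:
  - text: "Sorry, I didn't understand. Can you say that again, please?"
```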
The flows were implemented as stories in Rasa. The user intents were described with training examples that show the model how to interpret user messages. In the domain, the actions and utterances (templates, answers) of the chatbot were defined. For example, when the chatbot does not understand what the user is saying (the intent recognition confidence falls below a set threshold), a fallback action is triggered that replies with a default message (e.g. “Sorry, I didn't understand. Can you say that again, please?”).
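A story stitching these pieces together, plus the fallback behaviour just described, could look like the sketch below (Rasa 2.x format, where the confidence threshold is set on the FallbackClassifier and a rule maps the special nlu_fallback intent to the default reply). The story steps, the follow-up intent, and the threshold value are assumptions for illustration:

```yaml
# stories.yml - one simplified flow (illustrative)
stories:
- story: environment too loud
  steps:
  - intent: environment_too_loud
  - action: utter_ask_environment
  - intent: inform_environment      # hypothetical follow-up intent
  - action: utter_suggest_setting

# config.yml - pipeline excerpt; fallback kicks in below the confidence threshold
pipeline:
- name: FallbackClassifier
  threshold: 0.4

# rules.yml - reply with the default message on low NLU confidence
rules:
- rule: fallback on low confidence
  steps:
  - intent: nlu_fallback
  - action: utter_default
```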
After the chatbot was ready, we deployed it using Rasa X and ngrok and asked people to test it. This way, we didn't have to anticipate every possible situation and train the model extensively before deploying; we could review all the conversations and use interactive learning to improve the assistant.
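As a sketch of this setup: Rasa X in local mode serves a web UI (on port 5002 by default), and ngrok can expose it to remote testers. Assuming ngrok's v2-style YAML configuration, a tunnel definition could look like this:

```yaml
# ngrok.yml - expose the locally running Rasa X instance (v2-style config)
authtoken: YOUR_NGROK_TOKEN   # placeholder
tunnels:
  rasa-x:
    proto: http
    addr: 5002                # Rasa X local-mode default port
```

Testers then open the public ngrok URL in a browser and talk to the assistant, while every conversation is recorded in Rasa X for annotation and interactive learning.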
This step was full of iterations and training: we tested, reviewed, and repaired mistakes extensively until the chatbot was ready to test with real users.
The next step was to validate the flow from the audiologists' point of view (is the way the bot is constructed similar to the way audiologists usually answer patients?) and to understand in which phase of the hearing aid journey the assistant would be needed.
The validation with audiologists is valuable because the chatbot will act as a “virtual audiologist”, therefore the experience should be similar to the one received from a professional audiologist.
Testing with audiologists.
Findings:
The last step was to validate the chatbot with real hearing aid users. The goal of this testing was to find out whether the chatbot helps them reach a good setting, how they perceive the interaction, and whether they would use a tool like this.
Testing with hearing aid users.
How did the test happen? The tester sat in the middle of the sound lab, holding the phone with the chatbot, while one of the interviewers sat next to them. The other interviewer played the sentences from the control room, one by one, in batches of five, and the tester was asked to repeat them. After each batch, the user could talk to the bot if there was a problem, and the interviewer sitting next to the user changed the hearing aid settings according to what the bot said. We measured in how many of the situations the chatbot helped the participants reach a good setting (based on what they heard and how we ranked it). Two of the situations were designed so that the user heard almost nothing except background noise, in order to test the fallback to the audiologist.
Findings:
Hearing aid users would use the chatbot to help them in difficult situations, and they consider a chatbot a better solution than an interface with sliders. However, they would use it mainly in the beginning, during the adaptation phase; after that, they would expect the hearing aids to just work.
The go-to-market strategy would be for audiologists to suggest that new hearing aid users (or users who switched to a new hearing aid) try the chatbot during the adaptation phase.
Some of the features that would need to be added to the chatbot are:
Plan your interviews well ahead of time. Although we planned ours in advance, some of the users cancelled at the last moment. We aimed to test with five hearing aid users because, as Jakob Nielsen notes, the number of usability problems found decreases drastically after the fifth tester.