How often have you found yourself frustrated when answering a survey, because you were not presented with an option that covered your case or enabled you to raise your concern? Have you ever wished you could provide more detailed information?
If you answered yes to either of these, then the firm running the survey missed a chance to gather information that could improve your experience with them.
Why Should I Include More Open-Ended Survey Questions?
While multiple-choice responses are straightforward to analyze and reveal clear trends, they only leave room for answers that the survey writer anticipated. This is fine for some question types: yes or no, how many times, Likert ratings, or questions with only a few possible responses.
For other questions, like “how do you feel about our product?”, it’s nearly impossible to anticipate every adjective a person might want to use.
Furthermore, using multiple choice for such a question limits responses in a way that distorts the data. Forcing selections into predetermined categories can lead the survey taker into submitting a misleading response.
Multiple choice questions can help you identify a problem, but they rarely provide enough insight to help you solve the problem.
Open-ended questions allow respondents to provide answers in their own words, focusing on what is important to them. With no restrictions on their response, you can identify new issues that you would not have thought to include in your questions.
In addition, this kind of open text feedback will often contain information about context (in which circumstances an event occurred) and additional detail (exactly what happened).
The Challenge With Open-Ended Question Analysis
While open-ended questions can provide a wealth of meaningful information, it takes a great deal of time to analyze them properly. In fact, Indi Young, user researcher and founding partner of Adaptive Path, plans for 8 to 10 hours of analysis time for every hour of recorded interviews or text read at natural speed. We have found this estimate to be realistic.
Why does it take so long? Because you don’t know in advance what you are looking for: you will recognize the valuable nuggets when you see them, but only analyzing all of the data will reveal the patterns behind them. To do this, you have to:
- Go through every word in the responses
- Identify the topics that are mentioned
- Identify the labels people are using to distinguish those topics
- Map different labels people use for the same things
- Repeat the process for adjectives and modifiers
- Identify how respondents feel about these topics (positive, negative, or neutral)
- Discern contexts that clarify the meanings
- Extract relevant details that can be used in developing solutions
This process may seem like overkill; with a dozen or two short responses, most people can read through them and take away one or two key points. With hundreds of responses, however, or with longer answers where respondents go into detail, you rapidly accumulate more information than can be usefully processed merely by reading.
A structured analysis, aggregating the detailed responses from many participants, can reveal insights that might easily be missed in small samples. However, few firms have the resources to provide that kind of analysis on hundreds or even thousands of responses.
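As a rough illustration, the coding and aggregation steps above can be sketched in a few lines of Python. The topic map and responses here are hypothetical placeholders, not data from any real survey; a substring match stands in for the human (or NLP) judgment that maps each label to a canonical topic.

```python
from collections import Counter

# Hypothetical map from the varied labels respondents use
# to a single canonical topic (steps 2-4 in the list above).
TOPIC_MAP = {
    "price": "cost", "pricing": "cost", "expensive": "cost",
    "support": "service", "help desk": "service",
}

def code_responses(responses):
    """Tally canonical topics mentioned across free-text responses."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        # A real coding pass would use NLP or a human coder; simple
        # substring matching stands in for it here.
        hits = {topic for label, topic in TOPIC_MAP.items() if label in lowered}
        counts.update(hits)
    return counts

responses = [
    "The pricing felt expensive for what we got.",
    "Great help desk, but support was slow on weekends.",
]
print(code_responses(responses))  # Counter({'cost': 1, 'service': 1})
```

Even this toy version shows why the manual process scales badly: the topic map itself has to be discovered and refined by reading, which is exactly the 8-to-10x work.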
When to Incorporate NLP for Surveys
Fortunately, machine-learning algorithms have matured to the point where much of this analysis can be automated. The field behind this is called Natural Language Processing, or NLP for short. While it can’t do everything listed above, NLP can be of great assistance in two major areas: 1) Topic Analysis (what people are talking about), and 2) Sentiment Analysis (how they feel about those topics).
Using NLP to perform that preliminary work of topic and sentiment analysis can give the research team a great head start and allow them to instead focus on what human experts do best – assimilate those results and then look at the contextual information and details to glean valuable insights. Furthermore, it reduces human error and bias.
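Concretely, that head start looks something like this: once an NLP service has tagged each response with topics and a sentiment, a short roll-up lets researchers see at a glance which topics skew negative and deserve a closer read. The tagged data below is hypothetical.

```python
from collections import defaultdict

def sentiment_by_topic(tagged):
    """tagged: list of (topics, sentiment) pairs, one pair per response."""
    table = defaultdict(lambda: {"POSITIVE": 0, "NEGATIVE": 0, "NEUTRAL": 0})
    for topics, sentiment in tagged:
        for topic in topics:
            table[topic][sentiment] += 1
    return dict(table)

# Hypothetical NLP output for three survey responses.
tagged = [
    (["curriculum"], "POSITIVE"),
    (["curriculum", "tuition"], "NEGATIVE"),
    (["campus"], "NEUTRAL"),
]
summary = sentiment_by_topic(tagged)
print(summary["curriculum"])  # {'POSITIVE': 1, 'NEGATIVE': 1, 'NEUTRAL': 0}
```

The human experts then spend their time on the rows that look interesting, rather than on producing the table itself.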
A Real-World Example With Amazon Comprehend
During the Discovery phase of projects, Atlantic BT frequently uses surveys to conduct user research. Recently, we needed to analyze responses in a survey performed as a part of brand research for a pharmacy school.
In this instance, Atlantic BT was working with 800 responses from hundreds of participants. At an average of one minute per response, simply reading through all of them would take more than 13 hours, nearly two full working days. And that’s before performing any analysis; remember the point above about proper analysis taking 8 to 10 times longer? A fully manual analysis of that content would take roughly three weeks!
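The back-of-the-envelope arithmetic behind those estimates:

```python
responses = 800
minutes_per_response = 1

# Time just to read every response once:
reading_hours = responses * minutes_per_response / 60
print(round(reading_hours, 1))  # 13.3

# Apply Indi Young's 8-10x multiplier for a proper analysis:
low, high = round(reading_hours * 8), round(reading_hours * 10)
print(low, high)  # 107 133, roughly three 40-hour weeks
```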
Instead, we chose to use NLP to perform the basic topic and sentiment analysis, which allowed our research team to rapidly identify key areas to focus on and research more fully. We chose Amazon Comprehend as the NLP tool to use.
Why We Chose Amazon Comprehend
Amazon Comprehend is a service that uses machine learning to draw insights from text. You could use this tool to identify positive or negative connotation or to pick out specific phrases within responses. According to Amazon, full capabilities include:
- Identifying the language of text
- Extracting key phrases, places, people, brands, or events
- Understanding how positive or negative text is
- Analyzing text using tokenization and parts of speech
- Automatically organizing a collection of text files by topic
- Building custom sets of entities or text classification models that are unique to your organization
As an Amazon partner, Atlantic BT has found that Amazon Comprehend integrates well with our other toolsets, is continually being improved, and is very cost effective.
What We Learned Through NLP Analysis
Once the full analysis was complete, Atlantic BT’s user research team was able to draw conclusions that helped drive a website redesign and content strategy.
Eight major topics were identified as reasons for wanting to attend this pharmacy school. Further research, such as cross-validating these findings against search terms and Reddit discussions, enabled us to refine our insights around these topics. Understanding what motivates prospective students when selecting a school and program is critical to boosting the conversion rate of two low-volume, high-value transactions: applying to a school, and ultimately choosing it over the other schools that extended an offer.
Just a few examples of the insights gained include:
- Deep Motivations: While things such as national rankings are of obvious importance, we learned more about how motivations and decisions were shaped by a key influencer in the applicant’s life; the stories related in the responses were extremely helpful in identifying content topics which would resonate with and reinforce those motivations. These factors often influence decisions around programs and schools to which they will apply.
- Natural Environment: While not necessarily something one would think about in selecting a pharmacy school, the comments made it clear that proximity to a lake and other outdoor activities was a differentiator for many applicants. Factors like this can make a large difference in turning an offer into an acceptance – which is very important when most applicants have been accepted by multiple schools.
- Multiple Value Propositions: Students must now weigh a complex return on investment calculation, balancing their career options against student debt. A dual-degree program can save a year of education, while a variety of specialized programs can deepen expertise in the field of pharmacy and expand career opportunities. Responses identified these and more as important decision points.
These themes were leveraged to create engaging content matching the needs and motivations of prospective students, with the end goal of increasing quality applications and acceptances to the pharmacy school.