Mobile Surveys: Breaking the Paradigms of Market Research’s Past


By Bob Yazbeck, Vice President, Digital Methods & Tonya Jiles, Senior Project Director, Digital Methods, Gongos, Inc.

Cultural trends and the high rate of smartphone adoption require a deeper understanding of our mobile world and how it impacts marketing research in the 21st century.  In order to lay out an enduring case for mobile in our world—specifically with the almighty survey—we have to methodically build a story one chapter at a time.  Let’s review the work that has helped usher in 2013. In doing so, we may be able to rewrite the future. But before beginning that task, we dug deep into market research’s past to figure out why the heck our industry crowned the 20-minute survey king.

With global smartphone penetration projected to reach 60 to 70 percent in the next five years (Taipei Times, January 2013), the expectation of gathering information quickly has eroded society’s willingness to devote a significant amount of time to any one task. Yet researchers still spend a healthy amount of time trying to guide clients toward pivotal decisions using data collected from lengthy, static, and linear online surveys.

In writing this article, we set out to answer a question: will mobile be the great enabler that helps us move beyond the traditional market research survey? But first we need to explore what got us here in the first place.

What popularized the 20-minute online survey?

Our survey mindset was developed when the vast majority of survey research was conducted via mail, so it is rooted in decades-old thinking. The strategy of that era was to squeeze value out of every penny spent producing the survey, from typesetting and printing to postage and delivery. The vehicle was often an eight-page, double-sided survey crammed tight with every possible question. By doing this, we pretty much ensured that completion took a respondent an average of 20 dedicated minutes.

When the phone survey came along in the 1950s, we “reached out and touched someone” because everyone had a phone, but the idea of squeezing out every last bit of information remained. We held tight to the notion that once someone was on the phone and willing to complete a survey, they’d willingly endure 20 minutes or more of verbal inquiries. Phone surveys continued to foster the “ask everything now because we can’t go back and ask it later” mentality.

Fast forward to the online revolution and the ability to quickly and conveniently collect data through bits and bytes. This digitized version of the mail survey gave us tighter control over questionnaires and shorter fieldwork. As researchers, we quickly adapted to the power of the Internet, pushing survey length to 30 and sometimes 40 minutes with little downside or risk of dropout, as the novelty was, well, new, and the sample was both plentiful and relatively inexpensive.

We now live in a world where iOS overshadows OS/2, and we have to confront the growing mismatch between the smart devices of the future and the almighty 20-plus-minute survey. In fact, our sample suppliers indicate that 20-25% of participants now choose to take surveys on mobile devices.

Does 20 minutes still fly in 2013?

Seismic changes are occurring in the way we communicate. Modern-day interaction is speedier, shorter, and depends heavily on symbols and images. As human conversations condense from emails to instant messages to texts to tweets, we’ve adopted a 140-character-or-less mindset. The extraordinary popularity of the social media platforms Pinterest and Instagram is a resounding example of this. Sharing and conversing in pictures eliminates language barriers and brings a whole new meaning to the phrase “a picture is worth a thousand words.”

Consumer attention deficit also shapes the way we express ourselves. Today’s consumers juggle friends, family, marketers, and researchers, all demanding their attention. Multitasking becomes a way of life, forcing them to offer snippets of time to multiple tasks rather than focusing their energy on any one thing.

The effects of these twin factors, condensed conversations and consumer attention deficit, are visible throughout our culture.

So while we can adapt online surveys to capture data on mobile devices, how can researchers expect today’s time-crunched consumers to spend 20 uninterrupted minutes providing their opinions to us?  If our society is changing, then the way we ask questions needs to change too.

We need to step away from the notion that the 20-minute survey is still king.

Putting mobile to the test

With this thought in mind, we set out to test the validity and viability of smartphone-enabled surveys and the role mobile devices play in authenticity.  Let’s take a look at where the first chapter began:

1) Mobile surveys: viability and data validity

In 2010, we put mobile survey-taking to the test and pushed it to its boundaries. We were more than pleased to find that mobile surveys did, in fact, yield results similar to traditional online surveys. Additionally, we found that scale usage maxes out at five points, that respondents provided “full” open-ended verbatims as measured by character length, and that response rates are comparable. We even found mobile surveys to be fit for complex quantitative studies such as segmentation, including a MaxDiff and a short, simple conjoint.

While this was excellent news, our validation was tempered by respondent endurance.  It seemed 10 minutes was the maximum limit for a mobile survey, hardly ideal in an industry where surveys of 20 minutes have been crowned king.

2) Mobile: anonymity and authenticity

In early 2012, we looked at history through another dimension. Taking our cues from the principles of existential psychology, we explored the steps that human beings take to develop and express their “version of self.” Within this discipline, the idea of conformity comes heavily into play. We worked from the hypothesis that social networking heightens self-censorship in the service of social conformity. More importantly, we wanted to examine whether the smartphone altered the equation.

We designed a study that broke social networking into two domains:

1)  Public platforms (e.g., Facebook and Twitter)
2)  Private platforms (e.g., online research communities)

Both qualitatively and quantitatively, our results showed that people tend to censor themselves on public platforms. In other words, their “Facebook version of self” may not always convey their “real (or full) version of self.” We also found that public forum postings tended to skew toward big-ticket items such as cars and travel, rather than candy and its wrapper.

Yet our research also indicated that people censored themselves less when using smartphones to interact on these same public platforms. They were prompted to act on their thoughts and impulses in the moment (hence becoming more authentic) when communicating about their surroundings and, more importantly, about products and services.

The influence of the smartphone, viewed through the lens of public and private social behavior, reaffirmed for us its serious implications for marketing research.

3) Mobile: can it break the 20-minute survey paradigm?

Most of us have already unwittingly witnessed the success of snack-size research.  For nearly a decade now, online research communities have fielded short, focused polls and discussions for members to respond to when and how they are available.  The consumer insights gleaned from these bite-sized research efforts have proven invaluable and potent.

Armed with the confidence of “bite-sized” research and the confirmation of mobile viability/validity and authenticity under our belts, we set out to see if we could break the century-old monarchy of the 20-minute survey—the so-called benchmark of research.

Yet how do we take one long and linear survey, section it into bite-size chunks, and still draw the same valid conclusion and story for our clients?

The quick scientific answer: data fusion, which combines multiple partial data files into one complete set.

The study:

We separated a previously conducted study into three individual “snack-sized” survey modules to be completed on mobile devices. A series of questions overlapped across all modules to ensure continuity and to provide a set of attitudinal “hook” questions. The sample was divided into three test groups:

  1. Online Control
  2. Online Module
  3. Mobile Module

The control sample completed a traditional 25-minute online survey composed of the questions asked across all three survey modules, while each modular group was assigned one of the three 10-minute modules.
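To make the modular design concrete, here is a minimal sketch of how such a split might be organized in Python. All question IDs and hook names are invented for illustration; the article does not publish the actual questionnaire.

```python
# Hypothetical illustration of the modular split; every identifier is invented.
HOOKS = ["att_value", "att_novelty", "att_risk"]  # attitudinal hooks asked in every module

MODULES = {
    "module_1": HOOKS + ["q1", "q2", "q3"],  # each module runs roughly 10 minutes
    "module_2": HOOKS + ["q4", "q5", "q6"],
    "module_3": HOOKS + ["q7", "q8", "q9"],
}

# The online control group answers everything in a single ~25-minute survey.
CONTROL = HOOKS + [q for qs in MODULES.values() for q in qs if q not in HOOKS]
```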

After completing each module, modular respondents were given the option to complete a second (and subsequently a third, essentially completing the entire survey). To our surprise, 72% of respondents ultimately completed all three modules, which proved helpful during the analysis and suggested that modular research may not actually require three times the sample to conduct.

The analysis:

Both Gongos Research and SSI independently put the data through rigorous analysis, using two distinct data fusion methods: hot decking and data stitching.

Following is a brief description of each method and its advantages.

Hot decking imputes, or scientifically estimates, how a respondent might have replied to an unanswered question based on the characteristics of the individual and corresponding similarities to the remaining sample. This approach creates a complete set of data based on a combination of actual responses and statistically estimated responses.

Hot decking’s main benefit over data stitching is the relatively smaller sample required for analysis. This benefit shouldn’t be underestimated, as it eases hesitancy driven by sampling budgets.

Because new data points are estimated, sample size is artificially inflated. This could invite skepticism about the validity of the technique, since inflation artificially reduces error in statistical significance testing. However, when properly executed with careful sampling and modular survey design, this approach yields reasonable results and descriptive statistics. Additionally, we verified the effectiveness and accuracy of this approach using K-means cluster analysis, a method largely unaffected by sample size inflation.
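For readers curious about the mechanics, the following is a minimal sketch of hot decking in Python, assuming a nearest-neighbor donor match on the shared attitudinal hook questions. The column names, the Euclidean distance metric, and the single-donor rule are illustrative assumptions on our part, not the models actually used in the study.

```python
import numpy as np
import pandas as pd

def hot_deck_impute(df: pd.DataFrame, hook_cols: list, target_cols: list) -> pd.DataFrame:
    """Fill each respondent's unanswered target questions with the answers of
    the most similar "donor" who did answer them, where similarity is measured
    on the hook questions that every module asked."""
    donors = df.dropna(subset=target_cols)  # respondents who completed the target module
    filled = df.copy()
    # Assumes a respondent either completed the whole module or skipped it entirely.
    needs_donor = df[df[target_cols].isna().any(axis=1)]
    for idx, row in needs_donor.iterrows():
        # Nearest donor on the hook variables (Euclidean distance).
        dists = np.linalg.norm(
            donors[hook_cols].values - row[hook_cols].values.astype(float), axis=1
        )
        donor = donors.iloc[int(dists.argmin())]
        filled.loc[idx, target_cols] = donor[target_cols].values
    return filled

# Toy example: respondent 2 skipped the module containing q4 and q5 and
# inherits the answers of respondent 0, the closest match on the hooks.
df = pd.DataFrame({
    "hook1": [4, 2, 4], "hook2": [5, 1, 5],        # asked of everyone
    "q4":    [3, 1, np.nan], "q5": [5, 2, np.nan], # asked only in one module
})
print(hot_deck_impute(df, ["hook1", "hook2"], ["q4", "q5"]))
```

Note that this toy version does not flag which values are real and which are imputed; in practice that bookkeeping matters when interpreting significance tests, for exactly the reason raised above.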

Data stitching examines modular respondents, identifying those who are most similar on the attitudinal hook variables. The two respondents determined to be most similar are combined, their data merged to form one complete respondent.

This method involves no imputation, estimation, or prediction; it exclusively uses data collected organically from consumers. This opens the door to basic statistical tests such as t-tests and chi-squared tests, in addition to the cluster analysis also performed on the hot-decked data.

However, data stitching requires that we have enough similar respondents to stitch together, in order to create a robust set of complete data. This approach likely requires a larger sample size than hot decking, since “unmatched” respondents will need to be dropped altogether.
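Again as a sketch only: a greedy version of data stitching might pair respondents as follows, matching each Module A respondent to the closest unused Module B respondent on the hook variables and merging the two answer sets. The function and variable names are hypothetical, and a production implementation would likely use a more careful assignment method than this greedy pass.

```python
import numpy as np
import pandas as pd

def stitch(module_a: pd.DataFrame, module_b: pd.DataFrame, hook_cols: list) -> pd.DataFrame:
    """Greedily pair each Module A respondent with the most similar unused
    Module B respondent on the hook questions, merging each pair into one
    synthetic "complete" respondent. No values are estimated; every answer
    comes from a real person."""
    available = module_b.copy()
    stitched = []
    for _, a in module_a.iterrows():
        if available.empty:
            break  # leftover Module A respondents go unmatched and are dropped
        dists = np.linalg.norm(
            available[hook_cols].values - a[hook_cols].values.astype(float), axis=1
        )
        j = int(dists.argmin())
        b = available.iloc[j]
        available = available.drop(available.index[j])  # each partner is used only once
        stitched.append(a.combine_first(b))             # a's answers win; b fills the gaps
    return pd.DataFrame(stitched)
```

Because unmatched respondents on either side simply fall out of the stitched file, this approach consumes more sample than hot decking, which is the trade-off noted above.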

Interestingly, both approaches produced similar data and yielded the same overall conclusions, even down to the segments created by a K-means cluster analysis.

The “hook variable” factor

Over the past year, other industry colleagues have independently attempted data fusion (imputation and data stitching) using demographic and behavioral attributes. Since demographics and behaviors are more situational and subject to change, we focused on attitudes to stitch the data together, thus creating a more cohesive data set and allowing us to draw the same conclusions across all three samples.

While the concept of data fusion makes many market researchers nervous because fusion relies on filling in “missing” data, we need to respect the ever-increasing time constraints of our respondents and leverage data fusion to reduce the time burden of survey participation.

Have we overthrown the king?

Not yet, though the revolution has begun whether we like it or not.

We hope that our own modest and iterative successes, combined with the work of others, challenge the marketing research industry to move forward with confidence into the constantly evolving world of mobile.  We have already gained significant ground and recognize the immediate need to take a multi-modal approach to research, continuing to safely push the boundaries while reaping opportunities and possibilities.

If we do not, we will find ourselves playing catch-up or worse—left behind—when human beings are no longer willing to share their own stories and collaborate with us from their already condensing worlds.

As published in Survey Magazine.