Online market research panels: 3 key quality challenges and how to conquer them


By Katie Gross, Chief Customer Officer

Quality is a challenge for all panel providers and online market research tools. Learn what Suzy is doing about it on our research cloud.

The market research industry has a quality problem on its hands. 

As the Chief Customer Officer at Suzy and someone who has been in the industry for 20 years and counting, I recognize that getting high-quality responses from online survey takers has always been a challenge. 

Some respondents simply aren’t set up for success because of poorly designed surveys (for example, a multiple-choice question that forgets to include a “none of the above” option). Others regularly try to cheat the system by rushing through surveys, providing incorrect or misleading information, or registering multiple times as the same person to collect extra incentives. And some use technology to mine rewards from online survey platforms by deploying bots.

Given the trying economic times we’re facing right now, it’s safe to say that we can expect bad actors to ramp up their activity. 

It all impacts the accuracy of survey results, and it can ultimately destroy both respondents’ and brands’ faith in online market research.

So, what can be done about low-quality data and poor survey responses?

Right now, there are three key challenges for online panels:

  1. Real human beings providing poor-quality answers due to poor survey design

  2. Real human beings cheating the system

  3. Bots 

Let’s explore each of these in turn—and what Suzy is doing to solve these issues.

Real humans providing poor-quality data

First, some people genuinely want to take part in the survey and join the panel. They want to answer questions accurately—but they’re often faced with poor survey design. 

Either the survey is too long (during my career, I’ve observed that surveys run around 23 minutes on average, with some stretching all the way to 45 minutes) or the questions are confusing or unanswerable. Speed traps and red-herring questions also make things difficult for consumers, signaling to respondents that we don’t trust their answers. It creates a cycle where respondents are asked, time and time again, to prove they are trustworthy.

Why are the surveys so long?

Respondents are often asked to verify their demographics and answer behavioral screeners in every single survey. Consumers have to answer these questions each time to confirm they qualify, because that data is often not passed from the panel provider to the survey platform. Plus, panel providers charge their clients by the completed survey, not by the question. As clients try to get as much bang for their buck, they cram as many questions as they can into one survey, leading to lengthy surveys.

By the time respondents reach minute 18 of a survey, they’re fatigued. With fatigue comes low-quality answers. By this point, they may not be reading the questions carefully and may be bored of answering the same questions with slight variations. Take sequential monadic studies, for example, in which each respondent evaluates several concepts back to back, answering the same question battery for each. They are almost always chosen over monadic designs when pricing is per completed survey rather than per question. Even though people start surveys with good intentions, the length and design of the survey eventually take a toll on their responses, leading to bad data quality.

Real humans cheating the system

Another contributing factor is the industry standard of paying respondents only on survey completion, which encourages shady behavior and overstatement just to make it to the end.

First, there’s the issue with screeners at the beginning of every survey. Respondents often spend five minutes of their time on demographic and screener questions only to find themselves screened out of the survey—and ineligible for their reward. It’s a frustrating waste of time for respondents, and the process restarts when they begin another survey.

Routers try to solve this, sending respondents to surveys they are more likely to qualify for. But consumers end up having to answer the same questions over and over again until they get a payout, causing survey fatigue. 

So, some human respondents speed through surveys without reading the questions, selecting every option available to them so they don’t get screened out. The data isn’t accurate, and making decisions based on it can be a big pitfall for brands.

Bots 

Helping humans game the system, bots can impact survey data quality in a few ways. Typically, a human creates a bot, and that bot can get smarter over time.

Some bots exploit a survey’s exit link, the redirect that tells the panel provider a respondent finished, and can repeat the trick across one survey or many different ones. Basically, bots cheat the exit link: it looks like they’ve made it to the end of the survey when they actually skipped the entire thing.
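
The article doesn’t spell out how a forged exit link works, so here is a minimal sketch of the usual mitigation for providers that do rely on exit links: cryptographically signing the link so a bot that fabricates the redirect fails verification. Everything here is illustrative, an assumption for the example rather than Suzy’s or any platform’s actual API; the endpoint URL, the shared secret, and the function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret between the panel provider and the survey platform.
SECRET_KEY = b"replace-with-a-real-shared-secret"

def sign_exit_link(respondent_id: str, survey_id: str) -> str:
    """Build an exit link whose signature binds the complete to one respondent and survey."""
    payload = f"{respondent_id}:{survey_id}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    # Hypothetical endpoint; real platforms each have their own redirect format.
    return f"https://panel.example.com/complete?rid={respondent_id}&sid={survey_id}&sig={sig}"

def verify_exit_link(respondent_id: str, survey_id: str, sig: str) -> bool:
    """Reject 'completes' whose signature was not issued by the survey platform."""
    payload = f"{respondent_id}:{survey_id}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```

An unsigned exit link, by contrast, can simply be requested directly, which is exactly the "ghost complete" behavior described above.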

Other bots learn to replicate qualifying answers. As a bot takes a survey, it learns which answers result in a reward at the end, so the bot owner qualifies for every survey. Then the bot can answer the same survey multiple times.

What makes Suzy different

To solve each of these issues, Suzy has taken a few approaches. 

Our surveys are short and simple

Instead of charging per survey, Suzy charges per question. That means our clients are very thoughtful about the questions they include in each survey. They don’t have to stuff extra questions into each one, and respondents don’t get as fatigued. We also cap our surveys at 60 questions to keep respondents engaged. Most Suzy surveys are about seven questions long, and respondents spend just a few minutes answering them.

Our clients screen respondents quarterly

With our question cap, we’ve created a unique screener set to build a reliable and trustworthy audience panel. Respondents in Suzy’s online panel are screened only once every quarter, which reduces the likelihood of disengagement and low-quality survey responses. We store that information for our clients, so consumers don’t have to waste time answering the same demographic questions multiple times. Then our clients can retarget their screened consumers for continuous learning.
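
To make the design concrete, here is a minimal sketch, not Suzy’s actual implementation, of a profile store that re-screens a respondent only when their stored answers are older than a quarter. The store, the 90-day interval, and the function names are assumptions for illustration.

```python
from datetime import datetime, timedelta

RESCREEN_INTERVAL = timedelta(days=90)  # roughly once a quarter

# Hypothetical in-memory store: respondent_id -> (screener answers, last screened)
profiles: dict[str, tuple[dict[str, str], datetime]] = {}

def needs_screening(respondent_id: str, now: datetime) -> bool:
    """True if there is no stored profile, or the stored one is older than a quarter."""
    record = profiles.get(respondent_id)
    return record is None or now - record[1] > RESCREEN_INTERVAL

def save_screening(respondent_id: str, answers: dict[str, str], now: datetime) -> None:
    """Store screener answers so every later survey can reuse them for qualification."""
    profiles[respondent_id] = (answers, now)
```

The point of the design is reuse: every survey after the first can qualify the respondent from stored data instead of burning their first five minutes on repeat demographics.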

Our audience always gets paid

We pay respondents for every single question they answer, regardless of whether they qualify for the full survey or not. Our panel is paid to give honest answers—they don’t have to select every option to avoid getting screened out.

We don’t use exit links

Since our panel is built straight into our platform, Suzy doesn’t use exit links. That means there aren’t dark web and YouTube tutorials on how to cheat our system, which cuts down on bot activity and ghost completes.

We constantly check for quality

Since Suzy owns our panel, we have built-in quality checks and proprietary technology from the moment consumers attempt to create an account. With quality control through the entire life cycle, Suzy can weed out bad actors and bots as they are trying to join the panel—before they even attempt a survey.
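
Suzy’s checks are proprietary, so as one hedged illustration of what a signup-time quality check can look like (an assumption for the example, not Suzy’s actual method), here is a sketch that normalizes email addresses so one person can’t register many times under trivial aliases:

```python
def normalize_email(email: str) -> str:
    """Collapse common aliasing tricks so duplicate signups map to one identity.

    Illustrative check only: strips '+tag' aliases and, for Gmail, ignores dots
    in the local part, since Gmail treats them as insignificant.
    """
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]          # jane+survey42@x.com -> jane@x.com
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # j.a.n.e@gmail.com -> jane@gmail.com
    return f"{local}@{domain}"

def is_duplicate_signup(email: str, seen: set[str]) -> bool:
    """Flag an account whose normalized address already exists in the panel."""
    key = normalize_email(email)
    if key in seen:
        return True
    seen.add(key)
    return False
```

Real panels layer many such signals (device, payment, behavioral), but the principle is the same: catch the duplicate at account creation, before the bad actor ever reaches a survey.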

The future of online panels and data quality

By taking steps now to help the industry design highly engaging surveys, incentivize respondents properly, and manage bot activity, online panels can ensure accurate and actionable data for their clients. With the right steps in place, online panels can become more reliable sources of data for businesses and researchers alike.

At Suzy, we take proactive measures to ensure our panel stays ahead of the curve on data accuracy and quality. We’re constantly updating our methods to better identify good survey takers. To learn more about Suzy Audiences and the impact a quality online panel can have on your consumer insights, book a demo now.

 