Online market research panels: 3 key quality challenges and how to conquer them

 


Quality is a challenge for all panel providers and online market research tools. Learn what Suzy is doing about it on our research cloud.

By Katie Gross

The market research industry is currently grappling with a significant quality issue.

As the Chief Customer Officer at Suzy, with over two decades of industry experience, I've witnessed firsthand the challenges of securing high-quality responses from online surveys. The issues range from poorly designed surveys that fail to include options like "none of the above," to respondents who try to game the system by speeding through surveys, providing inaccurate data, or even using bots to mine rewards from survey platforms.

In these economically challenging times, the incentive for such deceptive practices only increases, jeopardizing the accuracy of survey results and eroding trust in online market research both from real respondents and brands.

So, how do we tackle these issues of low-quality data and respondent fidelity?

Currently, the online panel industry faces three major challenges:

  1. Real human respondents providing poor-quality answers due to flawed survey design.

  2. Genuine respondents attempting to cheat the system.

  3. Bots being used to manipulate survey results.

At Suzy, we are committed to upholding the highest standards of data integrity. Our approach includes a best-in-class audience quality system that utilizes a proprietary screening process. This system effectively filters out bots and fraudulent respondents, ensuring that every piece of data collected is from a verified and reliable source. Trust in Suzy not just for data, but for the peace of mind that comes with knowing your insights are built on a foundation of quality and integrity.

Let's examine each of these issues and explore how Suzy is addressing them with robust solutions.

Real humans providing poor-quality data

First, some people genuinely want to take part in the survey and join the panel. They want to answer questions accurately—but they’re often faced with poor survey design. 

Either the survey is too long (during my career, I've observed that surveys run around 23 minutes on average, with some up to 45 minutes) or the questions are confusing or unanswerable. Confusing speed traps and red herring questions also make things difficult for consumers, signaling that we don't trust their answers. It creates a cycle where respondents are asked, time and time again, to prove they are trustworthy.

Why are the surveys so long?

Respondents are often asked to verify their demographics and given behavioral screeners in every single survey. Consumers have to answer these questions each time to ensure they qualify for the survey as that data is often not passed from panel provider to survey platform. Plus, panel providers charge their clients by the survey complete, not by question. As clients try to get as much bang for their buck, they throw as many questions as they can into one survey, leading to lengthy surveys.

By the time respondents reach minute 18 of a survey, they're fatigued. With fatigue comes low-quality answers. By this point, they may not be reading the questions carefully and may be bored of answering the same questions with slight variations. Take sequential monadic studies, for example, which are almost always chosen over monadic designs when surveys are priced per complete rather than per question. Even when people start a survey with good intentions, its length and design eventually take a toll on their responses, leading to bad data quality.

Real humans cheating the system

Another contributing factor is the industry standard of paying respondents based on survey completion, encouraging shady behavior and overstatement to make it to the end. 

First, there’s the issue with screeners at the beginning of every survey. Respondents often spend five minutes of their time on demographic and screener questions only to find themselves screened out of the survey—and ineligible for their reward. It’s a frustrating waste of time for respondents, and the process restarts when they begin another survey.

Routers try to solve this, sending respondents to surveys they are more likely to qualify for. But consumers end up having to answer the same questions over and over again until they get a payout, causing survey fatigue. 

So, some human respondents speed their way through surveys without reading the questions, selecting every option available to them so they don't get screened out. The data isn't accurate, and making decisions based on it can be a big pitfall for brands.

Bots 

Helping humans game the system, bots can impact survey data quality in a few ways. Typically, a human creates a bot, and that bot can get smarter over time.

Some bots can replicate themselves inside one survey, or across multiple surveys, by exploiting the survey's exit link. Basically, the bot cheats the exit link: it looks like it has made it to the end of the survey, when it has actually skipped the entire thing.

Other bots repeat the exact same answer choices. As a bot takes the survey, it learns which answers will result in a reward at the end, so the bot owner qualifies for every survey. Then, the bot can answer the survey multiple times.
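Suzy's fix, described below, is simply not to use exit links, but it helps to see why unsigned exit links are exploitable in the first place. This sketch is a generic illustration, not any vendor's actual implementation; the URL, secret, and helper names are all invented. It shows how a completion redirect carrying an HMAC-signed token resists the replay trick described above, while a plain, predictable URL does not:

```python
import hashlib
import hmac

SECRET = b"platform-signing-key"  # hypothetical server-side secret


def make_exit_link(respondent_id: str, survey_id: str) -> str:
    """Build a completion redirect with a signature tied to this respondent
    and survey, so a 'complete' can't be forged or replayed by others."""
    token = hmac.new(SECRET, f"{respondent_id}:{survey_id}".encode(),
                     hashlib.sha256).hexdigest()
    return (f"https://survey.example.com/complete"
            f"?r={respondent_id}&s={survey_id}&sig={token}")


def verify_exit_link(respondent_id: str, survey_id: str, sig: str) -> bool:
    """Recompute the expected signature server-side and compare in
    constant time; a guessed or copied signature fails."""
    expected = hmac.new(SECRET, f"{respondent_id}:{survey_id}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)


# A bot that harvests a plain, unsigned exit URL can replay it and register
# a "ghost complete" without answering anything. With a signed link, the
# token only verifies for the respondent/survey pair it was issued for:
link = make_exit_link("r1", "s1")
sig = link.split("sig=")[1]
print(verify_exit_link("r1", "s1", sig))        # genuine complete
print(verify_exit_link("r2", "s1", sig))        # replayed by another account
print(verify_exit_link("r1", "s1", "deadbeef"))  # forged signature
```

Even signed links only prove the respondent reached the final page, not that they answered honestly, which is why removing exit links entirely closes a whole class of cheats.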

What makes Suzy different

To solve each of these issues, Suzy has taken a few approaches.

Our surveys are short and simple

Instead of charging per survey, Suzy charges per question. That means our clients are very thoughtful about the questions they include in each survey. They don't have to stuff extra questions into each one, and respondents don't get as fatigued. We also cap our surveys at 60 questions to keep respondents engaged. Most Suzy surveys are about seven questions long, and respondents spend just a few minutes answering them.

Our clients screen respondents quarterly

With our question cap, we’ve created a unique screener set to build a reliable and trustworthy audience panel. Respondents in Suzy's online panel are only screened once every quarter, which reduces the likelihood of non-commitment and low-quality survey responses. We store that information for our clients, so consumers don’t have to waste time answering the same demographic questions multiple times. Then, our clients can retarget their screened consumers for continuous learning.

Our audience always gets paid

We pay respondents for every single question they answer, regardless of whether they qualify for the full survey or not. Our panel is paid to give honest answers—they don’t have to select every option to avoid getting screened out.

We don’t use exit links

Since our panel is built straight into our platform, Suzy doesn’t use exit links. That means there aren’t Dark Web and YouTube tutorials on how to cheat our system, which cuts down on bot activity and ghost-completes.

We constantly check for quality

At Suzy, we are working to set a new gold standard for the industry to ensure quality, actionable data for enterprise brands that want to focus on their consumers. 

Since Suzy owns our panel, we have built-in quality checks and proprietary technology from the moment consumers attempt to create an account. 

But we're going even further by developing a patent-pending innovation called Biotic, our AI-Powered Bot Recognition Assurance Technology. This technology leverages the gap between human weaknesses and computer strengths, deploying tests for members on our platform that determine whether they are bots.
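Biotic itself is proprietary, but the general class of check it represents can be illustrated. The toy heuristic below is invented for illustration (the function name and thresholds are assumptions, not Suzy's actual method): it flags respondents whose per-question response times are implausibly fast or implausibly uniform, two patterns that scripts exhibit and humans rarely do.

```python
import statistics


def looks_like_bot(response_times_s: list[float]) -> bool:
    """Flag a respondent whose per-question timing is implausibly fast or
    implausibly uniform. Thresholds here are made-up for illustration."""
    mean_t = statistics.mean(response_times_s)
    spread = statistics.pstdev(response_times_s)
    too_fast = mean_t < 1.5      # humans rarely read and answer in <1.5s
    too_uniform = spread < 0.2   # human timing varies; scripts often don't
    return too_fast or too_uniform


human = [4.2, 7.8, 3.1, 9.5, 5.0]   # varied, deliberate pacing
script = [0.6, 0.6, 0.7, 0.6, 0.6]  # fast and near-constant

print(looks_like_bot(human))   # False
print(looks_like_bot(script))  # True
```

Real detection systems combine many such signals, such as device fingerprinting, answer consistency, and interaction patterns, rather than relying on any single timing threshold.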

With quality control through the entire life cycle, Suzy can weed out bad actors and bots as they are trying to join the panel—before they even attempt a survey.

The future of online panels and data quality

By taking steps now to help our industry and clients properly design highly engaging surveys, incentivize respondents, and manage bot activity, online panels can ensure accurate and actionable data for their clients. With the right steps in place, online panels can become more reliable sources of data for businesses and researchers alike.

At Suzy, we take proactive measures to ensure our panel stays ahead of the curve on data accuracy and quality. We're constantly updating our methods to better discern genuine survey takers. To learn more about Suzy Audiences and the impact a quality online panel can have on your consumer insights, book a demo now.

 