High-quality responses are essential for the accuracy and reliability of surveys and studies. They provide genuine insights into participants’ opinions, beliefs, traits, and behaviors. Poor-quality responses—such as random, inattentive, or malicious answers—can skew results, leading to false conclusions and flawed decisions. This compromises the validity of research and can result in wasted resources and ineffective strategies.
Positly is committed to enhancing survey quality by applying state-of-the-art methods to detect and filter out poor-quality respondents. Our advanced QualityGuard© system combines sophisticated algorithms with a proprietary machine learning approach to detect spammers and reduce unreliable data. This helps ensure that the insights derived from studies and surveys run using Positly are accurate and dependable, supporting better decision-making and research outcomes.
Overview of Bad Responders
Bad responders are participants who negatively impact the quality of survey data through careless, deceptive, or disruptive behaviors. Their actions distort results, reduce the reliability of insights, and undermine the effectiveness of surveys and studies. Understanding the different types of bad responders is crucial for maintaining data integrity and ensuring accurate, actionable insights.
Bad responders vary in their motivations and behaviors, and it’s useful to group them into types along those lines. Here’s the approach we like to use for categorizing bad responders:
Type 1: Random Responders
Random responders are individuals who click or answer randomly. Random responses can happen for a variety of reasons, including participants who are curious about the task but unwilling to engage with it seriously, participants who don’t speak the language the task is written in, and unsophisticated scammers or bots that aren’t optimizing to avoid getting caught.
- Characteristics:
- May be curious about the survey but unwilling to put in the effort to answer seriously.
- Include unsophisticated bots and scammers aiming to collect earnings without making any effort to avoid detection.
- Example Scenarios:
- Randomly selecting options without regard to the questions.
- Lack of coherent patterns in answers, leading to chaotic data.
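One common way to surface this kind of chaotic answering is to include a pair of reverse-coded questions and check whether the two answers contradict each other. The sketch below is purely illustrative (the item names, 1–5 Likert scale, and tolerance are assumptions, not part of any specific platform's method):

```python
# Illustrative sketch: flag likely random responders by checking agreement
# between a Likert item and its reverse-coded twin. Item names, the 1-5
# scale, and the tolerance are assumptions made for this example.

def is_likely_random(response: dict,
                     pair=("enjoys_surveys", "dislikes_surveys"),
                     scale_max=5, tolerance=1) -> bool:
    """Return True when answers to a reverse-coded question pair
    contradict each other by more than `tolerance` scale points."""
    a, b = response[pair[0]], response[pair[1]]
    reversed_b = scale_max + 1 - b  # map "dislikes" back onto the "enjoys" scale
    return abs(a - reversed_b) > tolerance

# A careful respondent answers the pair consistently; a random one often won't.
careful = {"enjoys_surveys": 4, "dislikes_surveys": 2}
random_ish = {"enjoys_surveys": 5, "dislikes_surveys": 5}
print(is_likely_random(careful))     # → False
print(is_likely_random(random_ish))  # → True
```

A single contradictory pair is weak evidence on its own, since honest respondents occasionally misread a question; in practice, several such pairs would be combined before anyone is flagged.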
Type 2: Sneaky Money-Maximizers
Sneaky Money-Maximizers are individuals who aim to maximize their earnings from paid surveys while avoiding detection. That means they don’t care about providing high-quality answers except in situations where they think that low-quality answers may get them caught.
- Characteristics:
- Answer quickly and carelessly, except during attention checks or other situations where they feel they might get caught.
- Adapt their responses dynamically to avoid getting flagged and kicked off the platform.
- Example Scenarios:
- Speed through the survey and provide unreliable responses overall but slow down at known attention checks.
- Provide inconsistent or contradictory answers throughout the survey on questions that are not likely to be perceived as a test of attention.
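Because this type rushes ordinary questions but slows down at suspected attention checks, per-question response timing can expose them. The heuristic below is a hedged sketch, not Positly's actual method; the field names, cutoff, and ratio are illustrative assumptions:

```python
# Hedged sketch of a timing heuristic for Sneaky Money-Maximizers:
# flag respondents who rush ordinary questions yet slow down sharply
# on known attention checks. Thresholds here are illustrative assumptions.

from statistics import median

def timing_flag(times: dict, check_ids: set,
                fast_cutoff: float = 2.0, slowdown_ratio: float = 3.0) -> bool:
    """True when the median time on ordinary questions is below
    `fast_cutoff` seconds while attention checks take more than
    `slowdown_ratio` times as long."""
    check_times = [t for q, t in times.items() if q in check_ids]
    other_times = [t for q, t in times.items() if q not in check_ids]
    if not check_times or not other_times:
        return False  # not enough signal to judge
    rushed = median(other_times) < fast_cutoff
    selective = median(check_times) > slowdown_ratio * median(other_times)
    return rushed and selective

times = {"q1": 1.1, "q2": 0.9, "q3": 1.3, "attn1": 8.5}
print(timing_flag(times, {"attn1"}))  # → True
```

Note that this heuristic only works for checks the respondent recognizes; it says nothing about quality on the questions they sped through, which is why timing is usually just one signal among several.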
Type 3: Trolls
Troll responders are individuals who deliberately provide strange or provocative answers, especially on questions they find funny or off-putting, or that provoke an emotional reaction.
- Characteristics:
- Aim to disrupt the results of a survey or amuse themselves by giving unusual, fake, or offensive responses.
- May provide honest answers on some questions, flipping between normal and trollish responses.
- Example Scenarios:
- Providing intentionally misleading or controversial answers, especially on sensitive topics.
- Mixing genuine and nonsensical or fake answers.
Type 4: Inattentive Responders
Inattentive responders are individuals who attempt to answer honestly but do not read carefully, skim quickly over important instructions without understanding them properly, or lack adequate language skills to fully understand the nuances of the instructions and questions.
- Characteristics:
- Misunderstand or misclick due to inattention or language barriers.
- Higher rate of errors and irrelevant or false answers.
- Example Scenarios:
- Misinterpreting questions, leading to incorrect answers.
- Giving bad responses on attention check questions despite trying to answer honestly.
Type 5: Sophisticated AIs
Sophisticated AIs are advanced systems built on modern models (such as ChatGPT from OpenAI, Google Gemini, or Claude from Anthropic) and designed to mimic human responses, including passing attention checks. This kind of bad responder is expected to become more common as AI continues to advance.
- Characteristics:
- Attempt to answer questions in a human-like manner.
- Capable of adapting responses to avoid detection as AI, even on attention checks.
- Example Scenarios:
- Providing answers that seem plausible and coherent but that don’t reflect an actual human.
- Successfully passing basic attention checks while still behaving in un-humanlike ways elsewhere.
Each type of bad responder presents unique challenges in maintaining survey and study quality. By identifying and addressing these behaviors, as Positly does, participant recruitment platforms can improve the accuracy and reliability of their data, leading to more valid and actionable insights.
Attention Checks in Surveys
What is an Attention Check?
Attention checks are specific questions or instructions within a survey designed to help ensure that respondents are paying attention and providing thoughtful, genuine answers. They act as a partial safeguard against poor-quality responses by identifying participants who are not engaged, responding randomly, or not reading questions carefully.
Attention checks typically fall into two categories:
- Direct Attention Checks: These checks instruct respondents to select a specific response or perform a simple task to verify their attention. For example, a question might ask, “Please select ‘Strongly Agree’ to show that you are paying attention.”
- Indirect Attention Checks: These checks are more subtle and require respondents to recall or respond to information provided earlier in the survey. For instance, after a brief story or scenario, a question might ask, “What color was the car mentioned in the story?”
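Both kinds of checks reduce to comparing a respondent's answer against a known expected value. A minimal sketch of how a researcher might score them is shown below; the question IDs and expected answers are made up for illustration:

```python
# Minimal sketch of scoring the two kinds of attention checks described
# above. Question IDs and expected answers are illustrative assumptions.

ATTENTION_CHECKS = {
    "direct_1": "Strongly Agree",  # direct: respondent is told which option to pick
    "indirect_1": "red",           # indirect: recall of a detail ("the car was red")
}

def attention_pass_rate(answers: dict) -> float:
    """Return the fraction of attention checks answered correctly,
    ignoring case and surrounding whitespace."""
    correct = sum(
        1 for qid, expected in ATTENTION_CHECKS.items()
        if str(answers.get(qid, "")).strip().lower() == expected.lower()
    )
    return correct / len(ATTENTION_CHECKS)

answers = {"direct_1": "Strongly Agree", "indirect_1": "blue"}
print(attention_pass_rate(answers))  # → 0.5
```

Researchers typically set a pass-rate threshold in advance (for example, excluding anyone below 100% on direct checks) and pre-register it, so that exclusions cannot be cherry-picked after seeing the results.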
Examples of Attention Checks
- Direct Attention Check Example
- Question: “To ensure you’re reading carefully, please select ‘Neutral’ for this statement.”
- Purpose: Filters out respondents who are not paying attention, answering randomly, or rushing through the survey.
- Question: “To confirm you are paying attention, please type ‘bingo’ in the box below.”
- Purpose: Directly assesses if the respondent is following the instruction, filtering out those who are inattentive or responding randomly.
- Indirect Attention Check Example
- Question: “In the previous section, we described a scenario involving a shopping trip. What was the main item purchased?”
- Purpose: Ensures respondents are retaining information and reading the content thoroughly.
- Question: “In the paragraph you just read, what was the primary topic discussed?”
- Purpose: Ensures the respondent is engaging with and retaining the survey content, validating their attentiveness through context recall.
- Question: “On the prior page, we mentioned a statistic about smartphone usage. What percentage of people use their phones for online shopping?”
- Purpose: Tests the respondent’s attention to specific details provided earlier in the survey, catching inattentive responders who did not carefully read the content.
Effectiveness of Attention Checks
Attention checks are a valuable tool for improving survey data quality by catching certain types of bad responders:
- Random Responders: Individuals or bots that select answers at random are likely to fail attention checks because they do not engage with the content or instructions.
- Inattentive Responders: Participants who do not read questions carefully or misunderstand them due to language barriers will often miss attention check cues, leading to detection.
However, attention checks have limitations:
- Money-Maximizing Responders: These participants may recognize and correctly answer attention checks while still providing low-quality responses to other questions. They are adept at avoiding detection by slowing down at critical moments.
- Sophisticated AIs: Advanced bots programmed to mimic human behavior can often recognize and respond correctly to attention checks, evading simple detection methods.
Positly’s Approach to Enhancing Survey Quality
At Positly, we understand that traditional attention checks, while useful, are not sufficient to catch all types of bad responders. To address this, we built QualityGuard©, a comprehensive system that integrates multiple layers of verification and filtering:
- Advanced Machine Learning Algorithm: Our system uses machine learning to analyze response patterns and detect anomalies that indicate bad responder behavior, including those not easily caught by attention checks.
- Sophisticated Algorithms: In addition to our machine learning approach, we employ sophisticated algorithms, developed through years of experience, that detect different types of bad responders and stop them before they cause problems in your survey or study.
Try Positly for Great Results
If you are looking to improve the reliability of your survey data, Positly offers innovative solutions to enhance data quality and integrity. Our advanced QualityGuard© system helps filter out poor-quality responses, helping to ensure that your research is based on genuine and accurate insights.
Learn More: Visit our website to explore our range of tools and services designed to recruit the best participants and enhance survey quality.
Request a Demo: Contact us for a demonstration of how our system can improve your survey results and provide you with high-quality data you can trust.
Start a Project: Start your project today with Positly to recruit participants and gather actionable insights. Our team is ready to help you achieve your research goals.