The ProofPilot Blog - Design, Launch & Participate in Research Studies

What The Unbreakable Kimmy Schmidt Tells Us About Research Study Participant Experience

In the Netflix show The Unbreakable Kimmy Schmidt, the character Titus joins a research study to earn a little extra money. In less than 30 seconds, the multiple-Emmy-nominated series delivers a cringe-worthy critique of clinical research. ProofPilot’s 2015 and 2016 participant engagement rates show we’re solving the problem.

Season 2 Episode 11 of the Netflix series, The Unbreakable Kimmy Schmidt

In a scene set in what is obviously a clinical study visit, Titus complains, “Mr. Doctor, we have been standing here for an hour. Give us our clothes, or turn up the heat!”

To which the doctor replies, not to Titus, but to another research professional dumping the participant’s clothing on the floor, “Take note, Meat Slab 35 is sensitive to temperature in addition to the obvious bloating and shoulder loss.”

Obviously, the Unbreakable Kimmy Schmidt writers exaggerate for comedic effect; no researcher would ever treat participants this poorly. In real life, though, researchers often cite “scientific fidelity” or “privacy” as reasons they can’t adopt techniques to improve participants’ experiences. Participants can still end up feeling like anonymous cows jabbed with treatments and milked for data.

ProofPilot and Participant Engagement

Those of us who started ProofPilot began our research careers engaging gay men in HIV prevention studies. Getting and keeping young gay guys in HIV studies is a challenge. Twenty-year-old men consider themselves healthy and even invincible. Some gay men are in the closet and going to an HIV clinic could out them.

At the same time, gay men are technology early adopters. They had broad access to Internet-connected mobile and desktop devices well before the wider population. And even years ago, answering personal questions online was no big deal to them; they shared more intimate details via dating sites and social media networks.

Everyone figured we could just put our studies online or on a mobile phone.

But as many recent mobile health research studies have learned, adding technology alone only exacerbates the problems.

A research study should be more than impersonal treatments and data collection; it could be a piece of engaging online media. YouTube is more than video distribution functionality. Kickstarter is more than a financial collection system. AirBnB is more than a reservation tool. These successes turned both the demand- and supply-side experiences into enjoyable customer experiences.

ProofPilot has always focused on improving participant experience. But if you look at our marketing material, you might not know it. We realized early on that the research design experience needed adjusting to meet our ultimate engagement goals. Researchers need to focus on their research questions. They aren’t trained in marketing or customer service. Few have the resources or time to consider customer experience issues. We needed to take the guesswork out of study engagement so researchers can focus. We’ve spent a good portion of the past three years making it easy to design a study that a participant might actually enjoy engaging in.

In 2018, we’ll begin marketing our participant experience more aggressively now that the design experience is fairly stable.

This starts today, by sharing some of our participant engagement stats from 2015 and 2016.

ProofPilot 2015 and 2016 Participant Engagement Analysis

That focus doesn’t mean we haven’t had major participant engagement successes. We just completed an analysis of 11 studies that enrolled 28,846 healthy participants between February 2, 2015, and November 1, 2016.

The eleven studies included in this analysis vary in design. Some required ongoing engagement for six or more months with weekly study task activities; others required only minimal interactivity beyond a set of initial study activities. The longest lasted almost a year; the shortest required several days of activity. All are longitudinal in nature.

Even within each study, different study arms have dramatically different study experiences. To further complicate matters, most ProofPilot studies are “adaptive,” meaning study tasks are assigned based on prior behaviors and activities, creating unique experiences even within structured study arms.

To provide an effective, consistent comparison across studies and across different experiences within studies, we focus on completion rates for specific study assignments rather than on a single study end-point or survival analysis. If a participant completes 100% of their study assignments, they have satisfied all the required elements of the study.

The team ran simple descriptive and correlational statistics across 234 unique tasks assigned in various combinations over various time periods to participants in the eleven studies. The recruitment funnel and task engagement results were correlated with specific demographic groups to identify differences by race and gender, as were the engagement and retention rates represented by study task completions. To gauge the impact of certain messaging strategies, we also correlated study task completion rates with the form of reminder, race, and gender. Only US-based ProofPilot participants are included in this analysis.
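The correlational part of that analysis can be sketched in a few lines of Python. The numbers below are hypothetical, not ProofPilot data; the point is only the mechanics of relating per-participant task load to completion rate:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-participant records: number of tasks assigned
# vs. fraction of assigned tasks completed.
tasks_assigned = [5, 12, 8, 20, 3, 15]
completion_rate = [1.0, 0.6, 0.9, 0.4, 1.0, 0.5]

print(pearson_r(tasks_assigned, completion_rate))  # negative in this toy sample
```

In a real analysis you would also want a p-value for each r; a library routine such as scipy.stats.pearsonr handles both in one call.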

Engagement Results

Demographics. After years of focusing on young gay men, ProofPilot wanted to switch things up a bit and see whether our success with that audience would apply to others, so we pushed from gay men’s sexual health to young women’s sexual health. Most studies running on ProofPilot during this time were looking for young women, and our sample shows we were just as able to engage them: 92 percent (n=26,538) were female and 8 percent male (n=2,308). The average study participant was relatively young at 22 years old, with ages ranging from 10 to 78.

We’re particularly happy that forty-seven percent reported being an ethnic or racial minority (non-Caucasian) (n=13,673), with the largest non-white group being Hispanic at 18.8% (n=5,423).

Study Join Rates. If you make it easy, people do want to participate in studies. A marketing company would jump up and down at conversion rates (the share of visitors who become engaged users) around 10%. At ProofPilot, 29.8% of visitors to any ProofPilot page converted to a registered ProofPilot user.

Of all participants, 14.06% (n=3,965) were not eligible for the original study they tried to join. Of those initially ineligible participants, 91.8% immediately engaged in the recruitment and screening process for another study more appropriate for their demographic and situation. This strongly suggests that one of the best recruitment techniques is offering ineligible participants other studies they can join.

Study Completion and Engagement Rates. As researchers set up their studies, they define a set of study tasks. For participants, these are known as “DO IT” tasks, assigned automatically based on the study’s experience logic. Of all eligible participants, 76.44% (n=17,739) completed at least one study task. Of those who completed that first assigned task, 61% (n=10,828) completed every study task assigned to them. Of the 23.6% of individuals who did not complete the first task, 60.8% started it but did not finish.

A 61% study completion rate isn’t groundbreaking for clinical research studies. However, this number was achieved in a healthy population without any interpersonal interaction; all reminders were automated via SMS or e-mail notifications. When other online longitudinal studies are showing engagement rates around 2%, this is significant.

What’s Working?

  • The research study as media. As we reviewed the eleven studies, we could see that some researchers put real effort into thinking about their unique populations: they added relevant visuals and used language appropriate to the population. Per-study completion rates ranged from a low of 18%, for studies that clearly didn’t consider the participant, to a high of nearly 100%, showing a strong association between completion rates and specific studies. Importantly, completion rates weren’t skewed by the amount of effort required of participants: completion rate does not correlate significantly with the number of tasks (r=-0.17) or the length of the study. Even the length of the engagement period isn’t correlated with engagement rates, as we had hypothesized it would be. Some studies retained a large number of participants over time, while others lost them.
  • E-mail is still king. The beta version of ProofPilot allowed participants to choose how they wanted to be reminded when new study tasks were assigned to them and when those tasks were about to expire. Participants were required to choose at least one channel, and if they unsubscribed from all of them, they were warned that their participation in the study could be terminated. Participants could choose e-mail, SMS, or Facebook notification reminders. While much has been made of “the death of e-mail” among the millennials heavily represented in this sample, e-mail and SMS had equally powerful effects on completion rates. A Facebook notification actually seemed to have a negative effect.
  • A new approach to rewards. Not every research study can afford financial incentives or reimbursements for participation, so ProofPilot has been experimenting extensively with non-financial rewards (more on that in a future post). ProofPilot classifies a reward as a token of appreciation thanking the participant for their time. Some studies on ProofPilot included no reward. Others included non-financial rewards in the form of a free product (sent to the participant by mail) or a promotion code (various discounts unlocked online); by far the most common was a financial reward in the form of a gift card. Some rewards were issued to all participants; others were a chance to enter a lottery for a relatively high-value prize. Task completion rates depend strongly on rewards, though because we have limited data on non-financial rewards (the vast majority were gift cards), we cannot separate out the impact of different reward types. Participants who received a gift card or other reward had a median completion rate of 100%, while those who received no reward had a median completion rate of 50% (Mann-Whitney U; z = -38.025, p < 0.001).
  • Federated login. ProofPilot allows users to join with federated login functionality from Facebook or Google. This feature, now largely ubiquitous across websites and also known as “Join with Facebook” or “Join with Google,” means users don’t have to remember another password. Over 39% of participants used one of these tools to join ProofPilot, reducing friction at initial engagement and every time they returned to complete additional tasks.
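The reward comparison above rests on a Mann-Whitney U test. As a rough illustration with invented completion rates (not our study data), the U statistic can be computed by simple pair counting:

```python
def mann_whitney_u(a, b):
    """U statistic for sample a vs. b: the number of (a, b) pairs
    in which the a-value exceeds the b-value, counting ties as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Invented per-participant completion rates (percent), not study data.
rewarded = [100, 100, 90, 100, 80]
no_reward = [50, 40, 60, 50, 30]

print(mann_whitney_u(rewarded, no_reward))  # 25.0: complete separation of groups
```

When U equals len(a) * len(b), every value in the first group outranks every value in the second; a library routine such as scipy.stats.mannwhitneyu additionally supplies the z-score and p-value quoted above.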

What’s Not Working?

  • Racial minorities do not complete tasks at the same rates. While our recruited sample seems to mirror the general US population, race has a strong influence on completion rate (Kruskal-Wallis chi-squared = 48.084; p<0.001). The completion rate is 50% for participants of African descent and Native American participants, the lowest of any racial group. East Asian and Hispanic and/or Latino participants had the highest completion rates, at 75% to 100%. Caucasian and/or European descent, the most common race value, had an average completion rate of 66.67%.
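The race comparison above relies on a Kruskal-Wallis test. Below is a minimal sketch of the H statistic in plain Python, using invented group data and omitting the tie correction that a production routine such as scipy.stats.kruskal applies:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic across k independent samples
    (tied values get average ranks; no tie correction of the denominator)."""
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    ranks = [0.0] * n
    i = 0
    while i < n:                  # assign average ranks to runs of ties
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        avg = (i + 1 + j) / 2     # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    rank_sums = [0.0] * len(groups)
    counts = [0] * len(groups)
    for (_, gi), r in zip(data, ranks):
        rank_sums[gi] += r
        counts[gi] += 1
    return 12 / (n * (n + 1)) * sum(
        rs ** 2 / c for rs, c in zip(rank_sums, counts)
    ) - 3 * (n + 1)

# Invented completion rates for three hypothetical groups.
print(kruskal_wallis_h([50, 55, 45], [95, 100, 90], [70, 65, 75]))  # 7.2
```

Larger H means more separation between groups; the statistic is compared against a chi-squared distribution with k-1 degrees of freedom to obtain the p-value.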

What’s Next

ProofPilot’s focus on democratizing who can conduct studies is key to our engagement strategy. Many non-researcher organizations have strong brand relationships with participant groups. Research institutions often don’t. We believe this could have a dramatic impact on retention rates. If participants feel that they know and trust an organization involved in research, they are more likely to engage and remain engaged.

A look at the brands running studies suggests, anecdotally, that brands with a key mission in a specific topic area fared better on engagement than general research organizations. It’s too early to say whether the lift came from participants’ awareness of the brand or from the brand’s cultural knowledge in creating relevant studies. It’s likely a bit of both.

ProofPilot has recently released a major overhaul of our participant experience to further lift engagement rates and support in-person study experiences. We’re also embarking on several major media partnerships in 2018. We look forward to sharing results from those new initiatives.
