
How ProofPilot Sets Development Priorities

ProofPilot believes, almost religiously, that designing a research study should be as easy as managing a blog. It should be inexpensive. Participation in that study should be an entertainment activity (or, at the very least, it should ease the stress of participation for those who are ill).


Those are lofty goals. While we have a good foundation, we haven’t figured everything out yet. We’re not building another clinical trial management system. Nor are we satisfied with a “mobile research app.” Neither solves the problems endemic to human subjects research. Neither makes it fun or stress-free to join a study.

Those goals have attracted a diverse set of customers. Some are experienced researchers doing unique study designs. Some are novices who don’t know how to design a study at all. Some are digital health innovators. Others are folk medicine practitioners.

All this means we’re often learning with our users as they create studies in new ways. A lot of ProofPilot is ready “out of the box.” But we still treat each customer as a partner.

However, ProofPilot is not a development consulting firm. We’re a software-as-a-service application. That means we don’t do custom development for individual customers.

So how do we determine which features and bug fixes to prioritize? We have an internal list of almost 500 “nice to have” fixes and features. Nearly 90 items are ready for development. But our dev team can handle only about 10 to 20 of those issues in a two-week sprint.

Setting development priorities is a challenge for any company. Here’s how we do it in an industry where custom development and consulting are part of the culture. We hope this sets expectations for our customers and guides others.

Identifying and Tracking Needs
We listen and watch — constantly. We watch the competition. We look at technologies and methods used in other industries. We review academic grant awards. We talk with professionals in real life and via social media. We attend conferences. We have internal brainstorming discussions.

I spend a lot of time with new customers. Everyone in the company, including me, the CEO, is part of the customer support team. We record a lot of data on how individuals use our platform and what kinds of studies they design. We look at how participants move through studies. We watch for things that trip users up. We involve ourselves in some unique study designs.

We use ProofPilot ourselves. We create our own studies. We replicate others that interest us in the academic literature. We test these studies internally. We’re pretty sure issues that irritate, limit, or otherwise affect us will also be issues for our users.

Through this process, our primary goal is to anticipate our customers’ needs. Sometimes, though, we’re reacting to issues as they come up.

How we track things that come up

Many new ideas start their lives as a quick sketch or a couple of lines of text in Evernote. Customer-reported issues and requests get a special tag in Zendesk. We sync these items with GitHub to keep a master record of everything we want ProofPilot to be.
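
We haven’t published the details of that sync, but for the curious, here is a minimal sketch of what a Zendesk-to-GitHub bridge could look like using both services’ public REST APIs. The subdomain, tag name, repository, and tokens below are placeholders, not our production setup.

```python
# Illustrative sketch only: pull Zendesk tickets carrying a given tag and
# mirror them as GitHub issues. Subdomain, tag, repo, and tokens are placeholders.
import requests

ZENDESK_SUBDOMAIN = "example"                               # placeholder subdomain
ZENDESK_AUTH = ("agent@example.com/token", "ZENDESK_API_TOKEN")
GITHUB_REPO = "example-org/example-repo"                    # placeholder owner/repo
GITHUB_TOKEN = "GITHUB_API_TOKEN"

def fetch_tagged_tickets(tag: str) -> list[dict]:
    """Search Zendesk for unsolved tickets carrying the given tag."""
    resp = requests.get(
        f"https://{ZENDESK_SUBDOMAIN}.zendesk.com/api/v2/search.json",
        params={"query": f"type:ticket status<solved tags:{tag}"},
        auth=ZENDESK_AUTH,
    )
    resp.raise_for_status()
    return resp.json()["results"]

def create_github_issue(ticket: dict) -> None:
    """Open a GitHub issue that points back at the originating ticket."""
    resp = requests.post(
        f"https://api.github.com/repos/{GITHUB_REPO}/issues",
        headers={"Authorization": f"token {GITHUB_TOKEN}"},
        json={
            "title": ticket["subject"],
            "body": f"Imported from Zendesk ticket #{ticket['id']}:\n\n{ticket['description']}",
            "labels": ["customer-request"],
        },
    )
    resp.raise_for_status()

for ticket in fetch_tagged_tickets("feature_request"):
    create_github_issue(ticket)
```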

We’ve architected our development and planning process to adjust and respond quickly. We’re not piecing together open source solutions. Nor are we building anything from scratch. We’re making adjustments to a system designed to be flexible and support the next generation of research studies.

How embarrassing is it?

Hey, we’re human. We make mistakes. There are and will be bugs. The more potential embarrassment an issue could cause, the faster we’re likely to address it.

Now, to be serious …

Can we address it with current functionality?

ProofPilot has a lot of functionality. We know that each research study is likely to be an edge case. We built ProofPilot to be flexible. So, we often ask ourselves: can existing functionality fill this need? Is the result an easy-to-use, elegant solution? Or is it a hack?

A new use for an existing feature set becomes a support article. We close the issue quickly. Sometimes that new use isn’t as elegant as we’d like, but it meets the need. We still write the support article, but it’s likely we’ll keep the issue open for consideration at a later date. A “hack” that’s especially awkward gets priority for a proper fix.

How we manage the issues

We organize the remaining items in GitHub into queues. We use a tool called Zenhub.io to make that organization process visual. Each queue represents a stage in our priority decision and development process. We drag items across queues and reorder to define priorities.

ProofPilot’s development cycle runs on sprints. A sprint is a period of focused work on a defined set of features and goals. The highest priority issues and features go in the queue labeled “Current.” Tasks in Current are part of the current one- to two-week sprint.

So, setting priorities is an exercise we repeat every two to three weeks. We drag the items that will be worked on and addressed within the next two weeks into Current and order them.
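
For readers who like something concrete, here’s a rough sketch of how you could pull the issues earmarked for the current sprint straight from GitHub’s REST API, assuming they carry a “current” label. The repository and token are placeholders.

```python
# Illustrative sketch only: list the open issues earmarked for the current
# sprint, assuming they carry a "current" label. Repo and token are placeholders.
import requests

GITHUB_REPO = "example-org/example-repo"
GITHUB_TOKEN = "GITHUB_API_TOKEN"

resp = requests.get(
    f"https://api.github.com/repos/{GITHUB_REPO}/issues",
    headers={"Authorization": f"token {GITHUB_TOKEN}"},
    params={"labels": "current", "state": "open", "per_page": 20},
)
resp.raise_for_status()

for issue in resp.json():
    print(f"#{issue['number']}: {issue['title']}")
```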

How many current and potential users will benefit?

Requirements that will benefit a large group of users rise to the top. When defining the specifications for each need or bug fix, we try to do so with many potential use cases in mind. We want current customers to create new studies without the issue coming up again. Moreover, we want new customers never to realize the bug existed or the feature was ever missing.

We do consider growing and potential markets in this calculation. Researchers are always experimenting with new study designs. Likewise, the tech industry is exploding with new technologies. We make decisions to position ourselves today and for the future.

How soon will the issue be a blocker for any user in meeting their goals?

Sometimes we get a group of customers all at the same time. Sometimes we identify needs that won’t affect that group for several months. Issues that will be a blocker tomorrow take precedence.
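
Purely as an illustration of how those two factors, user reach and blocker urgency, can be weighed against each other, here’s a toy scoring sketch. The fields and weights are invented for this example; they are not our actual formula.

```python
# Purely illustrative: a toy priority score that rewards broad user impact
# and penalizes distant deadlines. Fields and weights are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    users_affected: int      # how many current/potential users benefit
    days_until_blocker: int  # how soon this blocks someone's study

def priority_score(issue: Issue) -> float:
    # Blockers due tomorrow score highest; impact scales with users affected.
    urgency = 1.0 / max(issue.days_until_blocker, 1)
    return issue.users_affected * urgency

backlog = [
    Issue("Randomization edge case", users_affected=5, days_until_blocker=2),
    Issue("Report export polish", users_affected=40, days_until_blocker=90),
]
for issue in sorted(backlog, key=priority_score, reverse=True):
    print(f"{priority_score(issue):6.2f}  {issue.title}")
```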

For a long time, ProofPilot tended to focus on issues related to designing a study. We put less focus on data presentation and analytics. Our feeling was that if it’s hard to create a study, and participants don’t like joining, then data and analytics are pointless.

What impact will it have on training and customer support?

There’s a lot that goes into this category. Will an update require us to revise all our security documentation? Will many studies need an IRB update? Will a fix or feature allow more users to leverage ProofPilot without training? Will it reduce customer service ticket time, or eliminate those tickets entirely?

There are four important tasks at the beginning of every study: the study intro page, eligibility, main consent, and arm assignments. In the first version of ProofPilot, we had to tell people to add these tasks. We added fill-in-the-blank placeholders as a default for every new study to clarify and reduce questions. Was this a request from anyone? No. Does ProofPilot work without it? Yes. However, it freed customer support time to deal with more unique issues.

Does it get us where we want to be?

Again, ProofPilot tends to be proactive. We ourselves identified more than ninety percent of the open issues and features we track. When we listen, we’re often considering how to address a broad problem in the industry. To do so, we’re often identifying whole sets of features few others are considering.
 
This combined approach allows us to pursue our broad goals. We learn from supporting our customers. And we maintain the scalability of a software-as-a-service infrastructure.
