The 6-step product pilot launch plan checklist

Setting up a successful product pilot plan

Have you ever realized that you need to test some live capabilities before investing more time and effort in the product lifecycle? Running a pilot allows product teams to test capabilities on a small scale without a large investment, while still confirming those capabilities are viable. However, without a clear plan, you could walk away with inaccurate conclusions. Before you set out to pilot a product, consider using the following checklist to plan and launch products successfully.

The pilot launch checklist

Step 1: Set clear goals.

The goals and objectives for a pilot are different from the goals set for the product itself. Pilot goals should be specific to what you want to learn or accomplish during the test. A goal may be to increase adoption by 50 percent: you run the pilot for a set period of time to determine whether adoption increases and by how much. Another goal might be to determine whether a feature meets the needs of your customers. You then pilot the feature and use analytics or seek feedback on adoption and usability to determine whether needs are met or how the feature can be improved.
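The adoption goal above comes down to simple arithmetic. As an illustration only (the function name and the baseline/pilot numbers are hypothetical, not from the checklist), a sketch of that check:

```python
def adoption_increase_pct(baseline_active, pilot_active):
    """Percent change in active users between the baseline period and the pilot.
    Hypothetical helper for illustration only."""
    return (pilot_active - baseline_active) / baseline_active * 100

# Example: 40 active users before the pilot, 64 during it.
increase = adoption_increase_pct(40, 64)
print(f"Adoption increased {increase:.0f}%")  # 60% -- clears a 50 percent goal
goal_met = increase >= 50
```

The point is not the code itself but the discipline: decide the baseline, the measurement window, and the threshold before the pilot starts, so the result is a yes/no answer rather than a debate.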

Step 2: Define scope.

Outline the use cases you want to test to ensure those facilitating the pilot and stakeholders have a shared understanding of what is in and out of scope. For example, pick a few features or workflows you’ve built and want to test (not the entire system). Having parties aligned helps avoid scope creep or testing unrelated capabilities. Be specific about scope, tying actions back to the pilot goals, and define how many participants you should include and of what type.

Step 3: Establish a realistic timeline.

Set a fixed duration for the pilot. Consider the use cases and, when applicable, onboarding or setup time before determining your timeline. A pilot could run anywhere from a few days to a few weeks, depending on your upfront assumptions. For example, if you expect daily usage during the test, a week or two is plenty, but consider a longer duration if you only expect usage one or two days per week. When running any pilot for longer than a few weeks, set key checkpoints to determine whether the pilot should continue or a decision to pivot needs to be made.

Step 4: Define the target audience.

When testing a few capabilities, you need enough users exercising the system on a regular basis. Furthermore, having different user types provides more well-rounded results. For example, a system admin and a few associates with different responsibilities and roles would make an ideal testing pool. While the number of users varies, a best practice is to involve five to eight percent of your expected user base at the start, as you’ll likely see users drop off throughout the pilot. Target at least three to five percent using the system regularly (which accounts for the drop-off) to gather enough data. When the user group includes different roles, there is a better chance of obtaining usable insights for deciding how to proceed.
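The sizing rule of thumb above reduces to a quick calculation. A minimal sketch (the function name and the example user base of 1,000 are assumptions; only the percentages come from the text):

```python
def pilot_cohort_sizes(expected_user_base):
    """Apply the rule of thumb: recruit 5-8% of the expected user base,
    and target 3-5% using the system regularly after drop-off."""
    return {
        "recruit_low": round(expected_user_base * 0.05),
        "recruit_high": round(expected_user_base * 0.08),
        "regular_low": round(expected_user_base * 0.03),
        "regular_high": round(expected_user_base * 0.05),
    }

# For an expected user base of 1,000 users:
sizes = pilot_cohort_sizes(1000)
print(sizes)  # recruit 50-80 users; aim for 30-50 regular users
```

Running the numbers up front also tells you whether the pilot is feasible at all: if three percent of your expected user base is fewer than a handful of people, you may need to recruit a larger share to get meaningful data.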

Step 5: Measure success.

Consistent measurement of progress, including successes and failures, is critical for determining how or when to move beyond the pilot phase. Obtain both qualitative and quantitative feedback to help measure success. Forms of qualitative feedback could be a survey, individual feedback via interview, or a focus group. Quantitative feedback could come from analytics, where you derive insights from data captured behind the scenes. It is important to have both qualitative and quantitative data: having one and not the other paints an incomplete picture, reveals only half the story, and may lead to inaccurate conclusions.

Step 6: Create a clear communication plan.

Stakeholders, participants, and those running the pilot are all invested in the process and results. For each group, determine how and when you plan to communicate, including frequency during the pilot and the level of information you intend to share. For example, email stakeholders every other day with status updates outlining actions from the previous two days, or schedule a daily call with those running the pilot to discuss what each person intends to address and potential blockers. Once the communication plan is outlined, make sure each group is aware of and bought into it. Set expectations at the start of the pilot to avoid miscommunication, or a lack of communication, during testing. After data is collected against the goals and success metrics you set, it is critical to communicate the results to key stakeholders.

Crafting informed product plans

At the end of the pilot, you should have enough data to determine whether to pivot or continue on your current path. Either way, with a strong sense of what to do next and why, use the pilot data as a stepping stone to tee up discussions and rationale for additional funding or investment.

While the steps outlined do not guarantee success or hitting the desired outcome, having clear goals, defined scope, timeline, target audience, success metrics, and a communication plan will help reduce ambiguity throughout the execution. The outcome may ultimately lead you to keep moving forward in the direction you originally set, to pivot, or to stop investing in the capabilities altogether.
