Following analysis of more than 1,000 tests conducted in 2021 and 2022 by retailers and many of the world’s up-and-coming footwear and apparel brands that engaged more than 13,000 product testers, MESH01 provides this State of Product Testing report, focused on footwear and apparel.
This report uses the analysis of these more than 1,000 tests to detail how the world’s best retail footwear and apparel brands are testing their products, and how these findings can demystify product testing for brands that need to refine the practice or have not yet integrated it into their product development process.
While some brands are conducting as few as 10 tests per year, others are running more than 1,000 product tests annually using the MESH01 product testing platform.
Brands are running from 10 to more than 1,000 product tests annually on MESH01.
The number of product tests conducted annually depends on a brand’s level of need and the size of its overall assortments and product lines. Some brands are testing thousands of products annually, while brands testing in the low double digits are typically small, up-and-coming, or producing materials or ingredients for other brands’ products.
While products from fragrances to automotive components, home goods to consumer electronics are tested on MESH01, the insights in this report are focused on footwear and apparel brands and retailers using our platform.
Brands test everything from fragrances and home goods to consumer electronics and automotive components on MESH01.
Activewear and sports footwear made up most of the tested products, followed by casual wear and footwear, then work apparel and footwear.
In 2021, 58% of product testing occurred during the spring and summer months (April through September). While more products were tested during these seasons, most brands tested products during more than 10 months of the calendar year, despite the seasonality of product development processes.
These brands represent a mix of footwear and apparel, illustrating the variability in testing programs from brand to brand despite similarities in assortment sizes and/or revenues.
60% of testing occurred in the spring and summer months, while 40% of testing happened in the fall and winter.
The graph below depicts testing seasonality for a subset of analyzed brands. As shown, some months are particularly busy testing months, which can be driven by factors such as testing conditions or sample availability.
Product testing seasonality. Visual based on subset of analyzed brands.
The average product test on the MESH01 platform is about 33 days.
The average product test is about 33 days long.
While 3-6 weeks is the most common product testing duration, shorter and longer timelines support widely varying test objectives, from quick feedback sessions for iterative prototype development to longer-term product testing that validates durability performance.
Product testing average length of time in weeks.
The average product test engages more than 12 product testers. This roster size can vary dramatically depending on the brand, the types of products being tested, and the testing objectives and design.
For example, brands and retailers like L.L.Bean produce thousands of styles per year, including apparel and home goods, while brands like Jetboil manufacture a small assortment of products focused on backpacking stoves. For these reasons, some brands may run product tests with only a single tester, while other brands use hundreds of testers.
Brands engage anywhere from 1 to 200+ testers per product test, with an average of 12+ testers per test.
One of the biggest challenges brands face when generating actionable product testing feedback within the product development cycle is access to reliable, unbiased testers.
For this reason, the integrated product tester community on MESH01 makes up more than 85% of testers engaged by brands for pre-market product testing. The remaining 15% are brought in by brands and often include colleagues, brand ambassadors, and even customers. However, most brands are cautious when including customers from their own customer files as they look to avoid an “echo chamber” that notoriously produces positive feedback but misses critical opportunities for pre-market product improvement.
More than 85% of product testers brands use are from the MESH01 tester community.
While tester criteria will have a significant impact on incidence rate, brands initially invite around three times as many testers as they need to ensure they have enough of the right testers to select from for their final “rosters.”
Since tester criteria are typically addressed during the invitation and selection process, this over-recruiting is often done to ensure that there are enough testers in each product size (or other variable) available to test.
Stringent criteria such as detailed demographics, buying behavior, specific activity participation levels, and more – all available on MESH01 – are handled through an established process that uses tester profile surveys, activity surveys, and screening surveys (used in about 40% of tests) to ensure brands are talking to the right people.
Brands invite 3X as many testers as they need to be sure they have enough to select from.
An analysis across 30 select, representative brands shows the tested product itself is by far the most common form of test “compensation.” While gift cards and promo codes are the next most common forms of tester compensation, the opportunity to simply keep the tested product is more popular by a ratio of almost 6-to-1.
Additional tester compensation types include combinations of keeping the tested product plus gift cards or promo codes, as well as receiving a future product after returning the tested product. However, these other forms of tester compensation make up less than 5% of all compensation types from more than 1,000 recent product tests analyzed.
Simply keeping the tested product is the most common tester compensation, by about 6-to-1.
Just as different test designs and objectives drive variations in test duration and roster sizes, the number of surveys included in a test is variable as well. However, more than 60% of product tests use 2 or more surveys per test. This number is driven by two very common types of test designs run on MESH01.
Tests requiring more than 4 surveys are rare, though some tests include upwards of 8 surveys; these longer tests average more than 80 days in duration.
Over 60% of tests use 2 or more surveys per test.
The average product test on MESH01 uses more than three feedback channels, meaning at least two feedback channels are typically used in addition to product surveys.
While 99% of product tests use the survey feedback channel, other channels – including tester media, heat-mapping, and logs – are also popular, with each used on more than half of all tests.
More than 3 feedback channels are used per product test.
An analysis of more than 1,000 surveys revealed that surveys average around 16 questions in length and 60% of surveys had 15 or fewer questions. The most common survey length is about 10 to 15 questions, with the next most common being slightly shorter or slightly longer.
The longest survey fielded on MESH01 in the last 12 months was more than 50 questions long, and was the only survey of this length. It’s clear that most brands observe best survey practices and look to minimize tester fatigue through concise, targeted surveys.
Lastly, because many tests include more than one survey, brands can break up their questions according to the testing stage, which improves engagement and question relevance.
10-15 questions per survey is most common.
Average length of surveys: this chart shows the number of surveys administered on the vertical axis with the number of questions per survey on the horizontal. Outliers beyond 30 questions were omitted.
"*" indicates required fields