Discrimination panels are used in the CPG industry for routine assessments in product maintenance, quality control, and shelf life, as well as for product improvement and innovation initiatives. Discrimination programs work best when the project team has a deep understanding of (1) consumer-relevant criteria for test design (often based on historical knowledge of products and current business imperatives) and (2) panel capability and overall performance.
In this blog post we present strategies for monitoring performance of discrimination panels that participate in a variety of test types – overall- and attribute-specific tests, difference tests, and similarity tests. The focus is on techniques for examining performance in forced-choice discrimination tasks (triangle, tetrad, 2-AFC, etc.) over a period of time (monthly, quarterly, etc.). Other methods, more frequent monitoring, and formal validation tests are outside the scope of this post.
Implementing both panel- and panelist-performance monitoring strategies leads to high-performing panels that provide sensitive, powerful, and reliable test results.
Panelist performance monitoring can be applied to:
Frequent data quality checks
Establishment of performance criteria
Deeper understanding of panel capability
Bringing panel back after a hiatus (e.g., post-COVID)
Panel performance monitoring is only one piece of a Panel Quality Maintenance Program, which may also include:
Regular re-validation studies
Training, orientation, and practice
Community building and fun
The content of this post is based on research originally presented at Sensometrics 2020.
Katie Osdoba is the Director of Client Services – Technical Development. She has been with Sensory Spectrum since completing a PhD at the University of Minnesota in 2015. Katie will talk about discrimination testing for hours if you let her.