How Synthetic Respondents Shape Early-Stage Innovation
To illustrate how synthetic respondents can accelerate early-stage learning, here's an example from a recent project. Details have been adjusted for confidentiality, but the dynamics and takeaways are true to the original work.
Challenge
Early in the design process for a new coffee container, the team needed to efficiently evaluate a range of visual concepts and identify the most promising options. Traditional human research would have been slow and costly at this stage, which made synthetic respondents attractive; at the same time, because this was an early test of synthetic adoption, the team wanted to understand how synthetic feedback compared with real human feedback.
Solution
To address this, the team conducted parallel research tracks:
- Synthetic respondents were used to quickly evaluate preliminary design concepts, providing ratings on various attributes, as well as open-ended rationale for top and bottom choices.
- A small human sample from Bellomy’s SmartLab panel was fielded simultaneously to provide a benchmark for comparison and to highlight areas where synthetic and human responses aligned or diverged.
The study evaluated multiple visual concepts varying in color, transparency, and design. Both tracks collected ratings on appeal, durability, style, and other attributes; an overall preference ranking; and open-ended rationale for top and bottom choices.
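To make the parallel-track comparison concrete, here's a minimal sketch of how ratings from the two tracks might be tabulated side by side. The concepts, attributes, scales, and scores below are hypothetical stand-ins for illustration, not the study's actual data or analysis pipeline.

```python
# Illustrative sketch only: concepts, attributes, and scores are hypothetical.
import statistics

# Each track rates every concept on the same attributes (here, a 1-7 scale).
synthetic_ratings = {
    "concept_a": {"appeal": [6, 5, 6], "durability": [5, 5, 4], "style": [6, 6, 5]},
    "concept_b": {"appeal": [3, 4, 3], "durability": [4, 3, 4], "style": [3, 3, 4]},
}
human_ratings = {
    "concept_a": {"appeal": [6, 6, 5], "durability": [4, 5, 5], "style": [5, 6, 6]},
    "concept_b": {"appeal": [4, 3, 3], "durability": [3, 4, 3], "style": [4, 3, 3]},
}

def mean_scores(ratings):
    """Collapse raw ratings to a mean score per concept and attribute."""
    return {
        concept: {attr: statistics.mean(vals) for attr, vals in attrs.items()}
        for concept, attrs in ratings.items()
    }

# Side-by-side means make alignment and divergence easy to eyeball.
syn_means = mean_scores(synthetic_ratings)
hum_means = mean_scores(human_ratings)
for concept, attrs in syn_means.items():
    for attr, score in attrs.items():
        human_score = hum_means[concept][attr]
        print(f"{concept} / {attr}: synthetic={score:.1f}, human={human_score:.1f}")
```

Fielding both tracks against an identical instrument is what makes this kind of direct, cell-by-cell comparison possible later.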
Results
Insight 1: Synthetics Didn’t Mirror Humans, and That’s Fine
Synthetics and humans produced similarly structured results, with the same broad ordering of concepts, but not identical ones. That difference wasn't a flaw; it was useful.
- The same designs rose to the top for both groups, though the exact order shifted.
- The lowest performers aligned almost exactly, including identical ordering at the very bottom.
- Synthetics compressed differences among the top few but clearly identified weaker options.
This pattern allowed the team to eliminate non-viable concepts immediately. The goal wasn’t for synthetics to match human judgment; it was to use them intentionally as a fast, directional filter so human testing could focus on the strongest design spaces.
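As a rough illustration of that filtering logic, the sketch below checks rank agreement between the two tracks and eliminates concepts that both place at the bottom. The rankings, the choice of Spearman correlation, and the bottom-k cutoff are all assumptions made for this example, not the team's actual method.

```python
# Illustrative sketch only: rankings are hypothetical. Concepts are listed
# from most to least preferred by each track.
from scipy.stats import spearmanr

synthetic_order = ["c3", "c1", "c2", "c5", "c4", "c6"]
human_order = ["c1", "c3", "c2", "c5", "c4", "c6"]

# Convert preference orders to rank vectors over a shared concept list.
concepts = sorted(synthetic_order)
syn_ranks = [synthetic_order.index(c) for c in concepts]
hum_ranks = [human_order.index(c) for c in concepts]

rho, _ = spearmanr(syn_ranks, hum_ranks)
print(f"rank agreement (Spearman rho): {rho:.2f}")

# Directional filter: drop concepts both tracks place in the bottom k,
# so human testing can focus on the strongest design spaces.
k = 2
eliminated = set(synthetic_order[-k:]) & set(human_order[-k:])
print(f"eliminated concepts: {sorted(eliminated)}")
```

The point of the filter is intersection, not agreement: a concept is dropped only when both tracks independently rank it near the bottom, which tolerates the order-shuffling seen among the top performers.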
Insight 2: Differences Led to Insights (A “See-Through” Effect)
When we examined attribute ratings and open-ended feedback, a meaningful divergence emerged around transparency.
- Synthetics evaluated transparency as part of visual design.
- Humans applied lived experience, quickly pointing out the convenience of seeing when the container was empty.
- This functional, annoyance-reducing benefit did not register for synthetics.
This gap surfaced a human-centered advantage that might otherwise have been overlooked. It also guided refinements to the synthetic modeling, clarifying when lived experience matters more than aesthetics alone.
Insight 3: So Much Learning, So Little Time
The speed advantage was significant.
- Synthetic data was available within hours.
- Human responses arrived within a day, allowing for near-real-time comparison.
What typically would require weeks happened in a matter of days — without sacrificing human context or decision confidence.
Summary
This example highlights how synthetics and humans complement each other when used intentionally: synthetics narrow the field quickly, humans illuminate experience-driven nuances, and the comparison between them surfaces insights neither could provide alone.
Curious about this approach?
Schedule a consultation with our experts to learn more about integrating synthetic and human insights into your research process.