In this article, we will illustrate how to design a monadic study (with two concepts) that ends with a Direct Comparison or Preference question after the monadic questions.
The method is similar to designing a monadic study using Alternative Method 2 - Use URL Metadata (see article here). The only difference is how we control which concept is shown as the 2nd concept for comparison, using pre-conditions based on the URL metadata.
Here are some guidelines to follow when programming your study in this setup.
Step 1: After the Greeting Question, create a Virtual Question that will allow you to select the two Concepts as options using URL Metadata:
- Create a Virtual Question, for example "Concept Cell".
- Add a variable for Concept A, along with the logic. More specifically, choose URL Metadata as the source question, specify CONCEPT as the URL key, and use the logic rule that CONCEPT Equal [text] A. The Variable Concept A will be TRUE (or auto-selected) when we have "&CONCEPT=A" appended to the survey URL.
- Similarly, add a variable for Concept B, along with the logic. More specifically, choose URL Metadata as the source question, specify CONCEPT as the URL key, and use the logic rule that CONCEPT Equal [text] B. The Variable Concept B will be TRUE (or auto-selected) when we have "&CONCEPT=B" appended to the survey URL.
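For clarity, here is a minimal sketch of how the CONCEPT URL key maps a respondent to one of the two variables defined above. This is ordinary Python written for illustration only, not platform configuration, and the function name selected_concept is hypothetical:

```python
# Illustrative only: mirrors the Virtual Question logic; this is not how the platform runs.
from urllib.parse import urlparse, parse_qs

def selected_concept(survey_url):
    """Return which variable would be auto-selected, based on the CONCEPT URL key."""
    params = parse_qs(urlparse(survey_url).query)
    concept = params.get("CONCEPT", [None])[0]
    if concept == "A":
        return "Concept A"   # rule: CONCEPT Equal [text] A
    if concept == "B":
        return "Concept B"   # rule: CONCEPT Equal [text] B
    return None              # no CONCEPT key appended: neither variable is selected

# "[ID]" is the panel's placeholder from the survey link, kept here as a literal string.
print(selected_concept("https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=A"))
# -> Concept A
```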
Step 2: Create two multimedia questions to show Concept A and Concept B as the FIRST ad seen or exposed, with the pre-condition to show the question only when the relevant Concept is selected in the Virtual Question from Step 1.
Step 3: Add all the relevant questions that are shared by the two concepts (usually these are the close-ended metrics).
Step 4: Add concept-specific questions and set up the pre-condition to show the question only when the relevant Concept is selected in the Virtual Question from Step 1.
Note: For open-ended questions, we suggest creating them as concept-specific questions (i.e., one question for each concept) so that the results for each Concept can be viewed separately in the Dashboard.
Step 5: After all the monadic questions have been asked, create two more multimedia questions to show Concept A and Concept B as the SECOND ad seen or exposed. To control which concept is shown as the 2nd concept, the pre-conditions for each question should be switched from those in Step 2.
For those who will see Concept A as the 2nd concept, the pre-condition should select respondents who were exposed to Concept B first (i.e., those auto-selected for Concept B in the Virtual Question from Step 1).
For those who will see Concept B as the 2nd concept, the pre-condition should select respondents who were exposed to Concept A first (i.e., those auto-selected for Concept A in the Virtual Question from Step 1).
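To make the routing in Steps 2 and 5 concrete, here is a minimal sketch (illustrative Python, not platform configuration) of which concept each respondent qualifies to see: the first exposure follows the Virtual Question selection, and the second exposure uses the switched pre-conditions:

```python
# Illustrative only: which concept a respondent sees first and second,
# given the variable auto-selected in the Virtual Question from Step 1.
def first_exposure(selected):
    # Step 2: show the concept that matches the Virtual Question selection
    return "A" if selected == "Concept A" else "B"

def second_exposure(selected):
    # Step 5: pre-conditions are switched, so the other concept is shown
    return "B" if selected == "Concept A" else "A"

for cell in ("Concept A", "Concept B"):
    print(cell, "-> 1st:", first_exposure(cell), "| 2nd:", second_exposure(cell))
# Concept A -> 1st: A | 2nd: B
# Concept B -> 1st: B | 2nd: A
```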
Step 6: Add the Direct Comparison or Preference question and all the relevant follow-up questions for each preferred concept.
Step 7: After you have launched the study, please remember to append the URL metadata to the survey link(s) you share with the panel, to indicate which concept each link should target.
E.g. If the survey link you get for the study is https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID], then the link for each Concept should be as follows, with the CONCEPT metadata appended at the end of each link:
- https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=A
- https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=B
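If you prepare several such links, they can also be generated programmatically. The snippet below is a simple sketch only; the [ID] placeholder is left untouched for the panel to fill in:

```python
# Illustrative only: append the CONCEPT URL metadata to the base survey link (Step 7).
base_link = "https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]"

concept_links = {concept: f"{base_link}&CONCEPT={concept}" for concept in ("A", "B")}
for concept, link in concept_links.items():
    print(f"Concept {concept}: {link}")
# Concept A: https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=A
# Concept B: https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=B
```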
In Summary:
- If we design the Monadic Test this way, data for the questions shared by the two concepts is collected together, and the Virtual Question ("Concept Cell" in this example) lets us differentiate the data by concept. This Virtual Question can be used in the Report Filter and Crosstab Header on the Dashboard to compare the results.
- This Virtual Question can also be used for Quotas. E.g. If we need to collect n=100 responses for each concept, we can add this Virtual Question to the Audience page and set a quota of n=100 for each concept (please see more details about how to add a Quota here). Also, since we target the audience with different links carrying different URL metadata, we have more control: we can stop pushing sample to a link once its quota is met, which helps improve the survey Incidence Rate.
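As a rough illustration of the per-concept quota behaviour described above (a sketch only, assuming a quota of n=100 completes per concept; the platform handles this for you on the Audience page):

```python
# Illustrative only: a per-concept quota of n=100, as set on the Audience page.
QUOTA = 100
completes = {"A": 0, "B": 0}

def accept_complete(concept):
    """Count a complete against its concept cell; once the cell is full,
    stop pushing sample to that concept's link."""
    if completes[concept] >= QUOTA:
        return False  # quota met for this concept
    completes[concept] += 1
    return True
```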