How to set up a Monadic Test?


What is a Monadic Test?


In a monadic test, participants are presented with a single concept, and follow-up questions are asked to evaluate it -- likeability, likes/dislikes, ratings of specific attributes, etc. The monadic survey design is used when you need to expose only a single concept to a target audience.



How can I use Group Questions to design a Monadic survey?


For example: 

Suppose you have 2 concepts and you need to expose only 1 of them to each participant, with the concept chosen at random. For each concept, you have 4 questions (Show Concept, Appeal, Reason for Appeal, Text Highlighter), and you want to ask them in a fixed sequence.


Step 1: Select the sub-questions for the first concept, labeled as Group A, then select "Show in all order" in the Question Display Option. With this setting, the questions will be asked in this order: Concept A -> Appeal A -> Reason A -> Text Highlighter A.




Similarly, select the sub-questions for the second concept, labeled as Group B, and then select "Show in all order" again. In this example, the questions will be asked in this order: Concept B -> Appeal B -> Reason B -> Text Highlighter B.




Here is a sample video demonstrating Step 1:  





Step 2: Add a new Group question and then select the newly created Group A and Group B.





Step 3: This time, select "Randomize all and randomly pick some" and specify the number of randomly selected questions in the box; in this case, enter 1. In this last step, the system will randomly pick which concept to ask -- either Group A or Group B.
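To make the random pick concrete, here is a minimal sketch in Python (an illustration of the behavior only, not the platform's internals): with a pick count of 1, one of the two groups is sampled at random, and every question inside the chosen group is then shown in its fixed order.

```python
import random

# Illustration only (not the platform's internals): "Randomize all and
# randomly pick some" with a pick count of 1 amounts to sampling one
# group at random; the questions inside that group then run in order.
groups = {
    "Group A": ["Concept A", "Appeal A", "Reason A", "Text Highlighter A"],
    "Group B": ["Concept B", "Appeal B", "Reason B", "Text Highlighter B"],
}
[picked] = random.sample(sorted(groups), k=1)  # pick 1 of the 2 groups
for question in groups[picked]:
    print(question)  # shown in the "Show in all order" sequence
```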


 

Here is a sample video demonstrating Steps 2 and 3:





In summary, this is how the Group questions will show up in the Question Tree for the above example:



Please note:

  • If we design the Monadic Test this way, the data is collected in a separate set of questions for each stimulus, even though we essentially ask the same questions across stimuli. For example, we ask the same Appeal question for both stimuli, but the responses are recorded in two different questions: Appeal_1 (for stimulus 1) and Appeal_2 (for stimulus 2). As a result, comparing results between the two stimuli using Crosstab is not directly available on the inca dashboard as of now. It may require downloading the raw data, merging or restructuring the relevant data, and then comparing the results in SPSS, Excel, or another tabulation tool (see the sketch after this list).
  • A Group question cannot be used to set quotas. The system will make sure each stimulus is selected randomly, but it cannot ensure that any quota requirement is fulfilled.
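If you do need to compare the two stimuli, here is a minimal sketch of the merging step in Python with pandas. It assumes a raw export file named raw_export.csv containing columns Appeal_1 and Appeal_2 as in the example above; your actual file and column names may differ.

```python
import pandas as pd

# Sketch of restructuring the raw export for cross-stimulus comparison.
# File and column names are hypothetical; adjust to your actual export.
raw = pd.read_csv("raw_export.csv")

# Each respondent saw exactly one stimulus, so exactly one of the two
# Appeal columns is populated per row.
raw["Stimulus"] = raw["Appeal_1"].notna().map({True: "Stimulus 1", False: "Stimulus 2"})
raw["Appeal"] = raw["Appeal_1"].fillna(raw["Appeal_2"])

# A single crosstab now compares the two stimuli directly.
print(pd.crosstab(raw["Stimulus"], raw["Appeal"]))
```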




Based on the notes above, there are two alternative methods to design a Monadic Test. Both are a bit more complicated to set up, but they can overcome the constraints mentioned above to some extent.



Alternative Method 1 - Randomly Select Stimulus ONLY


In the method above, we put each stimulus together with all its relevant questions in one group and then randomly choose one of the groups. As a result, the questions for each stimulus are separated. If we instead include only the stimulus in the Group for random selection and create ONE set of questions after it, then all the data will be collected together for analysis and comparison purposes.


Let's use the same example as above to illustrate how to do it.


Step 1: Add a new Group question and then select Concept A and Concept B, the two multimedia questions that show the stimuli.


Step 2: Select "Randomize all and randomly pick some" and specify the number of randomly selected questions in the box; in this case, enter 1. In this step, the system will randomly pick which concept to ask -- either Concept A or Concept B.




Step 3: Create a Virtual Question indicating which Concept has been randomly picked. This step is important, as it puts this information into the data for further analysis and can also support any other needs for logic and quotas (see the sketch after this list). More specifically,

  1. Create a Virtual Question named Selected Concept
  2. Add a variable for Concept A, along with the logic rule that the multimedia question Concept A IS DISPLAYED
  3. Add another variable for Concept B, along with the logic rule that the multimedia question Concept B IS DISPLAYED
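Conceptually, the Virtual Question derives one categorical variable from the two display rules. Here is a minimal sketch in Python, where the two boolean arguments are hypothetical stand-ins for the platform's IS DISPLAYED rules:

```python
# Sketch of the Virtual Question logic. The booleans are hypothetical
# stand-ins for the "Concept A/B IS DISPLAYED" rules set up above.
def selected_concept(concept_a_displayed: bool, concept_b_displayed: bool) -> str:
    if concept_a_displayed:
        return "Concept A"
    if concept_b_displayed:
        return "Concept B"
    raise ValueError("exactly one concept should have been displayed")

print(selected_concept(True, False))  # -> Concept A
```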




Step 4: Add all the relevant questions that are shared by the two concepts, including Appeal and Reason.


Step 5: Add concept-specific questions, which in this case is the Text Highlighter. As we need to include the stimulus in the Text Highlighter, we should create two Text Highlighter questions, one for each Concept. For each one, set up a pre-condition to show the question only when the relevant Concept is selected.




In Summary:

  • If we design the Monadic Test this way, data for the questions shared by the concepts is collected together, and we have created a Virtual Question ("Selected Concept" in this example) to differentiate the data by concept. This Virtual Question can be used in the Report Filter and Crosstab Header on the dashboard to compare the results. 
  • This Virtual Question can also be used for Quotas. E.g. if we need to collect n=100 responses for each concept, we can add this Virtual Question to the Audience page and set a quota of n=100 for each concept (please see more details about how to add a Quota here). However, please keep in mind that this may not be the most ideal way to control the quota: a participant will be terminated simply because the concept randomly picked for them is already quota full, and we have no control over that (the sketch after this list illustrates the waste).
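The sketch below simulates that waste (an illustration only, with made-up numbers): once one concept's quota fills, every respondent randomly assigned to it is terminated, which drags down the incidence rate.

```python
import random

# Illustration only: simulate quota control on a randomly picked concept.
random.seed(0)
quota = {"A": 100, "B": 100}
completed = {"A": 0, "B": 0}
terminated = 0

while any(completed[c] < quota[c] for c in quota):
    pick = random.choice(["A", "B"])  # concept picked at random
    if completed[pick] < quota[pick]:
        completed[pick] += 1          # counts toward the quota
    else:
        terminated += 1               # quota full: respondent wasted

print(f"completes={completed}, wasted terminates={terminated}")
```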



Alternative Method 2 - Use URL metadata


In Alternative Method 1, we create ONE set of questions across concepts for analysis and comparison purposes. However, as the concept is selected randomly each time, we have limited control if we want to target it for a certain quota. We can improve on this by using URL metadata instead.


Let's use the same example as above to illustrate how to do it. Before going into the details, please see more details about URL metadata here.



Step 1: Create a Virtual Question in which each option depends on the URL metadata (see the sketch after this list). To do this,  

  1. Create a Virtual Question named Selected Concept
  2. Add a variable for Concept A, along with the logic. More specifically, choose URL Metadata as the source question, specify CONCEPT as the URL key, and use the logic rule that CONCEPT Equal [text] A. The Variable Concept A will be TRUE (or auto selected) when we have "&CONCEPT=A" appended to the survey URL.
  3. Similarly, add a variable for Concept B, along with the logic. More specifically, choose URL Metadata as the source question, specify CONCEPT as the URL key, and use the logic rule that CONCEPT Equal [text] B. The Variable Concept B will be TRUE (or auto selected) when we have "&CONCEPT=B" appended to the survey URL.
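Conceptually, the Virtual Question reads the CONCEPT key from the survey URL's query string. Here is a minimal sketch in Python of that mapping (an illustration only, not the platform's implementation):

```python
from urllib.parse import urlparse, parse_qs

# Sketch of how the CONCEPT key on the survey URL maps to the Virtual
# Question variables above. Illustration only, not platform code.
def selected_concept(survey_url: str) -> str:
    concept = parse_qs(urlparse(survey_url).query).get("CONCEPT", [""])[0]
    if concept == "A":
        return "Concept A"  # matches the rule CONCEPT Equal [text] A
    if concept == "B":
        return "Concept B"  # matches the rule CONCEPT Equal [text] B
    raise ValueError("no valid CONCEPT key found on the survey URL")

print(selected_concept("https://demo.nexxt.in/p/2362?src=DYNATA&CONCEPT=A"))
```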



Step 2: Create two multimedia questions to show Concept A and Concept B, each with a pre-condition to show the question only when the relevant Concept is selected in the Virtual Question from Step 1.



Step 3: Add all the relevant questions that are shared by the two concepts, including Appeal and Reason.


Step 4: Add concept-specific questions, which in this case is the Text Highlighter. As we need to include the stimulus in the Text Highlighter, we should create two Text Highlighter questions, one for each Concept. For each one, set up a pre-condition to show the question only when the relevant Concept is selected.



Step 5: After you have launched the study, please remember to append the URL metadata to the survey link(s) you share with the panel so that each link indicates which concept to target.

E.g. if the survey link you get for the study is https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID], then the link for each Concept should be as follows, where the appended CONCEPT parameter is the only change:

  • https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=A
  • https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=B
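If you are generating the links programmatically, the following minimal sketch builds them by appending the CONCEPT key to the base link; [ID] is the panel's placeholder and should be left untouched for the panel to substitute:

```python
# Build the concept-specific links from the base survey link.
# [ID] is the panel's placeholder and must be kept verbatim.
base_link = "https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]"
links = {concept: f"{base_link}&CONCEPT={concept}" for concept in ("A", "B")}
for concept, link in links.items():
    print(f"Concept {concept}: {link}")
```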



In Summary:

  • If we design the Monadic Test this way, data for the questions shared by the concepts is collected together, and we have created a Virtual Question ("Selected Concept" in this example) to differentiate the data by concept. This Virtual Question can be used in the Report Filter and Crosstab Header on the dashboard to compare the results.
  • This Virtual Question can also be used for Quotas. E.g. if we need to collect n=100 responses for each concept, we can add this Virtual Question to the Audience page and set a quota of n=100 for each concept (please see more details about how to add a Quota here). Also, since we can target the audience with different links carrying different URL metadata, we have a bit more control here: we can stop pushing sample to a link once its quota is met, which can help improve the survey Incidence Rate.
