Decieux, Jean Philippe Pierre
Scientific Conference (2017)

Abstract: Recent studies have shown that the use of the forced answering (FA) option in online surveys results in reduced data quality. This response behavior has often been interpreted as a psychological reactance response. However, no previous study has examined the psychological mechanism behind the effects of FA on dropout and data quality. Using online survey experiments with forced and non-forced answering instructions, our study offers statistical evidence for the often-proposed reactance effect on response behavior.

Relevance: Recent studies have shown that the use of the forced answering (FA) option in online surveys results in reduced data quality. In particular, they found that forcing respondents to answer questions in order to proceed through the questionnaire leads to higher dropout rates and lower answer quality. However, no study has examined the psychological mechanism behind these effects. This response behavior has often been interpreted as a psychological reactance response. Psychological Reactance Theory (PRT) predicts that reactance arises when an individual's freedom is threatened and cannot be directly restored; reactance describes the motivation to restore this lost freedom. Respondents may experience FA as a loss of freedom, because they are denied the choice to leave a question unanswered. According to PRT, possible reactions in this situation are to quit survey participation, to fake answers, or to show satisficing tendencies.

Research content: This study explores the psychological mechanism that affects response behavior under the FA condition (compared with a non-FA condition).
Our major hypothesis is that forcing respondents to answer will cause reactance, which in turn increases dropout rates, reduces answer quality, and promotes satisficing behavior.

Methods and Data: We used online survey experiments with forced and non-forced answering instructions. Reactance was measured with a four-item reactance scale. To determine answer quality, we used self-reports of faking as well as an analysis of answers to open-ended questions.

Results: Zero-order effects showed that FA increased state reactance and questionnaire dropout, and reduced answer length in open-ended questions. Mediation analysis supported the hypothesis of reactance as the underlying psychological mechanism behind the negative FA effects on data quality.

Added Value: This is the first study that offers statistical evidence for the often-proposed reactance effect on response behavior. It provides a basis for a deeper psychological reflection on the use of the FA option.

Sischka, Philipp
in In-Mind Magazin - Psychologie für Alle (2016), (4)

Health campaigns (e.g., anti-drug, anti-smoking, or condom-use campaigns) aim to motivate people toward attitudes and behaviors that are health-promoting or preventive. For example, anti-drug campaigns try to draw attention to the dangers and consequences of drug use. Anti-smoking campaigns, by contrast, focus on raising awareness of the harmful health effects of smoking and try to devalue the positive image of smoking built up by cigarette companies.
Such campaigns risk missing their goal of changing their addressees' attitudes and behavior if their central message is perceived as too intrusive by its recipients. In social psychology, this phenomenon is often explained by the theory of psychological reactance.

Sischka, Philipp
Presentation (2016, March 03)

Too often, online surveys are conducted without adequate attention to implementation details. One example is the frequent use of the forced answering option, which forces the respondent to answer each question in order to proceed through the questionnaire. The avoidance of missing data is often the motivation for using this option. Its use has increased tremendously; however, those conducting surveys are often not aware of the possible consequences. Currently, only a few studies have researched the impact of forced answering on quality parameters (e.g., dropout, item nonresponse), with inconsistent results, and to date no study has systematically examined the effects of forced answering formats on answer quality. Given the rise in the popularity of online surveys in general and the frequent use of the forced answering option in particular, the impact of forced answering on data quality needs to be addressed. Our study assesses the consequences of implementing the forced answering option for dropout as well as answer quality. Our major hypothesis is that forcing respondents to answer will cause reactance, which in turn will decrease answer quality and increase dropout rates.
To analyse the consequences of the forced answering option for response behaviour, we use split-ballot field experiments. We have already conducted two studies (n=1056 and n=615) with differing experimental conditions, and a third is ongoing. To determine answer quality, we use instructed response items, self-reports of faking, and other self-reports. Our results show a significant increase in dropout and a higher percentage of fakers under the forced answering condition. Both can be interpreted as reactance behaviour arising from being forced to answer each question in this condition. No previous study has systematically examined the effects of forced answering formats on answer quality; our paper addresses this issue.

Decieux, Jean Philippe Pierre
Scientific Conference (2015, August 28)

Due to their low costs and the ability to reach many people in a short time, online surveys have become an important source of data for research. As a result, many non-professionals gather their data through online questionnaires, which are often of low quality or poorly operationalised. A popular example is the 'forced response' option, whose impact is analysed within this research project. The forced response option is commonly described as a way to force the respondent to give an answer to each question asked. In most online survey software, it is enabled with a simple checkbox. The use of this option has increased tremendously; however, survey authors are often not aware of the possible consequences. In software manuals, this option is praised as a strategy that reduces item nonresponse.
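The forced answering mechanism these abstracts describe amounts to a simple validation rule: a questionnaire page cannot be submitted while a required question is unanswered. A minimal sketch of that rule, as hypothetical illustration code rather than the behavior of any particular survey package:

```python
# Minimal sketch of a forced-answering validation rule (hypothetical;
# real survey software typically exposes this as a per-question checkbox).

def page_can_proceed(answers, force_answering):
    """Return True if the respondent may move to the next page.

    answers: dict mapping question id -> response (None means unanswered).
    force_answering: if True, every question must have a response.
    """
    if not force_answering:
        return True  # non-FA condition: item nonresponse is allowed
    return all(response is not None for response in answers.values())

# Non-FA condition: an unanswered item does not block progress.
print(page_can_proceed({"q1": "yes", "q2": None}, force_answering=False))  # True
# FA condition: the same unanswered item blocks the respondent, who must
# then either answer (truthfully or not) or abandon the questionnaire.
print(page_can_proceed({"q1": "yes", "q2": None}, force_answering=True))   # False
```

The second call illustrates the choice the cited studies attribute to reactance: under forced answering, the only ways out of the blocked state are a (possibly faked) answer or dropout.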
In contrast, several authors raise doubts about this strategy, based on the assumption that respondents typically have plausible reasons for not answering a question (not understanding it, absence of appropriate categories, privacy). Our thesis is that forcing respondents to select an answer might cause two outcomes: increasing unit nonresponse/dropout rates, and decreasing validity of the answers (lying or random answers). To analyse the consequences of implementing the forced response option, we use split-ballot field experiments, focusing especially on dropout rates and response behaviour. Our first split-ballot experiment was carried out last July (n=1056) and we plan a second experiment for March, so that we will be able to present our results based on strong data evidence.

Decieux, Jean Philippe Pierre
Poster (2015)

Due to the low cost and the ability to reach thousands of people in a short amount of time, online surveys have become well established as a source of data for research. As a result, many non-professionals gather their data through online questionnaires, which are often of low quality due to having been operationalised poorly (Jacob/Heinz/Décieux 2013; Schnell/Hill/Esser 2011). A popular example is the 'forced response' option, whose impact is analysed within this research project. The 'forced response' option is commonly described as a way to force the respondent to give an answer to each question asked. In most online survey software, it is enabled with a simple checkbox.

Relevance: The use of this option has increased tremendously; however, survey authors are often not aware of the possible consequences.
In software manuals, this option is praised as a strategy that significantly reduces item nonresponse. In contrast, research studies raise many doubts about this strategy (Kaczmirek 2005; Peytchev/Crawford 2005; Dillman/Smyth/Christian 2009; Schnell/Hill/Esser 2011; Jacob/Heinz/Décieux 2013). They are based on the assumption that respondents typically have plausible reasons for not answering a question (such as not understanding the question, the absence of an appropriate category, or personal reasons, e.g. privacy).

Research Question: Our thesis is that forcing respondents to select an answer might cause two outcomes: increasing unit nonresponse (higher dropout rates) and decreasing validity of the answers (lying or random answers).

Methods and Data: To analyse the consequences of implementing the 'forced response' option, we use split-ballot field experiments. Our analysis focuses especially on dropout rates and response behaviour. Our first split-ballot experiment was carried out in July 2014 (n=1056) and a second is planned for February 2015, so that we will be able to present our results based on strong data evidence.

First results: If respondents are forced to answer each question, they cancel the survey earlier and more often choose the response category "No" for sensitive issues.

Decieux, Jean Philippe Pierre; Sischka, Philipp
in Psihologija (2015), 48(4), 311-326

Online surveys have become a popular method of data gathering for many reasons, including low costs and the ability to collect data rapidly. However, online data collection is often conducted without adequate attention to implementation details.
One example is the frequent use of the forced answering option, which forces the respondent to answer each question in order to proceed through the questionnaire. The avoidance of missing data is often the motivation for using this option. However, we suggest that the costs of a reactance effect, in terms of quality reduction and unit nonresponse, may be high, because respondents typically have plausible reasons for not answering questions. The objective of the study reported in this paper was to test the influence of forced answering on dropout rates and data quality. The results show that requiring participants to answer every question increases dropout rates and decreases the quality of answers. Our findings suggest that the desire for a complete data set has to be balanced against the consequences of reduced data quality.
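The dropout comparison at the core of these split-ballot experiments can be expressed as a two-proportion z-test on dropout rates in the forced and non-forced conditions. A minimal sketch with invented counts, not the studies' actual data:

```python
# Two-proportion z-test for dropout rates in a split-ballot experiment.
# The counts below are invented for illustration only.
from math import sqrt
from statistics import NormalDist

def dropout_z_test(drop_fa, n_fa, drop_nfa, n_nfa):
    """z-test of H0: equal dropout probability in both conditions."""
    p_fa, p_nfa = drop_fa / n_fa, drop_nfa / n_nfa
    p_pool = (drop_fa + drop_nfa) / (n_fa + n_nfa)             # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_fa + 1 / n_nfa))  # pooled standard error
    z = (p_fa - p_nfa) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided p-value
    return z, p_value

# Hypothetical split: 90/500 dropouts under FA vs. 55/500 without FA.
z, p = dropout_z_test(drop_fa=90, n_fa=500, drop_nfa=55, n_nfa=500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up counts the FA condition's higher dropout rate (18% vs. 11%) yields a significant z statistic, mirroring the direction of the effect the papers report.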