Document type: Scientific congresses, symposiums and conference proceedings : Poster
Discipline: Social & behavioral sciences, psychology : Multidisciplinary, general & others
Permalink: http://hdl.handle.net/10993/19854
Title: Higher response rates at the expense of validity? Consequences of the implementation of the ‘forced response’ option within online surveys
Language: English
Authors: Decieux, Jean Philippe Pierre; Mergener, Alexandra; Neufang, Kristina; Sischka, Philipp
Publication year: 2015
Peer reviewed: Yes
Audience: International
Event name: General Online Research 2015
Event date: 18-20 March 2015
Event organizer: DGOF
Event place: Cologne University of Applied Sciences, Germany
Keywords: [en] Experiments; Measurement; Nonresponse; Validity; Survey Methodology; Online-Survey; Forced-Answering; Split-Ballot-Experiments
Abstract: [en] Due to their low cost and their ability to reach thousands of people in a short amount of time, online surveys have become well established as a source of data for research. As a result, many non-professionals gather their data through online questionnaires, which are often of low quality because they have been operationalised poorly (Jacob/Heinz/Décieux 2013; Schnell/Hill/Esser 2011).

A popular example of this is the ‘forced response’ option, whose impact will be analysed in this research project. The ‘forced response’ option is commonly described as a way to force the respondent to give an answer to every question asked. In most online survey software it can be activated simply by ticking a checkbox.

Relevance: The use of this option has increased tremendously; however, inquirers are often not aware of its possible consequences. Software manuals praise the option as a strategy that significantly reduces item non-response. In contrast, research studies raise considerable doubts about this strategy (Kaczmirek 2005; Peytchev/Crawford 2005; Dillman/Smyth/Christian 2009; Schnell/Hill/Esser 2011; Jacob/Heinz/Décieux 2013). These doubts rest on the assumption that respondents typically have plausible reasons for not answering a question, such as not understanding the question, the absence of an appropriate category, or personal reasons (e.g. privacy).

Research question: Our thesis is that forcing respondents to select an answer may lead to two outcomes:
- increasing unit non-response (higher dropout rates)
- decreasing validity of the answers (lying or random answering)

Methods and data: To analyse the consequences of implementing the ‘forced response’ option, we use split-ballot field experiments. Our analysis focuses in particular on dropout rates and response behaviour. The first split-ballot experiment was carried out in July 2014 (n = 1056), and a second experiment is planned for February 2015, so that we will be able to present our results on the basis of strong evidence.

First results: If respondents are forced to answer each question, they
- cancel the survey earlier, and
- choose the response category “No” more often (for sensitive issues).
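For illustration only (not part of the original record): the abstract describes comparing dropout rates between the two split-ballot conditions, and a minimal sketch of such a comparison is a chi-squared test on a 2x2 table of completed versus abandoned interviews per condition. The counts below are invented and the choice of test is an assumption; the poster does not specify how the authors analysed their data.

    # Illustration only: this analysis is NOT taken from the poster.
    # It compares dropout between a 'forced response' condition and a
    # control condition using a chi-squared test on a 2x2 table.
    from scipy.stats import chi2_contingency

    # Hypothetical counts, invented for illustration:
    # rows = condition, columns = (completed, dropped out)
    table = [
        [400, 130],  # forced-response condition (hypothetical)
        [460,  70],  # control condition without forcing (hypothetical)
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"dropout, forced condition:  {130 / 530:.1%}")
    print(f"dropout, control condition: {70 / 530:.1%}")
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")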
Target audience: Researchers; Professionals; Students; General public; Others