When the special issue of Public Opinion Quarterly (POQ) we’re discussing hit my mailbox last month, I was pretty excited. Like many of us, I depend on POQ for updates and statements about where we are as a discipline. What excited me the most was the journal’s willingness, despite its firm foundation in scientifically sound practice, to embrace “speculation” right off the bat. It showed a willingness to look forward, rather than simply recap 75 years as a discipline through the eyes of POQ. Druckman and Mathiowetz state quite clearly in their editors’ introduction that technological evolution has changed us, and a revolution is afoot. There are a number of great pieces in this special issue, but not surprisingly, there was one that I was drawn to first: “The Future of Modes of Data Collection,” by Mick Couper.
Couper begins the article with a review of mode history, and while not exhaustive, it’s a pretty solid recap and I’d encourage anyone interested in the history of modes and mode effects to use it as a starting point for diving into the topic. As Couper highlights the literature on mode differences, he also points out that the term “mode” itself not only has many meanings, but may soon be outdated due to the complexities of the work that we now do. Even when it comes to “traditional” survey work, like RDD studies, which are now complex combinations of true random sample, list-assisted sample, and “known” cell sample, the concept of a single, stand-alone data collection mode is fading.
The discussion of this complexity highlights an important point. We as researchers need to remember that as we work with new technologies, we have to be willing to learn about their possible shortcomings and be prepared to deal with them if they arise. The more complex our approach to data collection becomes, the harder it is to randomly assign sample members to different modes in equal proportions, which means we cannot easily test for mode differences. Availability of and access to new technologies (take smartphones, for example) vary greatly in our rapidly shifting technological landscape. That said, this doesn’t mean that the sky is falling! It means that we have to think about how we’re going to bring all of our new approaches to data capture into the fold to contribute to the study of total survey error. To quote Couper, “Mixing modes is much like cooking – one can’t learn to combine ingredients until one understands the properties of the original ingredients.”
Without diving deeply into examples or cool experiments involving new technologies, Couper suggests that as we move forward, smartphones, interactive voice response (IVR), voice over Internet Protocol (VoIP), Skype, and various forms of social media will be at the heart of the enhancements made to data collection and capture. Couper closes by suggesting that while some modes may become obsolete in the very distant future, the focus of the discipline now should be on enhancing traditional modes and combating the potential bias that may come with those enhancements. This point resonates with many of us here at SurveyPost. Despite our genuine excitement for the work that we’re doing, we also believe strongly that a synergy between existing, scientifically proven methods and new technologies must exist for us to continue to provide accurate, representative, high-quality data. What about you? What do you think of the new technologies that are available to us as researchers? Are they on the verge of replacing our existing methods, or will they serve as an enhancement to what we already do? We’d love to hear your thoughts on this. Comment and let us know!