Cutting Edge Methods for Research – European Master Classes in Survey Methodology

 

SurveyPost is pleased to share the news that RTI International has partnered with NatCen Learning in the U.K. to develop a series of master classes in survey methodology that will run in cities across Europe. These one-day classes are aimed at more experienced survey practitioners and commissioners of survey research. Attendees will have the opportunity to spend the day with a world-renowned expert in survey methodology, and the classes are less about one-way learning and more about sharing and discussing experiences.

Our first master class will take place on April 1st 2014 in London and will be taught by Dr. Paul Biemer, RTI Distinguished Fellow and the architect of the Total Survey Error paradigm.

This one-day class will be unique in that it not only ties together many different concepts in survey methodology but also provides practical examples and illustrations from the instructor’s years of experience that exemplify the concepts and drive home the key points of the class.

Attendance will be limited to 25 so that participants have the chance to interact with Paul on a more personal level. We have also chosen a wonderful venue for the occasion, the Royal Institute of British Architects, right in the heart of London’s West End. We’ve heard that the floor-to-ceiling windows offer an amazing view of central London (although we expect that will not distract from the amazing experience of attending this class).

Book here to reserve a spot in the class.

Social Media, Sociality, and Survey Research: Community-based Online Research

Earlier, I posted about broadcast communication and conversational interaction, levels one and two in the sociality hierarchy presented in our new book, Social Media, Sociality, and Survey Research. We use the sociality hierarchy to organize our thinking about how individuals use digital media to communicate with each other. Broadcast use of social media includes things like Tweets, status updates, check-ins, and YouTube videos. Conversational use of social media includes using Facebook and mobile apps for data collection; it also includes conducting traditional survey interviews via technology like Skype and Second Life. My final post on our book is about level three of the sociality hierarchy, community-based interactions. Community-based research uses the social and interactive elements of social media, such as group affinity and membership, interactivity, altruism, and gamification, to engage research participants and capture data from them.

Four chapters in our book present research that relies on the structure of online communities to collect data. In “Crowdsourcing: A Flexible Method for Innovation, Data Collection, and Analysis in Social Science Research,” Michael Keating, Bryan Rhodes, and Ashley Richards show how crowdsourcing techniques can be used to supplement social research. Crowdsourcing does not rely on probability-based sampling, but it does allow the researcher to invite diverse perspectives into the research process and offers fast, high-quality data collection. In “Collecting Diary Data on Twitter,” Richards, Sarah Cook, and I pilot test the use of Twitter to collect survey diary data, finding it to be an effective tool for getting immediate and ongoing access to research participants. In “Recruiting Participants with Chronic Conditions in Second Life,” Saira Haque and Jodi Swicegood connect with health and support networks in Second Life to recruit and interview patients with chronic medical conditions. Using existing social networks, community forums, and blogs, Haque and Swicegood were able to recruit respondents with chronic pain and diabetes, but were less successful identifying large numbers of respondents with cancer or HIV. In the final chapter, “Gamification of Market Research,” Jon Puleston describes survey design methods that gamify questionnaires for respondents. Gamification makes surveys more interactive, interesting, and engaging. Gamification must be used with care, Puleston warns, because it does affect the data by expanding the breadth and detail of the answers respondents give. More research is needed to determine whether this threatens the reliability and validity of survey responses.

The community level of the sociality hierarchy is our broadest category and is likely the type of social media communication that will expand as technology continues to evolve and social media becomes more pervasive. As we discuss in the book, there are clear statistical challenges in attempting to understand population parameters with methods like crowdsourcing, which collects data from extremely motivated and technologically agile participants, or Twitter surveys, which reach only about a fifth of the U.S. population (or, for that matter, surveys of Second Life users, an even smaller community). However, much like ethnography, participant observation, and focus groups, community-based data collection adds a social element to online research that may improve researchers’ understanding of respondents. Research enabled by online communities may represent the future of digital social research.

Social Media, Sociality, and Survey Research: Conversations via Social Media

This week, I’m writing about the sociality hierarchy, a framework we use in our new book, Social Media, Sociality, and Survey Research, to organize our thinking about how individuals use digital media to communicate with each other. My last post was on harnessing broadcast (one-way) communication, like Tweets, status updates, check-ins, and YouTube videos, for social research. Today’s post is about social media and other digital platforms and methods that allow researchers to engage respondents in a conversation, a more traditional two-way interaction between researcher and subject.

In our book, the examples we’ve compiled about applying conversational methods to social media platforms show how traditional survey methods can be transferred to these new platforms. The book contains four chapters presenting data collection techniques for conversational data. In “The Facebook Platform and the Future of Social Research,” Adam Sage shows how a Facebook application can be developed to recruit respondents, collect survey data, link to social network data, and provide an incentive for participating in research.

In “Virtual Cognitive Interviewing Using Skype and Second Life,” Brian Head, Jodi Swicegood, and I introduce a methodology for using Skype and virtual worlds to conduct face-to-face interviews with research participants via the internet. We find both platforms feasible for conducting cognitive interviews. Interviews in both Skype and Second Life surfaced many questionnaire problems, particularly those related to comprehension and judgment. Particular advantages of these technologies include lower cost and access to a geographically dispersed population.

Ashley Richards and I take further advantage of Second Life in “Second Life as a Survey Lab: Exploring the Randomized Response Technique in a Virtual Setting.” In that chapter, we test comprehension of and compliance with the RRT. The RRT relies on a random event (such as a coin toss) to determine which question the respondent must answer; because the interviewer does not know the outcome of the event, the respondent’s privacy is protected. By controlling the coin toss (using Second Life code to make it only look random), we were able to determine that significant numbers of respondents did not properly follow instructions, due both to lack of comprehension and to deliberate misreporting.
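
For readers unfamiliar with how RRT answers become estimates, here is a minimal sketch of the arithmetic under Warner’s classic design, in which a coin with known bias p determines whether the respondent answers the sensitive question or its negation. The design used in our chapter may differ; the function and numbers below are illustrative assumptions, not results from the study.

```python
# Illustrative sketch (not the chapter's design or data): estimating the
# prevalence of a sensitive attribute under Warner's randomized response
# design. A coin with known bias p decides whether the respondent answers
# the sensitive question directly or its negation; the interviewer sees
# only the yes/no answer, never the coin outcome.

def warner_rrt_estimate(yes_count, n, p):
    """Estimate the true proportion pi with the sensitive trait.

    Observed P(yes) = p*pi + (1 - p)*(1 - pi), so
    pi_hat = (yes_rate - (1 - p)) / (2*p - 1), valid for p != 0.5.
    """
    if n <= 0:
        raise ValueError("n must be positive")
    if abs(2 * p - 1) < 1e-9:
        raise ValueError("p = 0.5 makes the design non-identifiable")
    yes_rate = yes_count / n
    return (yes_rate - (1 - p)) / (2 * p - 1)

# Hypothetical example: 40 respondents, 18 'yes' answers, coin biased to p = 0.7.
print(round(warner_rrt_estimate(18, 40, 0.7), 3))  # 0.375
```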

In our final chapter about the conversational level of the sociality hierarchy, David Roe, Yuying Zhang, and Michael Keating describe the decision process required to build a mobile survey panel that lets researchers engage respondents in a conversation via their smartphones. Key decisions include whether to build or buy an off-the-shelf mobile survey app, whether to design a standalone app or develop web surveys optimized for mobile browsers, how to recruit panelists, and how to maintain panel engagement.

In the book we take a broad view of two-way, conversational communication and consider it as any application of traditional survey interactions between an interviewer (or an instrument) and a respondent translated to the online and social media context. Our key guidance is to take advantage of the wealth of survey methods literature and apply (while testing!) traditional assumptions in social media and other online environments. Tomorrow I’ll post about research at the third level of sociality: community-based interaction via social media and other digital environments, where users and participants share content, work collaboratively, and build community.

Social Media, Sociality, and Survey Research: Broadcast Communication

In our new book, Social Media, Sociality, and Survey Research, Craig Hill, Joe Murphy, and I define the sociality hierarchy, a framework we’ve used to conceptualize how individuals use digital media to communicate with each other. The sociality hierarchy presents three levels of communication: broadcast (one-way), conversation (two-way), and community (within groups). This post is about broadcast communication, which describes the behavior of expressing thoughts, opinions, and status updates in social media and directing them at an audience. At the broadcast level, individuals send a one-way message to any friends or followers who happen to be tuning in. Broadcast social media communication includes things such as Tweets, status updates, check-ins, and blogs, as well as YouTube videos, book reviews on Goodreads, and restaurant reviews on Yelp.

Our book features two chapters presenting analytic techniques for broadcast data. In “Sentiment Analysis: Providing Categorical Insight into Unstructured Data,” Carol Haney describes a sentiment analysis methodology applied to data scraped from a vast range of publicly available websites, including Twitter, Facebook, blogs, message boards, and wikis. She walks through the steps of preparing a framework for data capture (what data to include and what to exclude), harvesting the data, cleaning and organizing it, and analyzing it with both machine coding and human coding. In her research, Haney finds sentiment analysis to be a valuable tool for supplementing surveys, particularly in market research.
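
To make those steps concrete, here is a deliberately tiny sketch of the machine-coding pass, with items that show no clear signal routed to human coders. The lexicon, decision rule, and example posts are invented for illustration and are not taken from Haney’s chapter.

```python
# Illustrative sketch only: a toy lexicon-based machine-coding pass over
# harvested posts. Real pipelines use trained models and trained human coders;
# the lexicon and example posts below are invented for illustration.
import re

POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "angry"}

def clean(text):
    """Lowercase the text, drop URLs, and split it into word tokens."""
    text = re.sub(r"https?://\S+", " ", text.lower())
    return re.findall(r"[a-z']+", text)

def machine_code(text):
    """Return 'positive', 'negative', or 'needs_human_review'."""
    tokens = clean(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    # No lexical signal: could be neutral, sarcasm, or irony; send to a human coder.
    return "needs_human_review"

posts = [
    "I love this brand's new line!",
    "Terrible sizing, never shopping there again.",
    "Well, that went exactly as expected...",
]
print([machine_code(p) for p in posts])  # ['positive', 'negative', 'needs_human_review']
```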

Sentiment analysis has challenges too. The complexities of analyzing statements expressing sarcasm and irony and decoding words with multiple meanings can make the process more difficult or the results more open to interpretation. Despite the challenges, sentiment analysis of unstructured text yields insights into organic expressions of opinion by consumers that may never be captured through surveys. Two examples Haney provides:

  • A swimsuit company learned from blog content that they were not providing enough bikinis in large sizes. A subsequent market research survey confirmed that more than 25% of 18-34 year-olds could not find their size in the brand’s swimsuit line.
  • A public messaging campaign sought out factors that motivate young people to avoid smoking by scraping publicly available Twitter and Facebook data. The campaign determined that watching loved ones live with chronic illness, or die from the effects of smoking, was a painful and motivating experience. The campaign referenced these experiences in its messaging and validated the messages’ effectiveness through subsequent testing with research subjects.

In “Can Tweets Replace Polls? A U.S. Health-Care Reform Case Study,” Annice Kim and coauthors analyze Tweets on the topic of health-care reform as a case study of whether Twitter analyses could be a viable replacement for opinion polls. Kim’s chapter describes the process of searching for and capturing Tweets with health-care reform content, coding Tweets using a provider of crowdsourced labor, and comparing the Twitter sentiment analyses to the results of a probability-based opinion poll on health-care reform. Ultimately, Kim and her coauthors found that sentiment expressed on Twitter was more negative than sentiment expressed in the opinion poll. This does not correspond with earlier studies on other topics, which demonstrated correlation between Twitter sentiment and opinion poll data on consumer confidence and presidential job approval. For some topics and types of sentiment, Twitter results correlate with public opinion polls, but for others, Twitter is not currently a viable replacement for polling research.
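
As a rough illustration of the kind of comparison involved (not the chapter’s actual method or data), here is a sketch of a two-proportion z-test contrasting the share of negative coded Tweets with the share of negative poll responses; the counts are made up.

```python
# Illustrative sketch with made-up numbers: compare the proportion of negative
# coded Tweets with the proportion of negative poll responses using a simple
# two-proportion z-test. This is not the method or data from Kim's chapter.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided p under the normal approximation
    return z, p_value

# Hypothetical counts: 620 of 1,000 coded Tweets negative vs. 450 of 900 poll respondents.
z, p = two_proportion_z(620, 1000, 450, 900)
print(round(z, 2), round(p, 4))
```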

These two chapters in our book represent a subset of the types of analyses that can be done with broadcast social media data. Are your findings similar? Have you worked with other types of online content? Or do you think of one-way communication differently than the broadcast model?

New Book by SurveyPost Researchers: Social Media, Sociality, and Survey Research

When SurveyPost launched two years ago, our community of researchers had just begun to investigate developments in digital technology, social media, and big data and their impact on the future of survey research. In order to engage other survey researchers and establish a body of knowledge about this rapidly changing aspect of our discipline, our team committed to documenting the process as we progressed. We’ve done that by maintaining this blog, writing journal articles, and participating in conferences and working groups in our fields of study. Another way we’ve documented the process is through an edited volume recently published by Wiley.

Social Media, Sociality, and Survey Research is our new book. It presents methodological research investigating new techniques in data collection and analysis, including sentiment analysis, capturing and analyzing data using Facebook, Twitter, Second Life, and Skype, developing mobile surveys, gamification of the interview process, and crowdsourcing. We’ve documented, and will continue to document, the steps along the way for these and other studies on this blog. In the book, we present some final results.

The book is organized around the idea of sociality, that is, the extent to which individuals are social and engage and communicate with social groups, and around a hierarchy of how sociality is observed in interpersonal interactions using computers, smartphones, and other digital tools. As we formulated our research on social media and digital methods, we found it helpful to conceptualize communication technologies according to communication flow: broadcast (one-way), conversational (two-way), and community (within groups). The sociality hierarchy organizes our research according to these three levels.

Over the next few days, I’ll be previewing the book on SurveyPost by presenting some detail on each of the three levels of the sociality hierarchy and summarizing some of the methods and findings presented in the book. We hope that you find it a useful tool for organizing and conducting your own research, and look forward to your feedback.

 

Cognitive Interviews in Second Life and Skype: Preliminary Results of a Pilot Study

Cognitive interviewing is a commonly used questionnaire pretesting method designed to evaluate the cognitive properties of survey instruments as sources of potential measurement error. In a cognitive interview, the researcher administers a questionnaire, but the primary purpose is not the collection of survey data. The researcher instructs the cognitive interview participant to “think aloud” while determining answers so that the researcher can capture and analyze how the participant cognitively processes each question. The researcher may administer concurrent or retrospective probes that ask participants to report what they were thinking when they determined an answer, how confident they are in the answer, or how they think other people might answer the question. Participants are encouraged to report when response categories are not applicable or when questions are confusing or difficult to understand.

Cognitive interview studies provide rich sources of data on questionnaire problems, but they are labor and cost intensive because of the time required to recruit participants, conduct interviews (often 1-2 hours each), and analyze the qualitative data the interviews provide. For this reason, small convenience samples of 8-20 participants are typically used, recruited from a locally accessible population. Recent research suggests that the small samples of cognitive interview studies can be a problem. Blair and Conrad (2011) found that a sample size of 10 detects approximately half of the most severe problems and only about 25% of the most subtle problems. Furthermore, easily recruited, locally available populations may provide a limited participant pool made up of professional respondents with limited cultural and socioeconomic variation.
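
One common way to reason about why small samples miss subtle problems (a back-of-the-envelope illustration, not Blair and Conrad’s actual analysis) is that a problem triggered by any given participant with probability p is observed at least once in n interviews with probability 1 - (1 - p)^n, so rarely triggered problems require many more interviews.

```python
# Back-of-the-envelope illustration (not Blair & Conrad's analysis): the chance
# of observing a questionnaire problem at least once in n interviews, assuming
# each participant independently triggers it with probability p.
def detection_probability(p, n):
    return 1 - (1 - p) ** n

for p in (0.50, 0.10, 0.05):        # assumed trigger rates: common vs. subtle problems
    for n in (10, 20, 40):          # typical and larger cognitive interview samples
        print(f"p={p:.2f}, n={n}: {detection_probability(p, n):.2f}")
```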

A potential solution to the cost and logistical challenges of cognitive interviews is the use of virtual communication technologies. We piloted the use of Second Life and Skype to conduct cognitive interviews. Second Life may be well suited to in-depth and cognitive interviewing methods because it enables face-to-face-like communication through avatars, which, when self-designed, can represent individuals’ selves more authentically than in real life (Taylor 2001). Additionally, the environment simulates face-to-face interaction with a community of people spread across the globe. Similarly, Skype allows face-to-face communication via computer with participants all over the world, except that instead of an avatar, the participant appears on video so that the researcher can see facial expressions.

As a pilot test of these two technologies as cognitive interview modes, we recently conducted 40 cognitive interviews across three modes: Second Life (using voice chat), Skype (using video conferencing), and in person. Anecdotally, our researchers felt both virtual modes were promising, for a variety of reasons:

  • Logistically, virtual interviews in both Skype and Second Life were easier to schedule and conduct than in-person interviews.
  • Participants were recruited from locations across the country, rather than only from near RTI’s offices, and represented some populations that are hard to bring into a cognitive lab (e.g., extremely obese, homebound, employed more than 40 hours per week, or living on an Indian reservation).
  • Cash incentives (in Second Life currency and Amazon gift cards) were easy to disburse electronically.

Preliminary results also show that both methods can yield reasonable-quality cognitive interview data. Second Life interviews were longer on average (62 minutes compared to 44 minutes for Skype) but had more technical disruptions (5 minutes compared to nearly zero for Skype). Yet even after accounting for technical problems, Second Life interviews averaged 50 minutes compared to 32 for Skype. We assume a longer interview means more cognitive interview content, but a more detailed analysis of the interviews will be required to confirm that.

In the analysis of our first four interviews, Skype interviews on average uncovered more problems (21 compared to 19 for Second Life). Preliminary evidence may also suggest some variation in the types of problems uncovered: more comprehension and response problems were uncovered in Skype, whereas more retrieval and judgment problems were uncovered in Second Life.

These findings are only preliminary, and I look forward to reporting the full analysis of all 40 interviews conducted (including some comparison interviews conducted in person). But the results suggest that, while Skype may ultimately be more comparable to an in-person interview, both modes are viable for conducting cognitive interviews.

Blair, Johnny, and Frederick Conrad. 2011. “Sample Size for Cognitive Interviewing.” Public Opinion Quarterly 75(4): 636-58.

Taylor, T. L. 2001. “Living Digitally: Embodiment in Virtual Worlds.” In The Social Life of Avatars: Presence and Interaction in Shared Virtual Environments, edited by R. Schroeder, 40-61. London: Springer.

Day 3 of AAPOR 2012 – What is the new frontier?

Here’s a word cloud of the session presentations from the 2012 conference program (posters are excluded).

Looking at the overall program, here’s the number of hits identified when searching the program for terms of SurveyPost interest. (A quick sketch of how counts like these can be tallied from a text export of the program appears after the lists.)

The Emerging Technologies:
Frontier – 30 hits
Mobile – 25 hits
Social Media – 16 hits
Future – 11 hits
Facebook – 9 hits
Twitter – 9 hits

The Established Methods:
Web – 100 hits
Nonresponse – 45 hits
Questionnaire – 40 hits
Telephone – 35 hits
Error – 22 hits

The Debate:
Probability – 31 hits
Non-probability – 13 hits

The Community:
Survey – 584 hits
AAPOR – 422 hits
RTI International – 115 hits
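
For anyone who wants to reproduce this kind of tally on a future program, here is a minimal sketch, assuming the program has been exported to a plain-text file; the filename and term list are illustrative, not the actual materials used for the counts above.

```python
# Minimal sketch: count case-insensitive hits for terms of interest in a
# plain-text export of a conference program. Filename and terms are
# illustrative assumptions. Simple substring matching is used, so, e.g.,
# "probability" also matches inside "non-probability".
import re

TERMS = ["frontier", "mobile", "social media", "web", "nonresponse", "probability"]

def count_hits(path, terms):
    """Return a mapping of term -> number of occurrences in the file."""
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    return {t: len(re.findall(re.escape(t.lower()), text)) for t in terms}

if __name__ == "__main__":
    for term, hits in count_hits("aapor_2012_program.txt", TERMS).items():
        print(f"{term}: {hits} hits")
```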

So, AAPOR, what does this mean to you?

Day 2: 5 Examples of the Frontiers of Public Opinion Research at AAPOR 2012

You may have heard on Twitter, on this blog, or at the RTI booth that we’re running a contest at this year’s AAPOR, designed to solicit your favorite examples of the frontiers of public opinion research from the conference. Today, I had so many options that I don’t know what is going to be my #1 pick. Here are my top 5 from Day 2 of AAPOR.

  1. Are we starting to see an impact of the networked or “born digital” generation’s facility with online communication on survey response? Matt Lackey from Fors Marsh Group found that among 16-25 year-olds, users of social network services (predominantly Facebook) respond differently to certain web survey questions than non-users. SNS users gave longer and richer answers on average to open-ended questions and provided fewer “Don’t Know” and “Refused” responses.
  2. “Designed” and “organic” data can be complementary, but there’s also an important tension. Scott Keeter’s presidential address reminded us of the importance of probability-based surveys for the state of democracy. A probability survey gives every member of the sample frame a known chance of participation. What other avenue of public engagement does that? Keeter implored AAPORites to do all we can to defend high-quality research, its producers, and those who use it, as well as to promote transparency for old and new, designed and organic methods.
  3. Second Life “avatars do not have minds of their own” or so say Sarah Dipko and colleagues from Westat. They evaluated the effect of mode on participant responses to qualitative research in virtual worlds and found that respondents gave the same answers in real life as they did through their avatars about 80% of the time. Follow-up qualitative interviews indicated that people they interviewed (recruited from an SL panel as opposed to the networking and avatar-to-avatar methods we’ve used) believed that their avatar was an extension of their real life selves, not a separate identity.
  4. Online surveys have come a long way in the past two decades. But we’re still missing an integrated tool for designing, implementing, and analyzing results from online surveys. Ana Slavec and colleagues from the University of Ljubljana are developing a tool that enables collaborative survey design, data collection, and analysis all in one. Given their expertise in web survey methodology, I can’t wait to see what they come up with.
  5. What’s the impact of humanized social presence on data quality in self-administered web surveys? NONE, according to Chan Zhang. A “computer-like” prompt (featuring HTML-like language) shown to speeding respondents resulted in more backing up to reconsider answers than a “humanized” prompt featuring an interviewer’s face.

What are your top picks? What inspired you today? What best represents the future of public opinion research at today’s AAPOR? Share your thoughts with SurveyPost, and right now you’ve got a pretty good chance of winning an iPad 3!

Wisdom from Peter Miller at AAPOR, Day 1

Peter Miller served as the discussant for the session I organized on “Interactive and Gaming Techniques to Improve Surveys,” and his perspective, informed by decades of experience as a former editor of POQ and a past president of AAPOR, added tremendous value to the session. He shared two pieces of wisdom that I will likely be thinking about for the rest of the conference.

  • In response to the idea that gamification of surveys makes them more fun and engaging for respondents, Peter told a story of an alternate approach he took many years ago in which respondents were told that the survey request was important, serious work. Respondents had to sign a commitment form at the beginning of the process indicating that they would provide thoughtful and accurate answers. I love this approach and I wonder if it’s really that different from gamification. Jane McGonigal, author of Reality is Broken, describes “flow” as a heightened sense of attentiveness, engagement, and self-efficacy that gamers experience when they are fully immersed in a game. I’d like to see survey features developed and tested that increase that sense of the importance of effort. Jeffrey Henning presented some evidence to support this in his finding that imposing additional rules on respondents (e.g., describe a product in seven words or fewer) resulted in richer data for open-ended questions. And while Ashley Richards’ test of the RRT did not result in improvements in data quality, she did obtain feedback from some respondents that the RRT process made them think more carefully about their answers than other surveys did.
  • Peter recommended some caution in adopting interactive techniques and applications for surveys and reminded us to think about what measures would indicate that these new methods are a success. He’s right. It’s not enough to try out these ideas about gamification and see what happens – although exploratory research is a necessary step. It’s time to refine hypotheses about the impact of gaming and interactive techniques and test them against what we currently know about increasing survey data quality and engagement.

I’m happy that Peter’s insight has given me more work to do. Game on!

AAPOR Preview: Elizabeth Dean

This year, I plan to embrace AAPOR’s conference theme! I’m the organizer and chair of a panel session on gamification and interactivity for surveys. This session features researchers from RTI, Affinova, and Nielsen, as well as past AAPOR president Peter Miller from the U.S. Census Bureau as discussant. I’ll also keep busy throughout the conference tweeting and live-blogging daily updates for SurveyPost.

Three events I don’t plan to miss include:

  1. Thursday night’s plenary, “Examining the Value of Non-Probability Sampling in Social Research”. We know that there are statistical challenges to much of the social research that is conducted via emerging communication technologies. I’m looking forward to a serious discussion of what is possible and what is not possible with non-probability samples.
  2. Saturday morning’s session, “New Frontiers: Survey Responses vs. Tweets – New Choices for Social Measurement.” This session is organized by Conrad and Schober, authors of “Envisioning the Survey Interview of the Future,” and includes researchers from the public opinion and information science fields.
  3. Sunday morning’s session, “New Frontiers: Social Media Analysis.” Included in this group are a paper on crowdsourcing and several papers on communication norms and opinion expression in social media. Looks very interesting.

Be sure to check out my panel, “New Frontiers: Interactive and Gaming Techniques to Improve Surveys” on Thursday, May 17 at 1:30pm in Mediterranean 1. And I look forward to chatting with you about the frontiers of public opinion all weekend!