NatCen Social Media Research: What Users Want

At the beginning of October 2013, there were reportedly 1.26 billion Facebook users worldwide. The number of Tweets sent per day is over 500 million. That’s a lot of communication happening every day! Importantly for researchers, it’s also being recorded, and because social media websites offer rich, naturally-occurring data, it’s no wonder researchers are increasingly turning to such websites to observe human behaviour, recruit participants, and interview online.

As technology constantly evolves, researchers must re-think their ethical practices. Existing guidelines could be adapted ad-hoc, but wouldn’t it be better to rethink the guidelines for this new paradigm? And what do social media users think about research that utilises social media? The work of the “New Social Media, New Social Science?” network in reviewing existing studies suggests that these questions have not yet been adequately answered.

In response, a group of NatCen researchers are soon to report data from a recent study on how social media users feel about their posts being used in research, and offer recommendations about how to approach ethical issues.

What do participants want?

A key ethical issue participants talked about was consent: participants wanted researchers to ask them before using their posts and information. Although it was acknowledged that “scraping” a large number of Tweets would pose practical problems for the researcher trying to gain consent, users would still like to be asked. Consent was seen as particularly important when the post contained sensitive or personal information (including photographs that pictured the user). An alternative view was that social media users shouldn’t expect researchers to gain consent because, by posting online, you automatically waive your right to ownership.

Participants’ views about consent were affected by other factors, including the platform being used. Twitter, for example, was seen as more public than Facebook so researchers wouldn’t necessarily need to ask for the user’s permission to incorporate a Tweet in a report.

Views about anonymity were less varied. Users felt anonymity should be afforded to all, especially if posts had been taken without consent. Users wanted to remain anonymous so that their posts wouldn’t be negatively judged, or because they were protecting identities they had developed in other contexts, such as at work.

Our participants were also concerned about the quality of information posted on social media. There was confusion about why researchers would want to use social media posts because participants felt that people didn’t always present a true reflection of themselves or their views. Participants noted, for example, how users post pictures of themselves drinking alcohol (which omits any mention of them having jobs or other, more serious things!), and that “people either have more bravado, and ‘acting up’ which doesn’t reflect their real world self”. They expressed concern over this partial ‘self’ that can be presented on social media.

What does it mean?

Later this year, NatCen will publish a full report of our findings, so stay tuned! If you can’t wait, here’s a preview:

  • Consider that users’ posts and profiles may not be a reflection of their offline personality but an online creation or redefinition;
  • Even if users are not utilising privacy settings they still might expect you to ask permission to use their post(s);
  • Afford anonymity. Even if someone has let you know you can quote their username, you should learn how ‘traceable’ this is and let the user know (i.e. can you type their username into Google and be presented with a number of their social media profiles?). It’s our responsibility as researchers to ensure that the consent we get is informed consent.

Let us know at NatCen if you would like to receive an electronic copy of the report, or if you have any questions about the study.

Survey: What’s in a Word?

As those of us in the survey research field are aware, survey response rates in the United States and other countries have been in decline over the last couple of decades. The Pew Research Center sums up the concerning* state of affairs with a pretty eye-popping table showing response rates to their telephone surveys from 1997 (around 36%) to 2012 (around 9%). Others have noted, and studied, the same phenomenon.

So what’s really going on here? There are plenty of explanations, including over-surveying**, controlled access, and an uninterested public. But what else has changed about sampled survey respondents or their views towards surveys in recent years that might contribute to such a drop? As a survey methodologist, my first instinct is to carry out a survey to find the answer. But conducting a survey to ask people why they won’t do a survey can be like going fishing in a swimming pool.

One place many people*** are talking these days is on social media. In the past decade, the percentage of Americans using social media has increased from 0 to “most.” I was curious to see how the terms survey and surveys were being portrayed in online and social media. Do those who use (or are exposed to) these terms have the same things in mind as we “noble” researchers? When we ask someone to take a survey, what thoughts might pop into his or her mind? Social media is by no means the only place to look****, but there is a wealth of data out there and you can learn some interesting things pretty quickly.

Using Crimson Hexagon’s ForSight platform, I pulled social media posts that included the word survey or surveys from 2008 (the earliest data available) to today (January 8, 2014).  First I looked to see just how often the terms showed up by source.  Here’s what I found:

[Figure: volume of social media posts mentioning survey(s), by source, 2008–January 2014]

In sheer volume, Twitter seems to dominate the social media conversation about surveys, which is surprising given that only about 1 in 6 U.S. adults use it. Of course, just because the volume is so high doesn’t mean everyone is seeing these posts. The surge in volume is quite dramatic late in 2012! Maybe this had to do with the presidential election? We’ll see… keep reading! My next question was: what are people saying when it comes to surveys? I took a closer look at the period before that huge spike in 2012, focusing just on those co-occurring terms that pop up most frequently with survey(s). I also split it out by Twitter and non-Twitter to see what comes up.

[Figure: word clouds of terms co-occurring with survey(s), Twitter vs. other social media, May 2008–July 2012]
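If you want to try something similar on your own data, here is a minimal sketch of how co-occurrence counts like these could be tallied. It assumes the posts have already been exported (say, from ForSight) to a CSV with hypothetical source and text columns; it is illustrative only, not the actual ForSight workflow.

```python
import csv
import re
from collections import Counter

# Words too common to be interesting, plus the search terms themselves
STOPWORDS = {"the", "a", "an", "to", "of", "in", "and", "on", "for", "is",
             "survey", "surveys"}

def co_occurring_terms(path):
    """Count terms appearing alongside 'survey(s)', split by source."""
    counts = {"twitter": Counter(), "other": Counter()}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects 'source' and 'text' columns
            words = re.findall(r"[a-z']+", row["text"].lower())
            if "survey" not in words and "surveys" not in words:
                continue
            bucket = "twitter" if row["source"].lower() == "twitter" else "other"
            counts[bucket].update(w for w in set(words) if w not in STOPWORDS)
    return counts

if __name__ == "__main__":
    results = co_occurring_terms("forsight_export.csv")  # hypothetical export file
    for bucket, counter in results.items():
        print(bucket, counter.most_common(10))
```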

We see according, each, and online for Twitter posts and according, found, and new for all other social media. Hmm, what could this mean? Drilling down, we can look at individual posts for each term. I include just one example for each here to give a flavor of what the data show:

Twitter 5/08-7/12

  • According to one national survey on drug use, each day…
  • D-I-Y Alert! Nor your usual survey job $3 each – Research: This job….
  • We want you 2 do online survey and research for us. Easy…

Other online media 5/08-7/12

  • Nonresidential construction expected to lag in 2010 according to survey…
  • Surprise! Survey found young users protect their privacy online
  • New survey-Oil dips on demand worry, consumer view supports

Among these sample posts, we see survey results being disseminated from several kinds of surveys on both Twitter and other online media. The Twitter posts, though, seem to have more to do with earning money online than those on other social media. Next, I looked at August 2012 to today (January 8, 2014):

[Figure: word clouds of terms co-occurring with survey(s), Twitter vs. other social media, August 2012–January 2014]

Among the other online media, there’s not much change here from the previous period. People replaces found among the top co-occurring terms, but the focus is still on survey results. For Twitter, we see three new top terms co-occurring with survey(s): earned, far, and taking. Here’s what some of the Tweets from the more recent period look like:

Twitter 8/12-1/14

  • Awesomest week ever! I earned $281.24 just doing surveys this week :)
  • Cool! I got paid $112.29 so far from like surveys! Can’t wait for more!
  • What the heck – I got a free pizza for taking a survey!

Now, I know that most of this is pure Twitter spam***** and not every Tweet is read or even seen by the same number of people, but I do think the increasing predominance of ploys to sign people up for paid surveys on networks like Twitter is a sign that the term survey is being corrupted in a way that, if it does not contribute to declining response rates, surely doesn’t help matters. These messages leave an impression, and if they are what some of our prospective respondents have in mind when we contact them with a survey request, we are facing an even steeper uphill battle than we might have thought.

So, this leads us back to the classic survey methods question: what should we do?  How do we differentiate the “good” surveys from the “bad” surveys among a public who likely finds the distinction less than salient and won’t bother to read a lead letter, let alone open a message that mentions the word survey? Should we come up with a new term?  Does study get across the task at hand for the respondent?  Would adding scientific before survey help keep our invitations out of trash cans, both physical and virtual?

What are your thoughts on the term survey? Leave a comment here, or discuss on your favorite listserv or social media platform.  If you do, I promise not to send you a survey about it!

*Concerning to the degree that lower response rates equate to lower accuracy, which isn’t always the case

**Personally, I sympathize with respondents when I get a survey request on my receipt every time I buy a coffee or sign up for a webinar.  “Enough already with the surveys!  I’ve got surveys to write!”

***not all people, and not all kinds of people, but still many!

****A few years ago, Sara Zuckerbraun and I looked at the portrayal of surveys in a few select print news media.

*****Late 2012 appears to have been a golden age for Twitter spam about paid surveys.

Social Media, Sociality, and Survey Research: Community-based Online Research

Earlier, I posted about broadcast communication and conversational interaction, levels one and two in the sociality hierarchy presented in our new book, Social Media, Sociality, and Survey Research. We use the sociality hierarchy to organize our thinking about how individuals use digital media to communicate with each other. Broadcast use of social media includes things like Tweets, status updates, check-ins, and YouTube videos. Conversational use of social media includes using Facebook and mobile apps for data collection; it also includes conducting traditional survey interviews via technology like Skype and Second Life. My final post on our book is about level three of the sociality hierarchy, community-based interactions. Community-based research uses social and interactive elements of social media, like group affinity and membership, interactivity, altruism, and gamification to engage and capture data from research participants.

Four chapters in our book present research that relies on the structure of online communities to collect data. In “Crowdsourcing: A Flexible Method for Innovation, Data Collection, and Analysis in Social Science Research,” Michael Keating, Bryan Rhodes, and Ashley Richards show how crowdsourcing techniques can be used to supplement social research. Crowdsourcing does not rely on probability-based sampling, but it does allow the researcher to invite diverse perspectives into the research process and offers quick, high-quality data collection. In “Collecting Diary Data on Twitter,” Richards, Sarah Cook, and I pilot test the use of Twitter to collect survey diary data, finding it to be an effective tool for getting immediate and ongoing access to research participants. In “Recruiting Participants with Chronic Conditions in Second Life,” Saira Haque and Jodi Swicegood connect with health and support networks in Second Life to recruit and interview patients with chronic medical conditions. Using existing social networks, community forums, and blogs, Haque and Swicegood were able to recruit respondents with chronic pain and diabetes, but were less successful identifying large numbers of respondents with cancer or HIV. In the final chapter, “Gamification of Market Research,” Jon Puleston describes survey design methods that gamify questionnaires for respondents. Gamification makes surveys more interactive, interesting, and engaging. It must be used with care, Puleston warns, because it does have an impact on the data, expanding the breadth and detail of the answers respondents give. More research is needed to determine whether this threatens the reliability and validity of survey responses.

The community level of the sociality hierarchy is our broadest category and is likely the type of social media communication that will expand as technology continues to evolve and social media becomes more pervasive. As we discuss in the book, there are clear statistical challenges associated with attempting to understand population parameters with methods like crowdsourcing, which collects data from extremely motivated and technologically agile participants, and Twitter surveys, which access only about a fifth of the U.S. population (or for that matter, surveys of Second Life users, an even smaller community). However, community-based data collection adds a social element to online research, much like ethnography, participant observation, and focus groups, that may improve researchers’ understanding of respondents. Research enabled by online communities may represent the future of digital social research.

Social Media, Sociality, and Survey Research: Conversations via Social Media

This week, I’m writing about the sociality hierarchy, a framework we use in our new book, Social Media, Sociality, and Survey Research, to organize our thinking about how individuals use digital media to communicate with each other. My last post was on harnessing broadcast (one-way) communication, like Tweets, status updates, check-ins, and YouTube videos, for social research. Today’s post is about social media and other digital platforms and methods that allow researchers to engage respondents in a conversation, a more traditional two-way interaction between researcher and subject.

In our book, the examples we’ve compiled about applying conversational methods to social media platforms show how traditional survey methods can be transferred to these new platforms. The book contains four chapters presenting data collection techniques for conversational data. In “The Facebook Platform and the Future of Social Research,” Adam Sage shows how a Facebook application can be developed to recruit respondents, collect survey data, link to social network data, and provide an incentive for participating in research.

In “Virtual Cognitive Interviewing Using Skype and Second Life,” Brian Head, Jodi Swicegood, and I introduce a methodology for using Skype and virtual worlds to conduct face-to-face interviews via the internet with research participants. We find both platforms feasible for conducting cognitive interviews. Skype and Second Life interviews generated observations of many response errors, particularly related to comprehension and judgment. Particular advantages of these technologies include lower cost and access to a geographically dispersed population.

Ashley Richards and I take further advantage of Second Life in “Second Life as a Survey Lab: Exploring the Randomized Response Technique in a Virtual Setting.” In that chapter, we test comprehension and compliance with the RRT. The RRT depends on a random event (such as a coin toss) that determines which question the respondent must answer. The interviewer does not know the outcome of the event, so the respondent’s privacy is protected. By controlling the coin toss (using Second Life code to make it only look random) we were able to determine that significant numbers of respondents did not properly follow instructions, due both to lack of comprehension and deliberate misreporting.
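For readers unfamiliar with how the technique recovers an estimate despite protecting each respondent’s privacy, here is a minimal sketch of the arithmetic for one common variant, the unrelated-question design; the exact design and parameters used in the chapter may differ, and the numbers below are purely illustrative.

```python
def rrt_estimate(observed_yes_rate, p_sensitive=0.5, innocuous_yes_rate=0.5):
    """Estimate the prevalence of a sensitive trait under an unrelated-question
    randomized response design.

    With probability p_sensitive the respondent answers the sensitive question;
    otherwise they answer an innocuous question whose 'yes' rate is known.
    Observed: yes_rate = p * pi_sensitive + (1 - p) * innocuous_yes_rate,
    so we solve for pi_sensitive.
    """
    return (observed_yes_rate - (1 - p_sensitive) * innocuous_yes_rate) / p_sensitive

# Illustrative (made-up) numbers: 40% of all answers were "yes", the coin toss
# routes half of respondents to each question, and the innocuous question is a
# 50/50 proposition. Estimated prevalence of the sensitive behavior: about 30%.
print(rrt_estimate(0.40, p_sensitive=0.5, innocuous_yes_rate=0.5))  # ~0.30
```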

In our final chapter about the conversational level of the sociality hierarchy, David Roe, Yuying Zhang, and Michael Keating describe the decision process required to build a mobile survey panel that lets researchers engage respondents in a conversation via their smartphones. Key decisions include whether to build an app or buy an off-the-shelf mobile survey app, whether to design a standalone app or develop web surveys optimized for mobile browsers, how to recruit panelists, and how to maintain panel engagement.

In the book we take a broad view of two-way, conversational communication and consider it as any application of traditional survey interactions between an interviewer (or an instrument) and a respondent translated to the online and social media context. Our key guidance is to take advantage of the wealth of survey methods literature and apply (while testing!) traditional assumptions in social media and other online environments. Tomorrow I’ll post about research at the third level of sociality: community-based interaction via social media and other digital environments, where users and participants share content, work collaboratively, and build community.

Social Media, Sociality, and Survey Research: Broadcast Communication

In our new book, Social Media, Sociality, and Survey Research, Craig Hill, Joe Murphy, and I define the sociality hierarchy, a framework we’ve used to conceptualize how individuals use digital media to communicate with each other. The sociality hierarchy presents three levels of communication: broadcast (one-way), conversation (two-way), and community (within groups). This post is about broadcast communication. Broadcast communication describes the behaviors of expressing thoughts, opinions, and status statements in social media and directing them at an audience. In the broadcast level, individuals send a one-way message to any friends or followers who happen to be tuning in. Broadcast social media communication includes things such as Tweets, status updates, check-ins and blogs, as well as YouTube videos, book reviews on Goodreads and restaurant reviews on Yelp.

Our book features two chapters presenting analytic techniques for broadcast data. In “Sentiment Analysis: Providing Categorical Insight into Unstructured Data,” Carol Haney describes a sentiment analysis methodology applied to data scraped from a vast range of publicly available websites, including Twitter, Facebook, blogs, message boards and wikis. She describes the steps of preparing a framework for data capture (what data to include and what not to include), harvesting the data, cleaning and organizing the data, and analyzing the data, with both machine coding and human coding. In her research, Haney finds sentiment analysis to be a valuable tool in supplementing surveys, particularly in market research.
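Haney’s pipeline is far more involved than anything that fits in a blog post, but to give a flavor of the machine-coding step, here is a minimal sketch using an off-the-shelf lexicon scorer (VADER from NLTK). The posts and thresholds are illustrative; this is not the methodology described in the chapter.

```python
# Toy machine-coding pass: score scraped posts with NLTK's VADER lexicon.
# Requires: pip install nltk (the lexicon is downloaded on first run).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = [
    "I love this brand's swimsuits, great fit!",               # illustrative posts,
    "Why can't I ever find a bikini in my size? So annoying.", # not real data
]

for post in posts:
    scores = sia.polarity_scores(post)  # returns neg / neu / pos / compound
    label = ("positive" if scores["compound"] > 0.05
             else "negative" if scores["compound"] < -0.05
             else "neutral")
    print(label, round(scores["compound"], 2), post)
```

In practice, a lexicon pass like this would be only the first step; the human coding and domain-specific cleaning Haney describes are what make the results usable.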

Sentiment analysis has challenges too. The complexities of analyzing statements expressing sarcasm and irony and decoding words with multiple meanings can make the process more difficult or the results more open to interpretation. Despite the challenges, sentiment analysis of unstructured text yields insights into organic expressions of opinion by consumers that may never be captured through surveys. Two examples Haney provides:

  • A swimsuit company learned from blog content that they were not providing enough bikinis in large sizes. A subsequent market research survey confirmed that more than 25% of 18-34 year-olds could not find their size in the brand’s swimsuit line.
  • A public messaging campaign sought out factors that motivate young people to avoid smoking by scraping publicly available Twitter and Facebook data. The campaign determined that watching loved ones live with chronic illness or die because of the effects of smoking was a painful experience. The campaign referenced this in its messaging and validated the effectiveness of the messages with subsequent testing with research subjects.

In “Can Tweets Replace Polls? A U.S. Health-Care Reform Case Study,” Annice Kim and coauthors analyze Tweets on the topic of health care reform as a case study of whether Twitter analyses could be a viable replacement for opinion polls. Kim’s chapter describes the process of searching for and capturing Tweets with health-care reform content, coding Tweets using a provider of crowdsourced labor, and comparing the Twitter sentiment analysis to the results of a probability-based opinion poll on health-care reform. Ultimately, Kim and her coauthors found that sentiment expressed on Twitter was more negative than sentiment expressed in the opinion poll. This does not correspond with earlier studies on other topics, which demonstrated correlation between Twitter sentiment and opinion poll data on consumer confidence and presidential job approval. For certain types of sentiment, Twitter results correlate with public opinion polls, but for other types, Twitter is not currently a viable replacement for polling research.

These two chapters in our book represent a subset of the types of analyses that can be done with broadcast social media data. Are your findings similar? Have you worked with other types of online content? Or do you think of one-way communication differently than the broadcast model?

Altruism: Alive and Well on Facebook?

Facebook has been derided by some researchers as a contributor to growing levels of narcissism in our society.  It is true that a lot of what happens on Facebook is about how you’re seen and gaining approval from peers.  But a recent experiment we conducted recruiting subjects for cognitive interviews using Facebook made me reconsider this simple characterization.

For a paper I’m presenting next week at the 2013 Federal Committee on Statistical Methodology conference (“Crowdsourcing in the Cognitive Interviewing Process”), we tested whether Facebook was viable for cognitive interview recruitment. We set up three separate ad campaigns to see which was most effective:

The gift card treatments offer a reward, appealing to extrinsic motivations—“what can you give me?” whereas the donation appeals to altruism—“what can you give others?” With a “narcissistic” base of Facebook users, you might expect the gift card to perform better than the donation. But what we found was that the donation incentive vastly outperformed the gift card treatments and accounted for 50 of our 60 completed interviews. Not only were those exposed to the Red Cross ad more likely to click on it, they were also more likely to complete the interview after clicking.

In the same session at FCSM, Michael Keating will be discussing a framework for understanding crowd motives based on the MIAB model (motives, incentives, activation, behavior).  Simply put, the altruistic incentive may work something like this: Some Facebook users are Motivated by altruistic causes (like helping the Red Cross); the Incentive of a $5 donation Activates their Behavior of taking the survey and producing the data we need.

As I sit back and think about it, there are some other logical explanations for the Red Cross incentive outperforming the gift card.  People might not feel they can really make good enough use of a $5 gift card (e.g. “$5 isn’t enough for anything really good!”) and the Red Cross incentive may appeal to guilt as much as altruism (e.g. “how can I say no to helping the Red Cross?”).  Further research into the motivators on Facebook and other platforms can help us tailor future messages and designs.

Have you experimented with Facebook ads?  If so, what’s worked for you?  Leave a comment and share your experience!

New Social Media, New Social Science – Blurring the Boundaries?

In 2013, NatCen and RTI International formed a strategic relationship with the aim of sharing knowledge and expertise to build research capacity and capability. We will be working jointly on methodological projects, and exchanging knowledge and expertise around a range of topics including the use of social media in social science research.

A version of this blog post was posted previously on the NVivo blog and can be viewed here.

Should social science researchers embrace social media and, if we do, what are the implications for our methods and practice? This is the key question our network of methodological innovation (New Social Media, New Social Science? – NSMNSS) has been discussing this year. Led by NatCen, SAGE and the Oxford Internet Institute, we have over 470 members worldwide joining the debate and bringing insights from all fields of social science.

What have we learnt?

By bringing together researchers from different disciplines and different sectors of the research world we have tried to break down barriers between different disciplines and to provide a space where researchers can share their knowledge and practice, moving methodological understanding forward. What has been striking is an underlying uncertainty about the validity of online methods and a lack of confidence amongst the research community about whether they are ‘getting it right’.

This concern with ‘getting it right’ has focused on how to do social media research ethically. Whether or not we really understand the context of the world of social media has also been a persistent theme during our discussions. Do we really know what the users of social media platforms expect from researchers accessing their data for research? Limited research exists with users of platforms to explore what expectations and concerns, if any, they have about privacy, confidentiality and the use of their personal data. As a result, researchers can feel like they are working in a vacuum and making assumptions about what is ethical based on what they think social media users would want or expect. A team of network members at NatCen are conducting primary research to fill this gap. A major output of the network will be a report on the current ethical guidelines in use around the world.

Discussions about ‘getting it right’ have been equally lively around the issue of quality. How can researchers conduct social media research which is robust enough to stand up to scrutiny and add to the research evidence base? There are many differing views about what constitutes quality in social media research and as a result researchers feel tentative about what claims they can make from their data.

The boundaries are being blurred between ‘real’ life and ‘virtual’ worlds; conventional research methods and new approaches; researchers & participants; qualitative and quantitative methods; and, between researchers working in a range of disciplines from Computational Science to Anthropology. We are still at the start of our journey into the methodology of social media research. We haven’t yet agreed a coherent set of epistemological or ethical frameworks for online research. Some of our members argue this is positive, allowing researchers fluidity and freedom in the methods and approaches they adopt in reacting to what is a fast-changing research environment; others are less certain. What is clear is that the guidelines, epistemologies and methods of conventional research cannot simply be transplanted to the world of social media without scrutiny and adaptation.

You can read more about the network and lessons learnt during the last 18 months here and hear about the implications of social media for social science here. Along the way the network has produced a number of outputs including a lively blog which provides a useful review of the issues that have been raised, video resources and helpful links. The network will continue in 2013 and we hope you’ll join the ongoing debates by joining our virtual discussions on Twitter using #NSMNSS or by following our blog. Contributions from around the world are welcomed.

Video Interviewing: Is It a Feasible Mode of Survey Data Collection?

In keeping with the rapid evolution of technology, survey researchers have historically adapted and modernized data collection by offering alternative modes of participation. Traditional survey data collection modes such as mail, field, telephone, and Web surveys have some limitations compared to more recent communications technologies, which add new features and capabilities.

My RTI colleagues Tamara Terry, Richard Heman-Ackah, Michael Price, and I have been evaluating video interviewing as a new methodology in the data collection toolkit. These days, most desktops, laptops, tablets, and smartphones have built-in or supplemental Web cameras and microphones that allow video conferencing or chatting between two (or more) parties, whether stationary or on the go. More recently, video chatting has become integrated with social networking sites that many users (and potential survey respondents) check and update frequently. As the popularity of these video platforms increases, we focus our concept of video interviewing on two prominent platforms, Facebook and Skype.

Founded in 2003, Skype is communications software that allows users to make phone calls using voice over IP (VoIP) technology, video calls, and instant message chatting. Skype has over 100 million users and 65 million daily users (Caukin, 2011), and its popularity makes it a viable option for communicating with respondents. In July 2011, Facebook announced an agreement that integrated Skype’s video calling feature into Facebook chat (Su 2011).

Since 2004, Facebook has evolved and continued its efforts to become a legitimate supplemental, and in some cases outright alternative, communication platform through several tools of communication: content sharing, status updates, commenting, liking, blogging, and chatting. With staggering growth rates that have reached a current monthly active user base of over 1 billion individuals (Facebook Inc. 2013), and a United States and Canada penetration of nearly 20 percent, Facebook has solidified its position as a major communications platform. Skype and Facebook are platforms that many individuals use for regular communications and may be willing to use to complete a survey interview. With their integration, the viability for research is even greater.

For video interviewing to successfully function as a method of data collection, both the interviewer’s and the respondent’s hardware and software will need to meet standard requirements for successful video communication. Video communication can be a relatively inexpensive proposition. Web cameras typically range in price from 20 to 50 dollars; however, many devices already have this hardware built in. Internet service, an additional cost, would average 20 dollars a month, and most households already have Internet service on multiple devices. As reported by the Pew Research Center (2013), 82 percent of American adults use the internet.

Video interviewing offers some unique capabilities, such as the ability to visually authenticate or confirm the sample member and the utilization of qualitative data to analyze nonverbal respondent cues to questions. Such technology also allows for recording and analyzing both verbal and nonverbal communication between interviewer and respondent. Other visual considerations are the physical and professional appearance of the interviewing staff and whether being able to view the background of the data collection facility or respondent location compromises privacy and/or security. Additionally, providing visual aids to respondents further enhances communication on survey projects that rely on recall of past events or exposure.

The research on video interviewing is limited thus far. Our recent article published by Survey Practice reviews case studies that utilize a variety of visual interactions to provide further insight on the potential of video interviewing. A face-to-face interview is the closest comparison to a video-interviewing data collection method. During our research, we found various case studies that support face-to-face interviewing as a common, preferred method for data collection. Will video interviewing allow for greater access to respondents and the collection of high quality survey data? Stay tuned as we pilot these methods and report back with the results!

A Bird’s-eye View of the 2013 AAPOR Twittersphere

As Joe Murphy mentioned in the preceding post, Twitter can provide a unique and efficient glimpse into conference dialogue that may be difficult to capture in other ways. We can measure when conversations occur between tweeters, the subject of those conversations, and other interesting patterns such as the most frequent words appearing in Tweets or the most popular Tweet (as indicated via retweets). There’s one analytic possibility, though, that’s frequently overlooked: social network analysis (SNA).

As I discussed in my poster presented at this year’s AAPOR conference, something inherent and potentially very useful to researchers is the network aspect of social networking sites. Inspired by a growing Twitter presence among the AAPOR community, and Joe’s analysis of the AAPOR Twittersphere, I decided to put some of the SNA concepts I discussed in my poster to practice. Specifically, in my attempt to examine public opinion in a different light, I asked the question “who is talking to whom?”

For those of you who are familiar with SNA, or who are just curious as to how the measures I discuss are actually quantified, I’ve pulled the equations and explanations from my poster and inserted them for reference. In the first image below, a bird’s-eye view captures what the AAPOR Twitter conversation looked like. By conversation, I’m specifically referring only to Tweets where one user Tweeted to another using the hashtag #AAPOR. For instance, if (hypothetically speaking) Joe (@joejohnmurphy) Tweets to me “hey @AdamSage, your poster was A-M-A-Z-I-N-G #AAPOR,” the link connecting Joe to me would be directed from Joe to me.

At first glance, this bird’s-eye view looks kind of cool (or like a big mess, depending on your perspective) – but what does it mean? Each dot, or node, represents a Twitter user that at one point used the hashtag #AAPOR and mentioned another Twitter user in a Tweet. Each line represents the association that’s created when such a Tweet occurred. The degree of each node is the number of different connections made as a result of these Tweets. In the AAPOR Twittersphere, the average degree, or the average number of people to whom a given Twitter user is linked via these dialogues, is slightly over 2 (2.017). This is important because an average degree of 1 would mean we’re not a community, but rather a group of 1-on-1 dyadic cliques, which makes ideas difficult to spread. While these did occur, I excluded them from these graphs and focused on what is called the giant component in SNA, or the largest group of connected individuals in a network.

Excluding those conversations occurring outside of the giant component, the diameter of the entire conference Twitter conversation was 8. In other words, the furthest any two people were away from each other during this Twitter conversation was 8 steps. Imagine yourself at the largest Tweet-up in AAPOR history consisting of those AAPOR attendees participating in a dialogue on Twitter (using #AAPOR of course). As people traverse from conversation to conversation, everything you say would be no more than 8 conversations away from anyone in the room. So that rumor about my awesome poster could theoretically spread quite quickly. Hopefully it’s starting to become clear how great papers and keynote addresses can become popular Twitter topics in a given year. Great papers resonate and travel quickly; great keynotes are viewed by a lot of people and travel even faster.

But wait, it gets better. Ideas spreading among the AAPOR community wouldn’t traverse the network as you might think. Looking at the overall graph below, you will notice several color-coded groups of tweeters. Each color represents a community, which is essentially defined as those tweeters who are mentioning each other in their Tweets more than they mention others. For instance, the community around @AAPOR (the green colored node with several links in the top middle) does not include many of the users with high betweenness and closeness. Betweenness measures the strength of one’s connection to the entire network (i.e., how interconnected or embedded one is in the entire network), and closeness measures one’s distance to the overall network (an average distance). They are measured as:

Betweenness: C_B(v) = Σ_{s ≠ v ≠ t} σ_st(v) / σ_st, where σ_st is the number of shortest paths between nodes s and t, and σ_st(v) is the number of those paths that pass through v.

Closeness: C_C(v) = (N − 1) / Σ_{u ≠ v} d(v, u), where d(v, u) is the shortest-path distance from v to u and N is the number of nodes in the network.

As you might expect, these people tend to drive conversations because they have many connections, which, when analyzed using certain measures of network centrality, can reveal likely members of such conversations. For instance, just by measuring mere mentions within Tweets, clustering coefficients (defined as a given group’s number of connections divided by the number of possible connections) allow us to distinguish “communities” from the rest of the network. Because this graph is directed, meaning I can tell if someone is doing the Tweeting or being Tweeted at, I know a majority of @AAPOR links were created by people mentioning @AAPOR in a Tweet, rather than @AAPOR mentioning others. The community around @AAPOR (the green colored node with several links in the top middle, circled in red) does not include many of the users with high betweenness and closeness because @AAPOR’s propensity to draw Tweets to it naturally steals the thunder of everyone in close proximity.
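For anyone who wants to replicate these measures on their own Twitter data, here is a minimal sketch using the open-source networkx library. The mention edge list is hypothetical, and the community-detection routine shown (greedy modularity) is just one of several reasonable choices, not necessarily the one used for the poster.

```python
# Sketch of the SNA measures discussed above on a directed mention graph.
# Edge (a, b) means user a mentioned user b in a Tweet tagged #AAPOR.
import networkx as nx

edges = [("joe", "adam"), ("adam", "aapor"), ("carol", "aapor"),
         ("joe", "carol"), ("adam", "joe")]  # hypothetical mentions
G = nx.DiGraph(edges)

# Average degree (in- plus out-connections per node)
avg_degree = sum(dict(G.degree()).values()) / G.number_of_nodes()

# Giant component and its diameter (on the undirected version, since a
# mention in either direction links two people in the conversation)
U = G.to_undirected()
giant = U.subgraph(max(nx.connected_components(U), key=len))
diameter = nx.diameter(giant)

# Centrality measures
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)

# Community detection via modularity maximization
communities = nx.algorithms.community.greedy_modularity_communities(U)

print("average degree:", avg_degree, "| diameter:", diameter)
print(sorted(betweenness.items(), key=lambda kv: -kv[1])[:3])
print([sorted(c) for c in communities])
```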

While I haven’t analyzed the content of the conversations occurring among these communities, my guess is there are general topics (e.g., polling, elections, survey methods, statistics, etc.). After all, we tend to Tweet about what we’re interested in. In fact, just looking at the categories identified in Joe Murphy’s post, Tweets coming from @AAPOR and the highest-frequency Tweeter (who I’ll leave unnamed, but have circled in yellow) were generally categorized differently, with the former much more often covering general conference information and the latter covering results from paper sessions.

While this is a quick analysis of the Twittersphere, these analyses can reveal characteristics of conversations and conversation participants that can potentially lead to other discoveries, such as the propensity for ideas to spread (or fade) among certain groups, identifying which individuals (and their behaviors) are key to group cohesiveness, and how we can eventually focus efforts to create a more robust environment for innovation, e.g., creating a hashtag to link people and ideas in the survey and public opinion research Twittersphere, much like market researchers use #MRX.

AAPOR 2013 – The view from the Twittersphere

Last week, more than 1,000 public opinion researchers convened in Boston for the annual meeting of the American Association for Public Opinion Research (AAPOR). The four-day conference was centered on the theme “Asking Critical Questions” and included papers, posters, courses, and addresses from top researchers in the field.

As with each of the last several conferences, some attendees (and non-attendees) took to Twitter to discuss the results being shared and connect with colleagues old and new.  Our analysis of the #AAPOR hashtag (the “official” hashtag for the conference) shows more than 1,500 Tweets during the conference by more than 250 Tweeters.  About three quarters of these were original Tweets and the remaining quarter reposts (retweets) of #AAPOR posts made by others.  This is up about 500% from Tweeting at the 2010 conference. Five Tweeters posted more than 50 times during the 2013 conference, but more than half only posted once.
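For those curious how tallies like these might be computed, here is a minimal sketch assuming the #AAPOR Tweets have been exported to a CSV with hypothetical user and is_retweet columns; it is illustrative only, not the script behind the figures reported here.

```python
# Quick tallies over an exported set of #AAPOR Tweets (hypothetical layout).
import pandas as pd

tweets = pd.read_csv("aapor_2013_tweets.csv")  # hypothetical export file

n_total = len(tweets)
n_retweets = int(tweets["is_retweet"].sum())   # boolean column assumed
n_original = n_total - n_retweets
per_user = tweets["user"].value_counts()

print(f"{n_total} Tweets from {per_user.size} Tweeters")
print(f"original: {n_original} ({n_original / n_total:.0%}), retweets: {n_retweets}")
print("posted 50+ times:", int((per_user >= 50).sum()),
      "| posted once:", int((per_user == 1).sum()))
```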

The conference activities kicked off on Wednesday 5/15 with afternoon short courses and wrapped up on Sunday 5/19.  Overall Tweet volume peaked on Saturday, though the most original Tweets (total minus retweets) were posted on Friday.

By hour, Tweet volume peaked during AAPOR events such as the plenary, President’s address, and award ceremony.  There was also a peak during the Saturday morning paper sessions with many Tweeters relaying thoughts shared by high-profile researchers like Jon Krosnik and Tom Smith.  In the chart below, the solid lines show volume excluding and the dotted lines including retweets.  Noon is indicated by the vertical line above each day.

We sorted original Tweets by time and looked at every fourth one to get a sense of the popular topics of discussion.  Content and research findings from presentations topped the list, making up about a third of all original tweets.

The most retweeted post was from @AAPOR announcing the release of the Non-Probability Task Force Report.  In the spirit of transparency though, it should be said that some (including me) were asked to retweet this announcement.  The post was retweeted 26 times.

Here is a quick word cloud of #AAPOR tweet content during the conference (the phrase “AAPOR” removed, for obvious reasons).
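If you would like to build a similar cloud from your own Tweet archive, here is a minimal sketch using the open-source wordcloud package; the input Tweets are placeholders and the styling choices are arbitrary.

```python
# Generate a word cloud image from a list of Tweet texts.
# Requires: pip install wordcloud
from wordcloud import WordCloud, STOPWORDS

tweet_texts = [
    "Great session on nonresponse #AAPOR",
    "Heading to the plenary #AAPOR",
]  # placeholder Tweets, not the real #AAPOR data

text = " ".join(tweet_texts)
stopwords = STOPWORDS | {"AAPOR", "aapor"}  # drop the hashtag term itself

cloud = WordCloud(width=800, height=400, stopwords=stopwords,
                  background_color="white").generate(text)
cloud.to_file("aapor_wordcloud.png")
```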

Interestingly (to us, anyway), the #AAPORbuzz experiment was more of an #AAPORbust… Few attendees were interested in replying to our Tweeted survey items, despite endorsement from @AAPOR itself. It may be that attendees were more interested in sharing just what they wanted when they wanted, and weren’t looking to respond to a survey/poll/vote while simultaneously discussing the topic of surveys itself at the conference. It would be interesting to find out more about why people did not respond to these items. Were they lost in the sea of Tweets? Would another outreach approach be more effective? Would this work with a different population? Further experimentation should help answer these questions.

Also interesting was the increased focus on Twitter itself at the conference. In addition to the short course I gave with @carolsuehaney (Carol Haney), there were papers focused on Twitter analysis and at least one company (Evaluating Effectiveness) downloading #AAPOR Tweets and posting a dataset on its website. Popular bloggers like @mysterypollster (Mark Blumenthal) used Twitter to announce and link to their blog posts covering the conference.

Is Twitter here to stay as a mode of communication and topic of research for public opinion research?  Time will tell, but adoption and utility for those in the field seems to be on the rise.