NatCen Social Media Research: What Users Want

At the beginning of October 2013, there were reportedly 1.26 billion Facebook users worldwide. The number of Tweets sent per day is over 500 million. That’s a lot of communication happening every day! Importantly for researchers, it’s also being recorded, and because social media websites offer rich, naturally occurring data, it’s no wonder researchers are increasingly turning to such websites to observe human behaviour, recruit participants, and interview online.

As technology constantly evolves, researchers must re-think their ethical practices. Existing guidelines could be adapted ad hoc, but wouldn’t it be better to rethink them for this new paradigm? And what do social media users think about research that utilises social media? The work of the “New Social Media, New Social Science?” network in reviewing existing studies suggests that these questions have not yet been adequately answered.

In response, a group of NatCen researchers are soon to report data from a recent study on how social media users feel about their posts being used in research, and offer recommendations about how to approach ethical issues.

What do participants want?

A key ethical issue participants talked about was consent: participants wanted researchers to ask them before using their posts and information. Although it was acknowledged that “scraping” a large number of Tweets would pose practical problems for the researcher trying to gain consent, users would still like to be asked. Consent was seen as particularly important when the post contained sensitive or personal information (including photographs that pictured the user). An alternative view was that social media users shouldn’t expect researchers to gain consent because, by posting online, users automatically waive their right to ownership.

Participants’ views about consent were affected by other factors, including the platform being used. Twitter, for example, was seen as more public than Facebook so researchers wouldn’t necessarily need to ask for the user’s permission to incorporate a Tweet in a report.

Views about anonymity were less varied. Users felt anonymity should be afforded to all, especially if posts had been taken without consent. Users wanted to remain anonymous so that their posts wouldn’t be negatively judged, or because they were protecting identities they had developed in other contexts, such as at work.

Our participants were also concerned about the quality of information posted on social media. There was confusion about why researchers would want to use social media posts because participants felt that people didn’t always present a true reflection of themselves or their views. Participants noted, for example, how users post pictures of themselves drinking alcohol (which omits any mention of them having jobs or other, more serious things!), and that “people either have more bravado, and ‘acting up’ which doesn’t reflect their real world self”. They expressed concern over this partial ‘self’ that can be presented on social media.

What does it mean?

Later this year, NatCen will publish a full report of our findings, so stay tuned! If you can’t wait, here’s a preview:

  • Consider that users’ posts and profiles may not be a reflection of their offline personality but an online creation or redefinition;
  • Even if users are not utilising privacy settings, they still might expect you to ask permission to use their post(s);
  • Afford anonymity. Even if someone has let you know you can quote their username, you should learn how ‘traceable’ this is and let the user know (i.e. can you type their username into Google and be presented with a number of their social media profiles?). It’s our responsibility as researchers to ensure that the consent we get is informed consent.

Let us know at NatCen if you would like to receive an electronic copy of the report, or if you have any questions about the study.

Social Media, Sociality, and Survey Research: Conversations via Social Media

This week, I’m writing about the sociality hierarchy, a framework we use in our new book, Social Media, Sociality, and Survey Research, to organize our thinking about how individuals use digital media to communicate with each other. My last post was on harnessing broadcast (one-way) communication, like Tweets, status updates, check-ins, and YouTube videos, for social research. Today’s post is about social media and other digital platforms and methods that allow researchers to engage respondents in a conversation, a more traditional two-way interaction between researcher and subject.

In our book, the examples we’ve compiled about applying conversational methods to social media platforms show how traditional survey methods can be transferred to these new environments. The book contains four chapters presenting data collection techniques for conversational data. In “The Facebook Platform and the Future of Social Research”, Adam Sage shows how a Facebook application can be developed to recruit respondents, collect survey data, link to social network data, and provide an incentive for participating in research.

In “Virtual Cognitive Interviewing Using Skype and Second Life”, Brian Head, Jodi Swicegood, and I introduce a methodology for conducting face-to-face interviews with research participants via the internet, using Skype and virtual worlds. We find both platforms feasible for conducting cognitive interviews: interviews on both platforms surfaced many response errors, particularly related to comprehension and judgment. Particular advantages of these technologies include lower cost and access to a geographically dispersed population.

Ashley Richards and I take further advantage of Second Life in “Second Life as a Survey Lab: Exploring the Randomized Response Technique in a Virtual Setting.” In that chapter, we test comprehension and compliance with the RRT. The RRT depends on a random event (such as a coin toss) that determines which question the respondent must answer. The interviewer does not know the outcome of the event, so the respondent’s privacy is protected. By controlling the coin toss (using Second Life code to make it only look random) we were able to determine that significant numbers of respondents did not properly follow instructions, due both to lack of comprehension and deliberate misreporting.
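To make the RRT mechanics concrete, here is a minimal Python sketch of an unrelated-question randomized response design. It is only an illustration of the general technique under assumed parameter values, not the Second Life implementation used in the chapter.

```python
# Minimal sketch of an unrelated-question randomized response design (illustrative only).
import numpy as np

rng = np.random.default_rng(42)

N = 5000                 # simulated respondents
P_SENSITIVE = 0.5        # probability the coin assigns the sensitive question
PI_INNOCUOUS = 0.5       # known "yes" rate for the innocuous question (e.g., birthday in Jan-Jun)
TRUE_PREVALENCE = 0.20   # true rate of the sensitive behaviour (unknown to the researcher)

coin_heads = rng.random(N) < P_SENSITIVE
sensitive_yes = rng.random(N) < TRUE_PREVALENCE
innocuous_yes = rng.random(N) < PI_INNOCUOUS

# Each respondent answers whichever question the coin selected;
# the interviewer observes only the answer, never the coin.
reported_yes = np.where(coin_heads, sensitive_yes, innocuous_yes)

observed_rate = reported_yes.mean()
estimated_prevalence = (observed_rate - (1 - P_SENSITIVE) * PI_INNOCUOUS) / P_SENSITIVE
print(f"Observed yes rate: {observed_rate:.3f}")
print(f"Estimated sensitive prevalence: {estimated_prevalence:.3f}")
```

Because the estimator assumes respondents follow the instructions, simulating (or, as in the chapter, covertly controlling) the random event makes it possible to quantify how much noncompliance distorts the estimate.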

In our final chapter about the conversational level of the sociality hierarchy, David Roe, Yuying Zhang, and Michael Keating describe the decision process required in building a mobile survey panel to facilitate researchers engaging respondents in a conversation via their smartphones. Key elements of the decision process include whether to build a mobile survey app or buy one off the shelf, whether to design a standalone app or develop web surveys optimized for mobile browsers, how to recruit panelists, and how to maintain panel engagement.

In the book we take a broad view of two-way, conversational communication and consider it as any application of traditional survey interactions between an interviewer (or an instrument) and a respondent translated to the online and social media context. Our key guidance is to take advantage of the wealth of survey methods literature and apply (while testing!) traditional assumptions in social media and other online environments. Tomorrow I’ll post about research at the third level of sociality: community-based interaction via social media and other digital environments, where users and participants share content, work collaboratively, and build community.

Altruism: Alive and Well on Facebook?

Facebook has been derided by some researchers as a contributor to growing levels of narcissism in our society.  It is true that a lot of what happens on Facebook is about how you’re seen and gaining approval from peers.  But a recent experiment we conducted recruiting subjects for cognitive interviews using Facebook made me reconsider this simple characterization.

For a paper I’m presenting next week at the 2013 Federal Committee on Statistical Methodology conference (“Crowdsourcing in the Cognitive Interviewing Process”), we tested whether Facebook was viable for cognitive interview recruitment.  We set up three separate ad campaigns, offering either a gift card or a charitable donation to the Red Cross as the incentive, to see which was most effective.

The gift card treatments offer a reward, appealing to extrinsic motivations—“what can you give me?”—whereas the donation appeals to altruism—“what can you give others?”  With a “narcissistic” base of Facebook users, you might expect the gift card to perform better than the donation.  But what we found was that the donation incentive vastly outperformed the gift card treatments, accounting for 50 of our 60 completed interviews.  Not only were those exposed to the Red Cross ad more likely to click on it, they were also more likely to complete the interview after clicking.

In the same session at FCSM, Michael Keating will be discussing a framework for understanding crowd motives based on the MIAB model (motives, incentives, activation, behavior).  Simply put, the altruistic incentive may work something like this: Some Facebook users are Motivated by altruistic causes (like helping the Red Cross); the Incentive of a $5 donation Activates their Behavior of taking the survey and producing the data we need.

As I sit back and think about it, there are some other logical explanations for the Red Cross incentive outperforming the gift card.  People might not feel they can really make good enough use of a $5 gift card (e.g. “$5 isn’t enough for anything really good!”) and the Red Cross incentive may appeal to guilt as much as altruism (e.g. “how can I say no to helping the Red Cross?”).  Further research into the motivators on Facebook and other platforms can help us tailor future messages and designs.

Have you experimented with Facebook ads?  If so, what’s worked for you?  Leave a comment and share your experience!

Video Interviewing: Is It a Feasible Mode of Survey Data Collection?

In keeping with the rapid evolution of technology, survey researchers have historically adapted and modernized data collection by offering alternative modes of participation. Traditional survey data collection modes such as mail, field, telephone, and Web surveys have some limitations compared to more recent communications technologies, which add new features and capabilities.

My RTI colleagues Tamara Terry, Richard Heman-Ackah, Michael Price, and I have been evaluating video interviewing as a new methodology in the data collection toolkit. These days, most desktops, laptops, tablets, and smartphones have built-in or supplemental Web cameras and microphones that allow video conferencing or chatting between two (or more) parties, whether stationary or on the go. More recently, video chatting has become integrated with social networking sites that many users (and potential survey respondents) check and update frequently. As the popularity of these video platforms increases, we focus our concept of video interviewing on two prominent platforms, Facebook and Skype.

Founded in 2003, Skype is communications software that allows users to make phone calls using Voice over IP (VoIP) technology, place video calls, and chat via instant message. Skype has over 100 million users and 65 million daily users (Caukin, 2011), and its popularity makes it a viable option for communicating with respondents. In July 2011, Facebook announced an agreement that integrated Skype’s video calling feature into Facebook chat (Su 2011).

Since 2004, Facebook has evolved and continued its efforts to become a legitimate supplemental, and in some cases outright alternative, communication platform through several tools of communication: content sharing, status updates, commenting, liking, blogging, and chatting.  With staggering growth rates that have reached a current monthly active user base of over 1 billion individuals (Facebook Inc. 2013), and a United States and Canada penetration of nearly 20 percent, Facebook has solidified its position as a major communications platform.  Skype and Facebook are platforms that many individuals use for regular communication and may be willing to use to complete a survey interview.  With their integration, the viability for research is even greater.

For video interviewing to function successfully as a method of data collection, both the interviewer’s and the respondent’s hardware and software will need to meet standard requirements for video communication. Video communication can be a relatively inexpensive proposition. Web cameras typically range in price from 20 to 50 dollars; however, many devices already have this hardware built in. Internet service, an additional cost, would average about 20 dollars a month, and most households already have Internet service on multiple devices. As reported by the Pew Research Center (2013), 82 percent of American adults use the internet.

Video interviewing offers some unique capabilities, such as the ability to visually authenticate or confirm the sample member and the use of qualitative data to analyze respondents’ nonverbal cues to questions. Such technology also allows for recording and analyzing both verbal and nonverbal communication between interviewer and respondent. Other visual considerations are the physical and professional appearance of the interviewing staff and whether being able to view the background of the data collection facility or respondent location compromises privacy and/or security. Additionally, providing visual aids to respondents further enhances communication on survey projects that rely on recall of past events or exposure.

The research on video interviewing is limited thus far. Our recent article published in Survey Practice reviews case studies that use a variety of visual interactions to provide further insight into the potential of video interviewing.  A face-to-face interview is the closest comparison to a video-interviewing data collection method, and during our research we found various case studies that support face-to-face interviewing as a common, preferred method for data collection. Will video interviewing allow for greater access to respondents and the collection of high-quality survey data?  Stay tuned as we pilot these methods and report back with the results!

AAPOR Preview – Social Network Analysis and Survey Response: How Facebook Data Can Supplement Survey Data

If you follow SurveyPost, you may have seen me comment on the potential social network analysis (SNA) has for survey and public opinion research. Next week at the 2013 Annual AAPOR Conference in Boston, I will be presenting a poster that demonstrates how certain measures of centrality that are specific to SNA might be used to either supplement survey data or provide relatively under-explored perspectives of public opinion formation and flow throughout a population.

Using common SNA measures, such as betweenness centrality (used to understand the connectedness of a network) and modularity (used to discover communities within a network), and data from my own Facebook network and a network composed of users of an application I developed called Reconnector (also discussed here), I demonstrate how network analysis can further our understanding of certain concepts common to survey and public opinion researchers. For example, I demonstrate how understanding brokers, the critical individuals who tie two other seemingly disparate individuals together, has potential to optimize snowball sampling. I also demonstrate how grouping individuals within my Facebook network on a certain dimension (friends who Like Barack Obama) allows us to understand shared opinions in the context of social proximity. For instance, by calculating the betweenness of my entire Facebook network and the betweenness of just my friends who Like Barack Obama, I was able to show that my friends who Like Barack Obama were actually more connected (in terms of Facebook friendship) to each other than my friendship network as a whole.
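For readers who want to experiment with these measures, here is a minimal sketch using Python’s networkx library and a built-in toy graph as a stand-in for a Facebook friendship network; the poster’s analysis itself used Gephi and real Facebook data, so everything below is illustrative only.

```python
# Illustrative sketch of betweenness centrality and modularity-based communities (networkx).
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()  # toy stand-in for a Facebook friendship network

# Betweenness centrality: how often a node lies on shortest paths between other nodes,
# a proxy for "brokers" who tie otherwise disparate individuals together.
betweenness = nx.betweenness_centrality(G)
likely_brokers = sorted(betweenness, key=betweenness.get, reverse=True)[:3]
print("Likely brokers:", likely_brokers)

# Community detection via modularity, analogous to running the modularity algorithm in Gephi.
communities = community.greedy_modularity_communities(G)
print("Number of communities:", len(communities))

# Comparing the cohesion of a subgroup (e.g., friends who Like a given page) with the
# whole network shows whether that subgroup is more tightly connected than average.
subgroup = G.subgraph(communities[0])
print("Subgroup density:", nx.density(subgroup))
print("Whole-network density:", nx.density(G))
```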

I hope to see you there!

Understanding Betweenness: By calculating betweenness for each network, it can be demonstrated that the graph on the left (friends that like Barack Obama) is more closely connected (in terms of Facebook friendships) than the graph of all my Facebook friends on the right. The colors represent communities identified by running a modularity algorithm in Gephi.

Session date/time: Friday May 17th 3:15pm – 4:15pm

Session Name: Poster Session 2

Location: Commonwealth Complex A & B


Social Media vs. Online Classified Advertisements: Does where we advertise for cognitive interviews matter?

Cognitive interviewing has become, over the past 30 years, one of the prevailing pretesting techniques used by survey researchers. Despite this, little empirical work has been done on the most effective methods for recruiting prospective participants. Two commonly held notions help explain this: 1) cognitive tests are qualitative projects whose findings are not generalizable to the target population; and 2) researchers have argued that modest sample sizes are adequate to find the measurement errors cognitive testing is designed to detect. But recent evidence suggests that these notions may be incorrect.

A number of techniques have been used to recruit for cognitive testing studies, such as newspaper advertisements, flyers, intercept methods, recruitment firms/panels, institutional or personal contacts, snowball or purposive sampling, and even probability sampling. A more recent technique has researchers placing advertisements on online classified ad sites such as Craigslist, which is popular because it is relatively inexpensive and effective compared to the more traditional recruitment methods. However, the internet has evolved quite a bit in the 5+ years that researchers have been using Craigslist. Social media sites, like Facebook, may offer an alternative with potential advantages on factors such as ad visibility, return on investment, and control over who sees the ad (Head, 2012a).

In this presentation, my co-authors and I will extend a line of research (for initial findings see Head, 2012b) comparing Craigslist, Facebook, and other recruitment techniques on effectiveness and quality. Measures of effectiveness include speed of sample recruitment and geographic dispersion of the sample pool. Measures of quality include the extent to which “professional participants” make up the sample pool and the demographic characteristics of those recruited. We find mixed evidence for differences between the two recruitment platforms; which one is preferable probably depends, as one might expect, on the study’s target population.

Authors: Brian Head, Elizabeth Dean, Timothy Flanigan, Jodi Swicegood, and Michael Keating

Session date/time: Sunday, May 19 10:15 a.m.-11:45 a.m.

Session name: Applications of Social Media to Surveys and Pretesting

Location: Waterfront 3

Moderator: Clarissa Steele

Facebook Graph Search: An Overview and Implications for Survey Research

As you might imagine, the volume of Facebook data is enormous, which is why we consider it to be big data – what else would you call the largest repository of human data ever to exist? Facebook’s Graph Search now relies on these bits of data to respond to more and more complex search queries, and it’s going to change things.

As Facebook continues to roll out its Graph Search, Facebook users will no doubt experience a kind of search they have not encountered before. If you’re unfamiliar with Graph Search, you might best describe it as the ability to utilize the links that tie everything in Facebook together (e.g., friendships, likes, and tags), in addition to the objects themselves (e.g., people, places, and things such as photos or brand pages), to find something you know is inside Facebook but are unsure where it is. Data used for Graph Search comes from Facebook’s Open Graph, whose benefits for researchers I’ve attempted to explore. In addition, Facebook has partnered with Microsoft to support searches of external content (queries outside of Facebook’s Social Graph) with Bing.

As we have discussed in several posts, we’re only beginning to understand the potential uses of Facebook.  One potential use of Graph Search is tracing study respondents. In studies that require re-connecting with participants at some point down the road, it is occasionally the case that someone, for example, changes their physical address, phone number, or name. To uphold the integrity of the original sample of participants, we often analyze the cost and benefit of searching for and locating these “lost” participants. With the advent of the internet, cell phones, and then social networking sites, locating these individuals has become less and less costly and more and more effective. More recently, research has demonstrated this (see Rhodes and Marks, 2012; Wood et al., 2012). However, previous efforts to utilize Facebook as a tracing mechanism were limited by Facebook’s search functions, which only allowed for searches by name, hometown, current city, schools attended, and workplace.

Rather than breaking a search down into discrete characteristics, those characteristics can now be combined into complex queries. Consider the simple curiosity “what restaurants have I visited in Chicago?” I can simply type that query into Graph Search, which produces results that can then be refined. But I can add more specificity. I can search for “people named Adam who like the Cleveland Browns and live in Raleigh, North Carolina.” There are four of us, by the way. From there I can refine the results on a variety of dimensions, including aspects of their basic information (e.g., gender, age, and relationship status), work and education, likes and interests, places they’ve lived or checked in, and their relationships (e.g., family, friends, and significant others). I can also just browse their photos, videos, Friendlists, and likes and interests.
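Conceptually, a Graph Search query is just a conjunction of filters over profile attributes and connections. The sketch below does not call any Facebook API; it uses made-up records purely to illustrate how stacking known characteristics narrows a candidate pool, which is the logic behind using Graph Search for tracing.

```python
# Purely illustrative: combining discrete characteristics into one query (not the Facebook API).
profiles = [
    {"name": "Adam", "city": "Raleigh, North Carolina", "likes": {"Cleveland Browns"}},
    {"name": "Adam", "city": "Chicago, Illinois", "likes": {"Chicago Bears"}},
    {"name": "John", "city": "Raleigh, North Carolina", "likes": {"Cleveland Browns"}},
]

def combined_query(records, name=None, city=None, like=None):
    """Apply each known characteristic as an additional filter on the candidate pool."""
    hits = records
    if name is not None:
        hits = [r for r in hits if r["name"] == name]
    if city is not None:
        hits = [r for r in hits if r["city"] == city]
    if like is not None:
        hits = [r for r in hits if like in r["likes"]]
    return hits

# "People named Adam who like the Cleveland Browns and live in Raleigh, North Carolina"
print(combined_query(profiles, name="Adam", city="Raleigh, North Carolina", like="Cleveland Browns"))
```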

Graph Search also utilizes maps to visualize and filter results associated with geographic locations. Data characteristics such as the location of a friend, or the location of places I’ve checked-in, are plotted on a map, which can then be adjusted to visualize or filter location-based results.

Adding filters of my Facebook friends from college allows me to see where they cluster. In this case, they seem to cluster in Ohio, particularly Kent, Ohio.

While Graph Search is a developing tool, the potential complexity of searches is unprecedented.  For instance, as relationship-based searches are developed, researchers will eventually have the capability of typing in “people named John who are married to people named Sally who live in North Carolina.” Oftentimes tracing study participants can be difficult because data is missing or inaccurate. Graph Search allows us to more easily piece these bits of data together to produce actual results, in essence solving puzzles more quickly, but with fewer pieces.

For me personally, Graph Search gives me more reason to tie my experiences to Facebook. As was intended with Timeline, Facebook is allowing us to document our lives, and along with other technologies such as smartphones, we’re doing so at unprecedented speeds and in unprecedented volumes. Graph Search seems like a handy tool for sifting through it all. On the other hand, privacy issues certainly exist. After all, it is not clear to what extent Facebook users perceive the relationships between pieces of data as data in their own right. So while aspects of my profile are public, and aspects of my friends’ profiles are public, are the connections between them implicitly public? What are your thoughts? Is all of this part of the co-evolution of technology and research?

Social media for social science: The imperfect window

Much of the world, it seems, has been atwitter about social media in recent years. Researchers are no exception. Rather than needing to solicit insight from people with telephone calls during dinner or mailed surveys that largely end up in the trash, social scientists now have readily available tools to observe people’s thoughts and ideas, posted publicly. We also can now easily track, at least in the aggregate, what information people are seeking. As Google has emerged as a close confidante to many of us, we collectively can track concerns about the flu, interest in political party conventions, and what questions people have about nutrition. All of these developments suggest a veritable gold mine for social science.

Researchers have responded in earnest. As Senior Editor for Health Communication, I have noticed a distinct uptick in the percentage of submitted papers that rely in some fashion on electronic surveillance rather than formal solicitation of survey respondents. I have even joined the party myself a number of times.  A few years ago, for example, former graduate student Brian Weeks and I looked at search interest in (completely unsubstantiated) rumors about Barack Obama, as measured by Google search data, and its direct (if ephemeral) correspondence to television and print news coverage. Whether we should rush headlong toward this research approach without caveats, though, is an open question.

Mounting empirical evidence suggests that we vary substantially in our engagement with social media, and yet the exact nature of that variation is not fully understood or appreciated by researchers. Much has been made of the so-called digital divide, which points to the role of socioeconomic factors in explaining Internet use and to a gap between those with access to technology and those without. The electronic media landscape has changed since the 1990s, however, and economic factors may no longer be the most powerful predictor of social media use. Spokespeople from IBM have forecast the imminent closure of the digital divide as more and more people from a range of socioeconomic backgrounds adopt mobile technology that allows ready access to the Internet. Even so, people still differ fundamentally in how they use social media. A recent paper suggests, for example, that our basic personality is evident in our pattern of engagement with Facebook.

What we also know is that the public display of information and information sharing between people vary as a function of topic, circumstance, and even available social network ties. A few years ago, collaborators and I found that viral marketing for a free mammography program was constrained by the social ties available in one’s immediate community. In a different example, colleagues and I recently found in a study of household energy tip sharing that relatively few people opted to post such information via social media (as opposed to other means of interpersonal communication). As I outline in a new book – Sharing Disparities: Social Networks and Popular Understanding of Science and Health – to be published this year by RTI Press, information itself is not equal in its tendency to be shared. Emotionally provocative information, or information that addresses a pressing situation of uncertainty, seems more prone to being shared than other types of information (hence the proliferation of rumors relative to dry expository information). Moreover, Pew recently reported a substantial discrepancy between Twitter sentiment and that assessed through other public opinion measurement.

What does all of this mean for social scientists interested in leveraging our electronic forays as evidence of generalizable thoughts and sentiments? It does not suggest that there is no utility in such data; far from it. Research using such datasets is noteworthy and has proven useful in detecting the emergence of urgent concern, e.g., searches for flu symptoms. Nonetheless, we need to be cautious in suggesting that the only generalizability limitation for Internet-based research involves socioeconomic disparity. Who publicly posts, what and when they post, who forwards content and to whom they forward, and even who searches are all constrained by fluctuation in individual circumstance, topical salience, social norms, and the availability of technology and social network resources. We need more research regarding these constraints to better understand when, and how much of, the glittering mine of big data from social media is actually valuable and what and whom it represents.

Recruiting on Facebook and Craigslist: A tool to help estimate costs and make decisions

A few months ago I wrote about some initial findings from a study in which we compared online classified advertisements (Craigslist ads) to advertisements on a popular social media site (Facebook ads) for recruiting a nonprobability sample.  In that piece I noted that Facebook ads could, in some situations, be an alternative to the more commonly used Craigslist ad.  Put simply, Craigslist (CL) has some drawbacks.  It is difficult to use CL to recruit when a target population is diverse or geographically dispersed, and when trying to recruit a large sample.  There are also issues with one’s ability to control who can see a CL ad or with getting one seen by the right people.  In some situations Facebook can be an alternative space on which to advertise.

As part of my previous post, I mentioned developing a tool that would aid in decisions about whether to pursue Facebook ads as part of a recruitment strategy.  Working with my colleague John Holloway, who possesses valuable programming skills, I was able to develop such a tool. I hope others will find it useful, and suggestions for how it could be improved are welcome.

The tool offers three options for calculating advertising and sample costs.  Because the three options share inputs and outputs, I have provided a description of each below.  Next, I offer a description of when each option could be useful and offer a guide on how to use them.

Inputs

  • Desired sample size—“Sample” is a term used by researchers to mean different things.  Here I mean the total number of people who complete a screening survey and from whom study participants could be selected.  It is common, for example, in cognitive interview studies to interview nine participants (usually per round).  However, I may wish to recruit 50 people from whom to select the nine participants, with the intention of meeting demographic quotas.  In that case my desired sample size is 50.
  • Labor minutes—the labor needed to run an advertisement on Facebook and on CL is quite different.  Both incur upfront costs.  For Facebook, that can include time spent searching for the right image (which is important) and formatting it to meet Facebook requirements, crafting a headline and message that fit within relatively strict word limits, identifying the target population, and defining where and when the ad will run.  For CL, labor includes crafting a headline and message (including multiple, different ads for each city, since CL restricts posting the same ad across markets) and reposting ads (this is necessary for certain cities, as posts are automatically removed with a periodicity that depends on the city to which the ad is posted—and it is also important to keep the post near the top of the list to maintain visibility).  Ads can include an automated means for potential participants to provide contact info (e.g., a web form) or the option to call a study representative to provide it.  If the latter is offered, then one should include time to take those calls in labor estimates; for the call-in option, the more successful the advertisement, the more it will cost.
  • Labor cost—the amount that the person who is working on advertisements is paid.
  • Expected clicks—Facebook makes money based on the number of times an ad is clicked (CL is free).  Consequently, the more an ad is clicked, the more you pay.  But you won’t necessarily get a participant to sign up for your study every time they click on the ad.  So, you must estimate how many clicks there will be per person who volunteers for your study (e.g., completes a screening survey).  This can vary depending on the target population, the quality of the ad (well-designed ads are clicked more), the salience of the study to the target population, etc.
  • Bid price per click—Facebook bases how frequently your ad is shown to users upon how much you are willing to pay.  Again, how much you should bid depends largely on the target population.  Our experience has been to estimate between $1.00 and $3.00 depending on the population.  While it affects the estimates in the calculation, it’s worth pointing out that when you run a Facebook ad you can bid low at first and incrementally increase as needed. Facebook does offer a suggested bid price based on the targeting you do.  So, you can start with this suggested cost per click and increase it if needed.

Outputs

  • ROI tab
    • Total labor cost = (Labor minutes to place ad / 60) * Labor cost per hour
    • Number of clicks to reach desired sample size = sample size * expected clicks
    • Cost per sample piece = Total advertising cost / sample size
  • Facebook sample I can afford tab
    • Estimated Sample Size You Can Afford = Advertising Budget for Ads / Cost per Screener Complete
    • Cost per screener complete = Expected clicks * Bid price per click
    • Total advertising cost = Total Labor Cost + Advertising Budget for Ads
  • How big a budget for Facebook ads
    • Total Labor Cost = (Labor to Place Ad in minutes / 60) * Labor Cost per Hour
    • Advertising Budget for Ads = Desired Sample Size * Cost Per Screener Complete
    • Cost Per Screener Complete = Expected Clicks * Bid Price Per Click

Options

Return on Investment (ROI)—the primary reason I decided to work on this tool, this option allows the user to compare relative costs of Facebook and CL ads.  That is, this option is useful if one is trying to decide on whether or to what extent to include each ad type.

How much Facebook sample can I afford—this option is useful if one has decided to use Facebook ads, has a set budget, and would like to determine what sample size they will be able to recruit.

How big a budget for Facebook ads—this option is useful if one knows the sample size they wish to recruit and needs to know how much to budget.
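To make the arithmetic behind the three options concrete, here is a minimal Python sketch of the Facebook-side calculations using the formulas listed above. The published tool is interactive, so the function names and example figures below are illustrative only.

```python
# Illustrative sketch of the tool's three calculations (labor in minutes, costs in dollars).

def total_labor_cost(labor_minutes: float, labor_cost_per_hour: float) -> float:
    return (labor_minutes / 60.0) * labor_cost_per_hour

def cost_per_screener_complete(expected_clicks: float, bid_price_per_click: float) -> float:
    # Expected clicks per completed screener times the price paid per click.
    return expected_clicks * bid_price_per_click

def roi(sample_size, labor_minutes, labor_cost_per_hour, expected_clicks, bid_price_per_click):
    """ROI option: total and per-complete cost of a Facebook campaign."""
    labor = total_labor_cost(labor_minutes, labor_cost_per_hour)
    ad_spend = sample_size * expected_clicks * bid_price_per_click
    total = labor + ad_spend
    return {"total_advertising_cost": total, "cost_per_sample_piece": total / sample_size}

def affordable_sample_size(ad_budget, expected_clicks, bid_price_per_click):
    """'How much Facebook sample can I afford' option."""
    return ad_budget / cost_per_screener_complete(expected_clicks, bid_price_per_click)

def required_ad_budget(sample_size, expected_clicks, bid_price_per_click):
    """'How big a budget for Facebook ads' option."""
    return sample_size * cost_per_screener_complete(expected_clicks, bid_price_per_click)

# Hypothetical example: 50 completes, 90 minutes of labor at $30/hour,
# 4 clicks per complete, $1.50 bid per click.
print(roi(50, 90, 30, 4, 1.50))              # {'total_advertising_cost': 345.0, 'cost_per_sample_piece': 6.9}
print(affordable_sample_size(300, 4, 1.50))  # 50.0
print(required_ad_budget(50, 4, 1.50))       # 300.0
```

A comparable CL estimate would consist of labor costs only (ad placement is free), which is why the ROI comparison hinges largely on how much labor each platform demands.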

How to use the tool

Each input can be modified either by using the arrow buttons to the right of the input box or by clicking inside the input box and typing the desired value.

Summary

The three-option tool is intended to help when deciding on the use of Facebook and CL advertising for nonprobability samples.  It is entirely possible that others have thought of inputs I have not.  But I hope that the tool is useful and can be improved over time.  Are there factors or inputs you consider when designing your ad strategy that aren’t considered here?  Have you had experiences with using both Facebook and CL ads to recruit?


Recruiting From Chronic Condition Populations in Second Life

In one of our recent studies we recruited from four chronic condition populations in Second Life (SL): diabetes, chronic pain, HIV/AIDS, and cancer. Many of the methods used to recruit SL users with these conditions were the same as those used to recruit nonprobability samples from the general population: social media (e.g., Facebook ads), Craigslist, SL Classifieds, the SL Forum, and word-of-mouth recruiting (see previous findings here). However, in this study we also used more targeted approaches, including information sessions held in health-related support communities to inform community members directly about the study. Other targeted strategies included making contact with health community leaders inworld (in SL) whose help we enlisted in spreading the word about our survey. A referral program offered additional incentives to participants who referred others to the study.

One new approach we used was recruiting SL users through New World Notes (NWN), a blog that covers SL and other virtual worlds. Through a partnership we initiated with NWN, we maintained a permanent ad on their website linking readers to our eligibility survey. NWN creator and contributor Wagner James Au also blogged about our study and posted links to our survey on the social networking sites Facebook, Twitter, Plurk, and Google+.

We were interested in not only the overall effectiveness of these methods, but also whether they varied by chronic condition. Essentially, were some methods more effective with certain members of our sample? Table 1 highlights our results. These data are self-reported and represent the total number (314) that completed a brief screener survey to determine their eligibility.

Table 1: Recruitment Method by Chronic Condition 

Figure 1: Recruitment by Condition

For the diabetes, chronic pain, and cancer samples, the NWN blog was the most effective recruitment method. For users with HIV/AIDS—a population who may have greater concerns about privacy—the referral program was the most effective method.  The second most effective method was the SL Forum for users with diabetes, “Other” for chronic pain, and an SL Support Community for those who currently have or have ever had cancer. The SL Classifieds, Facebook, and “Other” were tied as the second most effective methods among our HIV/AIDS sample.

Other interesting findings are that the majority (64 percent) of our total sample reported chronic pain, followed by 24.8 percent reporting diabetes or prediabetes, 8.6 percent reporting cancer, and 2.5 percent reporting HIV/AIDS. One unanswered question is how this sample might differ from the general population of SL users, as well as from the general U.S. population. When compared to the total number (573) recruited for this study—including ineligibles—our sample only slightly overrepresents these conditions relative to the U.S. population. For example, of the 573 recruited participants, 35 percent (201) reported chronic pain, compared to 32.5 percent of the general U.S. population; 13.6 percent reported diabetes, compared to 8.4 percent; 4.7 percent reported cancer, versus 3.9 percent; and 1.4 percent reported HIV/AIDS, compared to approximately 0.4 percent of the overall population.

From these findings we believe an online blog can be highly effective in recruiting study participants from chronic condition populations—mainly as a result of increasing the study’s visibility and generating higher traffic to a screener survey. While other methods yielded higher percentages of eligible participants, the NWN blog recruited the largest number of users, at 95. Other successful methods were the SL Forum among SL users with diabetes and an SL Support Community for our cancer sample. One explanation for why an SL support community was effective in recruiting users with cancer is that these patients, more so than those with diabetes, chronic pain, or HIV/AIDS, are more likely to seek support from communities and groups that benefit from this type of inworld interaction. More research should be done to determine whether other blogs and online communities, as well as the other methods used here, are effective in recruiting people with these and other chronic conditions.