mHealth – Day 1 Recap

With Day 1 of mHealth wrapped up, I’d like to share my thoughts on some underlying themes and takeaways from the sessions and presentations I was able to catch. Specifically, I attended sessions on shaping care coordination with mobile tech, chronic disease management, and wearable tech and fitness devices, as well as a few short presentations at the NIH pavilion.

Following his talk on mobile clinical decision support tools, Robert Furberg mentioned the need to reconcile the speed of technological development with the seemingly glacial speed at which science adequately and appropriately adopts such developments. While there is growing recognition of the need for the kind of technology integration mHealth seeks to promote, the scientific and healthcare communities do not always adhere to established guidelines, standards, and best practices in science and medicine when adopting new technology.

Today I was reminded of the impact mobile technology can have, will have, and is already having on health improvement, healthcare delivery, and healthcare costs (and the list goes on). From mobile tool kits that use tablets to administer patient questionnaires, reminders, and messages, to gaming approaches to health interventions, each presentation discussed how mobile can bring solutions to issues that were once too complex to address efficiently and effectively. This is possible because mobile is where a majority of us have consolidated several everyday tasks, including one of the most natural tasks: communicating. And where we communicate is where we exchange information (i.e., data). In fact, researchers from a variety of disciplines are facing the same reality (several of which we discuss in our new book as it relates to survey research).

So if we’re all in agreement that a trend toward mobile solutions is a necessary one, what are some of the more immediate hurdles keeping us from a world where, as one speaker put it, “mHealth is as taken for granted as the internet is today”? Well, in my opinion, a more ubiquitous mHealth means seamless integration as both an experience and a process.

Personally, given my work creating and developing data collection applications that use Facebook’s API, I was happy to hear the terms “user-centered design” and “APIs” in some of today’s presentations. Specifically as it relates to mobile applications, I feel these are where the experience (user-centered design) and process (APIs) seams exist:

User-centered design: How can information be collected and used in ways that promote application use and response rates, and reduce things like application fatigue? At the NIH pavilion in the Exhibit Hall (a cool way of adding on short presentations and demos, BTW! It’s not candy or trinkets, but a giveaway that will last a lifetime – knowledge!), I noticed an underlying theme in the short 10-minute presentations – user experience (UX). Seamless integration into the mobile experience is critical to service delivery and data collection. An application can hit its objectives – say, message delivery for behavioral interventions (e.g., smoking cessation, exercise, or diet monitoring) – on all cylinders except UX and still produce ineffective results. So if your application isn’t gaining any traction, consider how the end-user feels. And remember, superb functionality for a researcher does not always mean a superb app for the user!

In his presentation, Furberg noted that the decision support tool was well received by its users, and that to optimize the effectiveness of such apps, researchers and developers should also concern themselves with APIs (application programming interfaces).

APIs: Why are APIs important? Understanding API structures can mean the difference between systems that communicate smoothly – transferring and using data in ways that work for all stakeholders – and a clunky setup everyone complains about. When platforms can effectively speak with one another, researchers can efficiently and effectively obtain the information they need. This goes for any process that receives, disseminates, and utilizes data. It could be a survey instrument collecting and sending data to a server, which then passes it to a case management system where it is analyzed in ways that improve survey delivery and, in turn, data quality. It could also be a mobile app that collects patient data to help physicians implement care guidelines, with that data shared with administrators, and possibly even researchers for longitudinal tracking.
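To make that hand-off concrete, here is a minimal Python sketch of a survey instrument posting one completed response to a hypothetical case-management API endpoint. The URL, field names, and token are illustrative assumptions, not part of any system described above.

```python
# Minimal sketch: a survey instrument pushing one completed response
# to a hypothetical case-management API. Endpoint, fields, and token
# are illustrative assumptions only.
import json
import urllib.request

API_URL = "https://example.org/api/v1/responses"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"              # hypothetical credential

def submit_response(case_id: str, answers: dict) -> int:
    """POST one survey response; return the HTTP status code."""
    payload = json.dumps({"case_id": case_id, "answers": answers}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    status = submit_response("case-001", {"q1": "yes", "q2": 3})
    print("Server returned HTTP", status)
```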

That’s all for Day 1 – I look forward to hearing and seeing what mHealth has in store for Days 2 and 3!

Market Research in the Mobile World – Recapping Day 1 Keynotes

It’s the morning of Day 1 of the Market Research in the Mobile World (MRMW) conference, and the keynote speakers have done an excellent job explaining the context of a mobile world. To warm up the crowd, conference MC Mary Evans Kasala from Capella University asked everyone to engage in some show and tell – show your neighbor your mobile phone and tell them why you like it. My neighbor had a flip phone he’s owned for 10 years and an iPod he used for his mobile internet. For the record, his wife has an iPhone, so he wasn’t exactly shunning the smartphone revolution.

The mere fact that everyone readily had a mobile phone to show and tell (some of us with multiple mobile devices) was a clear indicator that we’re tethered to these devices. And as Guy Rolfe of Kantar Mobile pointed out, some of us are so tied to them that picking up our phone was probably the first thing we did this morning. When we check our mobile devices before we accomplish anything else in the morning, or email or text someone in another room rather than physically walk over to speak with them, you might say we’re living in a mobile world. But what does that mean for researchers?

Jeanine Bassett (VP of Global Consumer Insights at General Mills) suggests that, for a variety of reasons, mobile is THE way to do research in many respects. If not for trends in mobile adoption, or the fact that Millennials have little to no grasp of more dated modes of communication, then for the versatility of mobile devices (photos, internet, email, video, etc.) and the ability to capture people in the moment. In fact, that’s what companies like General Mills and Electronic Arts are doing. Lisa Spano, Head of Consumer Insights at EA, discussed how embedding surveys within mobile games provides more context-appropriate and expedient evaluations of those games (sometimes yielding thousands of responses overnight!). But is that the extent of mobile research? Surely we’ll develop more creative ways to utilize mobile phones in research, but to what extent will further research and development into mobile methods pay off when other technologies are on the horizon? This is the very question General Mills is asking, and why they’ve decided to sunset their mobile research agenda after 2014.

So why, exactly, is General Mills sunsetting their mobile research agenda when there is clearly work to be done and insights to be had? Perhaps, as Guy Rolfe suggested, it’s because wearable tech is on the horizon – indeed, the next big thing is on its way! While mobile research methods are certainly not drawing to a close, research on a global scale will continue to require methods that integrate the technology we’re actually using and the ways in which we use it. Increasingly, this includes the latest and greatest in the tech world, as developing countries tend to bypass the bridge technologies that most Western countries experienced (e.g., landlines, dumb-phones, wired internet access, etc.) and move straight to the fun stuff!

Over-reporting of Mobile Phone Use: Methods for Improvement

A recent article in the Journal of Computer-Mediated Communication examines the accuracy of self-reported mobile phone use. The study is similar to one I reported on previously that compared self-reported versus actual Tweeting. In the present study, researchers compared self-reports to cell phone records and found that respondents over-reported use of their cell phones for outgoing calls and text messages. In fact, only 3% of respondents said they place outgoing calls less often than 1-2 times per week, but the records showed that 17% of respondents should have selected this response option. That is a big difference! In this post I’ll discuss explanations for the over-reporting and methods to improve the accuracy of these measurements, and I’ll end with a few lingering questions and ideas for future research.

Why might phone use have been over-reported?[1]

  • For most respondents, placing calls or sending text messages is not out of the ordinary, so these events are not memorable and are therefore more difficult to recall.
  • The questions were asked in a way that differs from how we normally think about our phone use. Because of the way billing is typically set up, I assume most users think in terms of total minutes per month rather than outgoing calls per day or week.
  • Cell service is not cheap. People are probably aware of how much they pay or what they pay for (e.g., 1,000 minutes per month), and then they use that price tag or target use to estimate how much they use their phone (or should use their phone, to get their money’s worth if under contract). Maybe people inadvertently over-report to try to justify to themselves why they pay so much.

How could measurement of phone use be improved?

  • Reframe the questions to ask about cell use in a way that matches how people actually think about it. As I mentioned earlier, it may be easier for respondents to think about their minutes or texts per month. Unfortunately, respondents may lack awareness of their use if they have unlimited texting or a monthly minute allowance so high they don’t worry about surpassing it. Even people who track their minutes carefully may not have an accurate perception of their actual use because many plans offer free nights and weekends, free calling to others with the same provider, etc.
  • Ask respondents to consult their phone records. This is a great approach in theory, but in reality, it’s quite a bit of effort for respondents. They may not even have access to this information at the time they complete the survey, or they may be unwilling to take so much time to dig up the requested info. Even if they are willing, they may struggle to correctly interpret the records.
  • Bypass respondents and collect this information directly. This can be done by acquiring records (as was done in this study) or by tapping into their phones (e.g., using an app that monitors and transmits usage statistics). This option is by far the most accurate. However, it’s likely to be the most expensive and time-intensive. Plus, some respondents will inevitably refuse access to their information. (A rough sketch of how such records might be summarized follows below.)
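As an illustration of that record-based approach, here is a minimal Python sketch. It assumes call records have been exported to a CSV file with hypothetical “timestamp” and “direction” columns; a real study would work with carrier data or an on-device logging app instead.

```python
# Minimal sketch: summarize exported call records (hypothetical CSV with
# "timestamp" and "direction" columns) into outgoing calls per week, the
# quantity respondents were asked to self-report.
import csv
from datetime import datetime

def outgoing_calls_per_week(path: str) -> float:
    """Average outgoing calls per week over the period covered by the records."""
    timestamps = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["direction"] == "outgoing":
                timestamps.append(datetime.fromisoformat(row["timestamp"]))
    if len(timestamps) < 2:
        return float(len(timestamps))
    span_days = (max(timestamps) - min(timestamps)).days or 1
    return len(timestamps) / (span_days / 7.0)

if __name__ == "__main__":
    rate = outgoing_calls_per_week("call_records.csv")
    # The survey's lowest category was "less often than 1-2 times per week".
    print(f"Outgoing calls per week: {rate:.1f}")
```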

Like nearly every decision in survey design, this decision comes down to weighing the pros and cons of different approaches. And as usual, there is no “right” answer, but rather a call for further research.

Next steps

I’d be curious to see how the accuracy of self-reports differs for minutes used per month versus number of calls per day or week. If the purpose of the question is to separate respondents into groups of heavy and light mobile phone users, then asking about minutes used per month might not only be sufficient but also more accurate.

The authors point out that respondents’ perceptions of whether they are heavy or light users likely shape their responses. I’d also be interested in knowing how accurately respondents could classify themselves into one of these groups when asked about phone use in the most general way possible.

One final thought

Last week there was a discussion on AAPORnet about how best to word a question about time spent at the airport. Someone pointed out that the approach will differ if you’re interested in perceived versus actual time. The same is true in this case. Perceived phone use would likely be of greater interest to a market researcher trying to learn how much people are willing to pay for phone service than to an academic trying to discover how communication patterns are changing over time. Regardless of what you’re trying to measure – time at the airport, phone use, or something else – cognitive testing is a beneficial step in the process of developing a good survey. Cognitive testing can highlight issues with a survey (e.g., questions that are misunderstood or that ask for information respondents are unable to recall) before the survey is fielded, ultimately leading to more accurate data.



[1] Note: The survey was conducted with a sample from Norway, but my explanations assume a cell system similar to the U.S.’s.

The Advantages of Crowdsourcing Through Twitter

A colleague of mine recently shared an NPR story on Women Under Siege, a project using crowdmapping to gather real-time data on rape and other forms of sexualized violence in Syria. Women Under Siege collects reports from survivors, witnesses, and first responders via a web form, email, SMS, and Twitter (#RapeinSyria). The data are then analyzed by public health researchers, and reports are plotted on a crowdmap using the open source Ushahidi Platform. The map provides a visual reminder of the prevalence of this violence, further emphasizing the importance of this public health research.

I was initially surprised to read that Women Under Siege collects these data on Twitter. I assumed that accounts of sexualized violence would be rare on Twitter for the same reasons this violence tends to be underreported in surveys (shame, stigma, fear of retaliation, etc.). And it turns out that this reporting method is underutilized: even though the project seeks reports of violence via Twitter and other methods, all 137 reports so far have come in via the web form. Twitter issues aside, I remained skeptical about whether crowdsourcing was actually beneficial for such a sensitive topic.

However, as I read more about the project, I started to see the benefits of crowdsourcing these data. Compared to traditional survey data collection, this approach has several advantages. First, crowdsourcing is much cheaper than conducting a survey. Second, data are collected and available much more quickly: crowdsourced data could be available within hours of the violence taking place, compared to potentially months for survey data. Third, crowdsourcing enables anonymous reporting through Women Under Siege’s web form. Anonymity is assured in legitimate surveys, but respondents may question how their information will be protected and may hesitate to reveal sensitive information to an interviewer. Fourth, crowdsourcing can reach a broader group of people, including those with only second- or third-hand knowledge of the violence. Although these respondents may have fewer details, and perhaps some inaccurate ones, they may be more inclined to report all the information they have. Perhaps it is worth accepting less accuracy in the details of the incidents to gain this perspective on the scope.

Overall, is reaching out to more people through crowdsourcing better? Normally I’d say no – it’s more important to draw a representative sample so you can make inferences to the target population, as is done in scientifically rigorous surveys. In this case, however, I can see the benefits of collecting as many reports as possible via crowdsourcing. For instance, collecting these reports in real time draws more attention to the prevalence and seriousness of this violence in Syria. The method may also provide a more accurate snapshot, reducing the likelihood of underreporting by reaching more people who may have encountered sexualized violence and offering them a more anonymous way to report it. As with a survey, the findings should be interpreted with a critical eye and the shortcomings of the methods made clear. But even though these methods do not provide the probability-based assurances of a survey, they offer a relatively efficient and timely glimpse where traditional surveys may not provide the best cost/benefit for the job. Where else might these methods be appropriate in addition to, or in place of, a survey? We welcome your thoughts!

SAPOR 2012 – Thinking Outside of the Traditional Survey Research Toolbox

“Mobile, Social, Global: Applications of Emerging Technologies in Survey Research,” a short course offered by RTI colleagues Adam Sage and Robert Furberg, provided an overview of how mobile technologies are being leveraged in what Sage referred to as Social Science 2.0. This portion of the course described the evolution of the web, the emergence of social media, and other factors that distinguish Web 2.0 from the Internet of the ’90s and early 2000s (pre-dot-com bubble burst). This includes an overall internet experience that is more social, interactive, and user-centered. The birth of social networking sites such as Facebook and Twitter, as well as Wikipedia, APIs, and blogs, further characterizes this shift toward a more self-sustaining data environment.

So what does this mean for the future of survey research and data capture? Ultimately, these changes allow for more interactive data collection opportunities, which have the potential to alter the way we administer surveys, measure context, and collect and create data. Several of these new technologies, specifically Facebook and Twitter, are already being used in social research. Targeted ads on Facebook have been used to recruit nonprobability samples from specific populations, including Second Life users. Data from Facebook status updates are being used as measures of national happiness; these updates, as well as ‘likes’ and content that users share, can also provide supplemental data for researchers. Likewise, text and sentiment analysis are being used to track specific content on Twitter, such as health epidemics, giving public health officials the ability to better plan and prepare for potential health crises as well as the annual flu season. Public opinions and attitudes can be tracked via social networking sites, which researchers can use to measure mood, diet, activity, and other health behaviors. These sites also give researchers the ability to communicate with and engage participants over a longer time period, which is useful in longitudinal surveys.
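As a toy illustration of the kind of keyword tracking described above, here is a minimal Python sketch that counts flu-related terms in a small batch of tweet texts. The keywords and tweets are made up for the example; a real pipeline would pull posts from the Twitter API and apply a trained sentiment or classification model rather than simple keyword matching.

```python
# Toy sketch of keyword tracking over tweet texts (the kind of signal
# public health researchers monitor). Tweets and keywords are made up;
# a real pipeline would pull from the Twitter API and use a trained
# sentiment/classification model rather than simple keyword counts.
from collections import Counter

FLU_KEYWORDS = {"flu", "fever", "cough", "sore throat"}

def count_flu_mentions(tweets: list[str]) -> Counter:
    """Count how many tweets mention each flu-related keyword."""
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for keyword in FLU_KEYWORDS:
            if keyword in lowered:
                counts[keyword] += 1
    return counts

if __name__ == "__main__":
    sample_tweets = [
        "Home sick with the flu again :(",
        "This cough will not go away",
        "Beautiful day for a run!",
    ]
    print(count_flu_mentions(sample_tweets))
```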

Armed with this information, my next question was: how widespread are these technologies in today’s society? I learned that 83% of U.S. adults own a cell phone, while one-third of households are mobile-only. (Think about the implications for telephone surveys…) What’s more, smartphone usage is growing exponentially, with mobile technology use highest among adolescents, young adults, socioeconomically disadvantaged populations, and less educated young adults. The implications of these technologies for tracking health outcomes were also explored: 50% of U.S. adult cell phone users have apps on their phones, and 29% of app downloaders have downloaded apps that allow them to better track or manage their health. These apps can increase patients’ self-awareness and accountability to an adherence mechanism, and ultimately improve their own health outcomes. Research in this area has yielded positive results, and these findings have exciting implications for medication adherence and the way in which patients view their role in their overall health and wellbeing.

This course highlighted a few of the unique capabilities of mobile technologies as they are currently being used in the field. There are distinct limitations to each platform/technology and the extent to which current methods translate is inherently unknown. The value of this course was a better understanding of how these emerging technologies are pushing the current bounds of survey research. This discussion invites social and survey researchers to take a more innovative approach to our work as we uncover new tools and ultimately expand our survey research toolbox.

Survey Research in 2012: The #1 Top Tech Development of 2011

#1 The Pew Internet & American Life Project Smartphone Report 

Americans love their smartphones more than ever, according to the Pew Research Center. In July, new findings indicated that 35% of Americans own smartphones. When asked, 72% of owners spoke positively about their smartphones. Certain demographic subpopulations use smartphones more than others, including the college educated, those with household incomes over $75,000, 18-44 year-olds, and African Americans and Latinos.

Among smartphone owners, 25% say they mostly go online using their phones rather than a computer. This is particularly true for smartphone users who are younger than 30, nonwhite, and have lower than average income and education levels.

What’s important about these findings for the future of social research? It’s noteworthy how positively people feel about these mobile devices. Respondents used words like “convenient,” “love,” “satisfied,” and “necessity” to describe how they feel about their smartphones. More data collection efforts should be designed to incorporate mobile surveys or other data capture tools using smartphones simply because people like to use them. Given the overwhelmingly positive feelings people have about their smartphones, the 35% ownership rate can be expected to grow as these devices become accessible to more users.

Going forward, studies incorporating a smartphone sample should take advantage of smartphone apps because, unlike web, SMS, and voice, apps are unique to smartphones. Mobile design should also be considered when designing standard web surveys, given that 9% of Americans use smartphones as their only means of Internet access. Finally, studies targeting African Americans, Latinos, young people, and the affluent should consider mobile data collection modes since these populations are more likely to be smartphone users.

Social Media and the Transportation Research Board Annual Meeting

I will be presenting a synthesis of RTI International’s research on social media during the 2012 TRB Annual Meeting beginning on January 22 in Washington, DC. This annual gathering of over 11,000 transportation professionals from around the world will focus on the theme “Transportation: Putting Innovation and People to Work”. With more than 4,000 presentations, these four days will be packed with information and new ideas.

Some problems are perennial in research, including in transportation surveys. Everyone is looking for the next best thing in survey research. How can we improve response rates? How can we get responses from hard-to-reach populations? How can we reduce respondent burden?

Transportation surveys can be especially time consuming and tedious when respondents are asked about details for every transit trip they make. It can also be hard to find survey respondents who are regular transit users or who require paratransit. The session “Innovations in Travel Surveys” will focus on respondent fatigue and recall memory for transit trips – two areas where social media may be able to make a positive impact.

Tools such as Facebook registries, Tweet sentiment analysis, text message reminders for travel diaries, GPS and location features, and Second Life surveys for sensitive questions like impaired or distracted driving may help researchers overcome these issues in transportation.

The workshop on Incorporating Social Media in Transportation Surveys will take place on Sunday, January 22 at 1:30 p.m. and is sponsored by the Travel Survey Methods Committee, Public Involvement in Transportation Committee, and Public Transportation Marketing and Fare Policy Committee of the TRB. The outcomes of our workshop will be a list of agreed-upon applications of social media in transportation surveys and a list of topics for future research ranked by priority.

Internet and Mobile Trends

SurveyPost researchers are actively exploring the use of smartphones and mobile devices in our research. From SMS health interventions to mobile app creation, these devices offer new and exciting opportunities to make advances in our approaches to data collection. In this infographic, SurveyPost researchers Adam Sage and Robert Furberg have provided an overview of Internet connectivity and smartphone adoption to help illustrate why we are harnessing mobile technologies.

Pairing Text Messaging with the BreathEasy Application

Working with our own funding, RTI has created the RTI Short Message Service (or ARTEMIS) platform, a web service designed for researchers interested in investigating the use of text messaging to support health behavior change, including risk reduction, disease prevention, and chronic disease management. Designed by researchers for researchers, ARTEMIS can meet the unique needs of investigators studying the effect of these technologies on individuals. Our project staff supports intervention design and implementation across a range of areas from message development through analysis of data gathered from participants via SMS.

ARTEMIS is a network service based on web technologies and built on top of the Adobe ColdFusion application server and the Microsoft SQL Server database engine. The platform consists of the following core components: a messaging communications module that sends messages via an SMS gateway directly to all major mobile carriers; a message campaign manager that allows multiple simultaneous interventions to run; scheduling and logging features for customization and reporting; and a connector to research profiles that enables messaging to be tailored to individual subjects.
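To give a feel for the scheduling-and-sending logic such a platform needs, here is an illustrative sketch in Python. ARTEMIS itself is built on ColdFusion and SQL Server, so this is a simplified analogy rather than the platform’s code, and the gateway call is a placeholder for a real SMS gateway API.

```python
# Illustrative sketch (not ARTEMIS code): the basic scheduling-and-sending
# loop a messaging platform like the one described needs. The gateway call
# is a placeholder; a real system would POST to an SMS gateway API and log
# each delivery to a database for reporting.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScheduledMessage:
    phone_number: str
    text: str
    send_at: datetime
    sent: bool = False

def send_via_gateway(phone_number: str, text: str) -> None:
    """Placeholder for a real SMS gateway call."""
    print(f"[gateway] to {phone_number}: {text}")

def process_due_messages(schedule: list[ScheduledMessage], now: datetime) -> int:
    """Send every unsent message whose send time has passed; return count sent."""
    sent_count = 0
    for message in schedule:
        if not message.sent and message.send_at <= now:
            send_via_gateway(message.phone_number, message.text)
            message.sent = True  # would be logged for reporting in a real system
            sent_count += 1
    return sent_count

if __name__ == "__main__":
    schedule = [
        ScheduledMessage("5551230000", "Reminder: record today's observations.",
                         datetime(2012, 1, 1, 9, 0)),
    ]
    print(process_due_messages(schedule, datetime.now()), "message(s) sent")
```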

The platform is currently being piloted in support of the Robert Wood Johnson Foundation’s Project HealthDesign BreathEasy intervention. In this capacity, text messages are primarily being used to remind study subjects to submit observations of daily living (via the Android diary app). The system also pushes clinical alerts and message content supporting general health and wellness to participants via SMS. More details on the BreathEasy SMS pilot can be found in the Project HealthDesign blog I wrote over the summer. The pilot is currently underway, and we will certainly share our experience and results in updates here on SurveyPost along the way.