Finding Real Value for Public Health in Big Data

Big claims from big data haven’t yet been realized, but don’t lose hope—minor tweaks can dramatically improve a popular algorithm’s validity.
[Image: A graph depicting Google Flu Trends.]

Media reports of public health breakthroughs made possible by big data have been largely oversold, according to a new study conducted at Harvard University, San Diego State University, and the Santa Fe Institute, published in the American Journal of Preventive Medicine.

“Many studies deserve praise for being the first of their kind, but if we actually began relying on the claims made by big data surveillance in public health, we would come to some peculiar conclusions,” said John W. Ayers, San Diego State University Graduate School of Public Health research professor and corresponding author of the study. “Some of these conclusions may even pose serious public health harm.”

But don’t throw away that data just yet.

The authors maintain that the promise of big data can be fulfilled by tweaking existing methodological and reporting standards. In the study, Ayers and his colleagues demonstrate this by revising the inner plumbing of the Google Flu Trends (GFT) digital disease surveillance system, which was heavily criticized last year after producing erroneous forecasts.

“Assuming you can’t use big data to improve public health is simply wrong,” added Ayers. “Existing shortcomings are a result of methodologies, not the data themselves.”

A solution for Google Flu Trends

In the first external revision proposed to GFT, Ayers and co-researchers Mauricio Santillana and David Zhang of the Harvard School of Engineering and Applied Sciences, and Benjamin Althouse with the Santa Fe Institute, explored new methods for using open-sourced, publicly available Google search archives to estimate influenza activity in real-time, an approach that can serve as a blueprint to fix broader shortcomings in public health surveillance.

To address GFT’s problems, the team significantly beefed up the existing GFT model. First, they expanded the model's ability to monitor how individual search queries correlated with patient data on influenza. Then, rather than relying on an investigator to provide subjective, periodic updates to the model, the team built in automatic updating, based on artificial intelligence techniques, that adjusts the weight given to any single query each week to maximize predictive accuracy.
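To make the mechanics concrete, here is a minimal sketch of how such weekly, automatic re-weighting could work: a regularized regression is refit on the full query history each week, so the weight on any query rises or falls as new patient data arrive. The data, query set, and choice of LASSO here are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch of weekly query re-weighting (assumptions, not the
# study's actual model): refit a LASSO regression each week that maps
# search-query volumes to the observed influenza-like-illness (ILI) rate.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Hypothetical history: weekly volumes for 50 flu-related queries and the
# matching ILI rate reported for each of 156 weeks.
n_weeks, n_queries = 156, 50
query_volumes = rng.random((n_weeks, n_queries))
ili_rate = query_volumes[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(n_weeks)

def weekly_estimate(X_hist, y_hist, x_now):
    """Refit on all data seen so far, then estimate the current week.

    The L1 penalty pushes unhelpful queries toward zero weight, so a query
    that becomes a false-positive signal loses influence once a few
    mismatched weeks enter the training data.
    """
    model = LassoCV(cv=5).fit(X_hist, y_hist)
    return model.predict(x_now.reshape(1, -1))[0], model.coef_

# Walk forward week by week, refitting the model each time.
for week in range(104, 108):
    estimate, weights = weekly_estimate(
        query_volumes[:week], ili_rate[:week], query_volumes[week]
    )
    print(f"week {week}: estimated ILI {estimate:.2f}, "
          f"{np.count_nonzero(weights)} queries carry nonzero weight")
```

Retraining on every new week of data is what would let such a model recover quickly when a query's meaning drifts, as in the 2012/13 example below.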

During the 2009 H1N1 pandemic and 2012/13 season — two critically important periods of influenza surveillance in the United States — the alternative method yielded more accurate influenza predictions than GFT every week, and was typically more accurate than GFT during other influenza seasons.

“With these tweaks, GFT could live up to the high expectations it originally aspired to,” Ayers said. “Still, the greatest strength of our model is how the queries being used to describe influenza trends change over time, as search patterns shift in the population or the model occasionally underperforms due to false-positive queries.”

For example, during the 2012/13 season, GFT predicted that 10.6 percent of the population had influenza-like illness when, according to patient records, only 6.1 percent did. The team’s alternative significantly reduced that error, estimating that 7.7 percent had influenza-like illness. And within two weeks the model self-updated, considerably changing the weight given to certain queries that spiked during that time and improving the model for future performance.
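For readers keeping score, the figures quoted above imply roughly a two-thirds cut in error. A quick back-of-envelope check, using only the numbers in this article:

```python
# Error reduction implied by the figures quoted above (percentage points).
actual, gft, revised = 6.1, 10.6, 7.7
gft_error = abs(gft - actual)          # 4.5 points
revised_error = abs(revised - actual)  # 1.6 points
print(f"GFT off by {gft_error:.1f} pts; revised model off by "
      f"{revised_error:.1f} pts ({1 - revised_error / gft_error:.0%} less error)")
```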

What’s next for big data

“Big data is no substitute for good methods, and consumers need to better discern good from bad methods,” Ayers said.

To achieve these ends, digital disease surveillance researchers need greater transparency in the reporting of studies and better methods when using big data in public health, according to Ayers and his colleagues.

“When dealing with big data methods, it is extremely important to make sure they are transparent and free,” co-author Althouse added. “Reproducibility and validation are keystones of the scientific method, and they should be at the center of the big data revolution.”

Importantly, these criticisms shouldn’t be taken as an indictment of the promise of big data, or of the early attempts to wrangle it into something beneficial for the public, Ayers said. Now that the initial hype is wearing off, researchers can begin seriously exploring and testing the strengths and limitations of existing models and sharpening their methodologies.

“We certainly don’t want any single entity or investigator, let alone Google—which has been at the forefront of developing and maintaining these systems—to feel like they are unfairly the targets of our criticism,” Ayers said. “It’s going to take the entire community recognizing and rectifying existing shortcomings. When we do, big data will certainly yield big impacts.”
 
