ISDS Conference and EpiSanta

Andrew Walsh, PhD, is Health Monitoring Systems’ resident expert in public health.  A Johns Hopkins graduate, he attended this year’s annual International Society for Disease Surveillance conference and has this tongue-in-cheek report.  — kjh

Who doesn’t love a good tag cloud these days? If there had been a tag cloud for the 2010 ISDS conference, the largest words would certainly have been “meaningful use”, “social network”, and “stakeholder” in some order. So because no one demanded it, here are my musings on these topics, plus a few more that I thought should have been given higher billing.

Meaningful use – Maybe it’s just the timing of the conference, but these conversations reminded me a little of my kids when the toy catalogs come at Christmastime. There was “You mean we get to ask for stuff and someone will actually give it to us?!?”-level excitement, a belief that they’d get everything on their list, and very little discussion of whether they’d be better off if they actually did.

Out of all the discussion, I thought the most interesting comment was that many providers and facilities are being encouraged to go after public health reporting options with the expectation that public health won’t be able to receive the data, an outcome that lets the senders off the hook. That seems like an important issue, but it was essentially a throwaway comment; there was no discussion on how to be ready to receive the data. Now maybe that’s because the various health departments are confident that they are ready, but I still thought it would have come up more.

Also, after two talks from the ISDS Meaningful Use Workgroup, I’m still confused about what the purpose of their document is. As I understand things, it can’t become part of the federal requirements, at least for Stage I. So is it meant to be a guideline for states when they are deciding what they will actually accept? Or is it just to give providers and facilities a notion of what syndromic surveillance is all about so they know what will be done with the data and what data actually needs to be sent? I would love to be enlightened.

Social Network – It turns out that this meant different things to different people, which led to an amusing panel discussion on “Harnessing social networks for public health surveillance” that wound up being something of a non sequitur since not all the panelists had the same interpretation. There’s the original notion of a social network as a set of people and the physical contacts that exist between them, which can be used to understand the spread of certain diseases (generally less contagious diseases that require significant contact that can actually be quantified). This overlaps somewhat with the second notion of a social network like Facebook, where the connections now exist in a virtual realm but might also give some information about who interacts with whom in physical space. But then some people are interested in Facebook and Twitter because they are places where people talk about being sick, which might be another indicator of disease prevalence. And finally, there were social networking platforms like Facebook but specifically set up for people to post data on their own health and talk to other people about specific health-related issues. Talk about overloaded terminology; maybe next year there will be a panel discussion about vectors.

Stakeholder – This word was used constantly, and yet not one presenter made the obvious Buffy the Vampire Slayer joke; I’m not sure if I’m pleased or disappointed.

And now, EpiSanta: I’ve been a good boy, so here is my list of things I’d like to hear more about next year.

Validation – A lot of people were building quantitative, predictive models from data, and many of them paid lip service to validation as a good thing to do and something they hoped to get around to, but very few actually did anything about it. When everyone works in their own corner on their own dataset, overfitting is a major concern. There was even a prime example of it at last year’s conference – someone from Google Flu gave a plenary talk in which they revealed that their trend line showed no signal from H1N1 flu in the spring of ’09. Why? They didn’t say it in exactly these terms, but basically they had overfit to keywords related to “normal” flu patterns. If we’re going to be in the business of making predictions, we need to pay more than just lip service to seeing if those predictions bear any resemblance to reality.
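Lest that sound like an impossible ask, here is a minimal sketch in Python of the kind of check I have in mind – the keyword counts and case rates below are entirely made up for illustration. Fit on one season, score on a season the fit never saw, and compare the two numbers.

```python
# A minimal sketch of held-out validation: fit a model on one flu season,
# then score it on data the fit never touched. The "keyword counts" and
# target values are simulated -- the point is the pattern, not the numbers.
import numpy as np

rng = np.random.default_rng(0)

weeks = 30
X_train = rng.normal(size=(weeks, 10))           # 10 keyword counts per week
true_w = np.array([1.5, -0.8] + [0.0] * 8)        # only 2 keywords actually matter
y_train = X_train @ true_w + rng.normal(scale=0.5, size=weeks)

X_test = rng.normal(size=(weeks, 10))             # the held-out season
y_test = X_test @ true_w + rng.normal(scale=0.5, size=weeks)

# Ordinary least squares on all 10 keywords -- free to chase the noise.
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(y, y_pred):
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

print("R^2 on the season we fit to:   %.3f" % r_squared(y_train, X_train @ w_hat))
print("R^2 on the season we held out: %.3f" % r_squared(y_test, X_test @ w_hat))
```

If the second number craters relative to the first, the model is telling you it memorized last year’s quirks rather than learned anything about the flu.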

Standard data sets – Ostensibly, much of the research in syndromic surveillance is on detection algorithms. Algorithms are the domain of computer science, and in computer science they compare algorithms on the same data so that they actually have some basis for making comparisons and judgments. If we’re going to focus on algorithms, perhaps we should borrow more ideas from the folks who lead that field. I heard several pleas from public health practitioners for help in assessing the value of using one algorithm over another; such assessments will never be possible until we start making apples-to-apples comparisons.
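To make that concrete, here is a toy version of what an apples-to-apples comparison looks like. The series and both detectors below are invented for this sketch; the structure is the point – one shared dataset, one injected outbreak, one scoring scheme for everybody.

```python
# Two simple detectors run over the same shared series with a known,
# injected outbreak, and scored the same way. Both detectors are generic
# toys, not any published algorithm.
import numpy as np

rng = np.random.default_rng(1)

# One shared series: Poisson baseline counts plus an injected outbreak.
days = 120
counts = rng.poisson(lam=20, size=days).astype(float)
outbreak_start = 90
counts[outbreak_start:outbreak_start + 7] += np.array([5, 10, 15, 20, 15, 10, 5])

def moving_zscore_alarms(y, window=28, threshold=3.0):
    """Flag days where the count exceeds the baseline mean + threshold * std."""
    alarms = np.zeros(len(y), dtype=bool)
    for t in range(window, len(y)):
        baseline = y[t - window:t]
        alarms[t] = y[t] > baseline.mean() + threshold * baseline.std()
    return alarms

def cusum_alarms(y, window=28, k=0.5, h=4.0):
    """Flag days where a one-sided CUSUM of standardized counts exceeds h."""
    alarms = np.zeros(len(y), dtype=bool)
    s = 0.0
    for t in range(window, len(y)):
        baseline = y[t - window:t]
        z = (y[t] - baseline.mean()) / (baseline.std() + 1e-9)
        s = max(0.0, s + z - k)
        alarms[t] = s > h
    return alarms

def days_to_detection(alarms, start):
    hits = np.flatnonzero(alarms[start:])
    return int(hits[0]) if hits.size else None

for name, detector in [("moving z-score", moving_zscore_alarms),
                       ("one-sided CUSUM", cusum_alarms)]:
    alarms = detector(counts)
    false_alarms = int(alarms[:outbreak_start].sum())
    delay = days_to_detection(alarms, outbreak_start)
    print(f"{name}: {false_alarms} false alarms, detection delay = {delay} days")
```

Swap in your favorite algorithm and the comparison still means something, because the dataset and the scorecard didn’t move.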

Something other than the flu! – Everywhere I looked, someone was doing something with the flu – modeling it, detecting it, predicting it, DiSTRIBuTing it. And all of that is perfectly understandable since it is a major public health concern. But it seems like any time a new data stream is created/found/summoned from the ether, we see if it predicts the flu, and when it does we declare victory and move on. And that just makes me wonder – Why do we need yet another data source that shows the same trends? And if everything predicts the flu, what does that tell us about the bar we’ve set?

Case in point – everyone was agog over a research talk about using Twitter to track the flu season (Zut alors, surely such a thing cannot be done!). Given the response, you would have thought we had seen lead turned to gold before our very eyes, and yet all they had was one year of data that was so heavily smoothed that it could have been approximated quite nicely with a second-order polynomial. Then they showed their fitted curve, which was clearly a higher-order polynomial with more structure than their data (overfitting, anyone?), and declared victory without quantifying the fit in any way or doing any serious validation.
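For the record, quantifying a fit is not a heavy lift. Here’s a sketch with a simulated seasonal curve standing in for the Twitter data (which I obviously don’t have), and with the two polynomial degrees picked arbitrarily: fit the boring curve and the fancy one, then report an actual number on weeks the fit never touched.

```python
# Quantify polynomial fits on held-out weeks. The "flu season" is a
# simulated smooth hump; degrees 2 and 12 are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)

weeks = np.arange(52, dtype=float)
x = (weeks - 25.5) / 25.5                   # rescale to [-1, 1] for stable fits
signal = np.sin(np.pi * weeks / 51.0)       # one broad, smooth seasonal hump
observed = signal + rng.normal(scale=0.05, size=weeks.size)

# Hold out every fourth week; fit only on the rest.
test = np.zeros(weeks.size, dtype=bool)
test[::4] = True
train = ~test

def held_out_rmse(degree):
    coeffs = np.polyfit(x[train], observed[train], deg=degree)
    predictions = np.polyval(coeffs, x[test])
    return np.sqrt(np.mean((observed[test] - predictions) ** 2))

for degree in (2, 12):
    print(f"degree {degree:2d}: held-out RMSE = {held_out_rmse(degree):.3f}")
```

If the extra ten degrees of freedom don’t buy you anything on the held-out weeks, all that structure is decorating the noise, not describing the flu.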

(And no, I’m not bitter at all that everyone thought Mr. Twitter was both brilliant and hilarious, while my abstract was summarily rejected for lack of rigor.)