Coincidence is not correlation. At about the same time that the UK was moving towards a yes vote in the referendum on exiting the European Union, the WHO's International Agency for Research on Cancer (IARC) was downgrading coffee and upgrading very "hot beverages" as a probable causal agent in cancer.
As the story goes, expert opinion debunked 400,000 years of the human experience of drinking hot beverages while the Remain camp boasted that expert opinion was on their side.
Thank you experts.
Leading up to Brexit, the Conservative Party's Michael Gove remarked: "I think people in this country have had enough of experts." Anti-expert sentiment gained momentum and perhaps an upper hand. On the brink of the British exit from the EU and in its aftermath, experts scrambled for legitimacy:
Expertise can breed arrogance and false certainty; specialisms fall prey to group-think. Experts must be challenged and their work robustly interrogated. But that is very different from attacking evidence merely because it undermines your arguments (something that both Remainers and Leavers have done) and instantly impugning the motives of those who have produced it (ditto).
How could experts have got it so wrong? Immediately post-referendum, one pundit asked: "do the experts know anything?" Another asked whether we are now witnessing "the twilight of the experts."
Were data saying one thing while experts were hearing another? Intentionally?
Welcome to the summer section of EDUC 500: Research Methodologies!
News Staff, Science20.com, June 27, 2014 – Obviously, as the creators of the four pillars of the Science 2.0 concept, we're interested in new ways to use data to make meaningful decisions, but we recognize that key breakthroughs are more likely to happen in the private sector, where money can be made filling a demand.
A paper by Aetna and GNS Healthcare Inc. in the American Journal of Managed Care demonstrates how analysis of patient records using analytics can predict future risk of metabolic syndrome.
This could be useful, not just because a third of the population has the somewhat fuzzily-defined metabolic syndrome, a condition that can lead to chronic heart disease, stroke and diabetes, or because obesity accounts for almost 20 percent of overall health care costs in the U.S., but because it's a roadmap for how to port the Science 2.0 approach to lots of fields.
“This study demonstrates how integration of multiple sources of patient data can help predict patient-specific medical problems,” said lead author Dr. Gregory Steinberg, head of clinical innovation at Aetna Innovation Labs. “We believe the personalized clinical outreach and engagement strategies, informed by data from this study, can help improve the health of people with metabolic syndrome and reduce the associated costs.”
“The breakthrough in this study is that we are able to bring to light hyper-individualized patient predictions, including quantitatively identifying which individual patients are most at risk, which syndrome factors are most likely to push that patient past a threshold, and which interventions will have the greatest impact on that individual,” said Colin Hill, co-founder and CEO of GNS. “The GNS automated data analytics platform paired with Aetna’s deep clinical expertise produced these results on extremely large datasets in just three months, a testament to the ability of both groups.”
GNS analyzed data from nearly 37,000 members of one of Aetna’s employer customers who had voluntarily participated in screening for metabolic syndrome. The data analyzed included medical claims records, demographics, pharmacy claims, lab tests and biometric screening results over a two-year period.
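For readers unfamiliar with how "fuzzily-defined" metabolic syndrome is scored in practice, a minimal sketch may help. The commonly cited NCEP ATP III definition flags a patient who meets three or more of five risk-factor thresholds. The sketch below is purely illustrative: it is not the Aetna/GNS model described in the article, which applied proprietary predictive analytics to claims, pharmacy, lab, and biometric data; the function names and structure here are invented for illustration.

```python
# Illustrative only: a toy screen using the commonly cited NCEP ATP III
# criteria for metabolic syndrome (three or more of five risk factors).
# This is NOT the Aetna/GNS predictive model discussed in the article.

def count_risk_factors(sex, waist_cm, triglycerides, hdl,
                       systolic_bp, diastolic_bp, fasting_glucose):
    """Count ATP III metabolic-syndrome risk factors for one patient.

    Units: waist in cm; triglycerides, HDL, glucose in mg/dL; BP in mmHg.
    """
    factors = 0
    # Central obesity: waist > 102 cm (men) or > 88 cm (women)
    if waist_cm > (102 if sex == "M" else 88):
        factors += 1
    # Elevated triglycerides: >= 150 mg/dL
    if triglycerides >= 150:
        factors += 1
    # Reduced HDL cholesterol: < 40 mg/dL (men) or < 50 mg/dL (women)
    if hdl < (40 if sex == "M" else 50):
        factors += 1
    # Elevated blood pressure: >= 130/85 mmHg
    if systolic_bp >= 130 or diastolic_bp >= 85:
        factors += 1
    # Elevated fasting glucose: >= 100 mg/dL
    if fasting_glucose >= 100:
        factors += 1
    return factors

def has_metabolic_syndrome(**measurements):
    """ATP III definition: three or more of the five risk factors."""
    return count_risk_factors(**measurements) >= 3
```

The analytic work reported in the study goes well beyond such a rule-based screen: rather than classifying who already meets the definition, it tries to predict which individual factors will push a given patient past the threshold in the future.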
Read More: Science20.com