The National Cancer Data Bias.

Speaking of quantum leaps from data in -> conclusions out, a recent JAMA publication examined just how well those riding the wave of big data analysis adhere to analytical guidelines. The authors focused on one large and oft-used publicly available database, the National Inpatient Sample (NIS). From more than 1,000 studies using NIS data published in 2015-2016, 120 representative studies (including a sizeable portion published in journals with hefty impact factors) were critiqued for their adherence to the 7 research practices required by the NIS's own governing body. The results were jarring: 85% failed to adhere to at least one required practice, and 62% failed to adhere to at least two. This is bad, especially considering they only had to tick 7 check boxes. And we have a bad feeling the letters "NIS" could be replaced with "NCDB," "SEER," or "VA" (ahem) to reproduce the same disappointing conclusions. Failure to adhere to these major domains of data interpretation, data analysis, and even research design, as outlined by the data providers themselves, calls into serious question recent big inferences drawn from big datasets in some big-time journals.
