A well-crafted research question grounded in a thorough literature review should produce solid data that stands up to intense scrutiny, and in most cases that is exactly what happens. On rare occasions, however, the stability of that foundation is undermined by a failure to navigate the potential pitfalls of data analysis, leading to misinterpretation of the research results. In most cases the error can be attributed to a lack of knowledge combined with poor training and oversight, but in some instances the misinterpretation has proven to be deliberate, undertaken in order to produce groundbreaking, counterintuitive results. Here are a few (very tongue-in-cheek) suggested steps to make sure that you misinterpret your research results:


Whether as a result of time or budget constraints, research studies don't always manage to recruit the most representative sample of the larger population about which they plan to make inferences. Clinical trials are notoriously difficult to fill to capacity, and the general public has been surveyed to death, so small sample sizes are to be expected these days. If representative sampling is simply assumed as part of a robust methodology, then every subsequent statistical claim about the procedure's validity is also left open to misinterpretation.
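The sampling pitfall above can be sketched in a few lines of Python. Everything here is hypothetical (the ages, the "online survey" scenario, the sample sizes); the point is only that a convenience sample quietly inherits the bias of the pool it was drawn from:

```python
import random

random.seed(0)

# Hypothetical population: ages 18-80, uniformly distributed.
population = [random.randint(18, 80) for _ in range(100_000)]

# A convenience sample that only reaches the youngest part of the
# population (e.g. an online survey skewed toward younger respondents).
convenience_pool = [age for age in population if age <= 40]
biased_sample = random.sample(convenience_pool, 200)

# A genuinely random sample of the same size, for comparison.
random_sample = random.sample(population, 200)

pop_mean = sum(population) / len(population)
print(f"population mean age:    {pop_mean:.1f}")
print(f"random-sample mean age: {sum(random_sample) / 200:.1f}")
print(f"biased-sample mean age: {sum(biased_sample) / 200:.1f}")
```

Any inference built on the biased sample (mean age, and everything downstream of it) is off before a single test is run, no matter how rigorous the later analysis looks.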


Methodological purists will warn you about Type I errors (rejecting a null hypothesis that is actually true) and Type II errors (failing to reject a null hypothesis that is actually false), but the really interesting research results tend to come when you overestimate statistical significance. As long as your p-value is low enough to disregard the element of chance or error, you should be fine, and if you are lucky enough to have a really large sample, any practically meaningless difference can still be reported as statistically significant.
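The large-sample trap is easy to demonstrate. The sketch below (hypothetical scores on a 0-100 scale, with a deliberately trivial 0.15-point true difference between groups) uses a plain two-sample z-test, approximating the p-value with the normal distribution:

```python
import math
import random

random.seed(1)

def two_sample_z(a, b):
    """Two-sample z-test for a difference in means (normal approximation).
    Returns (z, two_sided_p)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    z = (ma - mb) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# Two groups whose true means differ by a meaningless 0.15 points.
small_a = [random.gauss(50.00, 10) for _ in range(50)]
small_b = [random.gauss(50.15, 10) for _ in range(50)]

big_a = [random.gauss(50.00, 10) for _ in range(500_000)]
big_b = [random.gauss(50.15, 10) for _ in range(500_000)]

print("n=50:      z=%.2f  p=%.3f" % two_sample_z(small_a, small_b))
print("n=500,000: z=%.2f  p=%.3g" % two_sample_z(big_a, big_b))
```

With 50 participants per group the trivial difference is (correctly) invisible; with 500,000 per group the same trivial difference produces a vanishingly small p-value. The p-value measures detectability, not importance, which is why effect sizes matter.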


Research is hard work, and the effort you and your team put in should not go unrecognized. Whether your metric is hours of labor, the number of surveys taken, or the number of tests performed, make sure that the effort put into the research study is factored into the study results. If you can't remember the difference between causation and correlation, or between precision and accuracy, don't worry – just remember that the more attention you paid to your study, the better the results are likely to be.
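For the purists who do remember the difference: the classic correlation-without-causation pattern is a lurking confounder. The sketch below is entirely made up (a hypothetical year of daily data where temperature drives both ice-cream sales and swimming accidents), but it shows how two variables can correlate strongly even though neither causes the other:

```python
import random

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical daily data: temperature is a confounder that drives both
# ice-cream sales and swimming accidents; neither causes the other.
days = 365
temperature = [random.gauss(20, 8) for _ in range(days)]
ice_cream = [50 + 3.0 * t + random.gauss(0, 10) for t in temperature]
accidents = [2 + 0.1 * t + random.gauss(0, 1) for t in temperature]

print(f"corr(ice_cream, accidents) = {pearson(ice_cream, accidents):.2f}")
```

A naive reading of that correlation would have ice-cream bans saving swimmers; controlling for the confounder (temperature) makes the apparent relationship evaporate.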

All humor aside, the misinterpretation of research results can, as we all know, prove to be extremely problematic, especially where decisions are made on the basis of statistical results alone. Reliance on the perceived safety net of peer review is not enough, as any journal retraction will prove. Adequate training, mentorship, and ongoing supervision are critical to ensure that the expensive work of research design and data collection is not undermined by sloppy science when that data is analyzed.

Author's Bio: 

I am Robert Smith, Global Marketing Manager, Enago Inc. Enago is the flagship brand of Crimson Interactive Inc., one of the world's leading language solutions providers, offering English paper editing and journal-publishing support to more than 81,000 authors across the globe.