Problems with how researchers measure statistical significance are just one potential roadblock in producing good research. Many others are much easier for a layperson to spot.

While the academic community addresses esoteric research questions related to research methodology and transparency in reporting data, what should the rest of us look for when deciding what to trust?

Ed Gracely, a biostatistics professor at Drexel University, offers seven questions to ask while reading academic journal articles or news coverage of health studies.

1) Who (or what) did the study study?
Many new medical results come only from animal studies. These sometimes translate into benefits for humans, but often do not.

2) Who funded the research?
Studies produced or supported by vested interests, such as drug companies, should be viewed with caution. Negative studies may not be reported at all, and positive ones may be reported in ways that overstate the effect.

3) How are the numbers framed?
Beware of reporting that uses relative rather than absolute differences, as it can make effects seem bigger than they are. Here's how: A drop in recurrence of a cancer from 20 percent to 12 percent is an absolute drop of 8 percentage points. That means the treatment helped 8 percent of the patients who got it. But it will often be reported as a relative 40 percent drop, since 8 is 40 percent of 20. The 40 percent figure looks much more impressive, but keep in mind what it represents.
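The arithmetic above can be sketched in a few lines, using the article's hypothetical recurrence rates:

```python
# Hypothetical recurrence rates from the example above.
control_rate = 0.20    # recurrence without the treatment
treated_rate = 0.12    # recurrence with the treatment

# Absolute drop: the difference in percentage points.
absolute_drop = control_rate - treated_rate          # 8 percentage points

# Relative drop: the absolute drop as a fraction of the original rate.
relative_drop = absolute_drop / control_rate         # 40 percent

print(f"Absolute risk reduction: {absolute_drop:.0%}")  # 8%
print(f"Relative risk reduction: {relative_drop:.0%}")  # 40%
```

Both numbers describe the same result; the relative figure just sounds larger because it is measured against the smaller baseline of 20 percent.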

4) How was the study conducted?
A randomized, double-blind study with a suitable control group is the gold standard of convincing research. Many new studies report results from only a single group, with no comparison group. Occasionally that can still be convincing -- for example, if a treatment cures five cases of a previously incurable disease. But usually, single-group results are hard to judge.

Other studies are not randomized, including many of those looking at how vitamins or specific foods affect health. Because treatments are not randomly assigned, many factors other than the vitamin or food in question could produce the observed effects. Conclusions drawn from these studies are not always confirmed when a randomized trial is done.

Finally, be wary of case histories and anecdotal evidence. This kind of data can be biased and misleading: placebo effects, self-selection, and spontaneous recovery can all play a big role.

5) How strong is the reported impact?
Does the study report a significance level as a p-value? A p-value helps determine whether a result is likely beyond chance, but it does not directly address the magnitude or importance of the result. So also consider this: Does the study include a margin of error, or does it simply report the observed effect as though it were the truth? The true effect may be smaller than the one observed in any single study.
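To see why a margin of error matters, here is a minimal sketch with made-up numbers: an observed rate and the approximate 95 percent margin of error around it, using the standard normal approximation for a proportion.

```python
import math

# Hypothetical study: 40 of 200 treated patients had a recurrence.
n, recurrences = 200, 40
p_hat = recurrences / n                    # observed rate: 0.20

# Approximate 95% margin of error for a proportion (normal approximation):
# 1.96 * sqrt(p * (1 - p) / n)
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"Observed rate: {p_hat:.0%} +/- {margin:.1%}")
# The true rate is plausibly anywhere from roughly 14.5% to 25.5%,
# not exactly the 20% a headline might report.
```

Smaller studies produce wider margins, which is one reason a single study rarely settles a question.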

6) What other studies have been conducted on the same topic?
A single study, unless very well done, will rarely provide a definitive answer. In any controversial area, it is wise to read multiple studies, and see what the general consensus is. Remember that science is a process of putting the pieces together -- a scientist will look at the whole picture to determine the truth.

7) How big is the sample size?
Small studies that don't find interesting results are hard to publish. As a result, published small studies may not reveal the whole picture: there may be others just like them with non-significant results that went unpublished, producing what scientists call the "file drawer" effect. Larger studies are more likely to be published even if their results are negative.