
Science: How Do I Know What to Believe? Tips for Analyzing Medical Articles

James Odell, OMD, ND, L.Ac.


When reading a medical article, it is important to look at the research sources the authors cite to support their conclusions. Sometimes the research says something entirely different from the conclusion of the article. For example, when someone says, “Studies prove the COVID-19 vaccines save lives,” it is best to do your due diligence and check the original study the conclusion is based on. Scan the reporter’s article for the study name, then find the paper through Google Scholar (https://scholar.google.com/), http://www.freemedicaljournals.com/, duckduckgo.com, Medline, or another search engine. Once you have found the scientific article, there are several points to look for. This information is designed to help you separate fact from fiction and understand how to spot flaws in scientific studies. While it will not make you an expert, this skill will put you far ahead of the average reporter!

Parts of a paper - most scientific papers have the following sections:

  • Title and authors: The general focus of the paper, who wrote it, and where they work

  • Abstract: This is an executive summary of the entire paper, including the findings and conclusion. Abstracts are always provided for free to help readers determine if they want to keep reading (or possibly pay for) the whole article. Some papers written in foreign languages will have their abstracts in English.

  • Introduction/Background: Reviews previous research, why this study was done, and the hypotheses the researchers had before starting.

  • Methods: Explains how the research was done. This section includes details on the sample (animals, humans, adults, kids, etc.), the size of the sample, new drug or treatment procedures, the data collected, and more. The section is very detailed so that other scientists can replicate the study to confirm the findings, and so readers know exactly what was done in case there are questions. This is one of the most important parts of the paper. The hardest sections to read are often the most important sections to understand. A paper with poorly designed methods and/or inaccurately reported results can invalidate the entire study. Take your time here. Read it more than once to gain a good understanding of how the data were collected. Be aware that while most researchers do commendable work, many studies are politicized or designed to meet an agenda. This will be apparent from analyzing the methods. For example, it is possible to design a study in which the subject analyzed is guaranteed to fail. Unfortunately, a great deal of research reported in journal articles is poorly done, poorly analyzed, or both, and thus is not valid. A great deal of research is also irrelevant to our patients and practices.

  • Results: This section presents the data analysis, technical facts, and figures, usually with large amounts of statistical results in tables and graphs.

  • Discussion/Conclusion: This is the reader-friendly explanation of the results section. It also usually includes a discussion of any weaknesses or limitations of the study and ideas for future research. Limitations are vitally important to read.

  • References/Bibliography: The citation list of previous research that informed this paper.

  • Limitations: All studies have limitations, but not all limitations may be acknowledged. The term “study limitations” refers to anything that may affect the reliability or generalizability of the results in a study or experiment.

  • Conflicts of Interest: Discloses whether the authors have conflicts, such as financial incentives for writing the paper.

A cardinal rule when reading research papers is not to read an article from beginning to end. It is better to start by identifying the conclusions of the study by reading the title, abstract, and discussion/conclusion. From that point, you can begin to discern whether the authors have proven their hypothesis.


Once you have read the abstract and the discussion/conclusions, next look at the limitations, as these can seriously undermine the study’s conclusions. These describe events that could compromise or even negate the entire study. Compare the discussion comments and the limitations with the authors’ conclusions. Be aware that limitations are often included in the discussion. An example of this is the original article Safety and Efficacy of the BNT162b2 mRNA Covid-19 Vaccine. The authors do not include a separate limitations section; instead, they state the study’s limitations in the discussion section: “This report does not address the prevention of Covid-19 in other populations, such as younger adolescents, children, and pregnant women.” The reason, of course, is that these groups were excluded from the study.


Next, evaluate whether the conclusions are consistent with the other statements. Does the study contain enough information for the authors to draw their conclusions? Many studies you hear quoted in the media include flaws that are a good indication the study is inconclusive or that the reporter’s comments are misleading.


Finally, reading the methods and results also helps you understand the limitations of the study. Most papers also include tables, graphs, and figures (pictures) to visually share important information. Journal articles are strictly limited to only a few thousand words, and a picture is worth a thousand words! In addition, it is often easier for readers to see a picture demonstrating something like umbilical cord blood collection than to read several paragraphs and try to imagine the process.


Comparison


Almost all research involves comparison. Do women who take vitamin D have a lower rate of breast cancer recurrence than women who take a placebo? Do left-handed people die at an earlier age than right-handed people? Are men with severe vertex balding more likely to develop heart disease than men with no balding? When you make such a comparison between an exposure/treatment group and a control group, you want it to be a fair comparison. You want the control group to be identical to the exposure/treatment group in all respects except for the exposure/treatment in question. You want an apples-to-apples comparison. To ensure that the researchers made an apples-to-apples comparison, ask the following three questions:


• Did the authors use randomization? Randomization ensures balance between the two therapy groups concerning both measurable and unmeasurable factors.


• Did the authors use matching? Matching ensures comparable groups during the selection process.


• Did the authors use statistical adjustments? There are several ways to make statistical adjustments. First, there are regression adjustments. In a study of breastfeeding, there was an imbalance between the two groups in that one group was much older than the other. From a regression model, we discover that older mothers breastfeed for longer periods, on average, than younger mothers. In fact, for each year of age, the duration of breastfeeding increases by 0.25 weeks on average. So we would adjust the difference between the two groups by 0.25 weeks for every year of discrepancy between the average mothers’ ages. Second, there are weighting adjustments. Suppose a group includes 25 males and 75 females, but in the population we know that there should be a 50/50 split by gender. We could re-weight the data so that each male has a weighting factor of 2.0 and each female has a weighting factor of 0.67. This artificially inflates the number of males to 50 and deflates the number of females to 50. A second group might have 40 males and 60 females. For this group, we would use weights of 1.25 and 0.83. Both adjustments are imperfect, especially when the adjustment variable is imperfectly measured. And these adjustments are impossible if the researchers did not or could not measure the covariates. Regression or weighting makes adjustments after the data are collected. A brief numerical sketch of both adjustments follows below.
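
The following is a minimal sketch in Python of the two adjustments described above. The group sizes, weighting factors, and the 0.25-weeks-per-year slope come from the example; the maternal ages used in the regression step are assumed purely for illustration.

```python
# Minimal sketch (illustration only) of the two adjustments described above.
# The 0.25-weeks-per-year slope and the group compositions come from the
# article's example; the maternal ages are assumed purely for illustration.

def reweight(n_subgroup, n_total, target_share=0.5):
    """Weight that scales a subgroup up or down to its target population share."""
    observed_share = n_subgroup / n_total
    return target_share / observed_share

# Weighting adjustment: group 1 has 25 males and 75 females.
print(round(reweight(25, 100), 2))  # 2.0  -> 25 males count as 50
print(round(reweight(75, 100), 2))  # 0.67 -> 75 females count as 50
# Group 2 has 40 males and 60 females.
print(round(reweight(40, 100), 2))  # 1.25
print(round(reweight(60, 100), 2))  # 0.83

# Regression adjustment: 0.25 extra weeks of breastfeeding per year of
# maternal age. Assume group A mothers average 30 years and group B 26 years
# (ages made up for this sketch).
slope_weeks_per_year = 0.25
age_gap_years = 30 - 26
adjustment_weeks = slope_weeks_per_year * age_gap_years
print(adjustment_weeks)  # 1.0 week to subtract from the raw group difference
```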


The Plan and Study Design


Was there a plan? The presence of a plan developed before data collection and analysis adds to the quality of a publication. Also, did the authors deviate from the plan? While minor deviations are expected, be cautious about major deviations from the research plan, such as developing new exclusion criteria during the study. In particular, removing outliers without a sound scientific reason can skew the results of the study.


Did the research have a narrow focus? A good research study has limited objectives that are specified in advance. Failure to limit the scope of a study leads to problems with multiple testing. When a large number of comparisons are made, the study becomes a fishing expedition; as the saying goes, “if you torture your data long enough, it will confess to something.” Many comparisons limit the amount of weight you can place on any single conclusion, as the sketch below illustrates. Results from a limited number of planned comparisons are considered more authoritative.
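
To see why, here is a minimal sketch: assuming each comparison is tested independently at the conventional 0.05 significance level (an assumption made only for illustration), the chance of at least one false-positive finding grows quickly with the number of comparisons.

```python
# Chance of at least one false-positive result when every comparison is
# tested independently at significance level alpha. Standard probability
# calculation, shown only to illustrate the "fishing expedition" problem.
alpha = 0.05
for n_comparisons in (1, 5, 20, 100):
    p_at_least_one_false_positive = 1 - (1 - alpha) ** n_comparisons
    print(n_comparisons, round(p_at_least_one_false_positive, 2))
# Output: 1 -> 0.05, 5 -> 0.23, 20 -> 0.64, 100 -> 0.99
```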


Though limited objectives are generally desirable, sometimes the participant selection can be too narrow. When too many patients are left out, those who remain may not be representative of the types of patients you will encounter. When you are trying to figure out who was left out and what impact this has, ask the following questions:


• Who was excluded at the start of the study? In most vaccine studies, for example, the elderly, children, the ill, and pregnant women are excluded.

• Who refused to join the study?

• Who dropped out or switched therapies during the study? A large number of dropouts during the course of a research study can bias the conclusions.


Meta-Analysis


Meta-analysis is the quantitative pooling of data from two or more studies, which may have small or large disparities between them. Think of it as a multi-center trial where each center gets to use its own protocol and where some of the centers are left out. On the other hand, a meta-analysis lays all the cards on the table. Sitting out in the open are all the methods for selecting studies, abstracting information, and combining the findings. Meta-analysis allows objective criticism of these overt methods and even allows replication of the research. When you are examining the results of a meta-analysis, you should ask the following questions:


• Were apples combined with oranges? Heterogeneity among studies may make any pooled estimate meaningless. Meta-analyses should not have too broad an inclusion criterion. Including too many studies can lead to problems with “apples-to-oranges” comparisons.


• Were all the apples rotten? The quality of a meta-analysis cannot be any better than the quality of the studies it is summarizing. A meta-analysis cannot correct or compensate for methodologically flawed studies and may reinforce or amplify the flaws of the original studies.


• Were some apples left on the tree? An incomplete search of the literature can bias the findings of a meta-analysis. One of the greatest concerns in a meta-analysis is whether all the relevant studies have been identified. If some studies are missed, this could lead to serious biases.


• Did the pile of apples amount to more than just a hill of beans? Make sure that the meta-analysis quantifies the size of the effect in units that you can understand. It is not enough to know that the overall effect of a therapy is positive. You must balance the magnitude of the effect against the added cost and/or the side effects of the new therapy. Unfortunately, most meta-analyses use an effect size (the improvement due to the therapy divided by the standard deviation). The effect size is unit-less, allowing the combination of results from studies where slightly different outcomes with slightly different measurement units might have been used. A brief worked example follows below.
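
As a rough illustration of the unit-less effect size just described, here is a minimal sketch that computes a standardized mean difference (the improvement divided by a pooled standard deviation, often called Cohen’s d). The numbers are made up purely for illustration.

```python
import math

def effect_size(mean_treated, mean_control, sd_treated, sd_control, n_treated, n_control):
    """Standardized mean difference: improvement divided by a pooled standard deviation."""
    pooled_var = ((n_treated - 1) * sd_treated ** 2 + (n_control - 1) * sd_control ** 2) / (
        n_treated + n_control - 2
    )
    return (mean_treated - mean_control) / math.sqrt(pooled_var)

# Made-up numbers for illustration: a 4-point improvement on a scale with a
# spread of about 10 points gives an effect size of roughly 0.4. The number
# is unit-less, so by itself it says nothing about cost or side effects.
print(round(effect_size(54, 50, 10, 10, 100, 100), 2))  # 0.4
```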


Publication Bias


Many important studies are never published, and these studies are more likely to be negative. This is known as publication bias: the tendency on the part of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings. Much of what has been learned about publication bias comes from the social sciences, less from the field of medicine. In medicine, three studies have provided direct evidence for this bias.

Prevention of publication bias is important both from the scientific perspective (complete dissemination of knowledge) and from the perspective of those who combine results from several similar studies (meta-analysis). If treatment decisions are based on the published literature, then the literature must include all available data of acceptable quality. Currently, obtaining information regarding all studies undertaken in a given field is difficult, if not impossible. Registration of clinical trials, and perhaps other types of studies, is the direction in which the scientific community should move. Another aspect of publication bias is that the delay in the publication of negative results is likely to be longer than that for positive studies, so a meta-analysis restricted to a certain time window may be more likely to exclude negative published research. Many experts advocate the registration of trials as a way of avoiding publication bias. If trials are registered prospectively (i.e., before data collection and analysis), then they can be included in any appropriate meta-analysis without worry about publication bias.


Duplicate Publications


Duplicate publication is the flip side of the publication bias coin. Positive studies are more likely to be published more than once. This is especially problematic for multi-center trials, where individual centers may publish results specific to their site.


Limitations of Google Scholar and Medline Searches


While a Google Scholar or Medline search is the most convenient way to identify published research, it should not be your only source for publications. Google Scholar is heavily censored for certain topics, and Medline searches cover only 3,000 of some 13,000 medical journals. The studies missed by Medline and other databases are more likely to be negative studies. Furthermore, these databases may fail to index major journals in the developing world that publish important trials.

There is no question that reading medical research can be hard work. The medical terminology is daunting enough, but the hard part is assessing the strength of the evidence. Critical appraisal of research does not require a PhD or a background in statistics. Stay organized, use a consistent approach, keep things simple, and you will become an efficient researcher.
