News reports on the latest nutrition studies can be confusing. On Monday we read in the Post that drinking coffee will give us a thick, luxurious coat of fur. On Tuesday we read in the Times that drinking coffee will give us mange. What are we to make of it?
An understanding of the types of studies we see discussed in the media, and a bit of insight into how to interpret them, can help make sense of a very confusing field. The most common types of studies in the field of nutrition and health include:
Observational studies – In these studies researchers collect data on dietary patterns from large groups of people (sometimes in the hundreds of thousands) and follow them over a period of years to observe health outcomes. Many of the studies you see talked about in the media are of this sort. These studies cannot show causation, only association. They can be an important step in getting reliable knowledge about human health and nutrition, but they should not be considered definitive.
The way the science is supposed to work is that when an association is found (between, say, red meat consumption and heart disease), the next step is to run an experiment to see whether one of the associated things causes the other. All too often in nutrition science that next step is not taken; the association is trumpeted to the media, and the public assumes a causal relationship that has not been proven.
Randomized controlled trials – In these studies researchers take their subjects, whether animals or people, and randomly assign them to groups—for example, they might put one group on one diet, another group on a different diet, and direct a third group to eat as they normally eat (this would be the control group). They then collect health outcome data on the subjects and compare the outcomes among the groups. These studies can show causation provided they are well designed and executed.
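The assignment step described above can be sketched in a few lines of Python. This is an illustrative toy, not a real trial protocol: the subject IDs, group names, and group count are all hypothetical.

```python
import random

# Hypothetical sketch of random assignment in a diet trial.
subjects = [f"subject_{i}" for i in range(30)]  # made-up subject IDs

random.seed(42)        # fixed seed so the example is reproducible
random.shuffle(subjects)

# Deal the shuffled subjects into three groups of equal size.
groups = {
    "diet_A": subjects[0::3],
    "diet_B": subjects[1::3],
    "control": subjects[2::3],  # eats as they normally would
}

for name, members in groups.items():
    print(name, len(members))
```

The point of the shuffle is that any lurking difference between subjects (age, activity level, income) gets spread across the groups by chance rather than by self-selection.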
Mechanistic studies – These are experimental studies in biochemistry and physiology showing how things work, such as what happens to the food we eat after it is broken down to the molecular level and enters our cells, and how different types of molecules interact with each other. These can be done in vivo (in a living person or animal) or in vitro (e.g., in a petri dish).
Potential problems with nutrition studies – When we encounter reports about these health and nutrition studies, we should look upon them with a skeptical eye. Health reporters typically pass on what they are told by researchers without any skepticism, and often go further than the researchers in making claims for the study. Before we make a dietary or lifestyle change in response to a study we see in the media, we need to make sure the study is valid. Here are some things to look for:
Exaggerated effect – You may see a study saying, e.g., that by avoiding some type of food you can reduce your risk of a heart attack by fifty percent. That sounds huge. But if you look at the actual study results you may find that the effect is really very small. Let’s say that over the course of the study, one percent of the people eating the food have a heart attack, while only .5 percent of the people avoiding the food have one. Since .5 is fifty percent of one, that would often be reported as a fifty percent risk reduction. The actual reduction in risk would be just .5 percentage points, or half of one percent. I don’t think many of us are going to make a big lifestyle change for a half-of-one-percent reduction in risk. (In this case the relative risk reduction was fifty percent; the absolute risk reduction was .5 percentage points; absolute risk is what matters.)
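The arithmetic above is easy to check. Here is the same hypothetical example worked in Python, with the two risk figures from the paragraph:

```python
# Hypothetical illustration of relative vs. absolute risk reduction.
control_risk = 0.010   # 1% of the group eating the food had a heart attack
treated_risk = 0.005   # 0.5% of the group avoiding the food had one

relative_risk_reduction = (control_risk - treated_risk) / control_risk
absolute_risk_reduction = control_risk - treated_risk

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")   # 0.5%
```

Same data, two very different-sounding numbers; the headline almost always quotes the first.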
Uncontrolled variables – When comparing two elements of diet in a nutrition study it is critical that the two groups vary only in the element being studied. In a study comparing the health effects of eating chicken versus eating red meat, for example, the researchers should strive to keep everything else in the diet as identical as possible. If the red meat group is eating a hamburger on a bun with a side of fries, while the chicken group is eating a grilled chicken salad, it is not a valid study. If the researchers don’t make very clear which variables they have controlled for, be skeptical of the study.
Bad study inputs – In an observational study, researchers may base the entire study on one recall questionnaire asking people to estimate how much of various foods they ate over the past year. In some studies they don’t check to see whether the subjects are actually eating the foods they claim they are. In a randomized controlled trial or mechanistic study, the diets fed to the subjects may not be relevant to the real world, or in animal studies may not be species appropriate. Or the researchers may feed the subjects an amount of the food far beyond what anyone would eat in the real world. As I recall, one of the early mouse studies on artificial sweeteners used a dose of sweetener that, if baked into a Twinkie scaled up to match, would have produced a Twinkie 35 feet long and weighing approximately 600 pounds.
Poor study design – Sometimes studies are conducted in a way that leaves the results unclear. You may see a report showing that the subjects lost weight or had certain health improvements on a certain diet, but that they gained back some of the weight and lost some of the improvements after a year. You look at the study and find that it included people who stopped eating the diet halfway through the study. You can’t tell from the study what the outcome is for people who stick to the diet, which is what you really want to know.
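The dropout problem above is easy to see with toy numbers. In this hypothetical sketch (the subjects and weight changes are invented), the average outcome looks quite different depending on whether dropouts are kept in the analysis:

```python
# Hypothetical diet-study results; positive weight_change means weight gained.
results = [
    {"id": 1, "completed": True,  "weight_change": -8.0},
    {"id": 2, "completed": True,  "weight_change": -6.5},
    {"id": 3, "completed": False, "weight_change": +1.0},  # quit halfway
    {"id": 4, "completed": False, "weight_change": +2.0},  # quit halfway
]

everyone   = [r["weight_change"] for r in results]
completers = [r["weight_change"] for r in results if r["completed"]]

print("All subjects:", sum(everyone) / len(everyone))
print("Completers:  ", sum(completers) / len(completers))
```

Neither number is wrong, but they answer different questions: the first includes people who abandoned the diet, the second describes what happens to those who stuck with it. A clear study reports both; a muddled one blurs them together.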
Healthy user bias – This is common in observational studies. If people have been told by the health authorities to avoid a certain food for decades, it should not be surprising to find that health-conscious people avoid that food. Since health-conscious people do all sorts of things that might improve their health, there is no way to know whether avoiding the food has anything to do with their better outcomes. Health-conscious people and non-health-conscious people can differ in many ways. A yoga instructor who wheels her Prius into the drive-through lane at Salad and Go for a grilled chicken salad and the contractor who pulls his Super Duty F250 into Carl’s Jr. for a Double Western Bacon Cheeseburger combo likely differ in ways other than the type of meat found in their lunch.
Evaluating nutrition studies – Here are some tips for evaluating the studies you see reported on in the media:
- Assume any study is fatally flawed until you can confirm otherwise. Find the actual study and read the abstract, methods, and conclusion. If those are not written in clear, understandable language, or aren’t available for inspection, be skeptical of the study.
- Look for the actual effect size. The media probably reported the relative risk. You need to find the absolute risk, which is the relevant value.
- Look at the data. If it is a diet study, look at what the people or animals in the study were actually eating, and make sure that it makes sense. If that information is not available, be skeptical of the study.
- If the researchers (or those reporting on the research) are claiming a causal effect from an observational study, be more than skeptical of the study.
A note about the word “significance”: In statistics, “significance” describes whether a result has passed a mathematical test indicating that the outcome is unlikely to be due to chance. A statistically significant effect may still be far too small to be significant in the everyday sense of the word.