
Skipping breakfast:
Will it really make you fat?

By Helen Kollias


The importance of a “healthy breakfast” is nutritional gospel. Everyone from your grandma to your personal trainer to your favorite fitness magazine “knows” a morning meal will help you lose weight and stay lean.

But is “what everyone knows” actually true?

Sure, most research on breakfast and body comp shows that breakfast eaters tend to be leaner than non-breakfast-eaters.

Unfortunately, as you’ll see below, a lot of scientific research doesn’t quite “prove” what people think it does. 

Yes, science is our main pathway to genuine discovery. But it’s also a human endeavor, and fallible. That’s right, despite their expertise, scientists are people too.

As such, they (and their research) can be influenced by many factors, including:

  • Worrying about where their next research dollars are coming from.
  • Their own deeply rooted assumptions.
  • Who’s running their lab or overseeing their work.
  • What’s “hot” or “trendy” in their field.
  • Sticking it to their arch-rival, Dr. Smug Loudmouth.
  • Getting published in The Bigname Journal.
  • Their upcoming tenure file review.

Yep, even though we like to think of the scientific process as distant from the petty interpersonal nonsense of daily life, it’s not. In fact, it can sometimes resemble a soap opera.

That’s why, when it comes to interpreting the results of their own studies, or other people’s, scientists might mess up.

Their needs and their beliefs can distort the way they see the evidence, and what they make of it. This can lead to biased reporting and faulty recommendations.

An even bigger problem? This can lead to widespread mistaken beliefs among non-scientists.

But is this a rare occurrence? Not really. It happens more often than you think.

Take, for example, breakfast.

While study after study may appear to support the idea that breakfast is the most important meal of the day, it turns out there has never been a properly randomized (causal) study that “proves” the positive effects of breakfast!

So… if the evidence for this belief is so sketchy, why the heck is it so persistent and widespread?

Let’s take a closer look.

Science and beliefs

There are many different ways things can go wrong in scientific reporting, from straight-up fraud to more subtle and unintentional misrepresentation.

Here are two of the more common problems:

  1. Research lacking probative value.
  2. Biased research reporting.

Research lacking probative value

“Lacking probative value” is a fancy term for “beating a dead horse.” It means experiments that focus on already-answered questions, and studies designed in such a way that they can’t really provide us with any new information.

This type of research usually crops up in areas that are hot. Just like in fashion and fitness, science has its trends. If you happen to be working in a trendy area, you’re more likely to get published if you find supporting evidence for the latest big idea.

Most often, you’ll see it in observational studies that examine a population nearly identical to the population considered in an earlier study. The population may be just distinct enough to seem different (and therefore worthy of research dollars). But the study actually has little value.

Biased research reporting

Biased research reporting refers to four different but overlapping phenomena:

  1. biased interpretation of your own results;
  2. improper use of causal language in describing your own results;
  3. misleadingly citing others’ results; and
  4. improper use of causal language in citing other people’s work.

Biased interpretation of your own results

Biased interpretation of your own results means reporting that you’ve found a positive result when the evidence doesn’t actually support your assertion.

Usually, it looks like this:

Somewhere in the abstract or the conclusion, the author reports: “Weight loss increased with X.”

But a careful consideration of the actual results shows no statistical difference in weight — merely a marginal difference.

To be completely accurate in a case like that, the author would have to say something like: “We didn’t find a difference this time, but with a larger sample size, we think that using X would increase weight loss.”

But that wouldn’t sound as impressive. Which is probably why the authors don’t say it.
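To make that concrete, here’s a toy example of the kind of result that gets spun this way. The numbers are invented, not from any real trial:

```python
from scipy.stats import ttest_ind

# Invented data: kg lost over 12 weeks, 8 people per group.
control = [3.1, 4.8, 2.2, 5.0, 3.9, 4.4, 2.7, 4.1]
with_x  = [4.0, 5.5, 3.0, 5.9, 4.2, 5.1, 3.3, 4.6]

t_stat, p_value = ttest_ind(with_x, control)
diff = sum(with_x) / len(with_x) - sum(control) / len(control)
print(f"mean difference: {diff:.1f} kg, p = {p_value:.2f}")
# p comes out around 0.2, nowhere near significant. Writing
# "weight loss increased with X" from data like this overstates
# the evidence; the honest summary is "no reliable difference."
```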

Improper use of causal language

Not to be confused with a casual relationship, a causal relationship means that one thing made something else happen. Smacking your head against a wall will cause a headache.

Of course, there’s nothing wrong with claiming that one thing causes another, if that is what your experiment is designed to discover and that is what the results demonstrate.

The trouble comes when what you’ve really discovered is a correlation — two things occurring together — but your language suggests that you have found a causal relationship.

For instance, maybe you are doing a study on headaches. And, somewhat to your surprise, you discover that headaches occur more often in people who eat broccoli.

Does that mean broccoli causes headaches?

Maybe… but you certainly haven’t proved it. If you aren’t careful with your language in your study writeup (or if you get an overzealous editor or journalist looking for an attention-grabbing headline), your research might imply that broccoli gave you that migraine.

Given broccoli’s proven health benefits, it would be a real pity if people stopped eating it simply because you overstated your findings.

If you think I’m exaggerating the nature of this problem, see my review about fish oil and prostate cancer. It happens more often than you think.
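If you like to tinker, here’s a quick simulation of how a hidden third factor can manufacture exactly this kind of correlation. Everything in it (the “health consciousness” variable, the effect sizes) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical hidden confounder (made up for this example).
health_consciousness = rng.normal(size=n)

# Health-conscious people eat more broccoli...
broccoli = 2 + 0.8 * health_consciousness + rng.normal(size=n)

# ...and, in this invented world, also get more headaches
# (say, from hard training and skipped caffeine).
headaches = 3 + 0.5 * health_consciousness + rng.normal(size=n)

# Broccoli never appears in the headache equation above,
# yet the two end up solidly correlated:
r = np.corrcoef(broccoli, headaches)[0, 1]
print(f"correlation(broccoli, headaches) = {r:.2f}")  # roughly 0.28
```

In this made-up population, an observational study would duly find that broccoli eaters get more headaches. Only an experiment that randomly assigns the broccoli could tell you whether it actually causes anything.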

Misleadingly citing others’ results

Supplement ads are notorious for this error. They will make a claim and cite a study in support of the claim. But the study they’re citing doesn’t actually offer any proof of what they’re claiming!

This strategy works well for them, since most people are persuaded by the authoritative-sounding citation alone. They never bother to check what the original study said.

Improper use of causal language in citing other people’s work

In this type of biased research reporting, improper use of causal language + misleading citation of others’ results = a two-for-one special!

For example, the offending article might claim: “In their major 2000 study, Smith & Jones found that eating breakfast increased weight loss.”

When in fact, Smith & Jones performed an observational study that could demonstrate only correlation, not cause. And what they found was that people who lost more weight happened to be more likely to eat breakfast.

Why words matter

Worrying about the specific words that researchers use may seem nit-picky. But it’s important.

Biased research reporting creates a scientific version of the broken telephone game we played as children. After a few whispered repetitions, a sentence like, “The bird is sitting on the wire,” could somehow morph into, “The bard spit on his guitar.”

Over time, inaccurate causal language and misleading interpretations create an “everyone knows” reality — shared beliefs and “common wisdom” based essentially on rumor and superstition.

And shared beliefs can lead to irrational habits and unhelpful actions.

Maybe all the bards will start spitting on their instruments before playing… without even knowing why.

Or maybe people start eating breakfast because “everybody knows” that eating breakfast will help to control weight.

But hang on. Aren’t scientists a little more sophisticated than kids in the playground? Don’t they know enough to question their own assumptions and take care with their own language?

That’s exactly what the authors of this week’s study wanted to find out.

Let’s dig in.

Brown AW, Bohan Brown MM, Allison DB. Belief beyond the evidence: Using the proposed effect of breakfast on obesity to show 2 practices that distort scientific evidence. Am J Clin Nutr. 2013 Nov;98(5):1298-308.


The researchers examined 92 unique articles about the “proposed effect of breakfast on obesity.” Some of the articles they looked at were analyses of previously published articles, or (in the scientific lingo) “meta-analyses.”

They carefully analyzed the language and statistical methods of these studies to see:

  • whether each study’s authors really found what they said they found — i.e. whether the findings truly supported the conclusions, and
  • how people talked about these findings afterwards — i.e. whether the findings were accurately represented in subsequent studies.


The first thing the researchers learned is that breakfast eating (or not eating) and obesity are correlated.

Note: we have far more proof of this than we actually need. In other words, lots of research dollars have been wasted establishing this correlation over… and over… and over again.

Already, by 1998, after only three studies, that result was statistically cemented (p<0.001). The chance that these findings were a fluke was approximately nil.

However, by 2011 the number of studies examining this relationship had exploded. And by then, the statistical evidence put the chance that there is no relationship between obesity and eating breakfast at about 1 in 10⁴². (Which is kind of like a snowball’s chance in hell.)
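For the stats-curious: here’s a minimal sketch of how stacking study after study drives a probability down to numbers like that. It uses Fisher’s method for combining p-values (my choice for illustration, not necessarily what the authors did) and invented p-values:

```python
from scipy.stats import combine_pvalues

# Invented p-values from a string of studies all finding the
# same breakfast-obesity association (not the real values).
study_pvalues = [0.01, 0.003, 0.02, 0.001, 0.005, 0.008]

# Fisher's method combines independent p-values into one.
stat, combined_p = combine_pvalues(study_pvalues, method="fisher")
print(f"combined p-value: {combined_p:.1e}")
# Already around 1e-8 after just six studies. Each additional
# confirming study multiplies the evidence, so keep going and
# you eventually reach absurdities like 1 in 10^42.
```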

At this point, you might wonder why the heck scientists kept on performing these studies. After all, they already had an answer to the question!

And it’s strange, because research usually takes a “been there, done that” approach; if the proof looks definitive, science moves on to a new question.

A case of “research lacking probative value”? You be the judge.

Biased interpretation of one’s own results

After looking at 88 observational studies on obesity and breakfast, the authors of our study made some interesting discoveries.

Turns out that if a study showed a positive correlation between skipping breakfast and obesity (i.e. skipping breakfast was associated with increased fat), there was a good chance this information would show up in the study’s abstract and its conclusion (34 of 52 did).

Of course, it’s normal to report your results in the abstract — which, for those of you who have forgotten science class, is the little précis at the beginning. It summarizes the research. And it’s this abstract that busy researchers rely on when compiling meta-analyses.

Interestingly, if a study found a negative correlation (i.e. skipping breakfast was not associated with gaining weight), the result went unreported in the abstract or conclusion (0 of 2 did).

This is a big reason why it’s a bad idea to skim the abstract instead of reading the whole study!

And it looks to me like a case of biased reporting.

Improper use of causal language

Next, our researchers looked at the same 88 studies to see how the results were described in the abstracts.

Recall — the most these studies established was correlation. Not causation.

So using words such as “led to” or “caused” in the abstract or conclusion would be improper, since that would imply they’d proven a causal relationship.

Ideally, the authors would confine themselves to words such as “related to” or “associated with” in describing their findings.

Of the 88 abstracts, 42 made conclusions about eating breakfast and obesity.

  • Of those 42, 11 used obviously improper causal language.
  • A further 10 used improper causal language but hedged their bets by adding qualifiers, saying things like “may cause” or “could cause.”

In other words, half of the abstracts that made conclusions about eating breakfast and obesity overstated the relationship they had actually established.

Hmmm. If researchers are so inaccurate when they report on their own findings, what happens when they start citing other people’s work?

Misquoting and overstatement in reporting others’ results

While most studies on the subject are correlational in nature, one 1992 study was a randomized experiment, the kind of design that actually can establish a causal relationship between eating breakfast and obesity.

Comparing the results of that experimental study to what 91 subsequent articles said about those results gives us a fascinating glimpse into the ways that misleading citations can influence scientific — and popular — beliefs.

What the 1992 study actually found was that people who usually ate breakfast lost weight when they stopped, and people who usually skipped breakfast lost weight when they started eating it.

Statistically speaking, this is called an interaction effect between the baseline condition (habitually eating or not eating breakfast) and the experimental condition (being assigned to eat breakfast or to skip it). For stats geeks: the interaction came in at p=0.06, so technically it wasn’t even significant.
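If “interaction effect” is a new term for you, a toy simulation may help. The data below are fabricated to mimic the pattern just described; they’re not from the 1992 paper:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

rows = []
for habit in ("eater", "skipper"):       # baseline habit
    for assigned in ("eat", "skip"):     # experimental assignment
        # Invented effect: changing your habit adds ~2 kg of loss.
        changed = (habit == "eater") != (assigned == "eat")
        mean_loss = 6.0 + (2.0 if changed else 0.0)
        for _ in range(30):
            rows.append({"habit": habit, "assigned": assigned,
                         "kg_lost": rng.normal(mean_loss, 2.0)})

df = pd.DataFrame(rows)

# 'habit * assigned' fits both main effects and their interaction.
fit = smf.ols("kg_lost ~ habit * assigned", data=df).fit()
print(fit.summary().tables[1])
# The interaction coefficient is the signal here: the effect of
# being assigned to skip breakfast reverses depending on baseline
# habit, rather than breakfast itself driving the weight loss.
```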

The bottom line? It was the change that mattered — not whether the subjects ate breakfast. Or, as the study’s author said, “those who had to make the most substantial changes in eating habits to comply with the program achieved better results.”

Yet in 91 subsequent articles citing this study, 62% of the obesity-breakfast relevant references misrepresented the actual findings.

Instead of focusing on the role that making a change seemed to play in controlling weight, they implied something quite different — that the study had established a causal relationship between eating breakfast and staying slim (or not eating breakfast and getting fat).

Meanwhile, of these 91 studies, 72 also cited another study. This one had never claimed to establish a causal relationship of any kind between breakfast and obesity.

Yet 40% of the 72 misleadingly suggested that it had — and went on to say that what it showed was that eating breakfast could help to prevent obesity!


What can we make of all this?

Well, generally, scientists are just as prone to error as anybody else! We see it time and time again. Small biases, and tiny language differences, cause a whisper-down-the-lane effect. And “truths” are accepted that were never true in the first place.

Specifically, there’s very little evidence to suggest that skipping breakfast will cause you to get fat.

Sure, we can establish a correlation between skipping breakfast and being overweight. But many factors, from genetics to a general lack of interest in health, could explain this relationship. We just don’t know that one causes the other.

What to do

For those of you looking to lose or control your weight…should you eat a big breakfast or not? Well, here are some guidelines.

First, remember that you’re unique. We don’t know all the relevant factors yet. You may be someone who thrives on breakfast. Or you may not.

Observe your own body’s cues. Experiment on yourself. Does eating breakfast make you feel better and more in control of the rest of your day’s consumption? Or does it make you weirdly ravenous later on? When it comes to making decisions, your body’s actual response is the only evidence that counts.

Try different breakfast types. What happens if you exchange one food source (say, processed carbs) for another (say, lean protein)? How do you feel? How does your body react?

Whatever you eat, whenever you eat, stick with your fundamental healthy habits. Eat slowly, watch your portion size, avoid distractions, and pay attention to how you feel.

And, of course, try not to get carried away by rumors. Even if they seem backed by scientists. Because those same scientists may be struggling even more than you are.
