Fish oil & prostate cancer:
What does the research really say?

By Helen Kollias


A recent study linking Omega-3s and prostate cancer caused a media storm and frightened a lot of people.

But before you stop taking your fish oil, pay attention to the kind of study this was, and remember – correlation does not equal causation (more on this below).

When it comes to using research to make real-life decisions, study design matters.


A new study appears and it’s all over the news.

Vitamin X or food Y is correlated to cancer, heart disease, or stroke!

Suddenly people start avoiding vitamin X or food Y.

But hold on. What does it mean to say that two things are “correlated”, anyway?

Correlation: A mutual relationship or connection between two or more things.

If you ever hang out with geeks like me, you’ve probably heard us mutter things like, “That’s correlational, not causational”, or, “That was only a correlation.”

With flashbacks of your last math or stats class, you nod blindly and hope the conversation moves on to something more interesting, like a princess having a baby.

But have you ever wondered why we get so worked up about correlational studies in the first place?

Actually, neither correlation itself nor correlational studies are the problem.

The problem comes when correlation is confused with causation.

To put it simply, just because two or more things happen at the same time doesn’t mean that one causes the other.

Ice cream causes murder

Here’s an example to make this clearer.

One well-known and repeatable correlation is that murder rates correlate with ice cream sales.


Yes! Strange as this may seem, as murder rates go up, ice cream sales in most cities also rise.

Let’s look at a couple of possible explanations.

Hypothesis 1: Murder causes people to eat ice cream.

Here, I’m making the reasonable assumption that people buy the ice cream because they plan to eat it.

A few problems with this idea:

  • Even though ice cream sales and murders increase at the same time, lots of people buy and eat ice cream, not just murderers. In fact, even if every murderer bought ice cream after committing the deed, it would make almost no statistical difference, since murderers are a tiny fraction of ice cream buyers. Thankfully, ice cream sales are always far higher than murder rates – even though both tend to increase at the same time.
  • If committing murder caused people to buy and eat ice cream, then nabbing bad guys would be as simple as hanging out at ice cream parlors and grocery stores near the freezer department! But last time I looked, policing wasn’t quite that easy.

Hypothesis 2: Buying or eating ice cream causes murder.

Now, not having ice cream may indeed lead to a murderous desire for ice cream.

But if that were the case, you’d think that the murder rate would be higher whenever ice cream was in short supply — and sales figures were correspondingly low.

In fact, sales go up as murders rise. So maybe it’s a post-ice-cream high or a sugar crash that turns ice cream buyers into murderers?

Problems with this idea:

  • If this were true, then banning ice cream would stop murder. Hmm. I don’t think it’s quite that simple. After all, before ice cream was invented, we had murders. And in places where nobody has ever heard of ice cream, they still have murders.
  • Ice cream parlors would be the epicenter of murders. But see hypothesis 1 above.

Hypothesis 3: The correlation is due to random happenstance (coincidence).

This is similar to the correlation between the decrease in pirates and climate change / global warming. (Yep, that correlation also exists.)

So, although the association between murders and ice cream may be random, I think there’s a better explanation…

Hypothesis 4: Something else causes the increase in both murders and ice cream sales.

Let’s try this on for size:

Warmer weather is probably the cause of both increased murders and increased ice cream sales.

Note that this is also a correlative explanation. But it’s the best one we’ve come up with, so far.

Silly examples don’t count

Okay, okay, the ice cream sales and murder correlation is a silly example.

When it comes to legitimate scientific studies, nobody would really make the mistake of thinking that correlation is proof of causation, right?


Actually, scientific research is littered with examples of people doing precisely that.

The most famous of these may be the studies linking hormone replacement and the prevention of coronary heart disease.

Hormone replacement therapy studies

Back in 2012, we discussed one of these studies.

To summarize, back in the 80s and 90s, because scientists had noted a correlation between estrogen replacement therapy and a decreased risk of heart disease, many women were put on estrogen replacement by their doctors.

Twenty years later, a randomized controlled study showed that, in fact, estrogen replacement increased the risk of heart disease! Oops.

How could a mistake of this magnitude occur?

It turned out that women of higher socioeconomic status who were more interested in their health (or better positioned to do something about it) were much more likely to ask for or agree to take estrogen than poorer women who had less access to health care.

And while on the surface it may have looked as if hormone replacement therapy reduced a woman’s risk of heart disease, in fact, it was a woman’s socioeconomic class that actually predicted that risk.

Middle and upper class women were less likely to suffer from heart disease – despite the fact that more of them were on hormone replacement therapy, not because they were on hormone replacement therapy.

In the end, here’s the important point: correlational studies are usually based on observation. What this means is that researchers gather information without intervening.

They do this either with questionnaires, or by taking direct measurements (blood pressure, ice cream sales, etc.). They then let the statistical hamsters process the information (data).

Most long-term “people” studies are observational. And while this methodology sounds fine, it’s relatively weak, because it can’t prove anything. It can only show that things are related.

Like ice cream and murder.

Pirates and climate change.

Hormone replacement and heart disease.


Now that I’ve done my share of fist-shaking about correlation, I’ll tell you about causation and how to prove it.

Causation: To cause something to happen.

Yes, it’s as simple as that. To cause something is to make it happen.

If I hit you in the head with a baseball bat, that causes your head to ache. This relationship is causational.

There is a cause (getting hit in the head with a baseball bat) and an effect (a headache).

You might see a potential problem right away here.

Sure, hitting you in the head is likely to lead to a headache. But maybe you had a migraine before I hit you.

So the key to showing causation is to control for everything other than the hypothesized cause.

That’s why we call studies with this design “controlled experiments.”

And when you want to prove causation, the ideal is a very specific type of controlled experiment, called a double blind randomized controlled experiment.

Defining the terms

Okay, I’m about to go off on a little research tangent.

Mostly because I think it’s important for you to understand the difference between various types of studies. Especially if you want to figure out how to interpret the fish oil one.

But, if you already know this stuff, or you’re simply not interested, feel free to skip down to the fish oil and prostate cancer part below.

Double blind

The phrase “double blind” has nothing to do with eyesight. Instead, it tells you who is aware of the conditions of the experiment.

As strange as it sounds, in a double blind study, neither the volunteers (subjects) nor the experimenters know exactly what is going on.

The experimenters merely collect the data. Only a third party, who is not doing the testing, is aware of the whole story.

From this you can probably guess that a “single blind experiment” is one where only the subjects are in the dark.

With the estrogen replacement double blind study, the researchers didn’t know who was on estrogen and who was on placebo.

But before the blind was removed, they were convinced that the women taking estrogen were doing better.

In fact, they were so certain of this that they felt it was unethical not to stop the experiment immediately and give estrogen to the entire group!

Why bother conducting blind experiments?

One reason is that it reduces the possibility of researcher bias. That’s important because researcher bias can sometimes influence results.

(In fact, in some cases, including the hormone replacement experiment, even making the study a double blind may not be sufficient to protect against researcher bias.)

Drug trials are typically double blind (comparing drug with placebo). But they’re the exception, since many experiments can’t be blinded.

For example, when experiments involve an obvious intervention, like exercise, eating fewer calories, or meditation, pretty much everybody figures out who is in the experimental group and who is in the control group without being told.


Randomized

Now let’s take a look at another word in the description: “randomized”. This means that subjects are randomly assigned to groups.

Typically, studies like this will include a control group that receives a placebo or no intervention.

And an experimental group that gets the intervention – whether that’s a drug, exercise, a supplement, or something else.

The participants are sent into one group or the other in a completely random way.
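In code terms, random assignment can be as simple as shuffling the participant list and splitting it in half. Here's a bare-bones sketch (the subject names are placeholders, obviously):

```python
# Random assignment: shuffle the roster, then split into two equal groups.
# Subject names are placeholders for illustration.
import random

participants = [f"subject_{i:02d}" for i in range(1, 21)]
random.shuffle(participants)  # randomizes the order in place

half = len(participants) // 2
control_group = participants[:half]       # receives the placebo / no intervention
experimental_group = participants[half:]  # receives the intervention
```

Because neither the researchers nor the subjects choose the groups, pre-existing differences — like the socioeconomic differences in the estrogen story above — get spread evenly between the two groups, at least on average.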


Controlled

A “controlled” experiment is one where the researchers control the intervention.

If that’s a drug, they control who or what gets the drug, how much, when, for how long, and pretty much everything else you can think of regarding the drug.

Researchers also try to control everything else that may change following the intervention.

For example, let’s say that people start to exercise as a part of a study.

In real life, when people take up exercise, they’ll often start to eat differently than in the past. This is a confounding variable — the arch-enemy of controlled experiments.

So in such a study, researchers should control for diet. They might ask people not to change their usual diet until after the experiment is over.

If a controlled experiment is appropriately set up, at the end researchers can be pretty confident in their conclusions about those specific conditions. But they can’t use their results to draw definitive conclusions about much else.

In other words, controlled experiments have limited generalizability.

Correlation versus causation

Now that you understand the difference between correlation and causation, let’s compare the advantages and disadvantages of each type of experiment.

Correlational (epidemiological) studies versus causational (double-blind, randomized) studies:

  • Number of subjects: lots (100 to 100,000) versus few (5 to 50).
  • Length of study: long (years to decades) versus short (days to months).
  • Measures: usually lots versus fewer than in correlational studies.
  • Driven by: data versus experimental design.
  • Statistics: lots of stats using models, regression analysis, and sometimes t-tests or ANOVA, versus fewer stats using simple comparative tests (t-tests).
  • Humans or animals: nearly all human versus mostly animal (except in phase 3 drug trials).
  • Advantages: able to uncover surprising connections and relationships versus able to prove causation.
  • Disadvantages: unable to prove causation or explain the relationships found versus limited applicability or generalizability (a controlled study can only accept or reject the hypothesis it was designed to test).

Both types of studies and data interpretation have their place. And adopting only one or the other would seriously limit our capacity to understand the world around us.

Correlational experiments and results stretch our minds by suggesting previously unimagined possibilities. Maybe ice cream really does cause murder! Let’s investigate that some more!

The problem with correlational studies is that they can’t give us certainty. Meanwhile, experimental studies give us certainty, but only within a very narrow range. Often, they can’t be performed on humans.

For instance, the definitive study on smoking as the cause of lung cancer was done on primates, since it would be unethical to randomly assign people to the smoking habit.

(In fact, many people argue that it’s unethical to perform such experiments with primates – but that’s a topic for another article.)

Research question

The reason we’ve gone into this detail about study design is simple: to make sense of this week’s review, you need to understand the potential drawbacks of correlative studies.

This week’s study asks: Is there an association between plasma phospholipid fatty acids and prostate cancer risk?

Brasky TM, et al. Plasma Phospholipid Fatty Acids and Prostate Cancer Risk in the SELECT Trial. J Natl Cancer Inst. 2013 Jul 10. [Epub ahead of print]

Subjects and methods

The first thing you need to know is that this study was a study-within-a-study. For you artsy types, that’s sort of like a play-within-a-play.

The main study was a causational study called SELECT that gave a group of 834 men Vitamin E and selenium to see whether this would decrease their rates of prostate cancer.

(This study concluded that vitamin E and selenium didn’t prevent prostate cancer.)

In fact, vitamin E actually increased prostate cancer rates — although the dose was probably too high, which makes it tricky to generalize to a more reasonable dose.

(Incidentally, that’s actually an interesting finding and was lost in all the fish oil debate.)

In the study we’re reviewing today — the study-within-the-study, which was correlational in design — an additional 1,393 prostate cancer-free men joined the original group of 834 guys.

(These were matched for age, race, education, BMI, smoking status, alcohol consumption, family history of diabetes and prostate cancer, and SELECT group.)

Researchers measured the blood plasma levels of omega-3 fatty acids and trans-fatty acids in those who had prostate cancer and those who did not.


Men with the highest levels of long-chain omega-3 polyunsaturated fats in their blood (the total of EPA, DPA and DHA) were more likely to have low-grade, high-grade, and total prostate cancer.

This is what grabbed the headlines, but there’s more. (And this is the crazy part.)

Meanwhile, men with the highest levels of trans-fatty acids in their blood were less likely to get prostate cancer.

So, according to these correlations, the men who did get prostate cancer had more omega-3s in their blood.

And the men who didn’t get prostate cancer had more trans fats in their blood.



Okay, so before you stop taking omega-3s and fish oil because of this study, here are a few things to consider:

This is a correlational study, like the ice cream and murder example. So we can’t say whether omega-3s cause prostate cancer, prostate cancer causes omega-3s to appear in the blood, or some other common factor causes both. (Nor can we say anything about the causal relationship between trans fats and prostate cancer.)

The researchers measured blood serum levels of omega-3s, not dietary consumption of omega-3s. And the correlation between eating omega-3s and blood serum levels of omega-3s is weak.

Aging may decrease omega-3 consumption and/or deregulate blood serum levels of omega-3s. That means that the problem could be blood serum omega-3 deregulation, not dietary consumption of omega-3s, or the omega-3s themselves.

Serum omega-3 levels rise for fewer than five hours after eating fish or taking fish oil. Forty-eight hours after you take fish oil, your serum levels are back to baseline.

There was no data on how much fish or omega-3s the subjects consumed. None. Based on population data, it’s unlikely that many of these guys were supplementing with fish oil in the first place.

So maybe eating more omega-3s in fish or fish oil form raises the blood serum level of omega-3s, and higher levels of omega-3s in turn cause prostate cancer.

Or maybe prostate cancer causes a dysfunction in the body that also raises omega-3 levels.

Or maybe some physical dysfunction raises both blood omega-3 levels and causes prostate cancer.

In other words, this study doesn’t demonstrate that omega-3 fatty acids cause prostate cancer any more than a spike in ice cream sales explains an increase in murders.

Bottom line

Known risk factors

Before throwing out your fish oil, let’s talk about the more important risk factors for prostate cancer, such as:

  • age;
  • ethnicity/ancestry (for example, men of African descent have much higher prostate cancer mortality rates than men of Asian descent);
  • genetic makeup (although, like breast cancer, the genes we know about to date are responsible for fewer than 10% of cases);
  • family history (i.e. having other close relatives with prostate cancer);
  • previous cancers (even if they’re other types);
  • systemic inflammation;
  • poor diet and sedentary lifestyle;
  • obesity (because adipose tissue is inflammatory if you have too much of it); and/or
  • the hormonal environment (such as inappropriately elevated levels of IGF-1).

(Heck, even this interactive map of world prostate cancer incidence and mortality shows that there are significant regional variations.)

And unlike the correlational studies (which, again, simply say that X happens at the same time as Y), we know the causal mechanisms by which many of these risk factors above actually work (in other words, X causes Y because Z).

As a result, we know that fish oil supplementation can actually improve a few of them, including inflammation, obesity, and the hormonal environment.

Omega-3s are still good for you

While the media have jumped on this particular study as newsworthy, many other studies suggest that omega-3 fatty acids prevent prostate cancer (perhaps by controlling inflammation or lowering the cell signaling molecules that could stimulate cancer cell growth), or have no effect at all on prostate cancer.

Meanwhile, omega-3 fatty acid consumption has been found helpful for everything from Alzheimer’s to joint pain.

And the most convincing data demonstrate that omega-3s decrease blood triglycerides and cardiovascular disease – diseases that pose a greater risk for most men than prostate cancer.

So before you avoid omega-3 fatty acids, recognize that in doing so, you’d be giving up proven benefits for unproven risks.

In our view, the benefits of omega-3s still outweigh any risks.


