
Just watch. The media will say, over and over, “proves vaccines don’t cause autism.” No, it doesn’t. If you prove Vioxx causes heart attacks, that doesn’t mean all drugs cause heart attacks… Sigh.
Here in the U.S. we’re at 1 in 36! Shouldn’t CDC researchers rush to Denmark to figure out why their autism rate is so much lower than ours? For every 1,000 Danish kids, only 10 have autism. But here in the U.S., we have about 28 per 1,000; that’s roughly 178% more autism! I thought Paul Offit wanted everyone to believe the autism rate was the same everywhere? What gives?
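For anyone who wants to check that arithmetic, here is a quick sketch. It uses only the two figures cited above (1 in 36 and 10 per 1,000); nothing else is assumed:

```python
# Quick check of the prevalence comparison above.
us_rate = 1 / 36          # U.S. figure cited above: 1 in 36
dk_rate = 10 / 1000       # Danish figure cited above: 10 per 1,000

percent_higher = (us_rate - dk_rate) / dk_rate * 100

print(f"U.S.: {us_rate * 1000:.1f} per 1,000")        # ~27.8 per 1,000
print(f"Denmark: {dk_rate * 1000:.1f} per 1,000")      # 10.0 per 1,000
print(f"U.S. rate is {percent_higher:.0f}% higher")    # ~178% higher
```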

The study authors throw around the word “unvaccinated,” but within the study they make clear this ONLY means “didn’t get the MMR.” Said differently, the children received EVERY OTHER vaccine. Watch as people try to say this is a vaccinated-versus-unvaccinated study. It isn’t.

There is a giant difference between large-scale epidemiology (like this Danish study), and the growing body of BIOLOGICAL science showing, clearly and unequivocally, that the aluminum adjuvant used in vaccines causes brain damage in laboratory animals. Obviously, this study does nothing to prove or disprove this growing body of published science. Nothing.
And the Danish register data itself can misclassify who was actually vaccinated. One medical-record review of Danish general practices reported:

“Conducting a medical record review in 19 randomly selected general practices with a total of 1,712 listed children aged 18-42 months, we found a significantly higher MMR1 vaccination coverage (94%) than estimated through register-based data (86%). This finding is surprising, particularly when considering that the official national vaccination figures are based on these register-based data merged with similar data from the four other regions. More than half of the children who were unvaccinated according to the register-based data (55%) had, in fact, been vaccinated according to the medical records.”
“Healthy user bias (HUB) is a serious problem in studies of vaccine safety. HUB is created when people with health problems avoid vaccination. When this occurs the unhealthy, unvaccinated subjects are used as controls. Consequently, the vaccinated group has better health at the outset. The better health of the vaccinated is erroneously attributed to the vaccine. The vaccine gets credit for improving health, when in fact it is causing harm.”

Let me try to explain. A hypothetical Danish kid (“Kid B”) has an older brother with autism. Kid B gets all his vaccines before the MMR (not given until 15 months in Denmark), and by age 12 months he is not doing well, missing all his milestones (remember, his brother has autism, so he’s likely at much higher risk, but he’s gotten his shots so far). The parents are now really worried, so they skip the MMR vaccine. They stop vaccinating. But it’s too late; he goes on to develop autism. And he never got the MMR. In this study, he counts as evidence that the “MMR does not cause autism.” Get it? The parents avoided the MMR because he was already doing so poorly, yet he becomes exactly the data point these authors most want to find: a kid with a sibling with autism, who didn’t get the MMR, who still has autism. If you don’t account for this healthy user bias, your data starts to become meaningless. Of course, the CDC knows this, because they have written all about it.
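To put rough numbers on the Kid B scenario, here is a deliberately simplified, hypothetical sketch. Every number in it is invented for illustration (the population size, the share of high-risk kids, their baseline risk, and the uptake rates), and the MMR is assumed to have zero effect either way; the point is only to show what preferential avoidance by high-risk families does to a naive comparison:

```python
# Toy illustration of healthy user bias / confounding by indication.
# All numbers are hypothetical; the MMR is given NO causal effect here.

population = 100_000
high_risk_share = 0.02        # e.g., kids already showing developmental red flags
risk_if_high = 0.10           # hypothetical autism risk for high-risk kids
risk_if_low = 0.01            # hypothetical autism risk for everyone else

mmr_uptake_low_risk = 0.97    # most low-risk kids get the MMR
mmr_uptake_high_risk = 0.60   # worried parents of high-risk kids skip it more often

high_risk = population * high_risk_share
low_risk = population - high_risk

# Group that got the MMR
vax_n = low_risk * mmr_uptake_low_risk + high_risk * mmr_uptake_high_risk
vax_cases = (low_risk * mmr_uptake_low_risk * risk_if_low
             + high_risk * mmr_uptake_high_risk * risk_if_high)

# Group that skipped the MMR
unvax_n = low_risk * (1 - mmr_uptake_low_risk) + high_risk * (1 - mmr_uptake_high_risk)
unvax_cases = (low_risk * (1 - mmr_uptake_low_risk) * risk_if_low
               + high_risk * (1 - mmr_uptake_high_risk) * risk_if_high)

rate_vax = vax_cases / vax_n
rate_unvax = unvax_cases / unvax_n
print(f"Autism rate, got MMR: {rate_vax:.2%}")     # ~1.1%
print(f"Autism rate, no MMR:  {rate_unvax:.2%}")   # ~2.9%
print(f"Apparent relative risk (MMR vs no MMR): {rate_vax / rate_unvax:.2f}")  # ~0.38
```

In this toy setup the crude comparison makes the MMR look strongly “protective” (a relative risk of roughly 0.4) even though it was assigned zero effect, simply because the high-risk kids piled up in the no-MMR group.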
Vaccine safety studies typically compare health outcomes in vaccinated and unvaccinated people. In order to obtain accurate results, the two groups must be ‘matched’, meaning they have similar health and lifestyle characteristics. Matching groups is straightforward if the researchers have control over who gets the vaccine and who doesn’t. If researchers do not have this control (known as an ‘observational’ study), it is impossible to assure the groups are matched. The resulting group differences can cause biases that severely distort the study outcome. Poor matching can cause the study to be totally wrong.
Most vaccine safety studies are observational, and accordingly, do not include researcher control of vaccine exposure. For example, studies are often performed with “administrative data”, which is health data collected by insurance companies or governments. Researchers can use administrative data to compare health outcomes in vaccinated and unvaccinated people. A big problem is that vaccinated and unvaccinated people are not matched. Critical differences include:
1) Healthy people are more likely to choose to be vaccinated. People with chronic diseases or health issues tend to avoid the risk of vaccination.
2) People who choose vaccination tend to have other “health-seeking” behaviors, such as having a better diet and exercising, or getting regular screenings and medical tests.
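Here is another small, hypothetical sketch of the matching problem just described. It compares how much chronic illness ends up in each group when people self-select their own vaccination (as in administrative data) versus when a coin flip decides (as in a controlled trial); the 20% illness rate and the uptake figures are invented for illustration:

```python
import random

# Hypothetical population: 20% have a chronic health issue.
random.seed(0)
people = [{"chronic": random.random() < 0.20} for _ in range(100_000)]

for p in people:
    # Observational setting: chronically ill people vaccinate less often (self-selection).
    uptake = 0.60 if p["chronic"] else 0.90
    p["vaccinated_obs"] = random.random() < uptake
    # Randomized setting: a coin flip decides, regardless of health.
    p["vaccinated_rct"] = random.random() < 0.5

def chronic_share(group):
    return sum(p["chronic"] for p in group) / len(group)

for label in ("obs", "rct"):
    vax = [p for p in people if p[f"vaccinated_{label}"]]
    unvax = [p for p in people if not p[f"vaccinated_{label}"]]
    print(f"{label}: chronic illness share -- vaccinated {chronic_share(vax):.1%}, "
          f"unvaccinated {chronic_share(unvax):.1%}")
```

Under self-selection the unvaccinated group carries far more chronic illness than the vaccinated group (roughly 50% versus 14% in this sketch); under random assignment both groups come out at about 20%, i.e., matched.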
“Since at least 2005, non-CDC researchers have pointed out the seeming impossibility that influenza vaccines could be preventing 50% of all deaths from all causes when influenza is estimated to only cause around 5% of all wintertime deaths.14 15 So how could these studies—both published in high impact, peer reviewed journals and carried out by academic and government researchers with non-commercial funding—get it wrong? Consider one study the CDC does not cite, which found influenza vaccination associated with a 51% reduced odds of death in patients hospitalized with pneumonia (28 of 352 [8%] vaccinated subjects died versus 53 deaths among 352 [15%] unvaccinated control subjects).16 Although the results are similar to those of the studies CDC does cite, an unusual aspect of this study was that it focused on patients outside of the influenza season—when it is hard to imagine the vaccine could bring any benefit. And the authors, academics from Alberta, Canada, knew this: the purpose of the study was to demonstrate that the fantastic benefit they expected to and did find—and that others have found, such as the two studies that CDC cites—is simply implausible, and likely the product of the “healthy-user effect” (in this case, a propensity for healthier people to be more likely to get vaccinated than less healthy people). Others have gone on to demonstrate this bias to be present in other influenza vaccine studies.17 18 Healthy user bias threatens to render the observational studies, on which officials’ scientific case rests, not credible.” - Dr. Doshi of Johns Hopkins University, 2013

Healthy user bias is a specific type of “selection bias.” Selection bias is well known. For example, a commonly used textbook on epidemiology and statistics states the following:
“Selection bias results when subjects are allowed to select the study group they want to be in. If subjects are allowed to choose their own study group, those who are more educated, more adventuresome, or more health-conscious may want to try a new therapy or preventive measure. Differences subsequently found may be partly or entirely due to differences between the subjects rather than to the effect of the intervention. Almost any nonrandom method of allocation of subjects to study groups may produce selection bias.” (Emphasis in original) Epidemiology, Biostatistics, and Preventive Medicine, Jekel et al., 3rd ed., 2007, page 70
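Going back to the pneumonia study quoted in the Doshi passage above, here is the arithmetic behind the “51% reduced odds of death” figure; it uses only the counts given in that quote (28 of 352 versus 53 of 352):

```python
# Odds-ratio arithmetic for the pneumonia study cited in the Doshi quote above.
deaths_vax, n_vax = 28, 352        # vaccinated patients: 28 of 352 died
deaths_unvax, n_unvax = 53, 352    # unvaccinated patients: 53 of 352 died

odds_vax = deaths_vax / (n_vax - deaths_vax)
odds_unvax = deaths_unvax / (n_unvax - deaths_unvax)
odds_ratio = odds_vax / odds_unvax

print(f"Death rate, vaccinated:   {deaths_vax / n_vax:.0%}")      # 8%
print(f"Death rate, unvaccinated: {deaths_unvax / n_unvax:.0%}")  # 15%
print(f"Odds ratio: {odds_ratio:.2f} "
      f"(about {1 - odds_ratio:.0%} reduced odds of death)")      # ~51%
```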
CDC Researchers Study Healthy User Bias
In 1992, CDC researchers Dr. Paul Fine and Dr. Robert Chen published an important paper describing evidence for HUB in studies of the DPT vaccine and sudden infant death syndrome (SIDS). They derived a mathematical model for calculating the strength of HUB. Their paper states:

“…individuals predisposed to either SIDS or encephalopathy are relatively unlikely to receive DPT vaccination. Studies that do not control adequately for this form of ‘confounding by indication’ will tend to underestimate any real risks associated with vaccination.”

AND

“Confounding…is a general problem for studies of adverse reactions to prophylactic interventions, as they may be withheld from some individuals precisely because they are already at high risk of the adverse event.”

AND

“If such studies are to prove useful, they must include strenuous efforts to control for such factors in their design, analysis and interpretation. Whether this is possible at all may be open to discussion. The difficulty of doing so is indisputable.” (emphasis added)

So, a simple question about this new MMR study: does the phrase “healthy user bias” appear anywhere in it? No, of course not (run a word search yourself), because this isn’t real epidemiology; this is corporate epidemiology built to generate a headline, and it will probably work. The authors don’t take healthy user bias into account in a situation where that behavior will massively distort the results. Without accounting for it, the data really is meaningless. The authors brush by this topic in the conclusion, but don’t give it anywhere near the attention an honest vaccine epidemiologist knows it deserves.

As you already learned, HUB will have a massive impact on results, especially when only 1% of subjects have the outcome you are measuring (autism). But why let details get in the way of a good story?