A few weeks back a friend referred me to an article claiming to have found some benefit for use of a 6C homeopathic solution of Symphytum officinale on bone regrowth after orthopedic surgery. Now, before I delve into this, it's important to understand what homeopathy is. Too many people these days assume that it's a catch all term for anything "natural" or "alternative." It's not. Again, I refer you over to Skeptvet for a more in depth analysis as well as a review of the currently available literature on its effectiveness, but the quick and dirty version is:
- Homeopathy operates on the Law of Similars, which is to say that it claims that "like cures like." For instance, homeopathic sleeping pills contain caffeine.
- Homeopathic dilutions are so vast that the likelihood of a dose containing even a single molecule of the original purported active ingredient is effectively nil. Take, for example, the 6C dilution mentioned above. This is a dilution factor of ten to the negative twelfth: one part ingredient in one trillion parts water, or 0.0000000001%. To put this in perspective, this is the equivalent of taking one milliliter of active ingredient and diluting it in over 260 million gallons of water.
- Homeopaths claim that succussion, or shaking the solution in a series of back and forth and side to side motions, is the key to "energizing" the active ingredient and releasing its magical healing properties which will then be transferred to the water around it.
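The dilution arithmetic in the bullets above is easy to sanity-check. Here is a quick sketch in Python using the figures from the text (the gallon conversion assumes US gallons):

```python
# Sanity-check the 6C dilution arithmetic (figures from the text above).
dilution = 10 ** -12              # 6C = six serial 1:100 dilutions = 10^-12
percent = dilution * 100          # as a percentage: 0.0000000001%
ml_of_water = 1 / dilution        # mL of water needed to dilute 1 mL of ingredient
gallons = ml_of_water / 3785.41   # one US gallon is about 3785.41 mL

print(percent)          # -> 1e-10
print(round(gallons))   # about 264 million gallons
```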
The mechanisms through which homeopathy claims to work are completely scientifically and biologically implausible, and violate everything we currently know about chemistry, physics and the dose-effect curve. Clear? Okay, moving back to the article in question...
I did a cursory glance at the literature on Symphytum officinale, also known as comfrey or "knitbone." There appeared to be a handful of studies demonstrating some effect of the herb on bone growth, so without doing a more thorough literature review I acknowledged that there was a possibility that the active ingredient in this case might actually be beneficial. Even working off of that assumption, however, I had serious reservations about the treatment described and the methodology of the study.
The first concern I had, of course, was the dilution. As mentioned above, the dilution factor used in this study was ten to the negative twelfth. To put this in perspective, the allowable concentration of arsenic in drinking water by EPA standards is ten to the negative eighth. In other words, the major federal body governing public safety for exposure to a potential toxin deems safe a solution whose arsenic is 10,000 times more concentrated than the homeopathic solution used in this study, and we drink a much greater volume of that water on a daily basis, and thus are far more likely to ingest a greater amount of the substance it contains. What does that say about the likelihood of a solution ten thousand times more dilute having any effect on the human body? Not very much. Ah, but remember! According to homeopathy it doesn't matter that the active ingredient is so diluted, because the water it is diluted in has retained its properties through a magic shaking ritual!
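The concentration comparison above works out as follows (a quick sketch; the arsenic limit and dilution factor are the figures quoted in the text):

```python
# Compare the drinking-water arsenic limit to the study's dilution.
arsenic_limit = 10 ** -8   # allowable arsenic concentration, ~10 parts per billion
homeopathic = 10 ** -12    # the study's 6C dilution, one part per trillion

print(arsenic_limit / homeopathic)  # -> 10000.0
```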
Another issue I had was that the blinding did not appear to have been adequate. When we talk of blinding in a scientific study we refer to whether or not the researchers (and in human medicine, the subjects) were aware of which individual was assigned to which treatment group. We blind studies to eliminate potential bias on the part of the researchers, who might otherwise read what they expect or want to see into their evaluation of the results. In this study two measures of bone density were used: radiographic grayscale, a measure of the opacity of bone on the x-ray, and the torque required to remove metal pins inserted into the bone. The torque measurements were taken objectively via an electronic meter and were properly blinded, whereas the more subjective grayscale analysis had no mention of blinding and therefore must be assumed not to have been blinded.
My final two critiques were that the data gathered were scattered and did not indicate strong evidence of benefit, and that the study was published in the journal Homeopathy, which in itself raises a red flag. Whenever a study cannot make it into one of the more reputable and stringently peer-reviewed journals and instead must rely on an echo chamber journal for publication, it's cause for skepticism. Not outright dismissal, but as I said before, a definite red flag.
There was still something bothering me about the statistical analysis of this study. Knowing that alternative medicine studies frequently use inappropriately large margins of error to obtain the results they want, I was skeptical, but not having looked at a t-table or calculated p-values in a very long time I didn't trust my own ability to make an effective stab at picking through the data myself. I also wanted a more experienced eye to take a look at the article and confirm or deny my critiques, so I emailed a trusted DVM who will go unmentioned unless they choose to identify themselves. Their critique of the article below mirrors the problems that I found and builds on them.
Sample size was moderate, but there were no power calculations given, so it is impossible to know if the sample size was adequate to show a difference between groups.
It also isn't clear if the radiographic assessments were made by blinded evaluators. The torque measurements did appear to be properly blinded.
The biggest flaw is the inappropriate statistical analysis. Multiple comparisons were made without any correction. When you compare groups at multiple points in time or for multiple variables, you have to adjust the p-value or you are likely to get apparently significant results that aren't truly significant. This is a common error which leads to a lot of results being mistakenly thought to be significant. One simple correction for this is to divide the selected p-value by the number of comparisons. In Figure 3, it looks like 36 comparisons were made (beginning and end within group for each time point and all three locations, and then end between groups for each time point and all three locations). This would require a p-value < .0014 for significance, which some of the comparisons met and others did not.
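The correction described above (dividing alpha by the number of comparisons, i.e. a Bonferroni correction) can be worked out in a couple of lines; the comparison count of 36 is the estimate from the critique:

```python
# Bonferroni correction: divide the chosen significance level by the
# number of comparisons to get the per-comparison threshold.
alpha = 0.05       # conventional significance level
comparisons = 36   # count estimated from the study's Figure 3

threshold = alpha / comparisons
print(round(threshold, 5))  # -> 0.00139, matching the "< .0014" figure
```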
Finally, one must look at the pattern of results. For the torque data, there was a significant difference in 1 out of 4 comparisons. For the radiographic data, the treatment group had higher values at 14 days for all three locations. It had higher densities at 56 days for two of three locations, but only if you use the uncorrected p-value; otherwise these would not be significant. And the density in the test group was actually lower than initially at 7 days, though not different from the control group. What does this mean? This remedy lowers bone density for the first week, raises it in the second week, and then has no effect after that? Does that make any sense at all or have any rational clinical implications?
The authors tried to explain away the statistically significant decrease in bone density at 7 days, but felt fine accepting that the apparent increase in density at 14 days was a beneficial effect, and that the lack of measurable effects at 28 and 56 days were unimportant. Bias much?
All in all, this is the kind of sloppy and clinically meaningless research that keeps the myth of homeopathy alive.