I have a challenge for you: name a top-tier MMA coach who has published a peer-reviewed study. How about a top-tier Brazilian Jiu-Jitsu professor, a top-tier Muay Thai coach, or a wrestling coach?
Yes, I struggled too, and this article will highlight why the scientific process and the peer-review system are only part of the picture when it comes to researching and designing training protocols. Of course, the aim of all coaching is to produce quantifiable and clear attribute gains. Referencing and using studies to form the basis of your program is part of that process, but there is also a good reason why it shouldn’t be the deciding factor in identifying the merits of a method.
Famous strength coach Charles Poliquin highlights the very real problem with relying on research to form your training protocols. The excellent example he used on The Tim Ferriss Show podcast related to ‘cluster set’ training. He highlighted a research paper confirming the utility of cluster set training, published in the Journal of Strength and Conditioning Research in February 2008. The abstract:
“Presents the theoretical and research foundation for the use of the cluster set in periodized training programs and offers examples of practical applications that can be used in the preparation of athletes in a variety of sports.”
However, strength and conditioning coaches have been using the cluster set method since at least the 1960s. As Poliquin points out, if coaches had waited for the method to be confirmed by the research, athletes would have missed a lot of good years of training!
This highlights one of the realities in the search for performance gains: ‘clinical’ work in the field will always be ahead of the research that confirms it. There is a simple reason for this: athletes and coaches simply do not have the time, finances, or subject volumes that a scientific study requires. Most coaches won’t have the time to design and implement a study to confirm or refute their new and innovative training idea. They will simply try it, see if there is a performance increase, no matter how subjective, then adopt or reject it based on that experience. Of course, these hunches are usually based on large volumes of experience and solid scientific foundations.
In the vast majority of cases, the hypothesis that sports scientists work from is founded on results already observed in the field. A study will be approved because athletes and their coaches report an, often subjective, increase in performance from a given method. The research community will then design studies to confirm or refute the claims. The gap between the initial clinical work and the completion of a study that has been through the peer-review process can, however, span decades, as is the case with the cluster set study above.
The work of Wim Hof is another glaring example of this. Now researched and hailed by the scientific community, Wim is working with professors at Harvard to delve into the whys and wherefores of his unique method. Originally, however, he was dismissed as a crackpot or a freak of nature, and his methods were written off as fraudulent or those of an extreme outlier.
He developed and understood his method without a scientific basis to back his claims, yet he continued to reliably demonstrate how his method could positively impact the health of those he taught. It was only when a forward-thinking university tested his abilities in the lab, and the results were undeniable, that the scientific community suddenly took note. Calls to ‘see the research’ were used early on to refute his claims, but once that research was there … the calls stopped. We should remember this tale and wonder how many other methods out there share this story, simply not yet played out.
It is important, as coaches, that we attempt to recognise this gap between methods in the field and those in the lab. But even with this very clear situation staring us in the face, many will maintain the need for evidence of a given method’s utility with calls to see ‘the research’, accepting no other metrics of success. It is a discussion that many top coaches will not even engage in; they know the results in their athletes, and the athletic performances are often proof enough.
However, with all this said, I believe complete ignorance of the research around a given topic is also a mistake. There is, of course, a continual and ongoing process of scientific discovery and exploration occurring in fields relevant to the coach. These may be tangential fields of study that are seemingly unrelated to the area at hand. For instance, I have recently been delving into the effect of nutrition on the development of connective tissue. In order to gain a complete picture, I have been looking at studies from a wide range of disciplines: physiology, chemistry, biomechanics, nutritional science, and even anthropology.
This is where the utility of peer-reviewed data comes into its own: in the amalgamation of material to form an overall picture relevant to the line of inquiry. Often, coaches and fitness professionals will have a ‘hunch’ when looking to increase their athletes’ performance. Their experience will show them that there is a potential gap or area that requires some attention. With the vast array of data already out there, they can delve into the research and see whether, across multiple fields, their hunch holds water, and can then design or adapt their programming accordingly. If no data exists, the coach should probably design their own protocol, with clear metrics and goals in place to assess the results. Of course, this process is not, and never would be, considered a true study, but it should not be avoided nonetheless.
Research is, in fact, vital to the process of creating and adapting programs of physical and mental training. The efforts of research scientists from a wide array of fields should not be underestimated or dismissed by coaches in the field, and their work forms the ultimate confirmation or refutation of a coach’s training hunches. However, with the long time frame of implementation, adoption, and review, it is only part of the larger puzzle and should never be the sole metric by which a method’s utility is judged.
Further reading:
Cluster training paper
Journal of Strength and Conditioning Research