Inter-examiner reliability studies of cranial palpation have failed to demonstrate consistent findings across examiners.1,2,3,4

Inter-examiner reliability protocols are, in general, notoriously difficult to design well. As in any measurement study, the instruments used (here, the hands of the examiners) require precise calibration.

The best inter-examiner reliability studies to date were completed by William Johnston, DO, during the late 1970s and early 1980s.5,6,7,8,9,10,11 These studies succeeded in demonstrating consistent findings across examiners because of their careful attention to the following details:

  1. Examiners worked together for extended periods to ensure that they were able to attend to the same palpatory cues. It was never assumed that general training resulted in the ability to perceive equivalent phenomena.
  2. Palpatory procedures were carefully considered for their simplicity. These procedures were tested and refined over months of pre-study preparation. The more complex the palpatory procedure, the more likely the examiners were to focus on dissimilar phenomena.
  3. Palpatory cues (the nature of the observed phenomena) were simplified to ensure reliability of the recorded findings.
  4. Recording methods were simplified to eliminate potential errors in gathering data (a brief illustrative sketch follows this list).
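
To make the value of simplified palpatory cues and recording methods concrete, the sketch below computes Cohen's kappa, a chance-corrected agreement statistic commonly used in inter-examiner reliability work, for two examiners who each record a simple categorical finding on the same series of subjects. The data, the "restricted"/"free" coding, and the choice of kappa are illustrative assumptions only; they are not drawn from the studies cited here.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two examiners' categorical ratings."""
    assert len(ratings_a) == len(ratings_b), "Both examiners must rate the same subjects"
    n = len(ratings_a)

    # Observed agreement: proportion of subjects on which the two examiners agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Agreement expected by chance, from each examiner's marginal frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

    return (observed - expected) / (1 - expected)

# Hypothetical findings from two examiners palpating the same 10 subjects.
examiner_1 = ["restricted", "free", "restricted", "free", "free",
              "restricted", "free", "restricted", "free", "free"]
examiner_2 = ["restricted", "free", "restricted", "restricted", "free",
              "restricted", "free", "free", "free", "free"]

print(f"Cohen's kappa: {cohens_kappa(examiner_1, examiner_2):.2f}")  # 0.58 for this data
```

The point of refinements 3 and 4 is that agreement can only be quantified this cleanly when each examiner's finding reduces to a simple, unambiguous categorical record; the more complex or ill-defined the cue, the harder it becomes even to construct such a record.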

Most inter-examiner reliability studies of cranial palpatory phenomena violate all four of the above procedural refinements.

  1. Examiners with variable skill levels were assumed to be able to perceive the same phenomena; no preparatory calibration or palpatory refinement occurred.
  2. Palpatory procedures were poorly defined and assumed to be simple, without careful consideration of their actual complexity.
  3. Palpatory cues were neither clearly defined nor simplified.
  4. Recording methods tended to be less than adequate.

Of all the cranial inter-examiner reliability studies, only the study performed at Michigan State University by John Upledger, DO, in the late 1970s was successful in demonstrating consistent findings across examiners.12 It is interesting to note that John Upledger not only met regularly with Bill Johnston as part of his teaching faculty but also took part in Dr. Johnston’s early studies on inter-examiner reliability, and was thus familiar with Dr. Johnston’s attention to detailed protocol development.

The failure of cranial palpatory inter-examiner reliability studies is unfortunate. Individuals antagonistic to cranial osteopathic manual medicine have, with apparent bias, emphasized these inter-examiner reliability studies while completely ignoring the full body of supportive research.13,14,15

Summary

Further inter-examiner reliability studies of cranial palpatory phenomena must certainly be developed. This task will be challenging given the complexity of both the palpatory procedures and the nature of the observed phenomena.

References

  1. Wirth-Pattullo V, Hayes KW. Interrater reliability of craniosacral rate measurements and their relationship with subjects’ and examiners’ heart and respiratory rate measurements. Phys Ther. 1994;74:909-920.
  2. Rogers JS, Witt PL, Gross MT, Hacke JD, Genova PA. Simultaneous palpation of the craniosacral rate at the head and feet: interrater reliability and rate comparisons. Phys Ther. 1998;78:1175-1185.
  3. Hanten WP, Dawson DD, Iwata M, Seiden M, Whitten FG, Zink T. Craniosacral rhythm: reliability and relationships with cardiac and respiratory rates. J Orthop Sports Phys Ther. 1998;27:213-218.
  4. Moran RW, Gibbons P. Intraexaminer and interexaminer reliability for palpation of the cranial rhythmic impulse at the head and sacrum. Journal of Manipulative and Physiological Therapeutics. 2001;24(3):183-190.
  5. McConnell DG, Beal MC, Dinnar U, Goodridge JP, Johnston WL, Karni Z, Upledger JE, Blum G. Low agreement of findings in neuromusculoskeletal examinations by a group of osteopathic physicians using their own procedures. Journal of the AOA. 1980;79(7):441-447.
  6. Beal MC, Goodridge JP, Johnston WL, McConnell DG. Interexaminer agreement on patient improvement after negotiated selection of tests. Journal of the AOA. March 1980;79:441-450.
  7. Johnston WL, Elkiss ML, Marino RV, Blum GA. Passive gross motion testing: part II. A study of interexaminer agreement. Journal of the AOA. 1982;81(5):65-69.
  8. Johnston WL, Beal MC, Blum GA, Hendra JL, Neff DR, Rosen ME. Passive gross motion testing: part III. Examiner agreement on selected subjects. Journal of the AOA. January 1982;81(51).
  9. Johnston WL. Interexaminer reliability studies: spanning a gap in medical research. Journal of the AOA. August 1982;81(2):43-53.
  10. Beal MC, Goodridge JP, Johnston WL, McConnell DG. Interexaminer agreement on long-term patient improvement: an exercise in research design. Journal of the AOA. 1982;81(5):91-97.
  11. Johnston WL, Allan BR, Hendra JL, Neff DR, Rosen ME, Sills LD, Thomas SC. Inter-examiner study of palpation in detecting location of spinal segmental dysfunction. Journal of the AOA. July 1983;82(11).
  12. Upledger JE. The reproducibility of craniosacral examination findings: a statistical analysis. Journal of the AOA. 1977;76(8):890-899.
  13. Norton JM. A challenge to the concept of craniosacral interaction. Am Acad Osteopath J. 1996;6(4):15-21.
  14. Hartman SE, Norton JM. Interexaminer reliability and cranial osteopathy. The Scientific Review of Alternative Medicine 2002;6(1):23-34.
  15. Hartman SE, Norton JM. A review of King HH and Lay EM, “Osteopathy in the Cranial Field,” in Foundations for Osteopathic Medicine, 2e. The Scientific Review of Alternative Medicine 2004-05;8(2):24-28.