What Does “Follow the Science” REALLY Mean?

For some time, people have been encouraged to “Follow the Science”. This was said even more loudly during the recent Coronavirus pandemic. The implication was that Science and scientific knowledge were absolute and relatively fixed, and that the answers to the pandemic should be rooted in scientific information and, therefore, shouldn’t change. However, science and the world around us are shrouded in a degree of unavoidable uncertainty, and in trying to overcome some of that uncertainty we fall back on science. Science isn’t as simple as a static series of pronouncements from divine providence. Unfortunately, there appears to be no universal definition of science. Rather, science refers to a method of thinking used to solve problems of general interest. From today’s perspective, there are several defined “branches” of science[1], some more refined than others. Scientists often tend to compete, but this competition may not always be constructive. We often think of science as a series of experiments on the topic of interest[2], but science is probably best considered a dynamic process that uses the “Scientific Method” of making observations about the world around us. It is never static and, as such, we may find contradictions in thoughts (data) that must be explored and, hopefully, resolved.

The scientific method is variously defined with multiple levels of complexity. In a simple form, the scientific method consists of a series of five steps:

    1. Defining a problem to be solved after a series of observations.
    2. Forming an hypothesis about a solution to the problem.
    3. Testing the hypothesis by making more observations.
    4. Reporting and evaluating the results of the subsequent observations, allowing others to critique the hypothesis and the methods used to test it.
    5. At this point, the new observations may appear to support the hypothesis, but sometimes they will seem to refute it[3].
      • This will necessitate a reformulation of the problem, with a refined hypothesis which will then need subsequent testing.
        • This step is often overlooked as scientists hurry to find new hypotheses to test. In many instances, I might caution scientists to slow down and review how they got to where they are. As new data or information come up, some dogma from one set of observations may be challenged[4]. It might even be argued that a second well-done study confirming an original set of observations may be as important as the first study itself[5].

Because of the reiterative nature of observations (often called “empiric”), our understanding of a natural phenomenon is in constant flux. New data may change our understanding of our world as we see it. The way we look at phenomena such as the coronavirus is a prime example. There had been a general understanding of the way that a virus might spread through the world[6]. However, when the Coronavirus appeared in late 2019, it was more virulent and contagious than had been predicted by previous concepts. In addition, in early 2020, the understanding of masking in the western world was incomplete and conflicting[7]. Thus, there were no generally agreed upon scientific bases for some public health recommendations. The importance of isolating people with disease, and of other methods of preventing transmission such as social distancing, was also incompletely understood[8]. Further data had to be acquired. As more empiric evidence became available and concepts changed, recommendations based on science changed. The fact that there were changes in our understanding of the illness led many to doubt the validity of new or conflicting recommendations. This uncertainty often led to confusion among a public that wanted one uniform, permanent answer.

Unfortunately, science isn’t always intuitive. As new information is generated, conclusions based on prior data may have to be revised or even discarded. Because many of our intuitions are based on prior experiences, they must be constantly reviewed and occasionally questioned and revised. If they are not, we will make mistakes in our interpretations of our surroundings.

There are errors that may impede the orderly process of the scientific method. Fortunately, most of these errors are not deliberate falsifications; more often they reflect preconceived notions on the part of the investigators, the readers of the literature, or both. When deliberate falsification of data or interpretation is identified, the reported observations are usually retracted by the editors of the journal(s) that published them. More often, improvements in techniques, or refinements of previously used techniques of observation or experimentation[9], may lead to conclusions that seem to contradict prior scientific statements. This happened frequently during the Covid pandemic.

Among other things, new ways of observing phenomena might be clouded by prior concepts or flaws in mental processes. Such impediments to easy interpretation are sometimes referred to as bias(es). Biases can also arise from the way data are reported[10]. We should stay alert to these potential pitfalls when looking at changes in how we discuss our “facts” (which may change fairly regularly).

Yet another factor that may obscure our understanding of new observations is the way that some terms are reported. One such term is “significant”. This term actually has two faces. It may relate to a statistical test that tells us the likelihood that an observation is due to random variation (chance). This is often referred to as a “p-value”. We generally reject the possibility that chance caused the observation if the probability is less than 5% (p<0.05). This still allows a 1 in 20 likelihood that the observation is not “real” and is simply a result of chance. The second face of “significant” relates to the meaningfulness of the results of a study. In a series of comparisons, a statistical test may say that there is a low likelihood that chance is the cause of small differences observed, yet these small differences might not be of any real importance[11].
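To see how a tiny difference can clear the p<0.05 bar when samples are large, here is a minimal Python sketch. It uses the 0.1-inch height difference from end note [11]; the group size and the 3-inch standard deviation are assumed purely for illustration.

```python
import math

# Hypothetical numbers: two large groups whose mean heights differ by
# only 0.1 inches (see end note [11]).  The sample size and standard
# deviation are assumptions made for this illustration.
n = 10_000        # observations in each group (assumed)
mean_diff = 0.1   # difference in mean height, inches (assumed)
sd = 3.0          # standard deviation of height, inches (assumed)

# Standard error of the difference between two independent means
se = math.sqrt(sd**2 / n + sd**2 / n)

# z-statistic and two-sided p-value from the normal approximation
z = mean_diff / se
p_value = math.erfc(z / math.sqrt(2))

print(f"z = {z:.2f}, p = {p_value:.3f}")  # p falls below 0.05
```

The test declares the 0.1-inch gap “significant”, but it says nothing about whether such a gap matters; that judgment belongs to the second face of “significance”.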

The way that data are presented can also lead to incomplete understanding of that information. If data are reported as relative proportions when the absolute difference is small, our understanding of the phenomenon may be skewed. For example, a medical procedure may reduce mortality from 5% to 4%. These results could be reported as a 20% reduction in mortality[12], whereas if they were reported as an improvement in survival from 95% to 96%, they would be nowhere near as dramatic, or compelling.
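The arithmetic behind these two framings is simple enough to check directly. This short Python sketch uses the 5%-to-4% mortality figures from the paragraph above:

```python
# Mortality figures from the example above
old_mortality = 0.05   # 5% mortality without the procedure
new_mortality = 0.04   # 4% mortality with the procedure

# Absolute reduction: 1 percentage point
absolute_reduction = old_mortality - new_mortality

# Relative reduction: the headline "20% reduction in mortality"
relative_reduction = absolute_reduction / old_mortality

# The same data framed as survival: 95% vs 96%
old_survival = 1 - old_mortality
new_survival = 1 - new_mortality

print(f"absolute: {absolute_reduction:.1%} points, "
      f"relative: {relative_reduction:.0%}, "
      f"survival: {old_survival:.0%} -> {new_survival:.0%}")
```

Both framings are arithmetically correct; which one gets quoted changes how impressive the result sounds.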

Science isn’t cut and dried. If the method of making observations is sound and not biased, each experiment that is performed should bring us closer to the truth. However, even as we approach truth, there are many ways the issue can be obfuscated or confused. We must not be swayed by biases in the way the scientific community approaches its task, or by the way our own biases influence our acceptance of new information. Some expressions that may be confusing are the two types of “significance”, or the use of ratios that may exaggerate the real meaning of an observation. We must always be aware that, in trying to apply “the science”, our concepts should be subject to an understanding of what is being proposed, and we should try to ensure that the input into a scientific statement is being honestly presented. Because science is not fixed, we should be patient with her and understand why she might change her mind from time to time.


End Notes:
[1] Some of the branches of science that are often referred to include Physics, Engineering, Chemistry, Biology, and Mathematics, among others. From the biomedical perspective, there are subheadings of Biophysics, Bioengineering, Biochemistry, Molecular Biology, and many others. The branch of science known as Epidemiology (“the study of the distribution and determinants of health”) is important in helping understand what may be happening in populations.

[2] Claude Bernard introduced the concept of “Experimental Medicine” in 1865.

[3] “The great tragedy of science (is) the slaying of a beautiful hypothesis by an ugly fact”: attributed to Thomas Huxley by Shabudin H. Rahimtoola.

[4] See prior post on dogma: https://winslowmedical.com/archives/191

[5] The confirmation of the results of the Framingham study by the People’s Gas study in Chicago is an example of two studies confirming an hypothesis of risk factors.

[6] https://www.npr.org/2020/03/20/819186528/what-last-years-government-simulation-predicted-about-todays-pandemic/

[7] In Japan (and China), masking was a part of the culture to help people tolerate air contamination and, hopefully, retard the transmission of potentially airborne illnesses.

[8] This was in spite of our experience in the 1918–1919 “Spanish Flu” pandemic (see post of May 2020, “Pandemics are not new…”).

[9] Examples of improvements in techniques include: improvements in the lenses and viewing capabilities of microscopes, from light microscopes to electron microscopes; and refinements of X-ray imaging, from plain films to Computed Axial Tomography scanning (CAT scans) to Magnetic Resonance Imaging (MRI).

[10] Often, well-done observations that do not show “positive” results are not shared (“Publication Bias”). On the other hand, readers may not be open to new ideas because of any one of many potential biases, or may be too open to observations that seem to go along with their preconceived notions (“Confirmation Bias”).

[11] For example, the difference in height between two medical school classes might be 0.1 inches, but because of the number of observations this might still be statistically significantly different (p<0.05). We really wouldn’t care about the small but statistically different heights of the two classes.

[12] An absolute reduction of 1 percentage point from a baseline of 5% is a 20% relative reduction.

About Ted

Edward B. J. (Ted) Winslow received an MD from the Faculty of Medicine of the University of British Columbia in Vancouver and an MBA from the Kellogg School of Northwestern University. Before getting his MBA, Ted practiced Cardiology and Internal Medicine at several Chicago institutions (University of Illinois, Veterans West Side, Illinois Masonic, Northwestern Memorial, and Evanston Northwestern Healthcare, one at a time). As a practicing physician, Ted has had experience in managing a medical practice and implementing the adoption of electronic medical record systems.