Recently, I was at a virtual conference and the presenter showed graphics from several journals that I had never heard of. While it has always been the case that doctors have had to “keep up” with developments in their field, that seems to be getting harder to do. As a result, one of the issues that might be addressed by medical educators – both in medical schools and in residency training – is how to help young and aspiring doctors learn to navigate the increasingly broad universe of medical literature.
When I was starting my career, I subscribed to eight journals[1]. I scanned or browsed the tables of contents to decide what to read so that I could keep my own knowledge up to date. I remember trying to read journals for approximately 90 minutes each weekday (roughly 7.5 hours a week). For a more current perspective, one source quoted in “The STM Report” claims that university scientists/faculty members read about 250 articles a year (about 5 articles per week), usually spending about 40 minutes with each article[2].
However, doctors today face a much broader array of information from which to choose, with SciJournal.org listing over 3,800 medical journals worldwide[3]. National Library of Medicine data suggest that the number of indexed citations grew from about 1.1 million in 1970 to about 24.3 million in 2017, an increase of roughly 23 million over that period.
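For readers who want to explore such counts themselves, here is a minimal sketch that queries the National Library of Medicine's public E-utilities interface for the number of PubMed records with a given publication year. The choice of years and the use of the `requests` package are illustrative assumptions; the per-year counts it returns will not necessarily match the cumulative figures quoted above.

```python
# Minimal sketch (not the NLM's official reporting method) of counting
# PubMed citations by publication year via the public E-utilities API.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count_for_year(year: int) -> int:
    """Return the number of PubMed records whose publication date is `year`."""
    params = {
        "db": "pubmed",
        "term": f"{year}[pdat]",  # [pdat] restricts the query to publication date
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

if __name__ == "__main__":
    for year in (1970, 2017):  # the two years discussed above
        print(year, pubmed_count_for_year(year))
```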
One potential obstacle to investing time in reading journals today is the view that articles in many scientific publications may not be as representative of the “truth” as the publishers and sponsors of those journals would have their readers believe. In 2015, a report of a symposium held in Great Britain on “The Reproducibility and Reliability of Biomedical Research” was published[4],[5]. In that report, the participants agreed that “it was difficult to quantify the level of irreproducibility” in scientific reports and thought the problem was likely multi-factorial, with poorly done research and poor scientific writing among the major contributing factors. The participants cite a report from researchers at Bayer HealthCare, which found that results of research done outside the company agreed with the company’s “in house” research in only about a quarter of comparisons. Experiences such as these may discourage “casual”, let alone critical, reading.
Indeed, some experts question how much focused reading is actually done. In 2018, Milton Packer claimed that many people do not even thoroughly read articles before commenting on them[6]. At the Feinberg School of Medicine at Northwestern University, in a series of offerings called Medical Decision Making (MDM), we promote critical reading skills and seek to give our students the tools needed to assess the appropriateness of what they read and which sources might be most useful to them. We no longer necessarily suggest that students “take” or “review” any specific journals, or even, at a minimum, browse their tables of contents. I can remember years ago when we suggested that medical students and young doctors subscribe to one or two “general” journals (JAMA, New Eng J Med, BMJ, or Lancet) and perhaps a general internal medicine journal (Annals of Int Med, JAMA Int Medicine). That apparently is no longer the default recommendation.
While some continue to question whether science in general is reliable, most would say that it is and that, properly done, science should be “self-correcting”. There will be “rogue” scientists who falsify data, like Bezwoda[7] and Wakefield[8], and there have been missteps in research sponsored by drug companies and their sponsored researchers, but true “bad apples” appear to be rare[9].
With the mass of medical information available today, we are left with the question of how one decides which journals are trustworthy and can be relied on as solid sources of general information.
There appear to be several factors to evaluate, including, but not limited to:
- The journal’s reputation, which develops as editors and publishers are held responsible for ensuring “good” peer review and thereby earn the trust of the journal’s readers.
- Peer or Teacher recommendations
- The Galter Library at the Feinberg School of Medicine at Northwestern University lists at least 13 database resources, some unique to Galter, to help with literature searches.
- Other university medical library sites also list several ways to access the medical literature.
- Impact Factor (IF), as defined by various raters. The IF is basically a measure of how widely a journal’s articles are cited by other authors. It is a complex criterion with several purveyors, each publishing its own version; their relative rankings are generally fairly consistent but do vary somewhat. (A worked example of the standard two-year calculation appears just after the table below.)
Table: Some top 10 listings of recommended medical journals, by selected sources. Rows follow one series of recommendations for medical journals; a blank cell means that source did not list a corresponding value for that journal.

| # | Journal | 4 yr. IF[10] | IF (Kolab)[11] | Top 10 Scijournal[12] | InCites |
|---|---------|--------------|----------------|-----------------------|---------|
| 1 | Ca Cancer J for Clinicians | 225.1 | 120.83 | 206.846 | 282.278 |
| 2 | NEJM | 41.7 | 74.699 | 37.909 | 74.699 |
| 3 | The Lancet | 43.2 | 60.392 | 43.377 | 60.390 |
| 4 | JAMA | 15.0 | 51.273 | | 45.540 |
| 5 | J Clinical Oncology | 19.5 | 32.956 | | 32.956 |
| 6 | Nature Medicine | 29.0 | 32.621 | | 36.130 |
| 7 | BMJ | 5.5 | 30.223 | | 30.131 |
| 8 | Cancer Cell Journal | 19.1 | 22.84 | | 26.602 |
| 9 | European Heart J | 12.1 | 22.673 | | 22.673 |
| 10 | Annals of Internal Medicine | 11.1 | 21.317 | | 21.317 |
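Because each purveyor computes and updates its Impact Factor on its own schedule (which helps explain the spread across the columns above), it may help to see the arithmetic behind the most common version. The sketch below is a minimal illustration of the standard two-year calculation; the journal and all the numbers are made up for illustration, not taken from any of the raters cited.

```python
# Minimal sketch of the standard two-year Impact Factor arithmetic.
# All figures below are invented for illustration only.

def two_year_impact_factor(citations_to_prior_two_years: int,
                           citable_items_prior_two_years: int) -> float:
    """e.g., 2020 IF = citations received in 2020 to items published in
    2018-2019, divided by the citable items published in 2018-2019."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2020 to the 500 citable items
# it published in 2018-2019 -> IF of 2.4.
print(round(two_year_impact_factor(1200, 500), 1))
```

Raters may count “citable items” differently or use different citation windows and update dates, which is one reason the columns in the table do not agree.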
- In terms of evaluating sources for continual learning, a physician could choose a select group of five to seven journals that he or she has learned to trust. An electronic table of contents (e-TOC) for most journals is available online at no charge. Scanning the offerings of this select group should allow one to decide which articles might be of interest and worth at least reviewing the abstract. (A small sketch of pulling such a feed automatically appears after this list.)
- In addition, physicians might consult sites that monitor the literature and caution readers about problematic work, such as Retraction Watch[13].
- Unfortunately, it turns out that retraction has little effect on how often a piece of research is cited after it has been discredited[14].
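As a concrete illustration of the e-TOC scanning suggested above, the sketch below pulls article titles from a journal’s RSS table-of-contents feed for quick triage. The feed URL is a placeholder (most major journals publish their own), and the `feedparser` package is an assumption of convenience, not a requirement of any particular journal.

```python
# Minimal sketch: scan a journal's electronic table of contents (e-TOC)
# via its RSS feed and print the article titles for quick triage.
import feedparser  # pip install feedparser

# Placeholder URL -- substitute the RSS/e-TOC feed of a journal you trust.
FEED_URL = "https://example.org/journal/current-issue/rss"

def scan_etoc(url: str, limit: int = 20) -> None:
    """Print up to `limit` article titles from the feed."""
    feed = feedparser.parse(url)
    for entry in feed.entries[:limit]:
        # Print just enough to decide whether the abstract is worth a look.
        print(f"- {entry.get('title', '(no title)')}")

if __name__ == "__main__":
    scan_etoc(FEED_URL)
```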
It is, and will continue to be, difficult for physicians to “keep up” with the ever-expanding universe of available information. However, consulting a few general journals[15] through a process such as e-TOC review should help one “stay up to date” and provide a knowledge base to inform one’s navigation of the world of medical science. We should also hope physicians will develop enough understanding of biostatistics and literature critique to remain appropriately skeptical throughout their careers. This should help them help themselves, their patients, and the public understand and appreciate the scientific process. Physicians can also help the “lay press” understand the process well enough to report on it appropriately[16]. This may not be an easy charge, but it should leave physicians and others better equipped to provide “optimal” care to patients and populations.
Endnotes & References
[1] These included: Canadian Medical Association Journal (I was from Vancouver, BC and was planning on returning to Canada to practice); New Engl J Med; Annals Internal Med; Am J Med (“The Green Journal”); American Heart Journal; Circulation; American Journal of Cardiology; and Chest.
In 2020, Circulation had 13 “offspring” journals.
[2] International Association of Scientific, Technical and Medical Publishers: The STM Report, 2018. https://www.stm-assoc.org/2018_10_04_STM_Report_2018.pdf accessed 12/4/2020.
[3] My initial review of this site suggested closer to 4,000, but the listing contained many duplicates. Removing these duplicates suggests that there are around 3,800 unique journals listed.
[4] https://acmedsci.ac.uk/file-download/38189-56531416e2949.pdf
[5] Horton R. Offline: What is medicine’s 5 sigma? The Lancet 2015; 385: 1380. https://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736%2815%2960696-1.pdf
[6] Packer M. Does anyone read medical journals anymore? MedPage Today, March 2018. https://www.medpagetoday.com/blogs/revolutionandrevelation/72029 accessed 11/22/20.
[7] Bezwoda 1985 breast cancer transplant study fraudulent. Oncology News International, Vol 10, No 6, May 31, 2001.
[8] Wakefield’s article linking MMR vaccine and autism was fraudulent. BMJ 2011; 342: c7452. doi: https://doi.org/10.1136/bmj.c7452
[9] Ready T. Poor reports dog industry–academia liaison. Nature Medicine 2002; 8: 1338–1339.
[10] https://www.scijournal.org/categories/medicine accessed 11/3/20
[11] https://www.kolabtree.com/blog/top-20-medical-journals-for-physicians-to-publish-in/ accessed 11/2/2020 and 11/27/20. The Kolabtree blog does not cite a source for its IF values. Its rankings are clearly different from the Scijournal.org listings but appear similar to the InCites Journal Citation Reports (accessed 11/19/20).
[12] The Scijournal.org top-10 list does not agree with the rankings from the same site shown in the first columns of the table: https://www.scijournal.org/articles/top-10-medicine-journals accessed 11/5/2020. None of the remaining journals in the table above appears on that top-10 list: its #1 is International Review of Neurobiology at 426, and its #10 is The Lancet Infectious Diseases at 21.196.
[13] Retraction Watch: https://retractionwatch.com/ accessed 11/15/2020.
[14] Candal-Pedreira C, Ruano-Ravina A, Fernández E, et al: Does retraction after misconduct have an impact on citations? A pre–post study: BMJ Global Health 2020;5:e003719.
[15] One set of recommendations might be: JAMA, New Engl J Med, and/or The Lancet (for a global perspective), plus one internal medicine journal (Annals of Internal Med or Journal of General Internal Medicine), supplemented by one or two specialty journals.
[16] Recall the comparison cited in the reproducibility symposium report (reference 4): a company’s “in house” results and outside research agreed in only about a quarter of comparisons, i.e., they may be discordant in over 60% of instances.