Viewpoint
Abstract Factory—Research Culture Harming Medical Education
Samer Al Hadidi, Hira Mian, Rajshekhar Chakraborty, et al
JAMA. 2026;335(3):213-214. doi:10.1001/jama.2025.23320
The academic landscape of medical training and early career development has long emphasized research participation as a cornerstone of professional advancement. Residents, fellows, medical students, and junior faculty are encouraged—often expected—to engage in scholarly activities to enhance their curriculum vitae (CV), develop critical thinking skills, and contribute to medical knowledge. However, a troubling trend has emerged that threatens to undermine the very foundation of academic medicine: the proliferation of mass abstract submissions that prioritize quantity over quality.
On social media platforms, trainees celebrate extraordinarily high numbers of conference abstracts—numbers difficult to achieve through meaningful scholarly engagement. Although such figures might appear impressive, they raise fundamental questions about the nature, rigor, and authenticity of contemporary medical research. This phenomenon represents a warning sign of a broken system that may harm those it purports to help. Prior research has demonstrated that a resident’s publication record during training is a poor predictor of future scholarly productivity during fellowship, calling into question the value of using publication metrics as selection criteria.1
Abstract acceptance at conferences was once a meaningful metric of scholarly contribution. However, when individuals routinely submit dozens of abstracts to a single meeting, the academic currency becomes inflated, much as money loses value during hyperinflation. This inflation creates cascading problems throughout medical education. When residency program directors review applications featuring candidates with exceptionally lengthy lists of conference presentations, those with fewer high-quality research projects—conducted with appropriate rigor and meaningful contribution—appear comparatively less accomplished. This forces trainees into a numbers game they neither created nor benefit from, compelling them to adopt similar mass-production strategies to remain competitive.
These high-volume submissions often share concerning characteristics. They may have minimal investigator involvement beyond data entry, consist of superficial medical record reviews lacking the depth for meaningful contribution, or represent multiple analyses of the same dataset presented as separate studies. This last practice, known as salami slicing, fragments knowledge and makes it difficult for clinicians to synthesize evidence effectively. Other abstracts emerge from research “factories” in which trainees have only tangential involvement in studies yet receive authorship credit.
Research participation during medical training serves purposes beyond CV enhancement. It teaches critical appraisal skills, scientific methodology, ethical reasoning, and the patience required for rigorous inquiry. These educational objectives are defeated when research becomes a production line rather than a learning experience. Trainees in the quantity race often miss the transformative aspects of research: struggling with difficult methodological questions, discovering unexpected findings, experiencing the humility of peer review, or developing expertise in a focused area. These formative experiences cannot occur when involvement in numerous projects is superficial.
The proliferation of low-quality submissions also impacts conferences. With limited presentation slots and reviewer bandwidth, scientific meetings struggle to maintain standards when inundated with numerous abstracts, many of questionable merit. This dilutes the conference experience because session time is consumed by presentations that contribute minimally to knowledge advancement.
Driven by financial considerations, many conferences now accept virtually all abstract submissions because presenters must pay registration fees. This shift has eliminated critical quality control, transforming abstract acceptance from a meaningful scholarly achievement into a near-guaranteed outcome. When acceptance becomes the default, the metric loses its value yet continues to drive behavior because selection committees have not adjusted their evaluation criteria.
The problem is compounded by what has been termed the shadow economy of medical education.2,3 The adoption of pass/fail preclinical grading and the transition of Step 1 of the US Medical Licensing Examination to pass/fail scoring have eliminated traditional metrics for distinguishing candidates. Without grades, students deprioritize coursework and clinical experiences to pursue research, using abstracts as their primary means of differentiation. This has created a cascade effect: medical students accumulate abstracts to secure residency positions, and residents amass more abstracts for fellowship applications, particularly in competitive specialties. Although implemented with good intentions to reduce test anxiety, these changes have intensified the research race because students feel compelled to demonstrate excellence through the one remaining quantifiable metric: research productivity.
Addressing this crisis requires cultural transformation. The current system is sustained not by individual bad actors, but by misaligned incentives. Academic institutions must lead by example, explicitly valuing quality over quantity in evaluation criteria. Rather than counting publications and presentations, committees should assess the depth of scholarly contribution, rigor of methodology, and impact on the field. Department chairs and program directors must resist the temptation to judge applicants primarily by research quantity, instead implementing structured evaluation of quality metrics: first authorship on peer-reviewed publications and sustained involvement in longitudinal projects.
Senior faculty bear responsibility for modeling appropriate research behavior and mentoring early career physicians toward meaningful scholarship rather than CV padding. This includes having honest conversations about the quantity-quality tension, declining to coauthor papers to which one has not meaningfully contributed, and prioritizing trainee education over productivity metrics. Mentors should help trainees develop focused research programs rather than encourage diffuse involvement in multiple superficial projects. However, faculty face genuine institutional pressures. Mentors who discourage trainees from submitting lower-quality abstracts may be perceived as undermining trainee success, particularly when candidates at other institutions continue mass-production strategies. This creates a prisoner’s dilemma, where faculty who uphold higher standards may disadvantage their mentees unless systemic change occurs. Department chairs must provide explicit support for faculty who prioritize quality mentorship, protecting them from accusations of harming trainee competitiveness.
Academic societies and conferences should implement stricter acceptance criteria and consider limiting the number of abstracts individuals can submit or present within a given time frame. Although such restrictions may seem heavy-handed, they send an important message about values and help level the playing field. Abstract review processes should prioritize methodological rigor, novelty of contribution, and potential impact rather than merely filling presentation slots. Professional societies can lead cultural change by addressing this issue through editorials, town halls, and policy statements.
Trainees and junior faculty themselves must resist the pressure to adopt questionable practices, even when they perceive competitive disadvantage. Speaking honestly about the problem, refusing to participate in superficial projects, and advocating for systemic change represent a form of professionalism that serves the field better than inflated CVs. Early career physicians should seek mentors who value quality scholarship and confidently articulate their commitment to rigorous, meaningful research.
The celebration of extraordinarily high abstract acceptance numbers represents a symptom of deeper dysfunction in academic medicine’s reward structures. Although individual trainees may benefit in the short term from mass-submission strategies, the collective impact damages medical education, undermines research integrity, and creates unrealistic expectations for future generations. We must recommit to the principle that research during medical training exists primarily for education and knowledge advancement. Quality—not quantity—should define academic achievement.
Research should be explicitly recognized as optional rather than a de facto requirement. Although most residency programs do not formally mandate research, program directors often recommend—or trainees perceive they must pursue—research activities to remain competitive for fellowship positions, particularly where fellowship slots are limited. This pressure is most acute for residents seeking competitive fellowships, where research productivity has become an implicit screening criterion. However, one does not need to participate in research to be an excellent clinician. Given severe time constraints, hours devoted to superficial research projects represent time diverted from direct patient care and skill development. Time at the bedside is increasingly limited, and reduced clinical exposure has been associated with increased rates of physician burnout.4
The solution begins with honest acknowledgment of the problem and collective action to prioritize substance over appearances. This requires courage from individuals at all career stages and coordinated policy reform from institutions and professional societies. Most importantly, we must create an environment where trainees feel empowered to question the status quo and make choices aligned with their values.