COVID-19 Chatbot

Enabling patients to get answers to their COVID-19 questions instantly


When the COVID-19 pandemic hit, people had a lot of questions: What are the symptoms? What should I do if I have been exposed? Where can I get tested?

Call centers and care practices across Penn Medicine bore the brunt of the increased volume of inquiries, resulting in long wait times for patients and an increased burden on frontline clinicians.

Analyzing a database of COVID-19-related telephone encounters and secure messages submitted through our patient portal, we found that many incoming questions could be addressed with standardized answers reflecting the most up-to-date knowledge and guidelines about the virus.

Across many industries, institutions use chatbots as an efficient way to interact with target audiences. Chatbots apply machine learning and natural language processing to infer user intent and reply with appropriate answers.
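To illustrate the general idea of intent mapping (not the implementation used in this project), a minimal classifier can score a user's message against example phrasings for each intent and pick the best match. All intents, phrasings, and the scoring rule below are invented for the sketch:

```python
from collections import Counter

# Hypothetical intents with example phrasings. These are invented for
# illustration, not the actual content of the chatbot described here.
INTENT_EXAMPLES = {
    "symptoms": ["what are the symptoms of covid", "signs of covid infection"],
    "exposure": ["i was exposed to someone with covid what should i do"],
    "testing": ["where can i get tested for covid", "find a covid testing site"],
}

def tokenize(text):
    """Lowercase the text and keep purely alphabetic tokens."""
    return [t for t in text.lower().split() if t.isalpha()]

def classify(utterance):
    """Return the intent whose example phrasings share the most words
    with the utterance, or None when nothing overlaps."""
    words = Counter(tokenize(utterance))
    best_intent, best_score = None, 0
    for intent, phrases in INTENT_EXAMPLES.items():
        vocab = Counter(t for p in phrases for t in tokenize(p))
        score = sum(min(count, vocab[t]) for t, count in words.items())
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify("Where can I get a COVID test?"))  # testing
```

A production chatbot would replace this keyword overlap with a trained NLP model, but the core step is the same: mapping free-text patient language onto a discrete intent that has a vetted, standardized answer.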


We designed a public-facing chatbot in partnership with Google and Verily that was equipped to provide appropriate and timely responses to questions about COVID-19 and triage symptomatic patients to the right level of care at Penn Medicine.

Content for the chatbot came from the COVID-19 FAQ, an interactive app continuously updated by students at the Perelman School of Medicine and validated by experts in infectious disease, occupational medicine, women's health, operations, and oncology.

Two weeks after our initial planning meeting, the chatbot went live. 


The chatbot and patient triage tool we created enabled patients to get answers to their COVID-19 questions instantly, 24 hours a day, seven days a week during the height of the pandemic. It also reduced call center volume and offloaded work from frontline clinicians. In early 2021, when COVID-19 cases in Philadelphia and demand for the chatbot began to decline, we decided to retire the resource.
While building the tool, we learned more about mapping patient language to intent, introducing logic to disambiguate intent, and inserting automated, self-serve tools into workflows to address call volume and clinical workloads. These insights will help inform ongoing algorithmic care efforts at Penn Medicine.
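One of those lessons, disambiguating intent, can be sketched in isolation: when the top two candidate intents score too closely, ask a clarifying question rather than guessing. The scores, margin, and wording below are invented for the example:

```python
def resolve_intent(scores, margin=2):
    """Pick the highest-scoring intent, or ask a clarifying question
    when the runner-up is within `margin` points of the winner.
    Scores, margin, and wording are invented for illustration."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        top_two = " or ".join(intent for intent, _ in ranked[:2])
        return f"Did you want to ask about {top_two}?"
    return ranked[0][0]

print(resolve_intent({"testing": 6, "exposure": 1}))  # testing
print(resolve_intent({"testing": 4, "symptoms": 3}))  # asks a follow-up
```

The follow-up question keeps the bot from routing a patient down the wrong path on a weak signal, at the cost of one extra exchange.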
Team

Vivian Lee, MD, PhD, MBA
Vindell Washington, MD
Maguire Herriman
Elana Meer
Susan Day, MD, MPH
Anna Morgan, MD
Nancy Aitcheson, MD
Erica Weinstein, MD
Anne Norris, MD
Sam Takvorian, MD, MS
Amy Behrman, MD
Glenn Fala, BSEE, MS
Kevin Van Horn
Todd Kirkes
Tim Jones
Aaron Johnson, PhD, ESE
Jake Moore, MBA
Colleen Mallozzi, MBA, BSN, RN, BSIS
John McGreevy, III, MD
Christine VanZandbergen, MPH, MS PA-C
Philynn Hepschmidt, MS.Ed
Susan Regli, PhD
Ann Huffenberger, RN, DBA
Danielle Werner, MHA
Bill Hanson, MD

Innovation leads

Roy Rosin, MBA
Kevin Volpp, MD, PhD
Krisda Chaiyachati, MD, MPH, MSHP
Mohan Balachandran, MA, MS
David Asch, MD, MBA
Kathleen Lee, MD
Vivian Williams

Innovation Methods

High-fidelity learning can come from low-fidelity deployment.
Mini-pilots let you learn by doing, often with a fake back end: you might try a new intervention with ten patients over two days in one clinic, using manual processes for what might ultimately be automated.
Running a "pop-up" clinic or offering a different path to a handful of patients lets you learn what works and what doesn't more quickly. Limiting the scope also helps you gain stakeholder buy-in so you can get your solution out into the world with users and test safely.

Early on, we launched a mini-pilot engaging medical students, faculty, non-clinical staff, and the patient and family advisory council to test the chatbot. We collected more than 500 feedback submissions in three days, which helped us identify gaps in content, wording that needed revision, and opportunities to improve accuracy.