In the Virtual Reality in Medical Education (VRiME) lab at The Ohio State University College of Medicine, researchers are taking virtual reality learning to a whole new level. By using artificial intelligence and machine learning to create a high-tech experience, they not only eliminate the need for patient actors but also give students a high-fidelity, state-of-the-art system for practicing their clinical skills.

“Our teaching strategies need to go beyond passive training,” says Douglas Danforth, PhD, an academic program director at the Ohio State College of Medicine. “By incorporating artificial intelligence (AI) into virtual reality simulations, we can create interactive scenarios so the student gets as close to a real-world scenario as possible.”

“Advancements in technology have opened up new pathways for students to learn and become the best healthcare providers possible,” says K. Craig Kent, MD, dean of the College of Medicine and vice president for Health Sciences. “Our College of Medicine is at the forefront of this movement with artificial intelligence that allows students to perfect vital clinical skills.”

Few universities are using virtual reality this way, and Ohio State is expertly positioned for the collaboration needed to make this technology successful. One of the lab’s learning modules is designed to teach trainees how to better communicate with patients who have limited English proficiency and may struggle to understand complicated diagnoses or the physician’s instructions. Students, residents, fellows and other physicians can have a freeform, in-depth conversation with the virtual patient to gather their medical history and diagnose their condition.

AI and machine learning, the same technologies that drive Apple’s Siri and Amazon’s Alexa, are the engines that give students a real-world experience.

“The challenge is that we don’t have millions of conversations to train the AI, as Apple or Amazon has,” Dr. Danforth says. “So we have to use additional strategies and approaches to train the machine to understand the natural language of a doctor-patient interaction. This way, the app can intelligently react to any phrase the student asks and have a fluid conversation.”
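
VRiME hasn’t published the details of its system, but one common low-data strategy like the one Dr. Danforth describes is to match a student’s free-form question against a small, curated bank of paraphrased example utterances using sentence embeddings, rather than training a large model from millions of conversations. A minimal sketch of that idea, assuming the open-source sentence-transformers library; the intent names, example phrasings, and patient answers are hypothetical illustrations, not VRiME’s actual data:

```python
# Minimal sketch of low-data intent matching for a virtual-patient dialogue
# system. Assumes the sentence-transformers library; all intents and
# responses below are invented for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A small bank of curated paraphrases stands in for the millions of real
# conversations that consumer assistants are trained on.
INTENTS = {
    "ask_chief_complaint": [
        "What brings you in today?",
        "What seems to be the problem?",
        "Can you tell me why you're here?",
    ],
    "ask_pain_duration": [
        "How long have you had this pain?",
        "When did the pain start?",
    ],
    "ask_medications": [
        "Are you taking any medications?",
        "What medicines do you take at home?",
    ],
}

RESPONSES = {
    "ask_chief_complaint": "I've been having sharp pain in my stomach.",
    "ask_pain_duration": "It started about three days ago.",
    "ask_medications": "Just ibuprofen when the pain gets bad.",
}

# Pre-compute one embedding per example utterance.
examples = [(intent, text) for intent, texts in INTENTS.items() for text in texts]
example_embeddings = model.encode([text for _, text in examples])

def reply(student_utterance: str, threshold: float = 0.5) -> str:
    """Return the virtual patient's answer to a free-form question."""
    query = model.encode(student_utterance)
    scores = util.cos_sim(query, example_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        return "I'm sorry, I don't understand. Can you ask that another way?"
    return RESPONSES[examples[best][0]]

print(reply("So, what's been bothering you lately?"))
```

Because the matching is done in embedding space, a phrasing the designers never wrote down (“what’s been bothering you lately?”) can still land on the right intent, which is what lets the conversation feel freeform despite the limited training data.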

That’s where cross-college collaboration comes into play. To build the right technology, Dr. Danforth worked with colleagues across campus in Ohio State’s departments of Linguistics and Computer Science and Engineering, and at the Advanced Computing Center for the Arts and Design, to create the virtual patients.

“It was great having them just a few blocks away,” Dr. Danforth says. “The team has worked together for nearly seven years. Access to all these brilliant minds in their fields made the process possible.”

Perfecting the functional language capabilities and making the scenarios as natural as possible were key, and Ohio State went further, making sure the application incorporated non-verbal communication as well. Motion-capture suits helped designers record the movements of medical student volunteers, who modeled patients reacting with true-to-life emotions. Those motions were then applied to the virtual characters, says Kellen Maicher, a College of Medicine VR developer and a learning and development consultant for The Ohio State University Comprehensive Cancer Center – James Cancer Hospital and Solove Research Institute (OSUCCC – James).

“We made sure the movements were natural and matched the emotions a patient might feel during an exam,” explains Maicher. “Patients’ body language, too, is so important when performing an exam.” 

Improved learning

“Without this app, students have to practice with one another or with community members trained to act in patient roles,” Dr. Danforth says. “That’s expensive and time-consuming.”

Those methods also limit students’ opportunities for feedback. With the app, students receive a report at the end of the conversation that immediately assesses whether they asked the necessary questions.
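
The article doesn’t describe the report’s format, but conceptually this kind of feedback amounts to comparing the intents a student actually triggered during the encounter against a checklist of required history-taking questions. A sketch under that assumption, with hypothetical checklist items rather than VRiME’s actual rubric:

```python
# Illustrative end-of-encounter report: compare the intents a student
# triggered against a required history-taking checklist. The items below
# are hypothetical examples, not VRiME's actual scoring criteria.
REQUIRED_INTENTS = {
    "ask_chief_complaint": "Elicited the chief complaint",
    "ask_pain_duration": "Established symptom duration",
    "ask_medications": "Reviewed current medications",
    "ask_allergies": "Asked about drug allergies",
}

def encounter_report(triggered_intents: set[str]) -> str:
    """Build an immediate feedback summary from the logged intents."""
    lines = ["End-of-encounter feedback:"]
    for intent, description in REQUIRED_INTENTS.items():
        mark = "[x]" if intent in triggered_intents else "[ ]"
        lines.append(f"  {mark} {description}")
    covered = len(triggered_intents & REQUIRED_INTENTS.keys())
    lines.append(f"Coverage: {covered}/{len(REQUIRED_INTENTS)} required questions asked")
    return "\n".join(lines)

print(encounter_report({"ask_chief_complaint", "ask_medications"}))
```

Because every student question is already classified to an intent for the conversation itself, generating this report is essentially free, which is what makes immediate feedback practical in a way that standardized-patient sessions are not.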

The extra practice is especially important when learning to communicate with patients who have limited English proficiency, like Mr. Martinez, one of the virtual patients in the training module.

Unlike other virtual patients, whose synthetic voices can string together sounds to create infinite dialogue possibilities, Mr. Martinez speaks with the accented voice of a local actor, who recorded answers for many conversation paths. Creating a custom synthetic voice for Mr. Martinez is one of VRiME’s next goals.
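
One plausible way to wire a recorded voice into a system like this is to map each matched intent to a pre-recorded audio clip, falling back to a generic clarification line when the actor never recorded a matching answer. A hypothetical sketch; the file names and paths are invented for illustration:

```python
# Hypothetical mapping from matched intents to an actor's pre-recorded clips.
# A synthetic voice can render any string, but a recorded voice is limited to
# the lines the actor captured, so unmatched turns use a clarification clip.
from pathlib import Path

CLIPS = {
    "ask_chief_complaint": Path("audio/martinez_chief_complaint.wav"),
    "ask_pain_duration": Path("audio/martinez_pain_duration.wav"),
}
FALLBACK = Path("audio/martinez_please_rephrase.wav")

def clip_for(intent: str | None) -> Path:
    """Pick the recorded clip for a matched intent, or the fallback line."""
    return CLIPS.get(intent, FALLBACK)

print(clip_for("ask_chief_complaint"))  # audio/martinez_chief_complaint.wav
print(clip_for("ask_family_history"))   # falls back to the rephrase clip
```

This trade-off, richer and more authentic voice acting in exchange for a fixed set of responses, is also why a custom synthetic voice is a natural next step.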

Soon, the module featuring Mr. Martinez will add extra prompts and scoring sections aimed at training students to know when it’s appropriate to use interpreters, and Mr. Martinez will begin to ask students to explain things in simpler language that he can understand.

“As part of the training, we also introduce techniques such as ‘teach-back,’ which help trainees ensure that non-native English speakers understand their diagnoses and treatment plans,” Dr. Danforth says. “We’re excited about all the things artificial intelligence learning can do to improve the educational experience for our students.”
