Virtual reality and robot-assisted surgery may seem dramatically different, but they share a core design challenge: the need to maintain a connection between real and virtual worlds.
In minimally invasive robot-assisted surgery, you create small incisions and insert instruments, including a 3D camera, to complete a procedure inside a patient’s body. The surgeon views a live, high-resolution 3D video feed from inside the patient’s body and manipulates the robotic system’s master controls to translate her hand, wrist, and finger movements into precise, real-time movements of the instruments attached to robotic arms.
The idea is that it’s more effective and less invasive than traditional surgery, where you make large incisions and expose internal anatomy. As a surgeon, you have to be able to focus on the procedure while overseeing other people in the OR and remaining alert to the patient’s condition. The connection between the real and the virtual isn’t just desirable. It’s vital.
After working on several projects with the robot-assisted surgery company Intuitive Surgical, we identified four key principles to help manage the tension between virtual and real OR environments. With a few modifications, they can easily be applied to the sort of virtual reality we’re all more accustomed to, like gaming and film:
1. Create a lifeline between real and virtual worlds.
Before you embark on a VR project, ask yourself two questions: 1) What is the intent of the experience? 2) What is the mechanism, or lifeline, that ultimately establishes the connection between real and virtual worlds?
In the case of robot-assisted surgery, the environment is inherently collaborative, and the intent is to maintain broad, persistent awareness of context to ensure the surgeon remains connected to the surgical assistant, anesthesiologist, and others in the OR–all in service of the patient on the table. Opening up an audio channel to preserve the ambient awareness of sounds (beeps and pings) in the OR and to encourage communication between the surgeon and the rest of the team is an effective mechanism to maintain a connection without disrupting the primary task at hand.
In VR gaming, the objective may be to make the experience as immersive as possible (it’s just you at home with your VR gear). Here you may want to limit any lifeline to the real world so that it only emerges at key points in the game–to prevent the user from getting hurt while playing a game, for instance. The mechanism in this case could be wireframe outlines of the physical space of the room or objects in the room to help keep the user safe.
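A safety lifeline like this can be as simple as fading in the boundary wireframe only when the player gets close to a real obstacle. Here is a minimal sketch of that proximity-to-opacity mapping; the distance thresholds are illustrative assumptions, not values from any particular VR system:

```python
def boundary_opacity(distance_m: float,
                     fade_start: float = 1.0,
                     fade_end: float = 0.25) -> float:
    """Map the player's distance to the nearest real-world obstacle
    (in meters) to the opacity of a virtual boundary wireframe.

    Beyond fade_start the boundary stays invisible (opacity 0.0),
    preserving immersion; at fade_end or closer it is fully visible
    (opacity 1.0); in between, opacity ramps linearly.
    """
    if distance_m >= fade_start:
        return 0.0
    if distance_m <= fade_end:
        return 1.0
    return (fade_start - distance_m) / (fade_start - fade_end)
```

Called every frame with the tracked headset position, this keeps the lifeline invisible during normal play and surfaces it exactly at the key moment: when the user risks getting hurt.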
2. Use contrasts to distinguish between the experience and user controls.
The better designers are able to create realistic-looking objects and environments, the more important it is that we appropriately distinguish between what is real and what isn’t. Contrast can help make the distinction.
In the surgical case–an environment that uses a high-fidelity live 3D video feed of the abdominal cavity to immerse the surgeon in a procedure–basic graphic elements are key. Simple shapes, primary colors, and planar surfaces all distinguish the environment (the body, surgical tools) from the UI (controls and settings).
In a high-fidelity, photo-realistic virtual environment–whether this be VR gaming or VR storytelling–contrast between geometrically complex and realistically rendered environments, creatures, or objects and simple wireframes or primitive shapes is an effective way to differentiate the real from the virtual. For instance, a two-dimensional mesh plane or cube can help delineate the confines of a room. Floating planes or spheres rendered in a color that is alien in the VR environment may be a good way to present a UI component. You can also use “proximity and persistence”: changing an environment ahead of the user, in the distance (like in a first-person shooter game) while keeping the UI components nearby and in one location.
3. Be subtle and give space to manage the transition between the virtual and physical worlds.
The more immersive the environment, and the more deeply engaged someone is in what she is doing, the more jarring and intrusive any interruption. Being subtle and giving space for someone to transition between the virtual reality and the physical one will create a better experience. Prompts that slowly emerge or gradually increase in intensity give the person time to adjust.
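The “slowly emerge” behavior amounts to a ramp: instead of appearing at full strength, a prompt’s intensity grows over a short interval. A hedged sketch, assuming a smoothstep ease-in curve and a two-second ramp chosen purely for illustration:

```python
def prompt_intensity(elapsed_s: float, ramp_s: float = 2.0) -> float:
    """Intensity (0.0-1.0) of a prompt that emerges gradually.

    Uses a smoothstep curve so the prompt eases into and out of the
    ramp, avoiding the abrupt onset that would jar an immersed user.
    """
    # Clamp normalized time to [0, 1].
    t = min(max(elapsed_s / ramp_s, 0.0), 1.0)
    # Smoothstep: zero slope at both ends of the ramp.
    return t * t * (3.0 - 2.0 * t)
```

The same curve can drive visual opacity, audio volume, or haptic strength, so every channel of an alert grows in step and gives the person time to adjust.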
In a surgical setting, visual prompts and alerts should appear in the periphery rather than at the center of the surgeon’s field of view, so they are clear without being intrusive. Since the surgeon has a persistent audio feed, changes in that feed–like increased urgency in someone’s voice, or a change in the tone, tempo, or volume of an alert–are other effective ways to manage the transition between virtual and physical worlds.
Walking up to a person immersed in a VR game and tapping him on the shoulder is shocking (often for both people), but much less so if the player is aware that someone else is in the room and approaching. Similarly, an outline of a wall or piece of furniture gradually emerging is less disruptive than if it suddenly appears at the last moment. Spatialized audio could be used to signal where objects and other people are in a space. Light could also be used to indicate another person’s presence (imagine someone opening a door to a brightly lit corridor behind the player).
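A spatialized cue like this reduces to computing, for each sound source, its direction relative to where the player is facing. The sketch below uses a crude constant-power stereo pan as a stand-in; real VR audio engines do full 3D (HRTF) rendering:

```python
import math

def stereo_pan(player_yaw_deg: float, source_bearing_deg: float):
    """Return (left_gain, right_gain) for a sound source.

    player_yaw_deg: direction the player faces (degrees, clockwise).
    source_bearing_deg: bearing from the player to the source.
    Constant-power panning keeps perceived loudness steady as the
    source moves across the stereo field.
    """
    # Relative angle in (-180, 180]: negative means source to the left.
    rel = (source_bearing_deg - player_yaw_deg + 180.0) % 360.0 - 180.0
    # Map to a pan position in [-1, 1], saturating behind the player.
    pan = max(-1.0, min(1.0, rel / 90.0))
    theta = (pan + 1.0) * math.pi / 4.0  # 0 (hard left) .. pi/2 (hard right)
    return math.cos(theta), math.sin(theta)
```

Panning an approaching person’s footsteps this way tells the immersed player where someone is before any shoulder tap lands.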
4. Be transparent to people in the physical space about the context of the person in the virtual one.
In the same way that it is beneficial for the person immersed in virtual reality to be aware of others in his physical space, it is also important for people in the physical world to be aware of the context of the person in virtual reality.
Surgical teams operate in a highly synchronized manner. Being aware of each other’s actions and anticipating next steps are key to safe, efficient procedures. Displaying the surgeon’s view on a monitor (and in many cases more than one) helps the team maintain awareness of the surgeon’s actions. This helps the team distinguish between the parts of the procedure that are routine and the parts that are highly complex and require particular concentration, and helps avoid an ill-timed interruption. When a surgeon requests that an instrument be swapped or replaced, it is natural for her to point at or move the instrument. But her intent isn’t obvious to others in the OR unless what the surgeon is looking at appears on a monitor that everyone can see.
Similarly, showing what someone is experiencing in VR helps bystanders determine if, when, and how to engage. In the home, having the user play in front of a video screen that displays the VR environment is an effective way to do this, and audio can also play a role. Many games are designed with a spectator mode, which allows non-players to immerse themselves in the action and follow along.
It’s a trickier problem with mobile VR–when an external screen or peripheral speakers aren’t available–and especially in a social or public environment. VR is new enough that we have not yet established social norms. It took a while for us to adjust to publicly using headsets for our mobile phones, and at least there we had the distinct advantage of being able to make eye contact.
These challenges become even more complex when we start thinking about augmented reality. A person wearing a VR headset is obviously experiencing something in the virtual world–and likely detached from the real world. In AR, it’s more ambiguous. The user has digital content added on top of the physical world, and it’s entirely invisible to the bystander. How do you correct that asymmetry?
As technology advances and our ability to create convincing virtual experiences improves, navigating the transitions between worlds will pose even greater design challenges. Surgical robotics can’t help solve all of them, but it’s a good place to start.