Better Behaviors for Healthcare: #VoiceFirst Drives Incremental Improvements
After a year and a half of growth and monthly podcast recordings, the Voice of Healthcare hosted an unprecedented event at Harvard Medical School: a conference attended by its growing community. The Voice of Healthcare Summit drew an incredible array of talented technologists, developers, practitioners, thought leaders, and innovators pursuing applications and use cases in the evolving #VoiceFirst market for healthcare.
What’s notable about the event is its accessibility, intimate size, and openness to conversation and ideas; in fact, the organizers designed the conference that way. Instead of a labyrinthine, oversold venue run amok, the VoH Summit established itself as an event with real connectivity, real learning, and lasting impact on a growing community committed to changing healthcare for the better. The attendees were top notch, too: from Orbita to PullString to Yext to Teladoc (among many others), they were a veritable collection of industry heavyweights in a growing field.
For IONIA, the connection between screen and voice was an important one to make, especially for driving the actions of users in the healthcare space. In this case, it’s a move beyond the simple information exchange we’ve become accustomed to with screen-based tools for providers & practitioners.
Distillation: the drive to action. We've only begun to successfully design digital tools and environments that reliably, without lapse, drive human action. Voice is different. Voice interfaces can drive action by incorporating authentic, accessible, connected, private, and interactive features that immerse patients in serendipitous, emotionally integrated narratives.
Ultimately, we’ve got to get beyond iteration of information and move toward an integrated instrument of daily living for patients & providers. Otherwise, we’ve built a new toy instead of an instrument of behavior change & efficiency.
Getting beyond iteration
Screen interfaces have evolved fantastically well for exchanging information with the user, but we’ve seen their limitations for driving human action with providers & patients. Screen interfaces require the will of the user to exchange information; voice interfaces can exist with the user in integrated ways screens cannot. The movie Her is a favorite example, including its change in form factor toward hearables.
We’ve got to make the job #VoiceFirst does more attractive than a screen interface or tele-conversation. Replacing the screen’s iterative exchange of information is the problem a user is trying to hire #VoiceFirst tech to solve, and the solution offered must be substantial enough to motivate her to act; nagging her with prompts and notifications only motivates her to seek workarounds (e.g., the dreaded “Flag” from EMR jargon lamented by nearly all physicians). The solution to her “job” must help the patient & provider live a better life and include features paving the way to true care innovation & life integration. EMRs haven't done that for the medical community.
Voice is Different
Storytelling is imperative to healing.
Narratives create pathways for human connection; we evolved sitting around fires telling stories and sharing what we’d learned. It is a uniquely human quality. Because of our unique brain and our ability to imagine and predict consequences in our minds, narratives connect deeply to our emotional selves. Stepping too close to the edge of a cliff, peering into the deep abyss of the ocean, Beowulf, vampires, mythology, Paul Bunyan, Odysseus, dragons, heroes, villains: all imprint within us an emotional response we either emulate or repudiate.
Importantly, the narrative mind develops our socio-emotional capacities: self-confidence, trust, identity & empathy. This attribute of mind also includes language for expression & drives our cognitive curiosities. For driving human action with #VoiceFirst tech, narratives reign supreme for building relatedness, cooperation, self-control, confidence, and interest; all necessary for influencing human behaviors.
Behavioral dynamics matter more with #VoiceFirst
Because we are moving from an epoch of screens to an epoch of voice (or the combination thereof), our opportunity to design tools for providers & patients ought to heavily focus on influencing human action. At IONIA, we live by 4 immutable rules of human behavior: perception is everything, reciprocity really matters, the law of least human effort is real, and creating human habituation of behavior is the holy grail of our work.
Without a design focus on human action, relying only on the user’s will to exchange information, we’ll be creating a new toy instead of an influential instrument of change & integration. We’re not going to make people healthier with a fancy toy, but we will make lasting impacts by connecting with patients emotionally, casting each as the protagonist of her own life story.
For #VoiceFirst tools, as with any other digital design for human action, there’s an incredible array of evidence-based guidelines we can apply within the layers of our #VoiceFirst interfaces. Below are a few simple rules we can incorporate to influence behaviors:
Create emotional integration with narrative instruments
We’ve got to synthesize the above to complete the message: #VoiceFirst tech, narratives of the mind, and tools for behavioral influence. Combining it all creates an emotional connection of the mind with the patient as central figure; ultimately, the patient influences herself by linking her own emotions to the story we create for her. Arguably, the endowment we create for her is stronger, longer lasting, and more behaviorally influential than screens have ever been. If we’re going to create an instrument whose use a patient initiates, doesn’t lapse from, and builds into habits that reduce exacerbations of disease, our #VoiceFirst design should cast the patient as protagonist rather than just an information source.
Instead of the #VoiceFirst instrument being an information exchange machine, it’s becoming an authentic, accessible and connected piece of the patient’s life story; hopefully, making life easier along the way.
Our stories must include relatable characteristics for the patient to connect with (e.g., Bella from Twilight, Batman, Odysseus, Superwoman, Woody from Toy Story all include features audience members can relate to) and should include a sense of serendipity to drive interactivity, using what’s common in nature and in our own life stories: randomness. With intermittent stimulation and randomness, the influence upon the patient (even for provider-side voice tools) is incredibly strong.
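As a minimal sketch of the intermittent-and-random idea above, consider a hypothetical prompt scheduler for a voice skill: most interactions fire no story prompt at all, and when one does fire, its content varies. The class and prompt names here are illustrative assumptions, not part of any real #VoiceFirst platform.

```python
import random


class SerendipitousPrompter:
    """Hypothetical sketch: vary both *whether* and *which* story
    prompt fires, approximating intermittent, random reinforcement."""

    def __init__(self, prompts, probability=0.3, seed=None):
        self.prompts = list(prompts)      # candidate story prompts
        self.probability = probability    # chance any prompt fires at all
        self.rng = random.Random(seed)    # seedable for testing

    def next_prompt(self):
        # Intermittent: most interactions yield no prompt at all.
        if self.rng.random() >= self.probability:
            return None
        # Random: when a prompt does fire, its content varies too.
        return self.rng.choice(self.prompts)


# Illustrative use: only some check-ins surface a narrative nudge.
prompter = SerendipitousPrompter(
    ["How did your walk go today?", "Your story continues tomorrow..."],
    probability=0.3,
    seed=7,
)
nudge = prompter.next_prompt()  # None on most turns, a prompt on some
```

The seedable generator keeps the behavior reproducible in tests while remaining unpredictable to the patient in production, which is the point of the serendipity.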
By synthesizing the narrative of the mind, the #VoiceFirst instrument, and behavioral science tools, we will have created an emotionally integrated instrument capable of fostering pro-social behaviors with a sense of nurturing & care; all essential for healing. In other words, emotions cannot be ignored in the process.
None of this is going to matter without democratization and trust
The solution for emotional connectedness generating human action is “to double down on being human” in our design instead of placing patients into manipulation algorithms like we’ve done with social media interfaces and retail. Our known human desire to connect in emotional and social ways presses upon us the need to design our #VoiceFirst interfaces for patients & providers beyond simple conversational qualities. A singular focus on conversational design is just splitting hairs without an emotionally connected narrative. Endowing patients to the instrument itself using narratives to connect emotionally helps the patient to autonomously influence herself, getting us closer to habituation and health (check out Tellables, for example).
Nonetheless, betraying the trust of patients through a lack of transparency, or by using their data for anything other than their health & wellbeing (and that of others with similar needs), erodes the patient’s willingness to connect with #VoiceFirst instruments, reducing effectiveness for all.
To that end, organizations like Houndify, PullString, and Robin offer platforms for creating #VoiceFirst instruments with privacy and trust as a differentiator. Given the nightmarish events of compromised user data and trust to which we’ve sadly become accustomed, trust & privacy are prevailing features consumers desire and are the future of interactive design for #VoiceFirst in healthcare.