PAGING DR. ALEXA: Re-envisioning Primary Care with Artificial Intelligence

Millennials and “Generation Z” face more complex health and development challenges than their parents ever did—and depression leads the way among complicating factors (1).

 
Gen Z, a cohort that has never known a world without social media and smartphones, now makes up a quarter of the U.S. population. Reports from the Centers for Disease Control and Prevention overwhelmingly point to widespread anxiety and depression across demographic groups, and the American Psychological Association’s latest “Stress in America” survey, which focused specifically on Gen Z, found that hot-button issues such as sexual harassment and gun violence are prominent culprits (2). The same survey found that 91 percent of Gen Z adults between ages 18 and 21 reported having experienced at least one physical or emotional symptom due to stress in a one-month period, compared to 74 percent of adults overall.
 
These symptoms come at a high price. The WHO estimates that mental illness costs the global economy around $1 trillion a year, and since 2012, the rate of emergency department (ED) visits for mental health concerns among children and youth has risen dramatically in hospitals across the country (3). Despite the provisional and often chaotic nature of emergency wards, the prevalence of stress-related ED admissions has forced emergency room clinicians with limited mental health training into the role of crisis triage. Not nearly enough pediatric hospitals have mental health specialty services available onsite in their EDs, and even fewer have adequate procedures for following up with patients (4). Patients who require mental health attention can disrupt the routine and flow of emergency wards, and they generally need more resources than medical or trauma patients.
 
With mental disorders topping the disease burden for younger generations, the healthcare industry must reckon with mental health as an increasingly pressing public health crisis, one that will demand an overhaul of chronic disease management in primary care training, reforms in ED operations designed for acute medical illnesses, and the application of emerging health technology. Most primary care and ED clinicians do not routinely screen for mood disorders or suicide risk, and studies have shown that over 80 percent of individuals who attempt suicide are not identified as a danger to themselves by healthcare providers, even when they visited their primary care physicians in the months leading up to the attempt (4-6). Moreover, primary care providers often lack formal psychiatric training despite being the de facto principal, most accessible, and least stigmatized healthcare provider for children and adolescents (4).
 
Like any other chronic disease, mental disorders require a long-term, systematic approach to continuous management. As mental health, sometimes called behavioral health, becomes increasingly accounted for in conversations about holistic care, collaboration among mental health specialists, physicians, and community resources is paramount. Living up to its name, primary care should give center stage to the seamless integration of physical and mental care as a cost-effective and personalized strategy for streamlining mental health care (7).
Primary care is the most logical site for integrated behavioral care given the links between mental illness, persistent substance abuse, and other chronic medical diseases such as hypertension and diabetes (8). Integrated care can start simply, with the addition of mental health professionals to primary care offices or with revisions to medical school curricula that train primary care clinicians in basic problem-focused psychotherapy. An integrated system should also feature sophisticated patient registries so that providers can track mental health care and outcomes alongside basic vitals.
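To make the registry idea concrete, here is a minimal sketch in Python of what one entry might hold; the field names, the screening instruments chosen, and the follow-up cutoff are illustrative assumptions, not a clinical or industry standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Tuple

@dataclass
class RegistryEntry:
    """One patient's row in an integrated-care registry: basic vitals
    tracked alongside behavioral health screenings. Field names and
    cutoffs are illustrative, not a clinical standard."""
    patient_id: str
    visit_date: date
    blood_pressure: Tuple[int, int]      # (systolic, diastolic)
    phq9_score: int                      # PHQ-9 depression screen, 0-27
    gad7_score: int                      # GAD-7 anxiety screen, 0-21
    referrals: List[str] = field(default_factory=list)

# A PHQ-9 score of 10 or more is a common threshold for follow-up.
entry = RegistryEntry("pt-0042", date(2019, 2, 1), (118, 76), 11, 8)
if entry.phq9_score >= 10:
    entry.referrals.append("behavioral health consult")
```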
 
Fortunately for healthcare providers, members of Gen Z are far more attuned to their own mental health than previous generations were, and Millennials, the first cohort to grow up with the Internet and with changing attitudes toward talking about mental health, are also more likely to seek help from mental health professionals (2).

…artificial emotional intelligence capable of interpreting the emotional content of human speech and drawing conclusions about someone’s mood based on their vocal features could facilitate early detection of mental health disorders from the comfort of the home.

As healthcare reform efforts increase demand for improved patient-centered care, providers must understand intergenerational differences in patient engagement. The need for more youth-friendly primary care services requires that we take a serious look at health information technology, since younger generations prefer to engage through integrated health IT tools. A Salesforce report published in 2015 found that 60 percent of Millennials support the use of telehealth and 71 percent would like a mobile health application incorporated into their healthcare (9). Our younger populations are already adopting wearables and mobile health (“mHealth”) applications, and they will continue to shape a more tech-savvy health industry. Even so, Gen Z and Millennials also show a preference for deep, genuine relationships with their providers (10). Primary care must find a way to do both for our youth, now and as they age.
 
But patients should consider how easing the burden on their primary care providers could let them spend more meaningful and productive time with their physicians, rather than the typical 15-minute visit during which only basic questions can be answered at best. The current state of physician-patient relationships is such that patients are impatient and doctors are overwhelmed (11). Both parties are frustrated, and the relationships are transactional and impersonal. Minimizing the long waits for quick, impersonal visits could also promote a shift away from the current reimbursement system, which rewards volume of care over quality of care.
 
Today’s race to curb the growth in healthcare spending and to restructure healthcare business models to streamline care delivery and reduce overall costs has put the spotlight on primary care physicians. Primary care workforce estimates show that nurse practitioners are the largest and fastest-growing group of non-physician primary care clinicians, especially in low-income and rural areas where physician shortages persist and demand for primary care is high (12). But the U.S. is the only industrialized country without a national strategy to improve the infrastructure of primary care delivery, from service distribution to workforce, from technology policy to information systems, and from vaccine administration to financing (13). Unlike much of the industrialized world, where universal healthcare is the standard, the U.S. struggles to extend health services to underserved areas. Sending nurse practitioners cannot be the only way to get there.
 
Enter Amazon’s Alexa. The “smart speaker” was pioneered as the ideal personal assistant: efficient, unbiased, knowledgeable, and ubiquitous. More than 50 million Americans, nearly a quarter of the country, own smart speakers powered by artificial intelligence from Amazon, Apple, Google, Microsoft, or Samsung (14). Children in those households are being raised to think of the smart speaker as just another member of the family, a familiar voice that has sung them to sleep, ordered their parents an Uber, and told them what the weather’s like.
 
Alexa’s most attractive feature is her accessibility. The elderly, the frail, people with debilitating conditions, and those with poor eyesight can have trouble interacting with more traditional digital interfaces like a computer or smartphone. With its ability to track patterns of speech, mood changes, and sleep, dietary, and exercise habits, artificial intelligence could be particularly valuable for prompting patients to make doctor’s appointments when it recognizes that someone’s day-to-day behavior is unusual.
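The underlying idea is simple pattern tracking. Below is a minimal sketch, in Python, of how an assistant might flag an unusual day against a patient’s own recent baseline; the sleep metric, the seven-day minimum, and the two-sigma threshold are all hypothetical choices for illustration, not clinical rules.

```python
from statistics import mean, stdev

def flag_unusual_day(history, today, z_threshold=2.0):
    """Flag today's value if it deviates sharply from recent history.

    history: list of daily measurements (e.g., hours of sleep) from
    prior weeks; today: the latest measurement. The 2-sigma threshold
    is an illustrative choice, not a clinical standard."""
    if len(history) < 7:           # not enough data to judge "usual"
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                 # perfectly constant history
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Example: three weeks of roughly 7-hour nights, then a 3-hour night.
sleep_log = [7.2, 6.8, 7.0, 7.5, 6.9, 7.1, 7.3] * 3
if flag_unusual_day(sleep_log, 3.0):
    print("Noticeable change detected -- suggest a check-in with the doctor.")
```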
 
As an interlocutor, Alexa has a ways to go before she masters the detection of emotions in voices, or what MIT engineering professor Rosalind Picard calls “affective computing” (15). But artificial emotional intelligence capable of interpreting the emotional content of human speech and drawing conclusions about someone’s mood based on their vocal features could facilitate early detection of mental health disorders from the comfort of the home. A number of companies are looking to use voice-based analytics in healthcare. Ellipsis Health, an early-stage San Francisco startup, is one such company hoping to develop AI software for physicians and caregivers that can identify biomarkers of depression and anxiety in speech (16). One day, voice recordings captured by Alexa and processed through Ellipsis Health for mechanical, acoustical, and word-based features pertinent to behavioral health could be measured against clinical metadata to produce assessments that healthcare providers then use to determine the appropriate course of action.
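To give a flavor of what voice-based analytics involves, here is a toy sketch in Python that extracts two crude vocal features and maps them to a follow-up score. The features, weights, and thresholds are invented for illustration and bear no relation to Ellipsis Health’s actual models.

```python
import numpy as np

def vocal_features(samples, rate=16000, frame_ms=25):
    """Compute two crude prosodic features from a mono audio signal:
    the fraction of near-silent frames (long pauses can accompany low
    mood) and the variability of loudness across frames (flattened
    affect tends to reduce it). Both are illustrative, not clinical."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))        # RMS per frame
    pause_ratio = (energy < 0.1 * energy.max()).mean()  # near-silent share
    energy_var = energy.std() / (energy.mean() + 1e-9)  # loudness variability
    return pause_ratio, energy_var

def mood_risk_score(pause_ratio, energy_var):
    """Map features to a 0-1 'flag for follow-up' score with made-up
    weights; a real system would be trained against clinical labels."""
    raw = 2.0 * pause_ratio - 1.5 * energy_var
    return 1 / (1 + np.exp(-raw))                       # squash to (0, 1)

# Example with one second of synthetic audio standing in for a recording.
audio = np.random.uniform(-0.5, 0.5, 16000)
print(mood_risk_score(*vocal_features(audio)))
```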
 
Speaking to Alexa about depression or bulimia would let the patient escape human judgment and the stigma of mental illness or eating disorders; the smart speaker does not judge you the way a person might. In theory, Alexa-the-medical-assistant would not offer diagnoses; after all, she does not hold a medical degree. She would merely offer guidance and act as a readily accessible, ever-present point of contact for professional help. The real doctor would only be a click away. If AI can learn to recognize signs and symptoms of psychological distress in at-risk populations, we could see better allocation of mental health resources and more immediate mental health disease management (17). Personal, AI-powered medical assistants could learn to manage suicide risk, identify where to refer at-risk patients, and point them to specific resources.
 
HealthTap, an app-focused health tech company, has experimented with voice-activated interfaces linking customers to U.S.-licensed physicians (18). Its services let users tap a knowledge base built from over 100,000 physicians to find answers to questions about health and wellbeing. The doctors behind HealthTap peer-review their diagnoses to create statistical models for the system. Not only can AI provide a wealth of knowledge, but it can also be trained to recognize patterns and context in a person’s health records, assess their condition, and send them to the right level of care. A few words such as “dizziness” or “fatigue” would be enough for Alexa to comb through tens of thousands or even millions of patient cases and medical opinions.
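A vastly simplified version of that kind of lookup might work like the following Python sketch, which scores a tiny invented case base by word overlap with the user’s symptoms; a real system like HealthTap’s would use trained statistical models over millions of cases, not a hand-written list.

```python
# Toy triage lookup: score each prior case by how many of the user's
# symptom words it shares, then return the care level of the best match.
# The case base and care levels are invented for illustration.
CASES = [
    ({"dizziness", "fatigue", "thirst"}, "see a doctor this week"),
    ({"fever", "cough", "aches"}, "rest at home; recheck in 48 hours"),
    ({"chest", "pain", "breathless"}, "seek emergency care now"),
]

def triage(words):
    symptoms = set(w.lower() for w in words)
    overlap = lambda case: len(symptoms & case[0])
    best = max(CASES, key=overlap)
    return best[1] if overlap(best) else "not enough information"

print(triage(["dizziness", "fatigue"]))  # -> "see a doctor this week"
```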
 
At present, Alexa has limited knowledge of the medical world. Her built-in search tool is for people too lazy to even type their symptoms into a search bar. The Amazon voice user interface provides access to approximately 1,000 health-related “skills,” programmable tasks available for download that anyone can develop and submit for review. Amazon’s written developer policies for health-related skills forbid the collection of personal medical information and require a disclaimer that the skill does not offer professional opinions (19). Among the skills currently available, many offer health trackers that remind users to take their medication or log the number of bathroom visits in a day. Some suggest treatment options, claiming to be able to distinguish between minor ailments and offer treatment advice for them. A broad survey of these skills quickly reveals inconsistent and poorly compiled research, presented in perplexing jargon with uncertain final results (20). Furthermore, the skills currently available from the Mayo Clinic are meant to be “first-aid”-focused and are not designed with chronic disease treatment in mind.
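Mechanically, a skill is a small program, often an AWS Lambda function, that receives a JSON request from the Alexa service and returns a JSON response containing the speech to be read aloud. Here is a minimal sketch of such a handler in Python; the intent name and the advice text are hypothetical, with the disclaimer Amazon’s policies require built in.

```python
DISCLAIMER = ("This is general information, not a professional "
              "medical opinion. For advice, talk to your doctor.")

def lambda_handler(event, context):
    """Minimal Alexa skill handler in the AWS Lambda style: read the
    intent name out of the request JSON and return speech inside the
    response envelope the Alexa service expects."""
    intent = (event.get("request", {})
                   .get("intent", {})
                   .get("name", "FallbackIntent"))
    if intent == "ColdAdviceIntent":   # hypothetical intent name
        text = "Rest, fluids, and time are the usual advice for a cold. " + DISCLAIMER
    else:
        text = "I can only share basic wellness tips. " + DISCLAIMER
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```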
 
Alexa’s ability to interpret symptoms and pool together web searches for how to treat common colds has already been tested. A Pew Research Center survey from 2013 found that over a third of Americans had self-diagnosed using the internet at least once (21). But looking up possible ailments on WebMD has become a punchline; selecting “fatigue” as a symptom on the site’s Symptom Checker can “diagnose” you with anything from clinical depression to hypothyroidism. Well, which is it?
 
But the biggest issue remaining outside of logistics is whether Americans would be willing to entrust their healthcare to corporate tech giants. The HealthTap app, for example, logs medical information like what medications a person is taking. Is that data safe from falling into the wrong hands? Advertisers, companies, and governments could be waiting to capitalize on health data. In most cases, the U.S. government does not need a search warrant to access the trove of personal information shared voluntarily with smart homes, just as it does not need one to access information voluntarily shared with a bank or an internet provider (22). Ron Gutman, founder and CEO of HealthTap, said in an interview with Maria Bartiromo on Fox Business that the app runs on a health operating system compliant with the Health Insurance Portability and Accountability Act (HIPAA) and that the company is externally audited for privacy. But it is still a hard sell for most Americans that Big Tech genuinely cares for their health and will not eventually harm them with the privileged information it holds.
 
Cynics of the digital and AI revolutions suspect virtual assistants and their corporate manufacturers of eavesdropping, but evidently, privacy concerns have not stymied the flood of smart speakers into our homes. Broadly speaking, few people would lose sleep over the government knowing what they had for dinner or their marital status. How many of us take the time to read through privacy policies and pay attention to our data settings? We’ve grown accustomed to our favorite online retailers offering customized shopping experiences. We eat at nearby recommended restaurants, and we appreciate that it takes just a few clicks to pull up the route home or find where we want to go next.
 
The anxiety over Big Tech exploiting our personal data reached its most recent peak when it was revealed that the consulting firm Cambridge Analytica had harvested the personal data of more than fifty million Facebook users and that the information had gotten into the hands of clients like the Trump campaign. The scariest part was that all of the information was legally accessed; no hacking was involved. (Alexa has dealt with her fair share of unwelcome break-ins: in 2017, security researchers discovered a vulnerability in pre-2017 Echo models that allowed a third party to attach a malicious SD card, turn the Echo into a listening device, and gain access to the owner’s entire Amazon account (23).)

But the biggest issue remaining outside of logistics is whether Americans would be willing to entrust their healthcare to corporate tech giants.

Health records are, of course, a whole different story. A person’s medical history or prescription records are far more sensitive than their car rental history. Our issues with data sharing have to do with our relative lack of power, the sense of helplessness we feel when our personal data disappears into the abyss of the Internet of Things and networks the average person doesn’t understand.
 
With health records, the amount of privacy traded away is greater, but the potential returns are immeasurable. Perhaps our immediate inclination is to recoil from the thought of opening up one more aspect of our lives to technology simply because we are not used to the idea. Every new technological development seems to herald the imminent death of privacy. But we are remarkably inconsistent about the kinds of intrusions we will tolerate and the kinds we will not.
 
That’s not to say we shouldn’t be stringent about Amazon’s collection of health data. Most Americans dislike the NSA’s surveillance programs because of the veiled nature of the information processing; people should take issue with secrecy and with the power imbalance between individuals and the government. But now that there is a collective awareness of how corporations can mishandle personal data, the law should not be playing catch-up as it so often has with innovations in the tech industry. Every generation believes that the threat to privacy is unprecedented in its time, but just as we have reevaluated how we weigh security against privacy, we should think long and hard about the potential good “smart health” can do.
 
As we have learned, we often wield the power to decide what to do with our personal information, for example each time we create a new account or log into a new website or application. We have not lost the freedom to decide what we put out there. The real danger of health data collection is that the information can so easily fall into the hands of malicious parties. Insurance companies are a particularly relevant example. Last year, a ProPublica investigation revealed that insurers and data brokers were predicting health costs based on personal data like race, marital status, how much time you spend watching TV, and even clothing size inferred from online purchases (24). The report warned of a future in which everything we do, from what we buy to what we eat, may help determine how much we pay for health insurance. Women who wear plus-size clothing are considered at risk of depression, data brokers say. Context gleaned from an individual’s home address could offer clues to health risks related to violence or environmental conditions. HIPAA protects medical information, but what qualifies as medical information is ambiguous.

The challenge of overcoming legal and ethical considerations is far more onerous than any technical development.

Electronic health records (EHRs), however, do not require any guesswork on the part of data vultures. EHRs make it terrifyingly easy for insurers to analyze enormous amounts of data and combine it with personal details collected by data brokers (24). It’s not hard to imagine how insurance companies could assess risks posed by individual patients and adjust their policies and pricing plans or revise contracts with providers to protect themselves from losses. The ProPublica investigation suggests that they are already doing these things. If insurers get their hands on recordings of conversations with Dr. Alexa, they could move from inferred data to actual health data.
 
As the health insurance industry comes under increasing pressure to be transparent about the tactics it uses to maximize profits, the U.S. should closely examine Europe’s data protection regulations which cite privacy protection as a fundamental right. Taking a leaf from the EU’s book, American legislation could adopt a patient- or consumer-first point of view. The General Data Protection Regulation forces companies to provide consumers with a variety of ways to control, monitor, check, and delete any of their information, in addition to enforcing pseudonymization, anonymization, and encryption (25). Most importantly, the EU punishes non-compliant companies with an iron hand: infractions can cost up to 20 million euros or up to four percent of the corporation’s annual global turnover, and the government can require third-party evaluators to ensure compliance or mandate thorough audits. The point is, if we really wanted to prevent insurance or pharmaceutical companies from mishandling patient data, we could.
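Pseudonymization, one of the techniques the GDPR names, can be as simple as replacing direct identifiers with keyed hashes so that records remain linkable without revealing whom they describe. Here is a minimal sketch using Python’s standard library; the secret key and truncation length are illustrative.

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-store-me-in-a-vault"  # illustrative only

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always maps to the same token, so records stay
    linkable, but recovering the identity requires the secret key."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"patient": pseudonymize("jane.doe@example.com"),
          "phq9_score": 11}
print(record)   # identifier is now an opaque, linkable token
```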

As the health insurance industry comes under increasing pressure to be transparent about the tactics it uses to maximize profits, the U.S. should closely examine Europe’s data protection regulations which cite privacy protection as a fundamental right.

But the reality is that we have yet to see our family doctors use an AI system to sift through our medical records and act on recommendations from software. The challenge of overcoming legal and ethical considerations is far more onerous than any technical development. Firm regulatory and legal frameworks are needed to govern new technologies built on machine learning. Approving Alexa as a medical assistant is not the same as approving a new vaccine: algorithms evolve and adapt as more data is collected, and any regulation must develop with them. Alexa’s role as an at-home watchdog, however, could be relatively low-stakes for doctors themselves, who would confirm her observations in in-person visits. The real remaining challenge is data privacy, and given the fragmented U.S. healthcare system, regulatory and legal compliance issues will likely have to be addressed by state legislatures.
 
The medical world is still extremely fractured, but technology presents an opportunity to mend it. Legislators, tech companies, and medical professionals must think about deploying human and technological resources differently to help us move away from a one-size-fits-all approach and toward more personalized care. Part of the primary care physician’s responsibility is to manage relationships with other clinicians, and thus it also falls to them to protect patients and our health system from the harms and costs of overdiagnosis and overtreatment. They must meet the needs of the younger generations who are desperately calling for mental health care. Instituting an intermediary like Alexa could vastly improve the primary care physician-patient relationship. If regulators can see the advantages of opening up mental health care to artificial intelligence, they may be encouraged to take a serious look at AI regulations and allay fears over the dangers of exposing patients’ healthcare data.
Simply imagine: 
 
“Alexa, play ‘I’m so Tired’ by the Beatles.” 
 
*beep*
 
“Okay. How are you feeling today? Talking to a friend, listening to music, or taking a walk may help if you aren’t feeling like yourself. Let me know if you’d like me to make an appointment with your doctor.” 
 
 
Works Cited:
  1. Raphael D. (1996). Determinants of health of North American adolescents: Evolving definitions, recent findings, and proposed research agenda. Journal of Adolescent Health, 19(1), 6-16.
  2. American Psychological Association (2018). Stress in America: Generation Z. Stress in America™ Survey.
  3. Grupp-Phelan J, Harman JS, Kelleher KJ. (2007). Trends in mental health and chronic condition visits by children presenting for care at U.S. emergency departments. Public Health Rep, 122(1), 55-61.
  4. Horowitz LM, Ballard ED, Pao M. (2009). Suicide screening in schools, primary care and emergency departments. Current Opinion in Pediatrics, 21(5), 620-627.
  5. Clark D. (1993). Suicidal behavior in childhood and adolescence: recent studies and clinical implications. Psychiatric Annals. 23, 271-283.
  6. Frankenfield DL, Keyl PM, Gielen A, Wissow LS, Werthamer L, Baker SP. (2000). Adolescent patients—healthy or hurting? Missed opportunities to screen for suicide risk in the primary care setting. Arch Pediatr Adolesc Med, 154(2), 162-168.
  7. Goodrich DE, Kilbourne AM, Nord KM, Bauer MS. (2013). Mental Health Collaborative Care and Its Role in Primary Care Settings. Current Psychiatry Reports, 15(8), 383.
  8. Player MS, Peterson LE. (2011). Anxiety Disorders, Hypertension, and Cardiovascular Risk: A Review. The International Journal of Psychiatry in Medicine, 41(4), 365-377.
  9. Mcaskill R. (2015, February 23). Millennials Pushing for the Adoption of Mobile Health Technology.
  10. MacCracken L, Pickens, G, Wells M. (2009). Research Brief: Matching the Market: Using Generational Insights to Attract and Retain Consumers. Thomson Reuters Healthcare.
  11. Linzer M, Bitton A, Tu S, Plews-Ogan M, Horowitz KR, Schwartz MD. (2015). The End of the 15-20 Minute Primary Care Visit. J Gen Intern Med, 30(11), 1584-1586.
  12. Xue Y, Smith JA, Spetz J. (2019). Primary Care Nurse Practitioners and Physicians in Low-Income and Rural Areas, 2010-2016. JAMA: The Journal of the American Medical Association, 321(1), 102-105.
  13. Swanson RC, Mosley H, Sanders D, Egilman D, De Maeseneer J, Chowdhury M, Lanata CF, Dearden K, Bryant M. (2009). Call for global health-systems impact assessments. The Lancet, 374(9688), 433-435.
  14. National Public Radio, Edison Research. (2018). The Smart Audio Report.
  15. Picard RW. (1997). Affective Computing. M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. 321.
  16. Ellipsis Health. (2018). Retrieved from https://www.ellipsishealth.com/
  17. Goldstein CM, Minges KE, Schoffman DE, Cases MG. (2017). Preparing tomorrow’s behavioral medicine scientists and practitioners: a survey of future directions for education and training. J Behav Med, 40(1), 214-226.
  18. Wiggers K. (2018, August 21). HealthTap’s platform uses AI to dispense treatment advice. Retrieved from https://venturebeat.com
  19. Amazon.com, Inc. Policy Testing. Retrieved from the Amazon Alexa website: https://developer.amazon.com
  20. Foley KE, Zhou Y. (2018, July 12). Alexa is a terrible doctor. Retrieved from https://qz.com
  21. Fox S, Duggan M. (2013). Health Online 2013. Pew Internet & American Life Project.
  22. Zwerdling D. (2018, May 24). Your Home is Your… Snitch? Retrieved from The Marshall Project.
  23. Barnes M. (2017, August 1). Alexa, are you listening? [Web log post]. Retrieved February 16, 2019 from MWR InfoSecurity.
  24. Allen M. (2018, July 17). Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates. Retrieved from ProPublica.
  25. European Commission. (2016). Data protection in the EU. Retrieved February 16, 2019 from the European Commission.