Mental Health Surveillance: There’s an App for That
I didn't know that my friend was having mental health issues until I strode through the waiting room of Mental Health Services on our university campus. He looked up from his seat, startled and a bit embarrassed to see me in an unfamiliar space, outside of the context of our usual hang-outs. It was the kind of semi-walled room that was only abstractly communal, with empty chairs lining the walls and a looming clock framing the doorway, ticking away the minutes until a counselor appeared to relieve students from their perches. Pamphlets were left out featuring kindly, diverse faces and offering all sorts of connective tissue: phone numbers, websites, and informational leaflets to browse through or to stuff hurriedly into backpacks. Counseling keepsakes for wallflowers.
After I startled my friend, I wondered why the area wasn’t better designed to minimize casual encounters amongst peers with sensitive health issues. The stigma of mental health carries its own special humiliation; people are wary of discussing it, particularly outside of private or confidential settings.
My graduate research centered on Canadians who are turned away from visiting their friends, families, or colleagues in the U.S. when a border guard identifies that they have, or have a history of, mental health issues. The expectant traveler, flushed from doing the shoes-off, belt-off security line-dance to the tune of a conductor waving an electromagnetic wand over their pockets in search of some metallic bauble, discovers, incredulously, that they are a risk to national security, much to their chagrin and sometimes their outrage. They struggle with the dawning realization that their national identity, and their freedom of mobility, are wrapped up in their mental health status. Leaving the security gate in a daze, they fumble with their phone to call the people waiting for their arrival and explain that they can't come, after all. No, they're not sure if, or when, they can reschedule. Later, they face the additional bureaucratic hurdle of arguing with their insurance company for a refund.
Photo CC-BY Medill DC, filtered.
We create all sorts of health information outside the bounds of a confidential relationship with our physician, and the way it's collected and disseminated is hard to track. A Canadian woman I met in the course of my research, Lois Kamenitz, attempted suicide many years ago, and her partner called 911. The police logged the incident in her police record (not a criminal record), and, like most police records, it was routinely entered into the Canadian Police Information Centre (CPIC) national database. U.S. border guards gained access to most of CPIC's database in the aftermath of September 11th, 2001, as Canada and the U.S. ramped up cross-border information-sharing practices. Information about Kamenitz's incident surfaced four years after it took place, on the screen of an alert U.S. customs agent, and she was turned away from entering the U.S. because of her archived mental state, even though she had traveled freely in the intervening four years.
In essence, she was labeled as a possible risk to national security and turned away from her flight because of latent database linkages. Was she less or more of a risk the last time she traveled to California to spend Thanksgiving with her family?
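To make the mechanism concrete, here is a minimal sketch of how a record created for one purpose can resurface, unchanged, in an entirely different context years later. The table, fields, and query are illustrative assumptions about a generic shared-database lookup; they do not describe CPIC or any actual customs system.

```python
# Illustrative sketch only: a generic shared-database lookup, not the actual
# CPIC or U.S. customs systems. Table names, fields, and matching logic are
# assumptions invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE police_records (
           person_id   TEXT,
           recorded_on TEXT,
           category    TEXT,     -- e.g. a mental health call, not a criminal charge
           shared      INTEGER   -- 1 once exported to a shared national database
       )"""
)

# A 911 call from years ago, logged as a routine (non-criminal) police record
# and later exported wholesale into a database that border agencies can query.
conn.execute(
    "INSERT INTO police_records VALUES (?, ?, ?, ?)",
    ("traveler-123", "2006-11-02", "mental health call", 1),
)

def border_screening_flags(person_id: str) -> list:
    """Return every shared record a border lookup would surface for a traveler,
    regardless of how old it is or why it was originally created."""
    return conn.execute(
        "SELECT recorded_on, category FROM police_records "
        "WHERE person_id = ? AND shared = 1",
        (person_id,),
    ).fetchall()

# Years later, the traveler is flagged by a record made for a different purpose.
print(border_screening_flags("traveler-123"))
# -> [('2006-11-02', 'mental health call')]
```

Nothing in the lookup asks whether the record still says anything meaningful about the person standing at the desk; the linkage alone does the work.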
I know others like her. Some have been caught up in the widening security net because they carry their medication on them; others have been identified from old police records; for some, it's because they say their intention is to seek medical treatment. When former Toronto Mayor Rob Ford, trying to reach rehab for his heavily publicized drug addictions, was advised at the border to withdraw his request to enter the U.S. (in order to avoid being formally denied entry, a courtesy not commonly extended to non-mayors), I knew, or strongly suspected, that he was being turned away on the same grounds as Kamenitz. If an individual is rejected on the basis of their mental health, they can opt to be evaluated by a U.S. panel-appointed physician, who charges $250 a pop (and $500/hour if the physician has any follow-up questions) – one of the many costs of mental health issues. If the physician determines that they are eligible to be admitted to the U.S., the physician faxes their recommendation over to the U.S. border guards, and the traveler-patient hastens to the airport to try their luck again, hoping that the fax came through.
When he was Minister of Justice, and before he became the 15th Prime Minister of Canada, Pierre Trudeau famously declared that "there's no place for the state in the bedrooms of the nation." He was referring to the impending decriminalization of homosexuality. When the neighboring state uses your intimate mental health information as a tool of exclusion under the banner of security politics, the walls come down. It's like having the state inspect you as though you were dressed in a gaping hospital gown, exposed. It colors and warps everything you thought you knew about the intersection of health privacy and international mobility. Uncle Sam, in the form of a customs agent, seems to morph into a beguiling Peeping Tom, raising a clinical eye above his military-esque badge, a weapon holstered to his hip. He uses a computer and documents to broker your identity, sitting back on a raised seat behind a tall desk so he can meet your ruffled gaze evenly at eye-level while you stand at attention, shifting your weight from foot to foot to keep from pacing. How do you measure up against your data-double? Did the fax even come through?
Photo CC-BY Sébastien Launay, filtered.
Mental health has long been a categorical reason for denying people entry to the U.S.; from 1917, it was actively used to screen out potential immigrants with mental "impairments," which were commonly evaluated with literacy tests. This discriminatory screening process was used to constrict the flow of immigrants from impoverished, "low-educated," and "undesirable" groups. Particularly at the border, "mental health" as a category has since expanded to encompass a huge range of conditions, including post-partum depression, Alzheimer's, and drug and alcohol addiction.
There is a huge range of issues that fall under the mental health category, and yet the stigma endures beyond any degree of specificity. When does mental health become a national security issue? How do we anticipate the consequences of linking databases and sharing information on vulnerable populations?
A few years ago, a friend of mine recruited me to his start-up. The stated goal: cure depression, partly through an app. The app would track a user’s mood and activities over time so that they could generate a chart of their mood changes, and how they were affected by shifts in their routines. How much they slept, ate, socialized, worked, texted – all of that data could be logged and linked to the user’s mental state. With the app, the user could (hopefully) identify what triggered their depression, or improvements in their mood. They could also generate a chart of their daily, weekly, or monthly experiences and show it to their physician. It’s hard to remember how one’s mood fluctuates over time, or between doctor’s appointments. If a patient was trying out a new medication, this could be a great way to track its efficacy. Alternatively, the behavioral profile of a patient could help the physician refine their diagnosis.
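As a rough sketch of what that kind of logging might look like under the hood, here is a minimal mood-and-activity log; the entry fields and the correlation summary are my own illustrative assumptions, not the start-up's actual design.

```python
# Illustrative sketch of a daily mood-and-activity log. The fields and the
# summary step are assumptions for this example, not the real app's design.
from dataclasses import dataclass
from statistics import correlation  # Python 3.10+

@dataclass
class DailyEntry:
    date: str
    mood: int            # self-reported, e.g. 1 (low) to 10 (high)
    hours_slept: float
    meals: int
    social_contacts: int
    texts_sent: int

log = [
    DailyEntry("2014-03-01", 4, 5.0, 2, 1, 12),
    DailyEntry("2014-03-02", 6, 7.5, 3, 3, 30),
    DailyEntry("2014-03-03", 7, 8.0, 3, 4, 25),
    DailyEntry("2014-03-04", 3, 4.0, 1, 0, 8),
]

def mood_vs(factor: str) -> float:
    """Pearson correlation between a logged behavior and self-reported mood."""
    return correlation([getattr(e, factor) for e in log], [e.mood for e in log])

# The kind of chart-ready summary a patient might bring to an appointment:
for factor in ("hours_slept", "social_contacts", "texts_sent"):
    print(f"mood vs {factor}: {mood_vs(factor):+.2f}")
```

The same tidy log that helps a patient and their physician spot patterns is, of course, also a dense behavioral profile if it ever leaves their hands.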
Photo CC-BY BitterScripts, filtered.
In some ways, this type of technology seemed like a great opportunity to improve treatment regimes. On the other hand, I was wary of anyone keeping digital logs of mental health information on their smartphone. Who's to say your mental health information won't be used against you at some later date, out of context? What if it becomes a factor in an acrimonious custody battle, for instance? What if the platform hosting your health data is hacked, or its contents released publicly? For example, earlier this year several news outlets reported that sexual activity data from Fitbit users was leaking into Google's search results.
I left the start-up at an early stage because I couldn't bring myself to test the app, or to recruit people to it. Not because the company lacked integrity or privacy ethics, but because I felt that the implications of exposure created risks for users that were impossible to future-proof. The experience prompted me to think more broadly about the value of health information outside the scope of treatment regimes. What are the implications of context collapse when it comes to health information privacy? For instance, PatientsLikeMe.com offers forums for members to discuss their health conditions, treatments, and other information and experiences. A representative of Nielsen, a data aggregator company, registered for the site with a fake profile to scrape users' data, presumably to build profiles on members that it could sell to its client base, which includes pharmaceutical companies. [Andrews, Lori. "I Know Who You Are and I Saw What You Did: Social Networks and the Death of Privacy." New York: Free Press, 2011, p. 34] How might health data collected on members be used in ways they did not meaningfully consent to?
We generate huge amounts of health data outside the contexts that carry legal and ethical protections. Our consumer habits feed into all sorts of scoring systems that can predict our health behaviors from our browsing or purchasing histories. FICO, the credit-scoring bureau, developed a Medication Adherence Score that predicts how likely a consumer is to adhere to their prescribed medication regime based on their consumer activities and habits. Carolinas HealthCare is experimenting with predicting where its physicians should intervene in their patients' care based on the purchases those patients make. For example, a patient with asthma who purchases cigarettes could trigger an alert for their physician. Yet the hunger for tracking seems blithely disconnected from the issues of data ownership, autonomy, or accountability.
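A toy version of the kind of rule the Carolinas HealthCare example implies might look like the following; the condition-to-purchase pairings and the alert logic are assumptions for illustration, not anyone's actual scoring model.

```python
# Toy illustration of flagging patients from purchase data, in the spirit of the
# asthma-and-cigarettes example above. The pairings below are invented.
CONFLICTS = {
    "asthma": {"cigarettes"},
    "diabetes": {"sugary drinks"},
}

def purchase_alerts(conditions, purchases):
    """Return physician-facing alerts when a purchase conflicts with a diagnosis."""
    alerts = []
    for condition in conditions:
        for item in sorted(CONFLICTS.get(condition, set()) & set(purchases)):
            alerts.append(f"patient with {condition} purchased {item}")
    return alerts

print(purchase_alerts({"asthma"}, ["bread", "cigarettes", "milk"]))
# -> ['patient with asthma purchased cigarettes']
```

Even a rule this crude shows how far the inference travels from anything the shopper knowingly consented to.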
What happens to all the data trails we leave in our digital wake? What kinds of precautions should we take with our health data? How do we bake a ‘Do No Harm’ Hippocratic ethic into health technologies? And what are the trade-offs between improving care and increasing surveillance?
These are questions I struggle with as I try to navigate the possibilities of new technologies, with the often ill-defined and speculative risks they carry. It only takes one jarring incident to throw seemingly innocuous or neutral technologies into sharper relief.
This article was originally published as part of Model View Culture Quarterly #4, 2014.