
Will having longer, healthier lives be worth losing the most basic kinds of privacy? | John Harris


The deal has yet to be approved by the relevant regulators, but Google has got most of the way to buying Fitbit – the maker of wearable devices that track people’s sleep, heart rates, activity levels and more. And all for a trifling $2.1bn (£1.6bn). The upshot is yet another step forward in Google’s quest to break into big tech’s next frontier: healthcare.

Last month, in a Financial Times feature about all this, came a remarkable quote from a partner at Health Advances, a Massachusetts-based tech consulting company. Wearables, he reckoned, would be only one small part of the ensuing story: just as important were – and no guffawing at the back, please – “bedside devices, under-mattress sensors, [and] sensors integrated into toilet seats”. Such inventions, it was explained, can “get even closer to you than your smartphone, and detect conditions such as depression or heart-rate variability”.

Advanced societies are ageing at speed, and the pressures this will place on already stretched public services are obvious. Tech offers some of the answers: huge troves of medical data can power the kind of artificial intelligence whose capabilities, particularly for early diagnosis, far outstrip those of human beings. This could lead in turn to huge computing power monitoring people’s every heartbeat and sneeze. Tech also opens up new revenue possibilities for the providers of care. Trade huge sets of patient data for either services or cash, and the returns could be enormous: some estimates suggest that the NHS might make almost £10bn a year.

In the US, there are already examples of the inevitable linking of tracking to health insurance, whereby premiums are discounted in return for access to personal data gathered by wearables (which companies also use for in-house “corporate wellness programmes”). Here, politicians now enthuse about the application of AI to medicine, and there are regular stories about the blurring of the boundaries between big tech and the NHS.

The health secretary, Matt Hancock, is very pleased about a deal with Amazon that allows the company access to the NHS’s information about “symptoms, causes and definitions of conditions” (not to mention “all related copyrightable content and data and other materials”). It stops short of the use of patient records, but the fact that NHS know-how will be accessible via Echo home assistants underlines where things may be heading: people will get in the habit of sharing health information, while Amazon gathers even more data about them.

In 2017, a partnership between London’s Royal Free NHS Trust and Google’s DeepMind division was found to have breached UK data protection law when 1.6 million people’s medical data was used to develop a system focused on acute kidney injury. Last week, news broke about the Spanish tech and telecoms giant Telefonica – which trades in the UK as O2 – being given access to thousands of people’s medical records by Birmingham and Solihull Mental Health NHS Foundation Trust. The information was anonymised, but Telefonica wants to develop an algorithm that will flag up possible signs of mental illness by closely monitoring people’s everyday behaviour via their phones.

Last week I spoke to a Cambridge-based data specialist, Sam Smith, one of the prime movers in self-styled “privacy NGO” Med Confidential. In the UK, it points out, companies “get to use patient data if they are working with the NHS to provide care, or do research, or anything that is ‘for the purposes of the promotion of health’”. Yes, the NHS has an opt-out policy on such uses of medical records but, as Smith told me, it has barely been publicised, and in any case, it only offers a blanket “yes” or “no”, with no chance to select who or what has access.

Most people will do what human beings usually do: either not hear about the possibility of opting out, or not care – either of which means the blithe surrendering of personal information to endless companies and organisations. Insistences that it is always anonymised (or “de-identified”) do not close down anxieties about data breaches or the fact that AI can trace even supposedly anonymous data back to individuals. And questions remain about the deeply invasive innovations that some companies will use the data to develop. If you want one vision of the future, think about the deal Google has struck with the US provider Ascension, and the fact that it now has access to the medical records of up to 50 million Americans, complete with names and dates of birth. Google insists it will enforce a strict separation between this information and its activities in advertising and consumerism, but that does not allay some people’s deep unease.

Fitbit’s Alta HR device. Fitbit is about to be acquired by Google in a further foray into the healthcare market by the tech giant. Photograph: Mark Lennihan/AP

Besides, some of the most intimate personal information now goes way beyond the organisations that treat us when we are ill. There are well over 300,000 health apps available worldwide. For millions of us, the future may well feature an alarmingly regular experience: reaching for our phones and finding that an algorithm beyond our understanding suddenly deems us at risk of any number of illnesses, ailments and conditions, and is not just marketing drugs, food and lifestyle choices to us, but telling our doctors and employers. One of the central questions of the near future is already obvious: will the price of longer and healthier lives be losing the most basic kinds of privacy?

Fitbits and toilet sensors represent one aspect of the so-called Internet of Things. Aided by 5G technology, tens of billions of devices – cars, cookers, heating systems – will soon be connected. In theory, it will not be very hard to cross-reference, say, people’s alcohol consumption, exercise rates and friendship patterns – not to mention their medical records – and then either nudge or strongly push their lives in supposedly beneficial directions.

The utilitarian arguments for doing so, bound up with managing health spending, advancing medical knowledge and maximising life expectancy, are obvious. But the ethical case against much of this seems equally clear. The only thing that can balance these two sets of imperatives is the state, and so far, most British politicians seem barely aware of this new set of issues, which demand new rules and laws.

Elsewhere, things are a little more advanced. In the US, a handful of legislators in Congress are now regularly proposing legislation on data, privacy and consent. Last year, along with a Republican colleague, the Democratic senator and presidential hopeful Amy Klobuchar co-authored the Protecting Personal Health Data Act. In Europe, the essential logic of the General Data Protection Regulation (GDPR) and the so-called “right to be forgotten” points in a similar direction. The European angle, of course, only highlights yet another way that Brexit and the government’s belief in regulatory “divergence” may well have consequences that have so far barely registered.

The biggest problem, perhaps, is that even the most trailblazing steps towards regulating the relationship between tech and health are just a start, and that governments and legislators will only firmly grasp the problem once the public pushes them. Therein lies one of the central tensions of our time: that more and more innovations are quietly revolutionising the most sensitive aspects of how we live, while most of us look the other way.

John Harris is a Guardian columnist


