How AI and advanced analytics could change healthcare
Though the doctor-patient relationship has been a tenet of medical practice for millennia, many commentators now wonder whether AI will disrupt healthcare the way the web disrupted industries like retail and travel. Is a new age of medicine, where ‘the AI will see you now’, at hand?
On balance, I think that’s unlikely. Aside from the limitations of today’s technology, the simple fact is that disease is part of the human experience, and people who are sick respond best to other people. That said, I do believe AI and other technologies can play a bigger role in care delivery, alongside patients and their doctors. Epilepsy – a lifelong condition affecting over 60 million people worldwide, in which sufferers experience unpredictable and sometimes uncontrollable seizures – is a good example of how the digital health revolution could unfold.
Although epilepsy was first documented around 400 BC, it remains a challenging condition to understand, let alone treat. Doctors must rely on patient-reported data and anecdotal evidence, often supplemented by EEGs, MRIs and other scans, to reach a proper diagnosis, and finding the right treatment can take four to five years.
The way a patient experiences epilepsy is shaped by a vast range of factors, including sleep, diet, mood and any comorbidities. Up to 80% of patients suffer from at least one comorbidity, which limits their treatment options. Yet only 50% of diagnosed patients will have seen a neurologist in the past year, and only two thirds of people living with epilepsy worldwide have access to a treatment plan that works for them. The remainder must live with uncontrolled seizures.
For those burdened by the disease, and especially for the remaining third of patients who can’t currently be treated effectively, life is characterised by unpredictability and a lack of control. They can’t control when a seizure will strike, how people around them will respond when it does, or what the impact on their health might be. In studies we have sponsored, many people with epilepsy tell us they feel marginalised from mainstream life: many find it hard to keep jobs, and they spend large amounts of time in healthcare systems that aren’t oriented towards their needs.
Epilepsy, then, is a complex condition with many unmet patient needs. We think meeting those needs will require medicines and data science working together. Here are a few examples of how this could work in practice:
1. Learning the lessons of the past: there is a wealth of experience buried in medical records and other data sources, but gleaning meaningful insights from it has been challenging. Big data science now makes this kind of data mining a real possibility. We’re working with Georgia Tech in the U.S., for instance, to couple the power of big data and computational science with our established clinical expertise in epilepsy, in an effort to identify the right solution for each patient faster.
2. ‘Streaming’ patient data: we don’t need expensive machinery and hospital beds to track epilepsy symptoms. We can track all the signs we need – heart rate, brain activity, movement and other modalities – using sensors. For example, we’re part of a consortium called SeizeIT, which aims to produce a wearable device that fits around the ear much like a hearing aid. By helping patients and doctors see when seizures occur, such devices should mean fewer surprises and, eventually, less time spent in hospital.
3. Sensors on pills: ingestible event markers (IEMs) are capsules containing printed circuits made from digestible materials. They make it possible to follow medication through patients’ digestive systems, right up to the point where the capsule is broken down and the drug enters the bloodstream. Since factors like diet and eating habits can have a significant impact on medicines’ effectiveness, knowing exactly when they will take effect is extremely important. “Take three times a day with food” is easy enough to follow, but not accurate enough for today’s sophisticated medicines – or for those whose epilepsy defies conventional treatment.
4. Hacking epilepsy: hackathons have been around for some time, and they remain remarkably good at surfacing new ideas that can be quickly translated into patient benefits. One project that came out of our own hackathon is Helpilepsy, an open digital platform that links IoT devices and social features to track epilepsy, empower patients, and generate actionable insights for both patient and physician.
Towards a new ecosystem for digital health
Though we understand epilepsy and patients’ unmet needs better than anyone else, we recognise we don’t have all the answers when it comes to technology solutions. What the initiatives outlined here have in common is their focus on unmet patient needs. And this, for us, is the acid test: if digital innovations can help improve patients’ lives – and, in the case of epilepsy, meet the needs of the remaining third of patients who don’t yet have treatments that work – then they are worth embracing.
Chinese Firm Taigusys Launches Emotion-Recognition System
In a detailed investigative report, the Guardian reported that Chinese tech company Taigusys can now monitor facial expressions. The company claims that it can track fake smiles, chart genuine emotions, and help police curtail security threats. ‘Ordinary people here in China aren’t happy about this technology, but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it’, said Chen Wei, company founder and chairman. ‘There’s always that demand, and we’re here to fulfil it’.
Who Will Use the Data?
The emotion-recognition market is currently projected to be worth US$36bn by 2023, which hints at rapid global adoption. Taigusys counts Huawei, China Mobile, China Unicom and PetroChina among its 36 clients, though none has yet revealed whether it has purchased the new AI. Taigusys is also likely to deploy the technology in Chinese prisons, schools and nursing homes.
It’s not likely that emotion-recognition AI will stay within the realm of private enterprise. President Xi Jinping has promoted ‘positive energy’ among citizens and intimated that negative expressions are no good for a healthy society. If the Chinese central government continues to gain control over private companies’ tech data, national officials could use emotional data for ideological purposes—and target ‘unhappy’ or ‘suspicious’ citizens.
How Does It Work?
Taigusys’s AI tracks facial muscle movements, body motions and other biometric data to infer how a person is feeling, collecting massive amounts of personal data for machine-learning purposes in the process. If an individual displays too much negative emotion, the platform can recommend him or her for what’s termed ‘emotional support’ – and what may end up being much worse.
Can We Really Detect Human Emotions?
This is still up for debate, but many critics say no. Psychologists continue to dispute whether human emotions can be separated across cultures into basic categories such as fear, joy and surprise, or whether something more complex is at work. Many argue that AI emotion-reading technology is not only unethical but also inaccurate, since facial expressions don’t necessarily reflect someone’s true emotional state.
In addition, Taigusys’s facial tracking system could promote racial bias. One of the company’s systems classes faces as ‘yellow, white, or black’; another distinguishes between Uyghur and Han Chinese; and sometimes, the technology picks up certain ethnic features better than others.
Is China the Only One?
Not a chance. Other countries have also tried to decode and use emotions. In 2007, the U.S. Transportation Security Administration (TSA) launched a heavily contested training programme (SPOT) that taught airport personnel to monitor passengers for signs of stress, deception, and fear. But China as a nation rarely discusses bias, and as a result, its AI-based discrimination could be more dangerous.
‘That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is troubling, particularly since there isn’t the kind of critical discourse [about racism and ethnicity in China] that we’re having in the United States’, said Shazeda Ahmed, an AI researcher at New York University (NYU).
Taigusys’s founder counters that its system can help prevent tragic violence, citing a 2020 attack in Guangxi Province in which 41 people were stabbed. Yet top academics remain unconvinced. As Sandra Wachter, associate professor and senior research fellow at the Oxford Internet Institute, said: ‘[If this continues], we will see a clash with fundamental human rights, such as free expression and the right to privacy’.