Computers are increasingly directing elder care decisions, tracking everything from toilet trips to whether or not someone has washed.
Care bots are on the rise, increasingly supplementing or replacing human caretakers. Computers are influencing decisions about elder care in rich countries, driven by a shortage of caregivers, an aging population, and families that want their elderly relatives to stay in their own homes for longer. Over the last few years, a slew of so-called “age tech” companies have sprung up with the goal of monitoring older people, particularly those with cognitive decline. Their products are now making their way into home care, assisted living, and nursing homes.
While the technology has the potential to provide safety for older people and relief for caregivers, some are concerned about its hazards. They raise questions about the systems’ accuracy, as well as privacy, consent, and the kind of future we want for our seniors.
Technology has long been used to help keep seniors safe, from life alarm pendants to the “nanny cams” set up by families worried that their loved ones might be abused. What is new is the inclusion of systems that use data to make judgments – what we now refer to as AI. Increasingly affordable sensors collect many terabytes of data, which computer programs then process to infer actions or trends in daily activities and determine whether something is wrong.
Caregivers can receive warnings about events such as a fall, “wandering behavior,” or a change in the number or duration of bathroom trips, which may indicate a health condition such as a urinary tract infection or dehydration. To monitor spaces, the systems use everything from motion sensors to cameras to lidar (light detection and ranging), a form of laser scanning also used by self-driving cars. Others rely on wearables to track people.
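To illustrate the kind of inference such systems perform, here is a minimal sketch, not taken from any vendor’s product: all names, data, and thresholds are hypothetical. It flags a day whose bathroom-trip count deviates sharply from a rolling baseline, the sort of change that might prompt an alert.

```python
from statistics import mean, stdev

def flag_anomaly(daily_trip_counts, new_count, z_threshold=2.5):
    """Flag a day whose bathroom-trip count deviates sharply from baseline.

    daily_trip_counts: historical counts, one per day (hypothetical data).
    Returns True when the count's z-score exceeds the threshold.
    """
    if len(daily_trip_counts) < 7:
        return False  # not enough history to establish a routine
    baseline = mean(daily_trip_counts)
    spread = stdev(daily_trip_counts)
    if spread == 0:
        # perfectly regular history: any change at all is a deviation
        return new_count != baseline
    z = (new_count - baseline) / spread
    return abs(z) > z_threshold

# A week of typical days, then a sudden spike of the kind that might
# suggest a urinary tract infection or dehydration:
history = [4, 5, 4, 6, 5, 4, 5]
print(flag_anomaly(history, 12))  # → True (alert the caregiver)
print(flag_anomaly(history, 5))   # → False (within the usual routine)
```

Real products combine many such signals and learned models rather than a single threshold, but the principle is the same: establish a personal baseline, then alert on deviations.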
CarePredict, a watch-like device worn on the dominant arm, can infer a person’s likely activity by analyzing patterns in their movements, among other data. If recurring eating gestures are not detected when expected, a caregiver is notified. If the system locates someone in the bathroom and detects a sitting posture, it can deduce that the person “is using the toilet,” according to one of its patents.
SafelyYou is a fall prevention system that continuously monitors each bedroom with a single camera placed unobtrusively high on the wall. Trained on SafelyYou’s ever-expanding database of falls, the technology notifies staff when it detects one. The footage, which is saved only if an event triggers the system, can then be viewed in a control room by paramedics, to help decide whether someone needs to be taken to the hospital, and by designated staff, to analyze what changes could prevent the person from falling again.
1. Bias
Companies must consider bias. AI models are frequently trained on databases of historical subject behavior, which may not accurately reflect all people or situations. Gender and racial biases have been thoroughly documented in other AI-based technology, such as facial recognition, and they may occur in these systems as well.
2. Behavioral changes
Because the systems expect routine, elders may feel pressured to adjust their behavior to avoid triggering needless alarms that worry family members.
3. Conflict of interests
On several occasions, social workers and family members have already been observed gently pressuring elders into accepting such technology, a form of benevolent coercion.
4. Privacy and consent
Another issue is securing informed consent from seniors, and potentially from everyone else affected by the surveillance.