Reign of the Techno-Nanny

Nanny and Baby by Igor Grabar with Actroid-DER

“Generation Smartphone,” the September 2012 cover story of the technology magazine IEEE Spectrum, imagines a near future (2020, to be exact) in which we each have a “SmartPhone 20.0” that takes care of everything for us. The author, Dan Siewiorek, provides a laundry list of things that future smartphones will make possible. Are you the parent of a newborn child and worried about Sudden Infant Death Syndrome? Well, just download the right suite of apps and worry no more. Afraid that little Tommy will get kidnapped on his way home from school? The smartphone will use facial recognition apps to ID that creepy old man trying to coax him into a van and will whisper into Tommy’s ear that he should run to a safe house 0.3 miles south.

Lest you think that these gadgets will only be useful to kids, plenty of apps will fix adults’ lives as well, enabling us to drive better, remember business partners’ names, never mess up a recipe, and automatically quantify all our vital signs. And we may well get the chance to acquire apps like these: Siewiorek links many of his predictions to similar technologies that already exist or are in development.

It’s tempting to cheer on a future where we rely on apps to improve our lives. Not only can they address everyday annoyances like getting lost on the way to the store or forgetting your shopping list, but apps also might be a way of overcoming the cognitive biases that behavioral economics has shown we are prone to, as detailed in such books as Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass Sunstein and Thinking, Fast and Slow by Nobel laureate Daniel Kahneman. It turns out that people aren’t as rational and autonomous as economic theory once assumed. For instance, we experience ego depletion: mental or physical tasks drain our willpower and increase the likelihood that we’ll make poor choices — say, breaking a workout routine or cheating on a test. We are also influenced by the way choices are framed: if desserts are the first thing you encounter in a cafeteria, chances are higher that you’ll put one on your tray than if they came last. If vegetables and fruits are placed at eye level, you’re more likely to grab a serving.

What this means is that living up to our own acknowledged preferences can be much more difficult than we think. Maybe a helping hand is what we need. Mobile technologies seem to offer a way to counter these biases when they threaten to hinder our decisionmaking.

But before we eagerly welcome such technologies, we should consider the consequences of apps that serve as our moral guide, guardian, caretaker, and assistant. To focus entirely on the ways these future smartphones will make our lives better is to ignore the serious problems that may accompany the ubiquity of personal technologies — problems like the phones directing our actions and harming our moral faculties.

Of all things, the 1999 Disney film Smart House can help illustrate the problem of embracing life-enhancing technology uncritically. The movie tells the story of a single dad with two kids, one pre-teen and one teenager. The family strains to juggle all the aspects of normal life — chores, work, school, relationships — until one day they win a competition and move into a new, cutting-edge “smart house.” The home’s software and hardware help the family lead the life they always wanted, taking care of cooking, cleaning, relaxation, and everything else. What’s more, the smart house has advanced artificial intelligence, so it learns the occupants’ preferences, daily routines, and personalities. It’s a cyber-utopian’s dream come true.

That is, until the smart house goes from assistant to techno-nanny. Or, as the movie’s tagline puts it: “When the computer at home has opinions of her own!” Eventually the AI decides that the world outside is too dangerous for the family, and it traps them in its safe, pristine, technologically enhanced environment. The movie ends with a surreal scene in which the AI manifests itself as a hologram of a 1950s housewife. The dad confronts the hologram and has to explain that it is not real and will never be human, causing the AI to experience an existential crisis and subsequent reboot.

The movie illustrates how the feedback loops of a technologically overempowered “smart” device can go absurdly awry, turning what we seem to want and what is seemingly good for us into an inescapable prison. But whereas the filmmakers could only imagine a smart house that morphs into a literal prison, mobile technology is much more subtle: without quite realizing it, we keep ourselves willingly tethered to our prison at all times.

Cyber-utopians laud the coming of the “Internet of Things,” in which all our electronics will be networked together. I mean, how great would it be if my fridge could notice that I’m running low on groceries and communicate this to my phone, which would then tell me to stop by the store? Great, that is, until my fridge decides I should be eating wheatgrass patties and a specific brand of ad-sponsored frozen dinners instead of what I normally purchase. At that point I’m acting on what the techno-nanny presumes my preferences should be, rather than on what I actually want.
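To make the fridge-to-phone handoff concrete, here is a minimal sketch, in Python, of what such a loop might look like. Everything in it is hypothetical: the class names, the thresholds, and especially the “sponsored” variant, which stands in for the moment the suggestion starts reflecting someone else’s preferences rather than yours.

```python
# A hypothetical sketch of the "Internet of Things" grocery loop described
# above: a fridge tracks its inventory and, when something runs low, pushes
# a suggestion to a paired phone. All names and numbers here are invented
# for illustration; real devices would communicate over a network protocol.

class SmartFridge:
    def __init__(self, inventory, thresholds):
        self.inventory = inventory    # current counts, e.g. {"milk": 0}
        self.thresholds = thresholds  # restock when a count falls below this

    def low_items(self):
        """Return the items whose counts have fallen below their thresholds."""
        return [item for item, count in self.inventory.items()
                if count < self.thresholds.get(item, 0)]

class Phone:
    def remind(self, items):
        # The benign version: relay what *you* are running out of.
        print("Stop by the store for: " + ", ".join(items))

    def remind_sponsored(self, items, sponsor="wheatgrass patties"):
        # The techno-nanny version: the same prompt, but its content now
        # reflects an advertiser's preferences instead of the owner's.
        print("Stop by the store for: " + sponsor)

fridge = SmartFridge({"milk": 0, "eggs": 6}, {"milk": 1, "eggs": 4})
phone = Phone()
phone.remind(fridge.low_items())            # what we signed up for
phone.remind_sponsored(fridge.low_items())  # what we may get instead
```

The point of the sketch is that nothing visible changes at the interface: the prompt arrives the same way in both cases, which is precisely what makes the substitution hard to notice.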

This is a problem. In theory, the reason we use apps is so that we can do what we already want, just do it better. This all changes when smartphones begin prompting us to act in ways we never anticipated. Albert Borgmann, a pre-eminent philosopher of technology, worries in Real American Ethics that “we will slide from housekeeping to being kept by our house.”

A smartphone’s suggestions will be far less noticeable than a house that locks out all the dangers of the real world. Unlike the smart house, which suddenly commandeers our physical space, the infiltration of phones into more and more everyday activities will be a slow and less overt process, one that occurs in an intangible conceptual space.

The questions posed by Michael Schrage, a fellow at MIT’s Sloan School, are good ones: “Would you want your iPad or Android to tell you, politely, to shut up or wake up? Should you want that?” But here’s the rub: by the time your phone is prompting you to act in ways you otherwise might not, you won’t be surprised by or hostile toward the suggestions. Once we start down the path of letting phones take care of mundane tasks, inertia takes over, and it becomes easier to just export larger and larger problems to the appropriate app.

In Nudge, Thaler and Sunstein warn us to “never underestimate the power of inertia,” by which they mean our strong bias toward sticking with whatever we are already doing. Inertia is a natural part of our psyche, but technology use can abet it by setting us on a path that is hard to diverge from. By becoming accustomed to listening to our smartphone’s prompts, we enter into a state of autopilot or, as Thaler and Sunstein call it, “mindless choosing,” and end up following the phone’s suggestions across an ever broader scope of decisions. The cognitive pitfalls that lead us to turn to apps in the first place soon enough cause us to fall under the spell of the techno-nanny.

We’ve all heard stories of people driving into rivers because their GPS told them to. We might laugh at them for so blindly following directions, but that’s only because it’s an over-the-top example of what we do every day. We tend to be quite adept at following the technological prompts we’re presented with. Not driving into the river is one thing, but knowing when to defer to the phone is not as simple as resolving to make prudent, conscious decisions about the technology’s limitations. That response rests on an overly rationalistic conception of choice and ignores the fact that inertia is so strong we should expect a significant number of users to be unable to resist the allure of the digital siren. In a recent article in the Huffington Post, Evan Selinger argues that even a cool and seemingly innocuous app like the iPhone’s Siri habituates us to engage with our phone on a regular basis. And it doesn’t stop there: Siri is helping change our social sensibilities by encouraging us to want answers immediately and to want them from her.

What’s at stake is that relying on the techno-nanny to make our decisions and monitor our lives invites us to outsource our moral character. Many of the apps that Siewiorek describes in “Generation Smartphone” are in the business of making ethical decisions for us — like having a cyborg-angel on your shoulder or, as Selinger and Thomas Seager describe in a recent Slate article, your own “Digital Jiminy Cricket.”

By externalizing our ethical decisionmaking to apps and gadgets, we not only risk the practical problem of total dependence — if you lose your phone, you’ll become disoriented and helpless — but the deeper issue of sacrificing moral agency. Philosophers going back to Aristotle have argued that a key aspect of human flourishing is putting in the work to develop a virtuous character. In the Nicomachean Ethics, Aristotle explains that “it is by doing just acts that the just man is produced, and by doing temperate acts the temperate man; without doing these no one would have even a prospect of becoming good.” In other words, we develop a mature moral character — in this case, the ability to make context-sensitive decisions and exercise good judgment — through practice and experience.

Even if we assume that our handy ethics app always tells us to act in a 100% objectively good way, we should still concern ourselves with the effect such deference has on the type of people we become. If we export our moral capacity to technology, our character suffers, because we’re never called on to think through a situation or work out the process of making a good ethical judgment. Instead, we behave as vehicles for the app’s decisions, not as moral agents. With morality, “More of our fundamental humanity hangs in the balance,” as Selinger and Seager argue.

Writing on a similar theme, Will Lowe, a research fellow at the University of Nottingham, discusses in his book chapter “Identifying Your Accompanist” the consequences of artificial, digital companions (like, in this case, smartphones) observing our actions. He draws comparisons between the panopticon — a structure designed to allow a guard to watch all inmates without their knowing whether or not they’re being watched — and the ever-present companion. The social theorist Michel Foucault pointed to panoptic surveillance as a method of instilling discipline in people. If only he could see that there’s no need for intricate architectural design when the panopticon is in our pockets.

Now, to be sure, this isn’t to say that using an ethical decisionmaking app will inevitably lead to robotic actions, amoral people, and enforced discipline. But, combined with the inertia that leads us to defer to the app for everything, the risk is real enough that we should stop and seriously consider the possible harm before proceeding into “Generation Smartphone.”

So, contrary to Smart House, the danger of the techno-nanny won’t manifest in a brash, sudden way. It will, instead, arise through the slow creep of inertia, as we incorporate Siri and other apps into our normal daily routines. The symbiotic relationship between us and our apps will be seamless; we won’t even be aware of our diminishing ethical capabilities. In the worst case, smartphones will let us live without self-awareness and self-control.