
With the development of generalized AI, what’s the meaning of a person?

For the next installment of the informal TechCrunch book club, we are reading the fourth story in Ted Chiang’s Exhalation. The goal of this book club is to expand our minds to new worlds, ideas, and vistas, and The Lifecycle of Software Objects doesn’t disappoint. Set in a future where virtual worlds and generalized AI have become commonplace, it’s a fantastic example of speculative fiction that forces us to confront all kinds of fundamental questions.

If you’ve missed the earlier parts of this book club series, you can catch up on the previous installments via the Book Review archive linked in the notes below.

Some questions for the fifth story in the collection, Dacey’s Patent Automatic Nanny, are included below.

And as always, some more notes:

  • Want to join the conversation? Feel free to email me your thoughts at danny+bookclub@techcrunch.com or join some of the discussions on Reddit or Twitter.
  • Follow these informal book club articles here: https://techcrunch.com/book-review/. That page also has a built-in RSS feed for posts exclusively in the Book Review category, which is very low volume.
  • Feel free to add your comments in our TechCrunch comments section below this post.

Thinking about The Lifecycle of Software Objects

This is a much more sprawling story than the earlier entries in Exhalation, with a far more linear plot than the fractal koans we experienced before. That wider canvas offers us an enormous buffet of topics to discuss, from empathy, the meaning of humanity, and the values we hold, to artificial entities, the economics of the digital future, and onward to the futures of romance, sex, children, and death. I have pages of notes from this story, but we can’t cover it all, so I want to zoom in on just two threads that I found particularly deep and rewarding.

One core objective of this story is to really interrogate the meaning of a “person.” Chiang sets up our main character Ana, a zookeeper in a past life, as the mother of a digital entity (a “digient”). That career history gives us a nice framing: through Ana, we can compare humans to animals, and thereby contextualize the personhood debate around the digients throughout the story.

On one hand, humans uniquely value themselves as a species, and even the most dedicated digient owner eventually moves on. One particularly illuminating passage makes this point when a digient’s owner, Robyn, announces that she is pregnant:

“Obviously you’re going to have your hands full,” says Ana, “but what do you think about adopting Lolly?” It would be fascinating to see Lolly’s reaction to a pregnancy.

“No,” says Robyn, shaking her head. “I’m past digients now.”

“You’re past them?”

“I’m ready for the real thing, you know what I mean?”

Carefully, Ana says, “I’m not sure that I do.”

“…Cats, dogs, digients, they’re all just substitutes for what we’re supposed to be caring for.”

This owner has made a clear distinction: there is only one form of entity worth caring for, only one thing that a human can consider a person, and that is another human.

Indeed, throughout this short story, Chiang constantly notes how the tastes, values, norms, rules, and laws of human society are designed almost exclusively with humans in mind. Yet the story never takes a definitive stance, and even Ana remains unconvinced of any single point of view right up to the end. The narrative does, however, offer us one model for thinking this through that I found valuable, and that model centers on experience.

What separates humans from other animals is that we base decisions on our own prior experiences. We collect these experiences and use them to guide our actions toward the outcomes that, again from experience, we have learned to desire. We might want to make money (because experience tells us that money is good), and so we decide to go to college to get the right kind of learning in order to compete effectively in the job market. Essential to that whole decision is lived experience.

Chiang makes this point very clearly with a company called Exponential, which wants to find a “superhuman AI” without putting in the years of work that Ana and the other digient owners have invested in raising their entities. Ana eventually realizes that they can never find what they are looking for:

They want something that responds like a person, but isn’t owed the same obligations as a person, and that’s something that she can’t give them.

No one can give it to them, because it’s an impossibility. The years she spent raising Jax didn’t just make him fun to talk to, didn’t just provide him with hobbies and a sense of humor. They were what gave him all the attributes Exponential is looking for: fluency at navigating the real world, creativity at solving new problems, judgment you could entrust with an important decision. Every quality that made a person more valuable than a database was a product of experience.

She wants to tell them that Blue Gamma was more right than it knew: experience isn’t merely the best teacher; it’s the only teacher … experience is algorithmically incompressible.

Indeed, as the owners start to think about when they might grant their digients the independence to make their own decisions, experience becomes the key watchword. The digients’ capacity to decide for themselves in light of past experience is what defines their personhood.

And so when we think about generalized artificial intelligence and the hope of creating sentient artificial life, I think this litmus test starts to get at the real challenge of what this technology can even be. Can we train an AI purely through algorithms, or will we have to guide these AIs, with their open but empty minds, every step of the way? Chiang discusses this a bit earlier in the story:

They’re blind to a simple truth: complex minds can’t develop on their own. If they could, feral children would be like any others. And minds don’t grow the way weeds do, flourishing under indifferent attention; otherwise all children in orphanages would thrive. For a mind to even approach its full potential, it needs cultivation by other minds.

Indeed, Ana and the other main character, Derek, are forced to keep pushing their digients along, assigning them homework and guiding them to new activities so they continue to accumulate the kind of experience they need to succeed in the world. Why should we assume a generalized AI would be any less lazy than a child today? Why would we expect it to teach itself when humans can’t teach themselves?

Speaking of children, I want to turn to the other thread in this story I found particularly trenchant. There is a parallel to real-life human childrearing that is intrinsic to the whole story. I think that parallel is obvious, and while interesting, most of the conclusions it suggests are equally plain.

What’s more interesting is what affection and bonding signify in a world where entities don’t have to be “real.” Ana was a zookeeper who had deep affection for the animals under her care (“Her eyes still tear up when she thinks about the last time she saw her apes, wishing that she could explain to them why they wouldn’t see her again, hoping that they could adapt to their new homes.”). She vigorously defends her relationship with those animals, as she does with the digients throughout the story.

But why are some entities loved more than others if they are all just code running in the cloud? The main digients featured in the story were literally designed to be attractive to humans. As Blue Gamma scans through thousands of algorithmically generated digients, it carefully selects the ones that will attract owners: “It’s partly been a search for intelligence, but just as much it’s been a search for temperament, the personality that won’t frustrate customers.”

The reason, of course, is obvious: these creatures need attention to thrive, but they won’t get it if they are not adorable and desirable. Derek spends his time animating the digients’ avatars to make them more appealing, crafting spontaneous and serendipitous facial expressions to create a bond between the digients and their human owners.

Yet the story pushes much harder on this theme, in layers that connect with one another. Derek is attracted to Ana throughout the story, even as Ana stays focused on raising her own digient and keeping her relationship with her boyfriend Kyle going. Derek eventually realizes that his obsession with Ana has become untenable, a subtle parallel to Ana’s own obsession with her digients:

He no longer has a wife who might complain about this, and Ana’s boyfriend, Kyle, doesn’t seem to mind, so he can call her up without recrimination. It’s a painful sort of pleasure to spend this much time with her; it might be healthier for him if they interacted less, but he doesn’t want to stop.

Indeed, the story’s strongest thesis may be that this sort of love just isn’t reproducible. Ana wants to join a company called Polytope in order to secure funding to port her digient to a new digital platform. As part of the employment agreement, she is expected to wear a “smart transdermal” called InstantRapport that chemically rewires a human’s reward centers to automatically love a specific individual. Ana’s love for her digient pushes her to consider rewiring her own brain to get the resources she needs.

And yet, the digients eventually face a similar calculus. Marco and Polo, two digients owned by Derek, agree to be copied as sex toys in order to provide funding for the port. Their copies will have their “reward maps” rewired so that they love the customers who purchase them.

The story gives us a haunting reminder that we are ultimately a bunch of neurons responding to stimuli. Some of those stimuli are under our control, but many are not, programmed instead by our experiences without our conscious intervention. And there we see how these two threads become entwined: it is only through experience that we can create affection, and it is precisely affection, and therefore experience, that creates a person in the first place.

Some questions for Dacey’s Patent Automatic Nanny

  • Can machines play a meaningful role in childrearing?
  • Did the scientific method work in this instance?
  • Connecting this story to The Lifecycle of Software Objects, what is Chiang trying to say about childrearing? Are there similarities or differences between these two stories’ conceptions of children and parents?
  • Should we be concerned if a child only wants to talk to a machine? Do we care what entities a human feels comfortable socializing with?