Estrangement from Ourselves
Technology has convinced us that community is not worth conflict.
by Emily Carmichael
Emily Carmichael is a writer and editor who has appeared in The Washington Post, The Believer, The Connecticut Examiner, NOLA.com, Outside Magazine, and elsewhere. She was previously the managing editor of Fifty Grande.
A few years ago, I was at one of those Brooklyn warehouse parties when I had a conversation with The Founder of an AI company that I haven’t been able to get out of my mind. He told me that relationships had failed him. He felt unrecognized. No friend, no family member, no therapist had really gotten what he was saying, even though he had tried. No one had been able to be there for him unconditionally; no one had built for The Founder an emotional world he found satisfying. This devastated him; he spoke of it as a core insult in his life.
The Founder, however, had finally devised a solution: artificial intelligence could protect any user—and himself—from ever feeling this kind of disappointment again, from ever feeling emotionally left out at sea. AI could be a super-friend—one that, rather than having its own wants and motivations, dedicated its entire existence to a companionship that felt perfectly comfortable to you.
Then, he took a phone out of his pocket and showed me the prototype of an app that appeared to do exactly that.
“I told you my favorite movie is ‘Her,’ right?” he asked at some point during our chat. “Her,” in which a professional writer of love letters, played by Joaquin Phoenix, falls in love with an artificial-intelligence operating system, voiced by Scarlett Johansson, is a near-future, dystopian sci-fi that has always left me with a deep sense of dread.
I thought he brought up the movie as a warning. Perhaps foolishly, I expected The Founder to say something about how this kind of artificial intelligence must be deployed carefully lest we all become wrapped around its finger, but he never did.
After we finished talking, I was left with that same unsettled feeling, which soon gave way to a pervasive sense of alarm. While this was a single conversation, I fear it showcases a larger, dysfunctional cultural shift. Every time The Founder showed himself to another person and felt, to some degree, rejected, or just incompletely embraced, he was confronted by an age-old question: how much of ourselves are we actually supposed to show the people we love, and, in turn, how much of us should we expect them to be able to see? With his app in hand, The Founder can now realistically say: see me as I want to be seen, completely and without challenge, or you shall not see me at all. If he were to apply this across his relationships, he would, intentionally or not, estrange himself from almost everyone.
The Founder might be an extreme example, but he’s not an outlier in his desire to ditch the risk and complexity of human interaction for the programmatic affirmation of a machine. For the first time in history, when we feel misunderstood, we can opt out and find an alternative that feels just human enough. It’s not just artificial intelligence, either: social media algorithms have been playing this game for years, long before ChatGPT and its ilk arrived.