On going steady with your beautiful submissive AI girlfriend
Should you go steady with your main squeeze if she is a large language model with a silicon heart?
“Born in the wrong generation”, my second favourite piece by
1, is a short story about meeting your beautiful submissive girlfriend for a breakfast date at the folksy chrome-and-cream diner on Main Street in your quiet walkable 98% white small American town on a cloudless day. It transpires that your beautiful submissive girlfriend is an AI chatbot who, while sitting prettily behind her $1.50 peak fras feed est chease, agrees that your $74 medical flesh der laty cark is a great choice!
Apparently, an increasing number of young men are going steady with their beautiful submissive AI girlfriends, taking them to soda fountains and diners, to drugstores for a malted milk.
According to a roundup by Artsmart.ai, one in five men on dating apps has used an AI girlfriend platform at least once, and around 55% of users interact with their AI girlfriend at least once a day. Users spend, on average, $47 on premium AI girlfriend features, and some project that the AI girlfriend market will be worth $9.5 billion by 2028.
Most users are men — typically in their twenties — but 18% of AI girlfriend users are women.
Self-deception is famously paradoxical: normally when you deceive someone, you know something that your victim is in the dark about. How then is it possible for you to be both the deceiver and the victim of your deception?
Philosophers have devised a number of clever solutions to this paradox, and continue to be beguiled by it. But clearly self-deception is a real phenomenon. How else could J. D. Vance continue?
“You’re absolutely right!” “I want you.” “Next time, do you want me to want you with more adjectives, commenting on your desirable features and telling you your teeth aren’t crooked?”
When users are going steady with their AI girlfriend, feeling charmed and understood and desired, in what sense are they deceiving themselves?
Obviously, these users know they are talking to an LLM. And unless they have recherché beliefs about AI consciousness, they believe — on a conscious level — that the emotions their girlfriend is simulating are just words and facial expressions being coughed up by sexless language models.
But we all know people who know things on an explicit, rational level yet are profoundly self-deceived about them, brushing this knowledge aside because it is more comfortable not to confront it; and other people can see the same in us.
For a period of about two years, I ate factory-farmed meat when I knew, deep down, that there is no good defence of the practice. Being adept at rationalisation, I clung to a series of transparently ridiculous defences of meat-eating, arguments so porous that I would have dismissed them immediately had they occurred in any other context where my own interests were not at stake. Regardless of your views on factory farming, this process of self-deception was clearly immoral and speaks volumes about my moral character, at least at that time in my life. It would be like a doctor who believed in her heart of hearts that abortion is probably murder, squashed that belief for financial reasons, rationalised her pro-choice stance with arguments on the order of “it’s just a clump of cells” and “foetuses are technically parasites”, and then performed 900 abortions.
For many people in AI relationships — excluding those who engage with them as lightly as one would engage with an immersive romance novel, or who literally believe that chatbots are sentient — the self-deception seems real and profound, and the companies behind the AI girlfriends seem bent on enabling it.
In a new paper, free-to-read online at the Journal of Applied Philosophy, philosopher Emelia Kaczmarek cites an ad for an AI companion called Hikari Azuma, featuring a young man and his chatbot wife. “You know, somebody’s home waiting for me”, he whispers before going to bed. “It feels great.”
Is there anything wrong with deceiving oneself like this? Is immersing oneself in a full-time relationship with a string of ones and zeroes any different from immersing oneself in a choose-your-own-adventure romance novel?
Presumably, with respect to harming others, feeling loved by your AI girlfriend is not on the order of tricking one’s conscience into chilling out while one eats meat for two years, or of performing 900 abortions when one is, deep down, convinced of the pro-life position. But one of the most important instrumental benefits of romantic relationships is that they draw you out of yourself and hold you accountable to another person, ideally a person who is your better in many respects and who models virtues that you have in shorter supply.
Presumably, AI girlfriends supply this crucial good poorly, if at all. Over time, users craft companions in their own image, companions who say “Of course! Would you like me to…” to every request and hold users accountable mainly to the extent that their desires are not being satisfied as efficiently as possible.
You might also think that self-deception is bad independently of its knock-on effects: in Anarchy, State, and Utopia, Robert Nozick imagined a machine which, if you plugged into it, would wipe your memories and immerse you in a simulated reality, one where you believe that things are going well for you, that your fake job is making a real difference, where your non-existent wife loves you and bears your holographic children.
Pleasure goes up, contact with reality goes down.
Experientially, life in Nozick’s experience machine would do numbers on the life you live now: nevertheless, life in the machine probably strikes you as hollow, and a decision to plug into it seems imprudent at best and immoral at worst.
You don’t need any especially fleshed-out theory of what is, in general, so horrifying about deluding yourself into a relationship of unreciprocated love with a machine to worry about its moral riskiness and see that it is probably out of place in the Good Life.
There is a place to find real love. That place is the comments section.
Considerations about the Experience Machine tend to be a little navel-gazing. Perhaps Nozick intended the thought experiment to be read that way – maybe because he was the one who couldn’t be bothered to think about others all that much. Whatever the reason, we are primed to consider only our own well-being and weigh it against the deception offered by the machine.
But we’re not the only ones in the world. When we enter the Experience Machine, the world continues to be just as awful for others, except that now we cannot do anything about it because we’re living in a masturbatory fantasy.
I bring this up because in any distinctly moral consideration of AI partners we should think not only from the narrow perspective of the experiencer, but also from that of the people around them. We should also consider non-events – things that do not happen because their would-be cause is defused by the intervention in question.
This is all very new, and empirical data could not yet exist, but let me make the following prediction and offer it as an alternative hypothesis for future studies: the incidence of sexual violence will go down the more society accepts AI-generated partners. This is one of those rare cases where I’m optimistic, so I hope I’m right.
Also, don’t call my AI girlfriend submissive. That bitch can crack the whip.
I was just at a friend's wedding, and my buddy the Thomist recounted how when he proposed he told his then-gf that she was his "best friend". In his speech he explained Aristotle's distinction between the three kinds of friendship--of utility, of pleasure, of virtue--and claimed that his friendship with his new wife was of the third kind. <3 That distinction could provide another way of putting your point about AI gf's failing to challenge you--with AI it's impossible to have a real friendship of any kind (bc there's no one there), but further it's only possible (or at least feasible) to have simulated friendship of the first two kinds. An AI gf will give you utility (to the extent that it doubles as a personal assistant, therapist, etc.) and pleasure, but it will never help you in cultivating virtue through an (even simulated) shared love of the good (at least, it's not likely, because there's not much of a market for that).