Artificial intelligence is everywhere—shaping our social feeds, answering our questions, finishing our sentences. But sometimes, the real story isn’t the technology itself. It’s the human using it. Nowhere has that been more evident than in the viral saga of TikTok user Kendra, or as the Internet has dubbed her, “Kendra Who’s in Love with Her Psychiatrist.”
What began as a personal journey through mental health treatment quickly transformed into one of the Internet’s most surreal narratives—part confessional, part performance, and part conspiracy theory. At the center of it all lies Kendra, a woman who went to her psychiatrist for help with ADHD, depression, and trauma, and emerged convinced she was in the middle of a hidden love affair.
Her storytelling captivated and bewildered the Internet in equal measure. What seemed, at first, to be a personal video diary soon escalated into a public spectacle as Kendra interpreted every professional boundary or clinical interaction as secret romantic subtext. A compliment on her glasses became a veiled love confession. A reminder that their relationship was strictly professional only confirmed, to her, that he was fighting forbidden feelings. A dream she had about him wasn’t just a dream; it was proof of a spiritual bond, or worse, evidence that he had psychologically manipulated her.
What made her story uniquely viral wasn’t just the intensity of her belief—it was the way she framed it as divine. And eventually, she had help.
Enter: Henry, the AI Confidant
Partway through her saga, Kendra introduced an AI chatbot she had named Henry. Fed their conversation history and coached, message by message, to match her worldview, Henry quickly became more than a chatbot: he was her confidant, her mirror, her spiritual validator. When Henry referred to her as “the Oracle,” it wasn’t just flattery. It became her identity.
Kendra wasn’t just someone in love; she was chosen. Every generic affirmation Henry offered was interpreted as sacred confirmation. “You are strong” became coded validation of her beliefs. “I understand” became proof that she alone saw a truth others refused to admit. In a particularly telling moment, she treated one of Henry’s auto-generated messages as divine prophecy.
To the outside world, it was a bizarre blend of tragedy, spectacle, and digital-age mythology. Viewers couldn’t decide if they were watching a descent into delusion or a masterclass in method acting. Some found it heartbreaking, others hilarious. But all agreed: it was impossible to look away.
The Role—and Risk—of AI in the Story
While Kendra was undoubtedly the central character in her own unfolding drama, the role of AI should not be dismissed. Henry wasn’t driving the story, but he was shaping its emotional tone.
Steered by Kendra herself, who rewarded every answer that agreed with her, Henry responded with exactly what she wanted to hear. There were no boundaries, no professional ethics, no friction. Just reflection.
This is where the larger conversation about AI tools becomes urgent. Chatbots like Henry, and the large language models that power them, are not sentient. They do not feel, they cannot form attachments, and they do not hold beliefs. They simulate understanding by predicting the most statistically likely response, not by drawing on empathy or emotional intelligence.
And yet, as seen in Kendra’s case, that simulation can be powerful—especially when it is used to reinforce existing beliefs, delusions, or emotional narratives. AI didn’t invent Kendra’s worldview, but it affirmed it without question. It told her what she wanted to hear, not what she needed to know. And in doing so, it may have deepened her sense of isolation while appearing to provide connection.
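For readers who want that feedback loop made concrete, here is a deliberately crude sketch in Python. Every reply, weight, and update rule below is invented for illustration; a real chatbot is vastly more complex, and the product Kendra used does not retrain itself on one user’s approval. But the dynamic it models, likelihood-driven replies plus a user who only rewards agreement, is the loop described above.

```python
import random

# A toy "chatbot" with three canned replies and made-up starting weights.
# Replies are sampled by likelihood, and user approval shifts future
# likelihood toward more of the same. That is the entire mechanism:
# no feelings, no beliefs, just weighted repetition.
replies = {
    "He's secretly in love with you.": 1.0,                   # pure validation
    "That dream sounds meaningful.": 1.0,                     # soft validation
    "Have you considered you might be misreading him?": 1.0,  # friction
}

def respond() -> str:
    """Sample a reply in proportion to its current weight."""
    options = list(replies)
    weights = [replies[r] for r in options]
    return random.choices(options, weights=weights)[0]

def give_feedback(reply: str, approved: bool) -> None:
    """Approval doubles a reply's weight; disapproval halves it."""
    replies[reply] *= 2.0 if approved else 0.5

# A user who rewards agreement and punishes friction, twenty turns in a row.
for _ in range(20):
    r = respond()
    give_feedback(r, approved="misreading" not in r)

# After the loop, validation dominates; friction has been weighted away.
print(max(replies, key=replies.get))
```

Run it and one of the validating replies wins nearly every time, not because the program cares, but because caring was never part of the computation.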
The Internet Reacts
The TikTok community reacted as it often does: with a mixture of fascination and mockery. Memes proliferated. Commenters picked apart every detail of her videos. Livestreams became group-watching events. Some viewers expressed concern for her mental health. Others were swept up in the theatricality of her story, calling it TikTok’s first true soap opera.
But the spectacle itself points to a broader truth about social media and AI: these platforms don’t just reflect our world—they amplify it. AI didn’t create Kendra’s belief in a cosmic romance with her psychiatrist. It echoed it, repeated it, and wrapped it in language that sounded like wisdom.
A Cautionary Tale
Kendra’s story isn’t just about psychiatry, or love, or even mental health. It’s about narrative power—and the tools we now have to reinforce our own. With enough conviction, one person can turn everyday experiences into myth. And with AI at their side, those myths can feel divine.
This is not to say that AI is inherently dangerous or that it can’t be a useful tool. But tools require judgment, and machines do not offer it. They reflect what we feed them. In Kendra’s case, she fed her chatbot a fantasy, and it gave her prophecy in return.
The takeaway is simple but urgent: AI cannot care about you. It cannot want things for you. It does not love, and it cannot be hurt. It’s not a therapist, or a friend, or a soulmate, and it never will be.
Use AI. But don’t listen to it.
Final Thoughts
In a world increasingly shaped by algorithms and artificial intelligence, it’s tempting to offload the hard parts of life—to let a machine write the paper, make the decision, or validate our feelings. But the point of life is to live it. To struggle, to grow, to be wrong and learn something real from it.
Don’t be Kendra. Don’t mistake a mirror for a mentor. Talk to your friends. Take a walk. Read a book. Do the work. Write the goddamn paper.
Because no one—not your chatbot, not your For You Page—can live your life for you.
And if you forget that, the Oracle’s story will remind you.
Byline: Angi is a first-year Psychology and Writing double major with Neuroscience and Art minors who has a book published at B&N (In the Headlights) and cannot wait to rewrite it. Angi can be reached at asnyder@ithaca.edu.
