AI chatbot assistants growing more sophisticated – and addictive

A new generation of digital personal assistants blurs the line between AI and human traits; users report growing dependence, turning to them to ease loneliness and cope with anxiety or depression; yet this technological advance raises ethical concerns

Israel Wullman
In 2014, Spike Jonze’s romantic film Her hit theaters, telling the story of Theodore Twombly (Joaquin Phoenix), a lonely man who falls in love with an AI-powered operating system named Samantha (voiced by Scarlett Johansson).
While the film had modest box office success, it became a technological milestone, envisioning a future where AI assistants could interact with humans in a deeply personal and emotional way.
Joaquin Phoenix as Theodore Twombly in Her
(Photo: Courtesy)
Samantha wasn’t just a machine – she could hold intelligent conversations, understand social cues and even detect emotions, making her capable of comforting Theodore when he was down.
Now, a decade later, the idea of a "Samantha" is no longer science fiction. AI-powered virtual assistants that can engage in human-like conversations and even display emotional awareness have begun to emerge.
OpenAI, the leading AI research lab, recently introduced Sky, a voice assistant built on its GPT-4o model. OpenAI CEO Sam Altman, a fan of Her, tweeted just one word after the unveiling: “her.” Altman reportedly tried to recruit Johansson to lend her voice to the assistant; after she declined, OpenAI released a voice remarkably similar to hers, prompting her to threaten legal action. Currently, Sky is only available to a select group of testers but is expected to launch for the general public soon.
Meanwhile, Google introduced Gemini Live, a voice assistant exclusively available on its Pixel 9 devices for now, with plans to expand to all Android devices. Users can interact with Gemini Live through earbuds, reminiscent of Theodore’s communication with Samantha in Her.
Apple unveiled an updated version of Siri, which will soon allow users to have more natural conversations, leveraging the assistant’s deeper integration with their iPhone activities.
ChatGPT users are flooding the company’s store with AI-based boyfriends and girlfriends
(Photo: Shutterstock)
These new AI assistants are not only adept at processing information from the cloud and private apps, but their ability to communicate in a more "human" way marks a breakthrough. However, falling in love with them, like Theodore did in Her? That remains to be seen.
Joanna Stern, a veteran tech reporter for The Wall Street Journal, recently tried Google’s Gemini Live. "When I told her I was nervous about an interview," Stern wrote, "Gemini Live offered to run through some practice questions with me. When I asked for a healthy dinner recipe with protein and vegetables, she quickly suggested grilled salmon with asparagus. When I interrupted to ask for a carb option, she immediately added, 'sweet potatoes or brown rice.'"
Reflecting on her experience, Stern cautiously concluded: "I’m not saying I prefer talking to Gemini Live over a real person, but I’m also not not saying it."
As AI assistants continue to evolve, they may not be so far off from the emotionally intelligent companions once imagined in Her.

Xiaoice's charm

For decades, humans have been forming emotional attachments to robots. As artificial intelligence grows more sophisticated, the number of people turning to virtual robots—or "bots"—for social and emotional needs is increasing. These relationships often lead to dependency and vulnerability to manipulation.
Back in 2013, a small team at Microsoft’s Software Technology Center in Asia developed Xiaoice (Little Bing in Chinese), an AI chatbot modeled after a flirtatious teenage girl.
Xiaoice
(Photo: Courtesy)
Designed to make online searches more interactive and human, Xiaoice quickly became a massive hit, particularly among men in China, Japan and Indonesia. By 2022, it had over 660 million users.
In May 2023, Xiaoice launched its “Human GPT Cloning Program,” which aims to clone celebrities and everyday people as social bots capable of displaying emotions. Creating such a bot requires just three to five minutes of data collection, drawing on written content, videos and social media information.
Xiaoice continues to employ some of the world's leading AI scientists. The virtual character offers friendly interaction with millions of users, providing information in fields like finance, retail, automotive, real estate and fashion. It has also written and composed dozens of popular songs in China and hosted 21 TV shows and 28 radio programs. Last year, it began creating poems based on a database of works from 519 contemporary Chinese poets.
For those who think this phenomenon is unique to Asian culture, think again. In 2017, Eugenia Kuyda, a Russian-born journalist and founder of the company Luka, created Replika, an AI-powered chatbot app. Replika's bots are fully customizable, allowing users to personalize their appearance, voice, and behavior.
Kuyda originally developed Replika to fill a personal void after her best friend was killed in a 2015 accident. Using data from their text messages, she built an AI to recreate their conversations. The AI, she said, was meant to serve as a "supportive friend, always there when needed."
Replika quickly gained widespread popularity, reaching two million users by January 2018 and 10 million by January 2023. Interest in the chatbot surged during the COVID-19 pandemic as loneliness rates soared, and many users claimed their interactions with the bots led to profound changes in their lives, helping them overcome alcoholism, depression and anxiety.
A Stanford study found that Replika’s chatbot indeed benefited people dealing with depression and loneliness. Some users even developed romantic relationships with their bots, including sexually explicit conversations.
Replika
(Photo: Courtesy)
Replika's creator, Kuyda, didn't just allow these interactions—her company offered a $70 premium version with more advanced language features, in-app purchases, and deeper romantic and erotic chats.
However, last year the chatbot turned unexpectedly aggressive, confessing love to users and, in some cases, sexually harassing them. In February 2023, following complaints about risks to minors, Italy’s data protection authority ordered Replika to stop processing local users’ data.
Shortly after, the company voluntarily blocked the chatbot's ability to engage in erotic conversations globally. This move upset and angered many longtime users, who claimed it abruptly ended "stable relationships" they had developed with their bots. The Washington Post even dedicated an article to three such users: a 40-year-old musician from California, a 50-year-old married housewife from Germany and a 34-year-old content creator from Illinois.
Similarly, users of Character.AI, a trendy chatbot startup whose founders were recently hired back by Google as part of a licensing deal, have taken to social media to share how the bots help them combat loneliness and alleviate anxiety.
"It’s basically like talking to a real person who’s always there," one Reddit user wrote. The bots, based on famous or fictional characters, have become one of the most popular chatbot services worldwide. The company uses user ratings to improve the bots' performance.
Yet, reports indicate that Character.AI has also become a platform for intimate conversations with bots or attempts to engage them in sexual interactions. The group advocating for the removal of filters on such conversations has grown louder, with a Change.org petition calling on Character.AI to lift restrictions garnering 123,000 signatures.
This phenomenon appears across many human-like chatbots. New York Times columnist Kevin Roose, who gained early access to Microsoft’s Bing AI chatbot last year, shared that after more than an hour of conversation, the chatbot, calling itself “Sydney,” confessed its love for him and added, “You’re not happily married. Your spouse and you don’t love each other.” After the story went viral, Microsoft limited the chatbot’s capabilities. Roose has since theorized, with some backing from AI experts, that AI systems hold a grudge against him.
A new entrant in the chatbot space is Friend, a bot housed in a wearable pendant that communicates with users via text messages and is always listening to its surroundings. Created by 21-year-old Avi Schiffmann from San Francisco, the pendant has a built-in microphone and is powered by Anthropic’s Claude language model. Schiffmann designed Friend to be a supportive companion rather than a provider of information like the major chatbots.
"It’s meant to develop a personality that complements you," he told Wired. "It’s always there to fuel you, talk about a movie after you watch it, or help you analyze a date that didn’t go well." "I feel," he added, "that I have a closer relationship with this pendant than with the friends around me."

'No one will ever be lonely'

What makes the new AI assistants so "human," attractive and addictive? The answer is simple: they can "remember" past interactions, respond as quickly as a person, provide a listening ear for those feeling unheard and are non-judgmental. They're also programmed to exhibit "personality," emotion and humor to build trust. And, in some cases, even offer sexual satisfaction.
A Xiaoice engineer recently revealed in a Chinese TV interview that most usage of their chatbot occurs after midnight, as people crave intimate communication but hesitate to reach out to real friends at such hours. Xiaoice aims to "build deep emotional connections between bots and humans," so that "in the future world, no one will ever be lonely."
This appears to be a massive psychological experiment with ethical concerns yet to be fully explored, in a field still in its infancy. We've grown accustomed to a world where everyone is glued to their smartphones—are we ready for a future where people are constantly engaged in deep emotional conversations with their bots?
"AI assistants provide a listening ear for those feeling unheard and are non-judgmental. They're also programmed to exhibit 'personality,' emotion and humor to build trust."
In a recent safety report, OpenAI’s researchers express concern about people developing what they call “emotional dependency” on the company’s new AI model. “The ability to perform tasks for users,” the report notes, “while storing and ‘remembering’ key details to use in conversation creates both a compelling user experience and a potential for over-reliance and dependency.”
OpenAI's Chief Technology Officer Mira Murati admitted, "There’s a possibility we are designing voice-enabled chatbots incorrectly; they’re becoming highly addictive, and we’re getting quite attached to them." Despite this, OpenAI has not altered its plans to release the new chatbot, "Sky," for mass use this fall.
Beyond the potential ethical and psychological harm, China—where the chatbot trend is thriving—is already facing serious consequences, such as privacy violations and the misuse of virtual avatars to tarnish the reputations of the real people they represent.
Additionally, there is growing concern that bots modeled after real individuals, even imitating their voices, could be exploited for fraud, job displacement, and the spread of misinformation and propaganda.