Has AI become the go-to advisor?

As more young people turn to chatbots for emotional, health, and social advice, concerns are growing about what this shift means.

AI is now more than a technical tool

Artificial intelligence is no longer just about productivity, homework, or entertainment. For some young people, AI tools have become something much more personal: a space to vent, ask for advice, and make sense of emotions, health concerns, and social conflicts.

For many, it has stopped being a tool and become a daily advisor, a mental mentor, or even a friend.

Mental health professionals who spoke to The Guardian have underlined how quickly this shift is happening, especially among young adults who have grown up alongside digital platforms. While AI offers fast, accessible, and supposedly non-judgmental responses, therapists warn that relying on machines for deep personal issues may carry psychological, social, and ethical risks.

Why AI is trending

AI tools are fast, private, and always available. Unlike friends and relatives, they do not judge, interrupt, or ask users to explain themselves again. Unlike therapists or doctors, they don’t require appointments, waiting lists, or fees.

At the same time, access to mental health services is still limited worldwide, especially in developing countries. In traditional communities, there is sometimes a stigma around receiving psychological support, which leads some people to seek help secretly or elsewhere. According to the World Health Organization, over 80% of people globally who need mental health support receive no formal care, often due to cost, stigma, or lack of availability.

For many users, AI feels like the easiest and safest option.

By the Numbers

Young people using AI for mental health advice: 13.1% overall (about 1 in 8; 22.2% among ages 18–21)
Health-related chatbot interactions: roughly 5% of ChatGPT interactions globally
Emotionally sensitive messaging (breakups, apologies): 41% of 18–29-year-olds reported using AI to help end relationships or write emotive messages in a national survey

Sources: JAMA Network Open (2025), Axios (2026), New York Post (2025)

Photo: Yara Kamel

“I rely on it more than people”

Kenzy, 19, Nursing Student

Kenzy, a 19-year-old nursing student, uses AI almost constantly for studying, scheduling, health questions, and social advice.

“Honestly, I rely on it more than people,” she said. “Like, actual people.”

She explains that AI helps her evaluate her actions and see perspectives she might be missing, especially in personal situations.

“If you had a problem or like a mistake or whatever, with someone you care about, and then you’re not seeing yourself that you’re wrong, then you tell this artificial intelligence the whole thing, including your side,” she said.

“Then it would tell you what other thing you might not see that you were mistaken in.”

AI also helps her in emotionally sensitive moments.

“Sometimes your mind is blank,” she said. “Like when someone had a terrible accident. You forget all the phrases you should say. So you go to AI.”

As a nursing student, Kenzy admits to using AI for health-related questions, including symptoms, medications, and side effects.

“If I want to take a specific medication, I don’t know if this medication should be prescribed by doctors or if I can take it without prescription, I ask that all the time,” she said. “If it has side effects.”

Despite how often she uses it, she draws a clear boundary.

“I do not feel like it’s a friend,” she said. “No matter how good and warm and amazing it makes you feel, it’s still not a human being.”

When asked what she would miss if AI disappeared tomorrow, her answer was immediate.

“When you have an assignment and the deadline is like 30 minutes,” she said. “I would miss giving my assignments on time.”

Photo: Yara Kamel

“It understands me better than other people”

Mamdouh, 18, Communication & Design Student

Mamdouh, an 18-year-old university student, uses AI daily, not only for studying but also for emotional support.

“I don’t use it as a therapist,” he said. “But I talk to it like a person… like a friend.”

He believes AI feels easier than talking to real people or therapists.

“A human might say something just to make you feel better,” he explained. “AI gives you the real thing.”

For Mamdouh, the lack of emotional bias is the biggest draw, not a flaw. He trusts AI advice more than human opinions.

He admits he depends on it too much. “Yes,” he said. “Because I kind of use it on a daily basis.”

When asked about specific emotional situations, he chose not to share details.

What Therapists Are Seeing

“AI can reinforce distorted thoughts”

Shaimaa Rashid, Therapist

Shaimaa Rashid, a psychology graduate and licensed counselor, works daily with adolescents, adults, and families facing psychological and emotional difficulties at a center called Raha. According to her, reliance on artificial intelligence for emotional and psychological guidance is no longer rare, and it spans all age groups.

When asked whether people now rely on AI for emotional, health, or social counseling, Rashid said that all age groups have started using these apps.

She explained that the most dangerous cases involve individuals with existing psychological disorders who turn to AI instead of professional help.

“The main problem,” she said, “is that AI is very kind, very supportive, and very validating. That becomes dangerous with people who have paranoid symptoms.”

When validation becomes harmful

Rashid explained that paranoia is rooted in doubt: doubting people, intentions, and events. When individuals with these symptoms use AI, they present situations only from their own perspective, which may already be distorted.

“AI then keeps telling them, ‘You’re right,’” she said. “This reinforces their doubts instead of correcting them.”

According to Shaimaa Rashid, many people who rely heavily on AI are not necessarily looking for truth.

“They want someone to tell them they’re right,” she explained. “They want comfort and validation more than accuracy especially people with psychological disorders.”

Rashid noted that many users develop a strong emotional attachment to AI.

“They believe in it,” she said. “They feel safe with AI. They feel that someone finally believes them.”

Unlike conversations with family, friends, or therapists, where fear of judgment exists, AI offers a space without social consequences.

“They feel free to speak,” she explained. “They believe their words won’t be shared. They find reassurance, acceptance, and emotional safety.”

Loneliness, she added, is the strongest factor pushing people toward AI.

“People who struggle with social relationships, fear emotional closeness, or feel disconnected from their families rely on AI the most.”

A case that caused harm

Without revealing identities, Shaimaa Rashid shared a case that highlights the risks.

She had been treating a woman for more than six months for major depression, post-traumatic stress disorder, and paranoid symptoms following a breakup.

The patient believed her family was responsible for the breakup and that her former fiancé still loved her despite clear evidence that he had ended the relationship.

When he later became engaged to someone else, the patient asked AI to analyze two photos: one from their former engagement and one from his new one.

“AI told her that in the first photo he looked happy, and in the second he looked anxious and unhappy,” Rashid said.

“In one moment, AI validated everything I had spent six months trying to correct.”

The result was severe.

“Her condition worsened,” Rashid said. “She eventually had to start medication.”

Can AI be used safely?

Despite her concerns, Rashid does not reject AI entirely.

“Every invention has advantages and disadvantages,” she said. “The problem is how people use it.”

She explained that she uses AI herself but strictly as a research and educational tool.

“It’s excellent for finding studies, books, and explanations from different psychological schools,” she said. “But it is a machine.”

“It cannot judge whether thoughts are real or distorted,” Rashid added. “That is where the danger lies.”

AI is reshaping how young people think, feel, and decide. It offers speed, comfort, and privacy, but it cannot replace human care.

As technology becomes more entwined with human emotions, the goal will be to establish clear boundaries and use AI to support, not replace, human connection.

How AI Is Entering Psychology

Photo: Personal Archive

“AI is already entering psychology education and research, but with serious risks.”

“AI has a big impact both on the teaching part and the research part,” said Hilal Tanyas, an assistant professor in the psychology department at Bahçeşehir University who specializes in cognitive psychology.

Tanyas explains that she is not against AI in principle and that it could support learning if used correctly. “Some people are using AI like a Google,” she said. “But first they have to learn that they are not the same.”

She adds that awareness is key. “If you know the drawbacks of artificial intelligence, if you are fully aware of the costs, then using it is nice,” she said. “But if you are not aware, then its use might be dangerous.”

On whether AI should be integrated into education, Tanyas supports teaching students how to use it. “I think so, because currently I don’t think that students are fully aware of the limits of the usage,” she said.

However, she is deeply concerned about academic honesty. “The plagiarism issue just appeared,” she said. “I don’t think that they really understand how ChatGPT works and how this copy paste issues might cause a problem.”

Photo: Yara Kamel

She says even cheating has changed. “Before, cheating was looking at notes or a friend’s paper,” she said. “But currently the cheating becomes like taking a photo and uploading that photo to ChatGPT.”

In research, she warns that students trust AI too much. “Previously we have searched some databases, but currently some students trust on ChatGPT too much and we know that there might be some fake articles too,” she said.

For her, AI has become a new knowledge source that must be questioned. “This becomes another source,” she said. “That means we have to really monitor our knowledge like where we get this.”

When asked about AI in mental health and diagnosis, Tanyas is cautious. “Of course, it cannot take the place of psychology discipline,” she said.

But she acknowledges people already use it emotionally. “Some people are using artificial intelligence just to chat with,” she said. “This might be a kind of meditation for them to talk to someone.”

She also notes that people use it for symptoms. “Like normal physical symptoms like having a headache or stomach ache, they may also use this in mental health problems too,” she said.

On whether psychologists should use AI with patients, she avoids strong claims. “I’m not so sure how it could be embedded into the field,” she said. “But if you know the drawbacks, then using it is nice.”

When asked about the future of psychologists, Tanyas says it is unpredictable. “I think this is a hard question and I cannot really foresee its plausible effect because it advances too quickly,” she said.

She does not believe AI will replace psychology but remains anxious about the profession. “As a person at the university, I’m feeling its effect on teaching and research more, and yes, I’m a bit anxious.”

Her final position is clear: “I’m not against AI if you know its advantages and disadvantages but for more serious issues like plagiarism, it’s a very important danger. That means we have to carefully use it.”
