We let down Gen Z on social media – we must not let them down on AI as well
- Last update: 1 hour ago
- 4 min read
- 386 Views
- BUSINESS
Earlier this year, I conducted focus groups with university students to explore a seemingly simple question: how is Generation Z really engaging with artificial intelligence (AI)? I anticipated hearing stories about academic use: ChatGPT helping draft essays or summarise readings. While this was true, the most striking insights went far beyond academics.
It became clear that many students were using AI as a personal guide for social interactions and emotional support. Some ran messages through AI to avoid sounding harsh. Others relied on it to analyse arguments with friends or interpret ambiguous texts from romantic partners. Even interactions with parents were filtered through AI to ensure the right tone.
One student described using AI to structure emotions, another to navigate social situations correctly, and a third admitted to using AI mid-meeting to generate conversation starters. At an age when young adults should be developing emotional intelligence (reading cues, apologising, forgiving, and making independent decisions), much of this essential growth is now outsourced to machines.
AI is not only shaping communication; it is influencing moral and practical decisions. Students reported consulting ChatGPT on everything from ethical dilemmas, like whether to give money to a homeless person, to mundane daily choices, from meals to study plans. In many cases, AI has become the perceived source of rational, unbiased advice, overshadowing human judgment.
This dependence is fuelled by the perception of AI as neutral and omniscient, less judgmental than adults and more reliable than peers. Yet these systems are far from infallible. They can produce false information, reflect biases, and provide inconsistent or unsafe guidance, particularly in moral or relational contexts. Tests on AI platforms have revealed concerning inaccuracies and risky advice, highlighting a serious gap between perceived and actual reliability.
The consequences are profound. Over-reliance on AI can stunt the development of moral reasoning, critical thinking, and social skills. Young people risk losing confidence in their own judgment, as AI often presents itself as always correct. Some focus group participants noted that even younger siblings, aged 11 to 14, were turning to AI for guidance on friendship conflicts or bullying rather than seeking help from trusted adults.
Research in both the US and UK confirms these trends. Teenagers increasingly rely on AI companions for emotional support and problem-solving, despite clear risks. Platforms like Character.AI and ChatGPT, while popular, have been shown to provide unsafe advice on sensitive topics such as self-harm and mental health. Studies from Common Sense Media and Stanford Medicine underline that current AI guidance is often unsafe, despite appearing empathetic.
The early adoption of AI for emotional and ethical guidance represents a societal challenge. We cannot afford to repeat the mistakes made with social media. Schools and universities must address AI not only as an academic tool but as a pervasive influence in young people's personal lives. Digital literacy in 2025 must include emotional and ethical education.
Regulators and AI companies also have a critical role. Age restrictions, independent safety audits, and clear accountability must become standard. Platforms must prioritise ethical design and invest in user safety. Governments should be prepared to act when these standards are not met.
Parents and guardians need to engage with children about AI usage. Many students hide their AI dependence out of embarrassment, or the assumption that adults won't understand, which benefits tech companies but harms young users. If we want a generation capable of independent thought and emotional resilience, adults must actively participate in guiding young people's interactions with AI.
Ultimately, these are questions we must answer as a society. We cannot leave the moral, emotional, and social development of young people to machines. The responsibility lies with us to ensure that AI supports, rather than replaces, human growth and judgment.
Author: Sophia Brooks