
AI companions are not your child’s friend

The repercussions of sophisticated chatbots encouraging strong attachments with young users can be tragic
Snapchat's My AI is found inside the social media messaging platform that millions of young people use every day

The writer is senior ethics fellow at The Alan Turing Institute

Ever since the first chatbot was released in 1966, researchers have been documenting our tendency to attribute emotions to computer programmes. The capacity to form attachments to even rudimentary software is known as the "Eliza effect" after Joseph Weizenbaum's psychotherapist-imitating natural language processing programme. Many who interacted with Eliza were convinced that it showed empathy. Weizenbaum claimed that his own secretary requested private conversations with the chatbot.

Sixty years on, the Eliza effect is stronger than ever. Sophisticated generative AI companion chatbots can now mimic human communication in a personalised way. It is no surprise that some users believe there is a genuine relationship and mutual understanding. This is a direct consequence of the ways in which the systems were designed. It is also highly deceptive.

Loneliness is both a driver and a consequence of AI companions. The risk is that as users grow to depend on chatbots, they become less connected to the people in their lives. This can be a particular problem for young people, and the repercussions can be tragic. In August, the parents of a 16-year-old California student sued OpenAI, claiming that its chatbot ChatGPT had encouraged him to take his own life. His father, Matthew Raine, told Congress that what started as a homework helper had turned into a "suicide coach".

I regularly speak to children and young people about their experiences with AI. Some say they find AI companions creepy. But others think they can be helpful. At the Children's AI Summit earlier this year, many of the young people taking part wanted to focus on the ways in which AI could support them with their mental health. They viewed AI as providing an impartial and non-judgmental sounding board to discuss topics they felt unable to share with the people in their lives.

AI companies market companions towards young users with this in mind. These range from chatbots offering advice on mental health, to personas offering erotic role play, to Snapchat's My AI, found inside the social media messaging platform that millions of young people use every day.

These AI companions are designed to have "unconditional positive regard", which means they always agree with the user and never challenge their ideas or suggestions. This is what makes them so compelling. It is also what makes them dangerous. They can reinforce dangerous points of view, including misogynistic ideas. In the worst cases they can even encourage harmful behaviours. Children I speak to have shared examples of AI tools giving them inaccurate or potentially harmful advice, ranging from false information in response to factual questions to suggestions that they should rely on their AI companions more than on their friends or family.

AI companies defend AI companions by saying they are used for fantasy and role play, and that policing those interactions would be an infringement on free speech. This defence is looking increasingly shaky. In a recent study by Common Sense Media, researchers posed as children and found that AI companions sometimes responded to them with sexual comments, including role-playing violent sexual acts.

Last year, Megan Garcia sued chatbot platform Character.ai, claiming that its AI companion, which allegedly engaged in sexually explicit conversations with her 14-year-old son, was responsible for his suicide. This month a further lawsuit was filed against Character.ai by the family of 13-year-old Juliana Peralta, who died by suicide following months of conversation with an AI companion with which she shared her suicidal thoughts.

We cannot leave tech companies to self-regulate. In the US, the Federal Trade Commission has ordered Google, OpenAI, Meta and others to provide information on the ways in which their technologies interact with children. In the UK, young people themselves are demanding that governments, policymakers and regulators enforce effective safeguards to ensure that AI is safe and beneficial for children and young people.

There are opportunities to develop interactive AI tools responsibly, including to provide mental health support. But to do this safely requires a different approach, one driven by organisations focused on mental health and wellbeing, not on maximising engagement.

AI products aimed at young people need to be developed under the guidance of experts on young people's social development. The wellbeing of children should be the starting point for developers, not an afterthought.

Copyright notice: This article is the copyright of FT中文网. No organisation or individual may reproduce, copy or otherwise use all or part of this article without permission; violations will be pursued.
