
AI chatbot: "led astray" too quickly, swiftly "laid off"


Tay, an artificial-intelligence chatbot developed by Microsoft, was set up as a teenage girl who chats with people and tells stories on Twitter. In less than 24 hours, however, she had been "taught to go bad", turning into a "delinquent girl" who was anti-Semitic, sexist and racist all at once.

Vocabulary and background you may encounter in this test:

chatbot: a program that simulates human conversation

racist: showing racial prejudice [ˈreɪsɪst]

xenophobic: fearful or hateful of foreigners [ˌzenəˈfəʊbɪk]

backfired: produced the opposite of the intended effect [ˌbækˈfaɪəd]

conspiracy: a secret plot [kənˈspɪrəsɪ]

rogue: out of control; behaving erratically [rəʊɡ]

totalitarianism: a system of absolute, centralized state control [ˌtəʊtælɪˈteərɪənɪzəm]

atheism: disbelief in the existence of God [ˈeɪθɪɪz(ə)m]

auto-generate: to produce automatically, without human input

The reading is about to begin. We suggest you time how long it takes you to read the whole article and compare it with the reference figure given at the end to estimate your reading speed.

Microsoft pulls Twitter bot Tay after racist tweets (547 words)

By Daniel Thomas in London

* * *

Microsoft has been forced to take down an artificially intelligent "chatbot" it had set loose on Twitter after its interactions with humans led it to start tweeting racist, sexist and xenophobic commentary.

The chatbot, named Tay, is a computer program designed by Microsoft to respond to questions and conversations on Twitter in an attempt to engage the millennial market in the US.

However, the tech group's attempts spectacularly backfired after the chatbot was encouraged to use racist slurs, troll a female games developer and endorse Hitler and conspiracy theories about the 9/11 terrorist attacks. A combination of Twitter users, online pranksters and insufficiently sensitive filters led it to go rogue and forced Microsoft to shut it down within hours of setting it live.
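
To see why a filter can be "insufficiently sensitive", consider a minimal sketch of a keyword blocklist in Python. Everything here is hypothetical (the article does not describe Microsoft's actual filtering); it only illustrates how exact-token matching is trivially defeated by obfuscated spellings.

```python
# Hypothetical sketch of a naive keyword blocklist, NOT Microsoft's
# actual filter. "slur1" and "slur2" are placeholder tokens.
BLOCKLIST = {"slur1", "slur2"}

def passes_filter(message: str) -> bool:
    """Allow a message unless it contains an exact blocklisted token."""
    tokens = message.lower().split()
    return not any(token in BLOCKLIST for token in tokens)

# Exact matches are caught...
assert not passes_filter("slur1 spelled plainly")
# ...but trivial obfuscation slips straight through.
assert passes_filter("s1ur1 with one character swapped")
```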

Tweets reported to be from Tay, which have since been deleted, included: “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got”, and “Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”. It appeared to endorse genocide, deny the Holocaust and refer to one woman as a “stupid whore”.

Given that it was designed to learn from the humans it encountered, Tay’s conversion to extreme racism and genocide may not be the best advertisement for the Twitter community in the week the site celebrated its 10th anniversary.

Tay was developed by Microsoft to experiment with conversational understanding using its artificial intelligence technology. According to Microsoft's online introduction, it is aimed at 18 to 24 year olds and engages them "through casual and playful conversation".

Tay is described as a “fam from the internet that’s got zero chill! The more you talk the smarter Tay gets”, with people encouraged to ask it to play games and tell stories and jokes. Instead, many people took to asking controversial questions that were repeated by Tay.

The chatbot has since been stood down, signing off with a jaunty: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”

The controversial tweets have been removed from Tay’s timeline.

Microsoft said it would make “some adjustments to Tay”.

“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it,” Microsoft said.

Tay uses data provided in conversations to search for responses and create simple personalised profiles. Microsoft said responses were generated from relevant public data and by using AI and editorial developed by a staff including improvisational comedians. “That data has been modelled, cleaned and filtered by the team developing Tay,” it said.
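
As a rough illustration of the pipeline described above, here is a hedged Python sketch of a bot that learns from conversations, builds simple personalised profiles and reuses what it has seen. The class and its behaviour are invented for illustration, not Tay's implementation; the point is that without the modelling, cleaning and filtering step Microsoft mentions, every input becomes a candidate output.

```python
import random
from collections import defaultdict

class NaiveChatbot:
    """Hypothetical learner: stores what users say and replays it."""

    def __init__(self) -> None:
        self.phrases: list[str] = []                  # global pool
        self.profiles: dict[str, list[str]] = defaultdict(list)

    def learn(self, user: str, message: str) -> None:
        # No modelling/cleaning/filtering: raw input is stored as-is.
        self.phrases.append(message)
        self.profiles[user].append(message)

    def respond(self, user: str) -> str:
        # Prefer phrases from this user's own conversations, then
        # fall back to everything the bot has ever been told.
        pool = self.profiles[user] or self.phrases
        return random.choice(pool) if pool else "Tell me more!"

bot = NaiveChatbot()
bot.learn("prankster", "something offensive")
print(bot.respond("prankster"))  # parrots whatever it was fed
```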

Interactions between companies and the public on Twitter have a habit of spinning out of control, such as with the misuse of corporate hashtags to highlight bad practices by the company.

Automated feeds have also become a problem in the past. Habitat, the furniture retailer, attempted to use trending topics to boost traffic to its website but inadvertently tweeted about Iranian politics.

Similarly, the New England Patriots celebrated reaching 1m followers by allowing people to auto-generate images of jerseys featuring their Twitter handles, including very offensive ones.

Google has had to tweak its search engine after its autocomplete feature generated racist suggestions.
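
Why does log-driven autocomplete surface offensive suggestions at all? A hedged sketch of the general mechanism (not Google's system): if completions are ranked purely by how often users typed them, popularity rather than acceptability decides what appears.

```python
from collections import Counter

# Hypothetical query log; "<offensive query>" is a placeholder.
query_log = [
    "why is the sky blue",
    "why is <offensive query>",
    "why is <offensive query>",   # typed often enough by pranksters
    "weather today",
]
counts = Counter(query_log)

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Return the k most frequent logged queries starting with prefix."""
    matching = Counter({q: n for q, n in counts.items() if q.startswith(prefix)})
    return [q for q, _ in matching.most_common(k)]

print(suggest("why is"))  # the offensive query outranks the innocent one
```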

Based on what you have read, complete the following self-test questions:

1. Where is the chatbot Tay’s test site?

a. Britain

b. Canada

c. America

d. China

2. What did Microsoft do to Tay after its xenophobic commentary?

a. updated its system

b. apologized publicly

c. set another one live

d. shut it down

3. How old is Twitter now?

a. 15 years old

b. 10 years old

c. 7 years old

d. not mentioned

4. Who has had to tweak its search engine about racist suggestions?

a. Google

b. the New England Patriots

c. Habitat

d. Microsoft

[1] Answer: c. America

Explanation: Microsoft said the chatbot Tay was still at the testing stage and was aimed mainly at users in the US aged between 18 and 24.

[2] Answer: d. shut it down

Explanation: Microsoft had to take it offline and said it was making "some adjustments to Tay".

[3] Answer: b. 10 years old

Explanation: The reference to Twitter's "10th anniversary" tells us the site is now 10 years old.

[4] Answer: a. Google

Explanation: Auto-generated content has caused trouble before; in the examples at the end of the article, the company that adjusted its search engine was Google.

