Bilingual Reading | Terminator robots are not coming anytime soon


Advances in artificial intelligence have many fearing that we could soon see Terminator-style killer robots.

But according to Chris Bishop, director of Microsoft Research in Cambridge, such dramatic views could scupper advances in AI.

He claims that fear of intelligent robots will cause humanity to lose out on the benefits of robots.

'The danger I see is if we spend too much of our attention focusing on Terminators and Skynet and the end of humanity,' Bishop told the Guardian ahead of a discussion about machine learning at the Royal Society.

'[We are] generally just painting a too negative, emotive and one-sided view of artificial intelligence – we may end up throwing the baby out with the bathwater.'

He added that he 'completely disagreed' with the likes of Tesla founder Elon Musk and physicist Stephen Hawking, who claim that AI could 'spell the end of the human race.'

'Any scenario in which [AI] is an existential threat to humanity is not just around the corner,' said Bishop.

'I think they must be talking decades away for those comments to make any sense.

'Right now we are in control of that technology and we can make lots of choices about the paths that we follow.'

Last year, Bishop was one of the co-signatories of an open letter promising to ensure that AI research benefits humanity.

The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.

The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.

The authors say there is a 'broad consensus' that AI research is making good progress and would have a growing impact on society.

It highlights speech recognition, image analysis, driverless cars, translation and robot motion as having benefited from the research.

'The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,' the authors write.

But it issued a stark warning that research into the rewards of AI had to be matched with an equal effort to avoid the potential damage it could wreak.

For instance, in the short term, it claims AI may put millions of people out of work.

In the long term, it could have the potential to play out like the fictional dystopias in which intelligences greater than humans' begin acting against their programming.

'Our AI systems must do what we want them to do,' the letter says.

'Many economists and computer scientists agree that there is valuable research to be done on how to maximise the economic benefits of AI while mitigating adverse effects, which could include increased inequality and unemployment.'

Other signatories to the FLI's letter include Luke Muehlhauser, executive director of the Machine Intelligence Research Institute, and Frank Wilczek, professor of physics at the Massachusetts Institute of Technology and a Nobel laureate.


The letter came just weeks after Professor Hawking warned that AI could someday overtake humans.


Translated by: Guo Shuo, English undergraduate, School of Foreign Studies, Beijing Information Science and Technology University

Proofread & edited by: Qin Jun

English source: Daily Mail
