【E-C Simultaneous Interpretation】234: Three Technologies That Could Bring Disaster to the World

Cora · CI Conference Interpreting · 2019-07-22


English to Chinese

Watch the English video first; you can retell it from memory, take notes for consecutive interpreting, or interpret simultaneously. After rendering it into Chinese, you can check against Cora's notes and the Chinese audio. For anything you can't catch, see the vocabulary and explanations below. If you are not an interpreting major, you can use this as intensive-listening material. The Chinese audio is a simultaneous rather than consecutive rendition; for simultaneous practice, complete the passage in one sitting, and for consecutive practice, split it into two sessions.

Difficulty: basic

Feel free to post your own difficulty rating in the comments.

Length: 11'27"
Vocabulary

strain, antidote, caste, menial, template, scenario, augment, asteroid, corollary, automation, outlier, outlandish

Explanations
  • Cassandra: In ancient Greek mythology, a princess of Troy, portrayed as a prophetess whose accurate predictions were never believed.

  • Jennifer Doudna: American biochemist and one of the pioneers of gene-editing technology.

  • CRISPR technology: CRISPR is a stretch of repeated sequences in prokaryotic genomes. The technique uses an enzyme, Cas9, guided to a target site in DNA by a strand of guide RNA, where it can cut the DNA or make other changes. As a gene-editing technology, CRISPR is precise, cheap, and easy to use.

  • Frankenstein: From Mary Shelley's novel Frankenstein. Here it is a metaphor for Jennifer's fear that humans will use CRISPR recklessly: just as the irresponsible scientist in the novel defied the laws of nature to create his monster, runaway scientism could ultimately bring unforeseeable disaster upon humanity.

  • off the charts: extraordinary; exceptionally good or exceptionally bad

  • beyond the pale: out of the ordinary; beyond what ordinary people are capable of

  • Eliezer Yudkowsky: American AI researcher and a co-founder of the Machine Intelligence Research Institute in Berkeley, California; an advocate of "friendly AI", artificial intelligence whose goals are aligned with humanity's.

  • around the corner: imminent; coming very soon

  • Internet of Things (IoT): the network of physical devices (e.g., cars, home appliances) equipped with computerized systems (e.g., software, sensors) that use sensing and identification technologies to exchange information and communicate with one another, enabling intelligent identification, positioning, tracking, and more.

Errors
  • The pronunciation of "AI" is distorted in a few places

Notes

Transcript

The subtitle of our book is Finding Cassandras to Stop Catastrophes. Cassandra in Greek mythology was someone cursed by the gods, who could accurately see the future but would never be believed. When we say "Cassandras" throughout the book, we're talking about people who can accurately see the future. People who are right—Cassandra was right—people who are right about the future but are being ignored.

Having derived what we think are the lessons learned from past Cassandra events, we then looked at people today who were predicting things and being ignored.

And we looked at issues first and then tried to see if there was someone warning about them. So the book is about people, 14 people: seven who we know were Cassandras, and seven who we are examining to find out if they are.

Usually Cassandras are people who are not directly involved in the thing that they worry about. They are people who observe it, or people who study it. But in the case of Jennifer Doudna at the University of California Berkeley, she's the person who created it and she's also our Cassandra.

The "it" in this case is CRISPR-Cas9, a method that she invented—and I'm sure someday will get a Nobel prize for—a method of doing gene editing that allows for removal of genetic defects in the strain or addition into a strain of new capabilities. Now this is going to revolutionize human life. It's already beginning. It's going to mean that all of the genetic defects that have caused so much pain and suffering for people for millions of years, all of that could potentially be removed. 

So why does the great woman who invented this wake up in the middle of the night worrying about it? What she told us was she's afraid that she might have become Dr. Frankenstein. That the technique that she developed could be misused in horrible ways. It could be misused, for example, to create biological weapons, to create new forms of threat to human beings, threats for which we don't have any known antidote.

Or it could simply be used to create human beings of far superior capability. Not just taking genes and removing defects but adding new super capabilities. And so one scenario we discussed with her was what if the North Koreans or the Chinese decided that they would create super soldiers? Physically large people with great athletic ability designed to be soldiers, designed to be aggressive, designed to be able to fight for long periods of time.

Or what if they simply created people who were brilliant at computer programming and had IQs off the charts? What if in the process of that kind of gene editing we created a caste society where some people were genetically designed to do menial tasks and didn't have the capability of doing anything else? And other people were designed to be the rulers with huge IQs and the capability of understanding things beyond the pale for lesser humans. That's something that scared the creator of CRISPR-Cas9 and it scared us.

When we heard Jennifer's story, we asked ourselves, "does she fit the template of a Cassandra that we developed in the first half of the book looking at the first seven?" Is she an expert? Absolutely. She is the expert. She created it. Is she data-driven? Yes. She has a wealth of data on CRISPR-Cas9 and what it can do. Is she predicting something that is first-occurrence syndrome? Something that's never happened before? And the answer to that is "yes." Is it kind of outlandish? Is it the stuff of Hollywood fiction? Yes it is. What about the audience—the decision maker? One of the things we saw with the earlier Cassandras was it wasn't always clear there was a decision maker. People always pointed at each other saying "that's your job," or "at least it's not my job." And in this case, making decisions about what gene editing can happen, and can't happen, and enforcing that is a matter of law, and international law, and it's not at all clear whose job that is.

One of those issues we looked at was artificial intelligence. Now frankly my co-author R.P. Eddy and I disagreed about whether or not to do artificial intelligence. I said, "I don't think this is a problem." After all, if a computer acts up, you can unplug it. Obviously I didn't understand the issue. And the way that my co-author convinced me that we should look for someone on this issue was by saying, "Who are the people who are talking about this today?" Not the experts in AI but the people who are generally concerned about it.

And who are they?

Bill Gates, the founder of Microsoft. Elon Musk, the founder of Tesla. Stephen Hawking, the great physicist from Oxford. And when I heard that I said "Okay, fine. Maybe if those guys think this is a problem, maybe we should look for the expert who is predicting that this could be a future disaster."

And we found Eliezer Yudkowsky, who not only thinks this could become a disaster, he's dedicated his life and all of his work to dealing with the future threat of artificial intelligence. Because he doesn't think it's inevitable that artificial intelligence should be a problem. But he does have a scenario whereby it could be if we don't do some of the things he has in mind. What's the problem? The problem could be that artificial intelligence starts writing software. Complex software. Maybe even encrypted software that human beings do not understand. And can't deal with. That future is just around the corner. Already we have software writing software. Already at Google we have artificial intelligence writing software for further artificial intelligence.

And the Google program is getting to the point where they're afraid they don't fully understand how it's doing what it's doing. What Eliezer Yudkowsky fears most is that superintelligence will come into existence. That means artificial intelligence programs that are significantly smarter than human intelligence, and even human intelligence today augmented by computers. And what he sees as possible, looking at the rate of advance in technology, is that this will not be a linear growth in the capabilities of software. It could happen overnight. One day, artificial intelligence might be under the control of human beings, and the next day it might have jumped into superintelligence—far more capable than anything we could possibly understand.

If you then put artificial intelligence onto networks that are running critical infrastructure—the Internet of Things, another subject we look at in the book—it's possible in the worst-case scenario that human beings will lose control of the infrastructure of society. In even worse scenarios than that, artificial intelligence will decide it doesn't need humans at all. And it is that fear that causes him to urge that we agree now, as a planet, as a number of different countries and societies, to put limits on the development of artificial intelligence, to do that by international treaty, and to have observation to make sure artificial intelligence doesn't break out of pre-determined limits agreed by human beings and their governments.

Now you've seen that plot before. You've seen that in a Hollywood movie. And that's part of the problem with so many of the possible Cassandras that we looked at today: humans have seen these threats before; they saw them in science fiction. So whether it's the possibility of an asteroid hitting the Earth or human beings being genetically engineered or artificial intelligence taking over, part of the reason we don't take these Cassandras seriously is we've seen it in the movies, we've seen it in science fiction.

A corollary issue to artificial intelligence is the rise of robotics. And already in this country we're hearing debates about the possibility that the next wave of automation, rather than just shifting jobs from one function to another as automation has done in the past, will be far more advanced and complex and actually throw humans out of work. It's a debate that's going on and we don't know who's right.

Some people say, "People will be thrown out of work, there'll be less need for humans to do work, and we'll have to pay humans for doing nothing." Taxing robots is one such proposal. And the other theory is that, just as in the past, when technology advances it may displace certain jobs but it will create new ones. We don't know who's right there, but we do know that all of our future Cassandras, our present-day Cassandras predicting things about the future, need to be listened to, and there needs to be examination of the theories that they're putting forward and the data that they're putting forward, even if they are an outlier—a minority view among experts.

More articles

【CI Graduation Exam】Two Years of Accumulation, Three Days to Deliver (Part 1)

【CI Graduation Exam】Two Years of Accumulation, Three Days to Deliver (Part 2)

【CI Midterm Exam】Is the Elimination System as Frightening as the Legends Say?

【CI Entrance Exam】China's Hardest Interpreting Entrance Exam

【Cora】My Spoken-English Learning Journey (First Year)

【Cora】A Few Words from the Heart

【Q&A】Answers to the First Ten Fan Questions

【Q&A】Answers to the Second Ten Fan Questions



